Optimizing Machine Learning Workloads On Power-Efficient Devices


Software frameworks for neural networks, such as TensorFlow, PyTorch, and Caffe, have made it easy to use machine learning as an everyday feature, but running these frameworks in an embedded environment can be difficult. Limited budgets for power, memory, and computation all compound the challenge. At Arm, we’ve developed Arm NN, an inference engine that makes it easier to target di... » read more

Bridging Machine Learning’s Divide


There is a growing divide between those researching machine learning (ML) in the cloud and those trying to perform inferencing using limited resources and power budgets. Researchers are using the most cost-effective hardware available to them, which happens to be GPUs filled with floating-point arithmetic units. But this is an untenable solution for embedded infere... » read more
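One common way to bridge that divide is quantization: models trained in floating point on GPUs are converted to low-precision integers for embedded inference. As a minimal, hypothetical sketch (not any particular engine's implementation), post-training affine quantization maps float weights to 8-bit integers using a scale and zero-point:

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Affine (asymmetric) post-training quantization of float weights.

    Maps the float range [min, max] onto the unsigned integer range
    [0, 2**num_bits - 1] via a scale and zero-point -- the scheme
    commonly used for integer-only embedded inference.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale + zero_point), qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return (q.astype(np.float32) - zero_point) * scale

# Round-trip a small weight tensor: reconstruction error is bounded
# by one quantization step (the scale).
w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
assert np.max(np.abs(w - w_hat)) < s
```

Storing weights as 8-bit integers cuts memory traffic by 4x versus float32 and lets inference run on integer ALUs, which is why this trade-off matters so much for power-constrained devices.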

Verifying AI, Machine Learning


Raik Brinkmann, president and CEO of OneSpin Solutions, sat down to talk about artificial intelligence, machine learning, and neuromorphic chips. What follows are excerpts of that conversation. SE: What's changing in machine learning? Brinkmann: There’s a real push toward computing at the edge. ... » read more

Speeding Up Neural Networks


Neural networking is gaining traction as the best way of collecting critical data from the physical world and processing it in the digital world. Now the question is how to speed up this whole process. But it isn't a straightforward engineering challenge: neural networking itself is in a state of almost constant flux and development, which makes it something of a moving target. Th... » read more