Machine Learning Drives High-Level Synthesis Boom


High-level synthesis (HLS) is experiencing a new wave of popularity, driven by its ability to handle machine-learning matrices and iterative design efforts. The obvious advantage of HLS is the boost in productivity designers get from working in C, C++ and other high-level languages rather than RTL. The ability to design a layout that should work, and then easily modify it to test other confi... » read more
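For readers unfamiliar with what "working in C++ rather than RTL" looks like in practice, the sketch below shows the kind of loop nest an HLS tool can compile into hardware. It is only an illustration: the matvec function and the ROWS/COLS sizes are made up for this example, and the pragmas follow Vivado/Vitis HLS conventions; other tools use their own directives.

// Minimal sketch: a fixed-size matrix-vector multiply in plain C++,
// the kind of kernel an HLS tool can turn into RTL.
constexpr int ROWS = 16;
constexpr int COLS = 16;

void matvec(const float weights[ROWS][COLS],
            const float in[COLS],
            float out[ROWS]) {
    for (int r = 0; r < ROWS; ++r) {
#pragma HLS PIPELINE II=1   // ask the tool to start a new row every cycle
        float acc = 0.0f;
        for (int c = 0; c < COLS; ++c) {
#pragma HLS UNROLL          // unroll the inner loop into parallel multipliers
            acc += weights[r][c] * in[c];
        }
        out[r] = acc;
    }
}

Changing the unroll factor or the pipeline initiation interval and re-synthesizing is exactly the kind of quick configuration experiment the excerpt refers to, with no hand-written RTL changes involved.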

From AI Algorithm To Implementation


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ... » read more

The Automation Of AI


Semiconductor Engineering sat down to discuss the role that EDA has in automating artificial intelligence and machine learning with Doug Letcher, president and CEO of Metrics; Daniel Hansson, CEO of Verifyter; Harry Foster, chief scientist verification for Mentor, a Siemens Business; Larry Melling, product management director for Cadence; Manish Pandey, Synopsys fellow; and Raik Brinkmann, CEO ... » read more

Edge Inferencing Challenges


Geoff Tate, CEO of Flex Logix, talks about balancing performance, power, and cost to do inferencing in edge devices. https://youtu.be/1BTxwew--5U » read more

Pros, Cons Of ML-Specific Chips


Semiconductor Engineering sat down with Rob Aitken, an Arm fellow; Raik Brinkmann, CEO of OneSpin Solutions; Patrick Soheili, vice president of business and corporate development at eSilicon; and Chris Rowen, CEO of Babblelabs. What follows are excerpts of that conversation. To view part one, click here. Part two is here. SE: Is the industry's knowledge of machine learning keeping up with th... » read more

Optimizing Machine Learning Workloads On Power-Efficient Devices


Software frameworks for neural networks, such as TensorFlow, PyTorch, and Caffe, have made it easier to use machine learning as an everyday feature, but running these frameworks in an embedded environment can be difficult. Tight budgets for power, memory, and computation all add to the challenge. At Arm, we’ve developed Arm NN, an inference engine that makes it easier to target di... » read more
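A rough sketch of the idea behind an inference engine that targets different hardware: the network is described once, and each operator is dispatched to whichever backend the device offers (a CPU reference path, NEON, an NPU driver). The Backend interface and CpuReference class below are invented for illustration and are not Arm NN's actual API.

// Illustrative backend-dispatch sketch; not Arm NN's real interface.
#include <cstddef>
#include <memory>
#include <vector>

struct Backend {
    virtual ~Backend() = default;
    // y = W * x for a fully connected layer with `rows` outputs, `cols` inputs.
    virtual void FullyConnected(const float* w, const float* x, float* y,
                                std::size_t rows, std::size_t cols) const = 0;
};

// Plain C++ fallback; a NEON or NPU backend would override the same method.
struct CpuReference : Backend {
    void FullyConnected(const float* w, const float* x, float* y,
                        std::size_t rows, std::size_t cols) const override {
        for (std::size_t r = 0; r < rows; ++r) {
            float acc = 0.0f;
            for (std::size_t c = 0; c < cols; ++c)
                acc += w[r * cols + c] * x[c];
            y[r] = acc;
        }
    }
};

int main() {
    std::unique_ptr<Backend> backend = std::make_unique<CpuReference>();
    std::vector<float> w(4 * 8, 0.01f), x(8, 1.0f), y(4);
    backend->FullyConnected(w.data(), x.data(), y.data(), 4, 8);
    return 0;
}

On a real device the same FullyConnected call would be routed to an optimized or accelerator-backed implementation without the application code changing, which is the point of an abstraction layer between the framework and the hardware.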

Bridging Machine Learning’s Divide


There is a growing divide between those researching machine learning (ML) in the cloud and those trying to perform inferencing using limited resources and power budgets. Researchers are using the most cost-effective hardware available to them, which happens to be GPUs filled with floating point arithmetic units. But this is an untenable solution for embedded infere... » read more
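The usual bridge across that divide is quantization: weights trained in float32 on GPUs are mapped to small integers for embedded inference. The sketch below shows a symmetric, per-tensor int8 scheme; it is one common choice among several, and the function names are invented for this example.

// Illustrative post-training int8 quantization sketch.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Map float weights into [-127, 127] with a single scale factor.
std::vector<int8_t> quantize(const std::vector<float>& w, float& scale) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    std::vector<int8_t> q(w.size());
    for (std::size_t i = 0; i < w.size(); ++i)
        q[i] = static_cast<int8_t>(std::lround(w[i] / scale));
    return q;
}

// Integer dot product; only the final rescale touches floating point.
float dot_int8(const std::vector<int8_t>& a, float sa,
               const std::vector<int8_t>& b, float sb) {
    int32_t acc = 0;  // 32-bit accumulator avoids overflow of int8 products
    for (std::size_t i = 0; i < a.size(); ++i)
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return static_cast<float>(acc) * sa * sb;
}

The multiply-accumulate loop then needs only 8-bit multipliers and a 32-bit accumulator rather than full floating point units, which is where the power and area savings for embedded inference come from.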

Verifying AI, Machine Learning


[getperson id="11306" comment="Raik Brinkmann"], president and CEO of [getentity id="22395" e_name="OneSpin Solutions"], sat down to talk about artificial intelligence, machine learning, and neuromorphic chips. What follows are excerpts of that conversation. SE: What's changing in [getkc id="305" kc_name="machine learning"]? Brinkmann: There’s a real push toward computing at the edge. ... » read more

Speeding Up Neural Networks


Neural networking is gaining traction as the best way of collecting and moving critical data from the physical world and processing it in the digital world. Now the question is how to speed up this whole process. But it isn't a straightforward engineering challenge. Neural networking itself is in a state of almost constant flux and development, which makes it something of a moving target. Th... » read more
