5G Design Changes


Mike Fitton, senior director of strategic planning at Achronix, talks with Semiconductor Engineering about the two distinct parts of 5G deployment, how to move a huge amount of data from the core to the edge device where it is usable, and how a network on chip can improve the flow of data. » read more

Accelerating Endpoint Inferencing


Chipmakers are getting ready to debut inference chips for endpoint devices, even though the rest of the machine-learning ecosystem has yet to be established. Whatever infrastructure does exist today is mostly in the cloud, on edge-computing gateways, or in company-specific data centers, which most companies continue to use. For example, Tesla has its own data center. So do most major carmake... » read more

Machine Learning Drives High-Level Synthesis Boom


High-level synthesis (HLS) is experiencing a new wave of popularity, driven by its ability to handle machine-learning matrices and iterative design efforts. The obvious advantage of HLS is the boost in productivity designers get from working in C, C++ and other high-level languages rather than RTL. The ability to design a layout that should work, and then easily modify it to test other confi... » read more
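To illustrate the abstraction gap the article refers to, here is a minimal matrix-vector multiply written at the C level, the kind of kernel an HLS tool can compile to RTL. The pragmas follow Xilinx Vitis HLS syntax purely as one example of tool directives; other HLS flows use their own syntax, and the matrix dimensions here are arbitrary placeholders:

```c
/* Illustrative only: a small matrix-vector multiply written in plain C,
 * the kind of loop nest an HLS tool can synthesize into hardware.
 * The pragmas use Vitis HLS-style syntax as one example; the sizes are
 * arbitrary and chosen only for the sketch. */
#define ROWS 16
#define COLS 16

void matvec(const int a[ROWS][COLS], const int x[COLS], int y[ROWS])
{
row_loop:
    for (int i = 0; i < ROWS; i++) {
#pragma HLS PIPELINE II=1   /* ask the tool to pipeline one row per cycle */
        int acc = 0;
col_loop:
        for (int j = 0; j < COLS; j++) {
#pragma HLS UNROLL          /* unroll the dot product into parallel MACs */
            acc += a[i][j] * x[j];
        }
        y[i] = acc;
    }
}
```

Exploring a different configuration (a wider unroll, a different initiation interval, a larger matrix) is a matter of editing a pragma or a constant and re-running synthesis rather than rewriting RTL by hand, which is the iterative productivity gain the article describes.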

Holes In AI Security


Mike Borza, principal security technologist in Synopsys’ Solutions Group, explains why security is lacking in AI, why AI is especially susceptible to Trojans, and why small changes in training data can have big impacts on many devices. » read more

Inferencing At The Edge


Geoff Tate, CEO of Flex Logix, talks about the challenges of power and performance at the edge, why this market is so important from a business and technology standpoint, and what factors need to be balanced. » read more

Driving AI, ML To New Levels On MCUs


One of the most dramatic impacts of technology of late has been the implementation of artificial intelligence and machine learning on small edge devices, the likes of which are forming the backbone of the Internet of Things. At first, this happened through sheer engineering willpower and innovation. But as the drive towards a world of a trillion connected devices accelerates, we must find wa... » read more

Neural Network Performance Modeling Software


nnMAX Inference IP is nearing design completion. The nnMAX 1K tile will be available this summer for design integration in SoCs, and it can be arrayed to provide whatever inference throughput is desired. The InferX X1 chip will tape out late Q3 this year using 2x2 nnMAX tiles, for 4K MACs, with 8MB SRAM. The nnMAX Compiler is in development in parallel, and the first release is available now... » read more

Rushing To The Edge


Virtually every major tech company has an "edge" marketing presentation these days, and some even have products they are calling edge devices. But the reality is that today no one is quite sure how to define the edge or what it will become, and any attempts to pigeon-hole it are premature. What is becoming clear is the edge is not simply an extension of the Internet of Things. It is the resu... » read more

Week In Review: Design, Low Power


IP: Flex Logix debuted its new InferX X1 edge inference co-processor, which incorporates the interconnect technology from its eFPGAs and its inference-optimized nnMAX clusters. The chip focuses on high throughput with a single DRAM and is optimized for the small batch sizes typical of edge applications, where there is usually only one camera/sensor. InferX X1 will be available as chip... » read more

Machine Learning on Arm Cortex-M Microcontrollers


Machine learning (ML) algorithms are moving to the IoT edge due to considerations such as latency, power consumption, cost, network bandwidth, reliability, privacy and security. Hence, there is increasing interest in developing neural network (NN) solutions that can be deployed on low-power edge devices such as Arm Cortex-M microcontroller systems. CMSIS-NN is an open-source library of... » read more
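As a rough sketch of what using CMSIS-NN looks like in practice, the snippet below runs a single fully connected layer with ReLU and softmax using the legacy q7 kernels. The layer dimensions, weights, biases and quantization shifts are hypothetical placeholders, not values from any real model:

```c
/* Minimal sketch: one fully connected layer on a Cortex-M device using the
 * legacy CMSIS-NN q7 kernels. Dimensions, weights, biases and shift values
 * are hypothetical placeholders; a real model would supply values produced
 * by an offline quantization step. */
#include "arm_nnfunctions.h"

#define IN_DIM  64   /* hypothetical input feature length       */
#define OUT_DIM 10   /* hypothetical number of output classes   */

static const q7_t weights[OUT_DIM * IN_DIM];   /* quantized weights (placeholder) */
static const q7_t biases[OUT_DIM];             /* quantized biases (placeholder)  */

static q7_t  input[IN_DIM];        /* filled by sensor pre-processing */
static q7_t  activations[OUT_DIM];
static q7_t  probabilities[OUT_DIM];
static q15_t scratch[IN_DIM];      /* working buffer required by the kernel */

void classify(void)
{
    /* Fully connected layer: activations = weights * input + biases,
     * rescaled in fixed point via the bias/output shift parameters. */
    arm_fully_connected_q7(input, weights, IN_DIM, OUT_DIM,
                           1 /* bias_shift */, 7 /* out_shift */,
                           biases, activations, scratch);

    /* In-place ReLU on the layer output. */
    arm_relu_q7(activations, OUT_DIM);

    /* Softmax turns the activations into a class distribution. */
    arm_softmax_q7(activations, OUT_DIM, probabilities);
}
```

The scratch buffer and shift parameters come from the offline quantization step; a full network is built by chaining such kernel calls layer by layer.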
