New Uses For Assertions


Assertions have been a staple in formal verification for years. Now they are being examined to see what else they can be used for, and the list is growing. Traditionally, design and verification engineers have used assertions in specific ways. First, there are assertions for formal verification, which are used by designers to show when something is wrong. Those assertions help to pinpoint wh... » read more

Neural Networks Without Matrix Math


The challenge of speeding up AI systems typically means adding more processing elements and pruning the algorithms, but those approaches aren't the only path forward. Almost all commercial machine learning applications depend on artificial neural networks, which are trained using large datasets with a back-propagation algorithm. The network first analyzes a training example, typically assign... » read more
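The back-propagation training loop mentioned above can be illustrated with a minimal sketch. This is not the matrix-free approach the article describes; it is the conventional matrix-math baseline (a tiny two-layer network fitted to XOR with NumPy), with the network size, learning rate, and iteration count chosen purely for illustration:

```python
import numpy as np

# Illustrative baseline only: a two-layer sigmoid network trained with
# back-propagation and ordinary matrix math (the approach the article's
# matrix-free techniques aim to replace).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: analyze each training example and produce an output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the output error back through the layers
    # and nudge every weight against its gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out).ravel())  # learned XOR outputs
```

Every step here is a dense matrix multiply, which is exactly the cost that matrix-free training methods try to avoid.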

Custom Designs, Custom Problems


Semiconductor Engineering sat down to discuss power optimization with Oliver King, CTO at Moortec; João Geada, chief technologist at Ansys; Dino Toffolon, senior vice president of engineering at Synopsys; Bryan Bowyer, director of engineering at Mentor, a Siemens Business; Kiran Burli, senior director of marketing for Arm's Physical Design Group; Kam Kittrell, senior product management group d... » read more

AI Inference Acceleration


Geoff Tate, CEO of Flex Logix, talks about considerations in choosing an AI inference accelerator, how that fits in with other processing elements on a chip, what tradeoffs are involved with reducing latency, and what considerations are the most important. » read more

Nvidia To Buy Arm For $40B


Nvidia inked a deal with SoftBank to buy Arm for $40 billion, combining the No. 1 AI/ML GPU maker with the No. 1 processor IP company. Assuming the deal wins regulatory approval, the combination of these two companies will create a powerhouse in the AI/ML world. Nvidia's GPUs are the go-to platform for training algorithms, while Arm has a broad portfolio of AI/ML processor cores. Arm also ha... » read more

AI & IP In Edge Computing For Faster 5G And The IoT


Edge computing, which is the concept of processing and analyzing data in servers closer to the applications they serve, is growing in popularity and opening new markets for established telecom providers, semiconductor startups, and new software ecosystems. It's brilliant how technology has come together over the last several decades to enable this new space, starting with Big Data and the idea... » read more

Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations become possible that aren't available when interpreting. By quantizing au... » read more
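One compile-time optimization the excerpt alludes to is quantization. As a hedged sketch (not the specific tool or flow the article describes), here is symmetric per-tensor post-training quantization of float32 weights to int8, the kind of transformation a compiler can apply once rather than paying interpretation overhead on every inference:

```python
import numpy as np

# Hedged illustration: symmetric per-tensor int8 quantization.
# The scale maps the largest-magnitude weight to 127.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for comparison.
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(0, 0.05, 1024).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"max reconstruction error: {err:.6f}")
```

The rounding error is bounded by half the quantization step, which is why int8 inference can track float accuracy closely when the weight distribution is well-behaved.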

How ML Enables Cadence Digital Tools To Deliver Better PPA


Artificial intelligence (AI) and machine learning (ML) are emerging as powerful new ways to do old things more efficiently, which is the benchmark that any new and potentially disruptive technology must meet. In chip design, results are measured in many different ways, but common metrics are power (consumed), performance (provided), and area (required), collectively referred to as PPA. These me... » read more

From Data Center To End Device: AI/ML Inferencing With GDDR6


Created to support 3D gaming on consoles and PCs, GDDR packs performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be i... » read more
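The bandwidth claim above is easy to put in rough numbers. Using typical published GDDR6 figures (a 16 Gb/s per-pin data rate and a 32-bit device interface — representative speed-grade assumptions, not values taken from the article), the per-device bandwidth works out as:

```python
# Back-of-the-envelope GDDR6 bandwidth arithmetic with typical figures.
data_rate_gbps = 16        # Gb/s per pin (a common GDDR6 speed grade)
interface_width = 32       # bits per device

bandwidth_gbits = data_rate_gbps * interface_width  # 512 Gb/s per device
bandwidth_gbytes = bandwidth_gbits / 8              # convert bits to bytes
print(bandwidth_gbytes)    # → 64.0 GB/s per device
```

Several such devices in parallel is how inference cards reach the multi-hundred-GB/s bandwidths that AI/ML workloads demand.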

For AI Hardware, Power Optimization Starts With Software And Ends At Silicon


Artificial intelligence (AI) processing hardware has emerged as a critical piece of today's tech innovation. AI hardware architectures are highly regular, with large arrays of up to thousands of processing elements (tiles), leading to billion-plus-gate designs and huge power consumption. For example, the Tesla Autopilot software stack consumes 72W of power, while the neural network accelerator cons... » read more
