Nvidia To Buy Arm For $40B


Nvidia inked a deal with SoftBank to buy Arm for $40 billion, combining the No. 1 AI/ML GPU maker with the No. 1 processor IP company. Assuming the deal wins regulatory approval, the combination of these two companies will create a powerhouse in the AI/ML world. Nvidia's GPUs are the go-to platform for training algorithms, while Arm has a broad portfolio of AI/ML processor cores. Arm also ha... » read more

AI & IP In Edge Computing For Faster 5G And The IoT


Edge computing, which is the concept of processing and analyzing data in servers closer to the applications they serve, is growing in popularity and opening new markets for established telecom providers, semiconductor startups, and new software ecosystems. It’s brilliant how technology has come together over the last several decades to enable this new space, starting with Big Data and the idea... » read more

Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations are possible that wouldn’t be available if interpreting. By quantizing au... » read more
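
The contrast here is between interpreting a model graph at runtime and compiling it ahead of time so the generated code runs directly. As a minimal sketch of the compile-ahead idea only (the article names no toolchain, so Apache TVM, the ONNX input, the file names, the input shape, and the llvm CPU target below are all illustrative assumptions):

    # Hedged sketch: ahead-of-time compilation of a neural-network model with Apache TVM.
    # Model file, input name/shape, and target are assumptions for illustration.
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    onnx_model = onnx.load("model.onnx")          # trained model exported to ONNX
    shape_dict = {"input": (1, 3, 224, 224)}      # assumed input tensor name and shape

    # Import into Relay and apply graph-level optimizations at compile time.
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

    # Export a native shared library; at deployment it runs with no interpreter loop.
    lib.export_library("compiled_model.so")

    # On the target device, load and run the compiled artifact directly.
    dev = tvm.cpu()
    module = graph_executor.GraphModule(lib["default"](dev))

Quantization is one of the compile-time optimizations the teaser alludes to; compilers in this class can fold it into the same build step, though the exact flow depends on the toolchain.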

How ML Enables Cadence Digital Tools To Deliver Better PPA


Artificial intelligence (AI) and machine learning (ML) are emerging as powerful new ways to do old things more efficiently, which is the benchmark that any new and potentially disruptive technology must meet. In chip design, results are measured in many different ways, but common metrics are power (consumed), performance (provided), and area (required), collectively referred to as PPA. These me... » read more

From Data Center To End Device: AI/ML Inferencing With GDDR6


Created to support 3D gaming on consoles and PCs, GDDR packs performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be i... » read more

For AI Hardware, Power Optimization Starts With Software And Ends At Silicon


Artificial intelligence (AI) processing hardware has emerged as a critical piece of today’s tech innovation. AI hardware architectures are highly symmetric, with large arrays of up to thousands of processing elements (tiles), leading to billion+ gate designs and huge power consumption. For example, the Tesla Autopilot software stack consumes 72W of power, while the neural network accelerator cons... » read more

Apples, Oranges & The Optimal AI Inference Accelerator


There is a wide range of AI inference accelerators available and a wide range of applications for them. No AI inference accelerator will be optimal for every application. For example, a data center-class accelerator almost certainly will be too big, burn too much power, and cost too much for most edge applications. And an accelerator optimal for keyword recognition won’t have the capabil... » read more

Intelligent System Design


Electronics technology is proliferating into new, creative applications and appearing in our everyday lives. To compete, system companies are increasingly designing their own semiconductor chips, and semiconductor companies are delivering software stacks to enable substantial differentiation of their products. This trend started in mobile devices and is now moving into cloud computing, automotiv... » read more

Manufacturing Bits: Sept. 1


AI, quantum computing R&D centers
The White House Office of Science and Technology Policy, the National Science Foundation (NSF), and the U.S. Department of Energy (DOE) have announced over $1 billion in awards for the establishment of several new artificial intelligence and quantum information science (QIS) research institutes in the U.S. Under the plan, the U.S. is launching seven new... » read more

Getting Particular About Partitioning


Partitioning could well be one of the most important and pervasive trends since the invention of computers. It has been around for almost as long, too. The idea dates back at least as far as the Manhattan Project during World War II, when computations were wrapped within computations. It continued from there with what we know as time-sharing, which rather crudely partitioned access by p... » read more
