Chip Industry Week In Review


By Susan Rambo, Karen Heyman, and Liz Allan

The Biden-Harris administration designated 31 Tech Hubs across the U.S. this week, focused on industries including autonomous systems, quantum computing, biotechnology, precision medicine, clean energy advancement, and semiconductor manufacturing. The Department of Commerce (DOC) also launched its second Tech Hubs Notice of Funding Opportunity. ... » read more

Making Connections In 3D Heterogeneous Integration


Activity around 3D heterogeneous integration (3DHI) is heating up, driven by growing support from governments, the need to add more features and compute elements into systems, and a widespread recognition that there are better paths forward than packing everything into a single SoC at the same process node. The leading edge of chip design has changed dramatically over the last few years. Int... » read more

Partitioning Processors For AI Workloads


Partitioning in complex chips is beginning to resemble a high-stakes guessing game, where choices need to extrapolate from what is known today to what is expected by the time a chip finally ships. Partitioning of workloads used to be a straightforward task, although not necessarily a simple one. It depended on how a device was expected to be used, the various compute, storage, and data paths ... » read more

Everyone’s A System Designer With Heterogeneous Integration


The move away from monolithic SoCs to heterogeneous chips and chiplets in a package is accelerating, setting in motion a broad shift in methodologies, collaborations, and design goals that is felt by engineers at every step of the flow, from design through manufacturing. Nearly every engineer is now working on or touching some technology, process, or methodology that is new. And they are inter... » read more

Application-Optimized Processors


Executing a neural network on top of an NPU requires an understanding of application requirements, such as latency and throughput, as well as the potential partitioning challenges. Sharad Chole, chief scientist and co-founder of Expedera, talks about fine-grained dependencies, why processing packets out of order can help optimize performance and power, and when to use voltage and frequency scal... » read more

Patterns And Issues In AI Chip Design


AI is becoming more than a talking point for chip and system design, taking on increasingly complex tasks that are now competitive requirements in many markets. But the inclusion of AI, along with its machine learning and deep learning subcategories, also has injected widespread confusion and uncertainty into every aspect of electronics. This is partly due to the fact that it touches so many... » read more

Generative AI: Transforming Inference At The Edge


The world is witnessing a revolutionary advancement in artificial intelligence with the emergence of generative AI. Generative AI generates text, images, or other media in response to prompts. We are in the early stages of this new technology; still, the depth and accuracy of its results are impressive, and its potential is mind-blowing. Generative AI uses transformers, a class of neural network... » read more

An Ideal Architecture For Always-On Camera Subsystems


Always-sensing camera implementations offer a wealth of user experience advantages but face significant power, latency, and privacy concerns if not done right. To be successful, always-sensing camera subsystems must be architected to preserve battery life and to ensure privacy and security while delivering key user experience improvements. In a joint white paper with Rambus, Expedera discuss... » read more

A Packet-Based Architecture For Edge AI Inference


Despite significant improvements in throughput, edge AI accelerators (neural processing units, or NPUs) are still often underutilized. Inefficient management of weights and activations leaves fewer cores available for multiply-accumulate (MAC) operations. Edge AI applications frequently need to run on small, low-power devices, limiting the area and power allocated for memory and comp... » read more

The Road Ahead For SoCs In Self-Driving Vehicles


Automakers have relied on a human driver behind the wheel for more than a century. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions — performance, memory usage, interconnect, chip area, and power consumption... » read more
