Managing Voltage Variation


Engineers make many tradeoffs when designing SoCs to better meet design specifications. Power, performance, and area (PPA) are the primary goals, and all three affect the cost of the implementation. For example, higher power and performance can both require more expensive packaging for power and signal integrity, as well as cooling. The larger the die area, the fewer die per wafer, which drives u... » read more

Virtualization: A Must-Have For Embedded AI In Automotive SoCs


Virtualization, the process of abstracting physical hardware by creating multiple virtual machines (VMs) with independent operating systems and tasks, has been in computing since the 1960s. Now, with the need to optimize the utilization of large AI and DSP blocks in automotive SoCs, along with the need for increased functional safety in autonomous driving, virtualization is coming to power- an... » read more

AI Transformer Models Enable Machine Vision Object Detection


The object detection required for machine vision applications such as autonomous driving, smart manufacturing, and surveillance depends on AI modeling. The goal now is to improve the models and simplify their development. Over the years, many AI models have been introduced, including YOLO, Faster R-CNN, Mask R-CNN, RetinaNet, and others, to detect images or video signals, interp... » read more

Semiconductor Industry Is Pulling AI Across A Diversity Of End Uses And Applications


Earlier this month, I had the pleasure of joining a group of industry peers during SEMICON West and the Design Automation Conference in San Francisco for an enlightening panel discussion that we organized titled, “How AI Is Reinventing the Semiconductor Industry Inside and Out.” Moderated by Gartner, I was joined on the panel by senior executives from Advantest, Synopsys and the TinyML Foun... » read more

Reducing Chip Test Costs With AI-Based Pattern Optimization


The old adage “time is money” is highly applicable to the production testing of semiconductor devices. Every second that a wafer or chip is under test means that the next part cannot yet be tested. The slower the test throughput, the more automatic test equipment (ATE) is needed to meet production throughput demands. This is a huge issue for chip producers, since high pin counts, blazingly ... » read more

A Packet-Based Architecture For Edge AI Inference


Despite significant improvements in throughput, edge AI accelerators (Neural Processing Units, or NPUs) are still often underutilized. Inefficient management of weights and activations leads to fewer available cores utilized for multiply-accumulate (MAC) operations. Edge AI applications frequently need to run on small, low-power devices, limiting the area and power allocated for memory and comp... » read more
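The underutilization described above is usually quantified as MAC utilization: useful multiply-accumulate work divided by what the core array could have delivered in the same cycles. A minimal sketch, with all core counts and cycle figures as illustrative assumptions rather than numbers from the article:

```python
# Hypothetical sketch of NPU MAC utilization: the useful MAC operations
# performed divided by the peak the core array could deliver in the same
# number of cycles. All figures below are illustrative assumptions.

def mac_utilization(useful_macs: int,
                    num_cores: int,
                    macs_per_core_per_cycle: int,
                    cycles: int) -> float:
    """Fraction of peak MAC capacity actually used."""
    peak_macs = num_cores * macs_per_core_per_cycle * cycles
    return useful_macs / peak_macs

# If stalls on weight/activation movement leave half the MAC-cycles idle:
util = mac_utilization(useful_macs=5_120_000, num_cores=8,
                       macs_per_core_per_cycle=128, cycles=10_000)  # 0.5
```

A utilization of 0.5 means the accelerator effectively delivers only half its rated TOPS, which is the gap better weight/activation management aims to close.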

DAC 2023: Megatrends And The Road Ahead For Design Automation


With Silicon Valley in the midst of the heat wave the world is experiencing, the recent Design Automation Conference and its exhibition discussed equally hot technologies. Three megatrends defined the current situation – artificial intelligence (AI), chiplets, and integration. To me, the more exciting aspect of DAC was the discussion of what lies ahead for EDA in the decade to come, and for that, the ... » read more

Solving 5G And 6G Challenges With Artificial Intelligence


Wireless networks are inherently complex, generate massive amounts of data, and have grown in complexity with each new generation of technology. This combination of large data sets and complexity makes wireless networks an ideal candidate for AI. People are getting first-hand experience of the power and potential of deep neural networks and machine learning (ML) as the technology begins t... » read more

Using AI To Close Coverage Gaps


Verification of complex, heterogeneous chips is becoming much more difficult and time-consuming. There are more corner cases, and devices have to last longer and behave according to spec throughout their lifetimes. This is where AI fits in. It can help identify redundancy and explain why a particular device or block may not be fully coverable, and it can do it in less... » read more

Generative AI Training With HBM3 Memory


One of the biggest, most talked about application drivers of hardware requirements today is the rise of Large Language Models (LLMs) and the generative AI they make possible. The most well-known example of generative AI right now is, of course, ChatGPT. The large language model behind ChatGPT, GPT-3, uses 175 billion parameters. Fourth-generation GPT-4 will reportedly boost the number of... » read more
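The 175-billion-parameter figure alone explains the pressure on memory systems like HBM3. A back-of-envelope calculation of weight storage, where the FP16 (2 bytes per parameter) choice and the 1 GB = 1e9 bytes convention are assumptions for illustration:

```python
# Back-of-envelope check of why LLM training pushes memory requirements:
# weight storage alone for a 175-billion-parameter model. FP16 precision
# and decimal gigabytes (1 GB = 1e9 bytes) are illustrative assumptions.

def weight_footprint_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Model weight storage in gigabytes (default FP16, 2 bytes/parameter)."""
    return num_params * bytes_per_param / 1e9

gpt3_weights = weight_footprint_gb(175e9)  # 350.0 GB for weights alone,
                                           # before gradients, optimizer
                                           # state, or activations
```

Weights alone far exceed any single device's memory, before counting gradients and optimizer state, which is why training hardware pairs accelerators with high-bandwidth stacked memory.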
