A Collaborative Data Model For AI/ML In EDA


This work explores industry perspectives on: machine learning and IC design; the demand for data; the structure of a data model; a unified data model, with digital and analog examples; the definition and characteristics of derived data for ML applications; the need for IP protection; unique requirements for inferencing models; key analysis domains; and conclusions and proposed future work. Abstra... » read more

Power Models For Machine Learning


AI and machine learning are being designed into just about everything, but the chip industry lacks sufficient tools to gauge how much power and energy an algorithm is using when it runs on a particular hardware platform. The missing information is a serious limiter for energy-sensitive devices. As the old maxim goes, you can't optimize what you can't measure. Today, the focus is on functiona... » read more
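To make the measurement gap concrete, below is a minimal sketch of the kind of first-order energy model such tools would need to formalize: operation counts weighted by assumed per-operation energies. Every number and name in it is a hypothetical placeholder, not a measurement of any real platform.

```python
# First-order energy model sketch for a workload on a hypothetical accelerator.
# All per-operation energies are placeholder assumptions, not real data.

ENERGY_PJ = {
    "mac_int8": 0.2,     # assumed energy of one 8-bit multiply-accumulate (pJ)
    "sram_read": 5.0,    # assumed energy per 32-bit word read from on-chip SRAM
    "dram_read": 100.0,  # assumed energy per 32-bit word read from off-chip DRAM
}

def layer_energy_pj(macs: int, sram_words: int, dram_words: int) -> float:
    """First-order estimate: op counts weighted by assumed per-op energies."""
    return (macs * ENERGY_PJ["mac_int8"]
            + sram_words * ENERGY_PJ["sram_read"]
            + dram_words * ENERGY_PJ["dram_read"])

# Toy workload: one 3x3 convolution, 64 in / 64 out channels, 56x56 feature map.
macs = 3 * 3 * 64 * 64 * 56 * 56
dram_words = 56 * 56 * 64 * 2        # fetch input + write output activations
sram_words = macs // 16              # rough reuse-adjusted weight traffic

print(f"estimated layer energy: {layer_energy_pj(macs, sram_words, dram_words) / 1e6:.1f} uJ")
```

Even a crude model like this makes the excerpt's point visible: data movement, not arithmetic, tends to dominate the estimate, which is exactly the kind of insight an algorithm-level power tool would surface.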

Low Power Still Leads, But Energy Emerges As Future Focus


In 2021 and beyond, chips used in smartphones, digital appliances, and nearly all major applications will need to go on a diet. As the amount of data being generated continues to swell, more processors are being added everywhere to sift through that data to determine what's useful, what isn't, and how to distribute it. All of that uses power, and not all of it is being done as efficiently as... » read more

Convolutional Neural Network With INT4 Optimization


Xilinx provides an INT8 AI inference accelerator on Xilinx hardware platforms — the Deep Learning Processor Unit (XDPU). However, in some resource-limited, high-performance, low-latency scenarios (such as resource- and power-sensitive edge devices and low-latency ADAS applications), low-bit quantization of neural networks is required to achieve lower power consumption and higher performance than provi... » read more
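As an illustration of the general idea (not the XDPU tool flow itself), here is a minimal sketch of symmetric linear quantization to the INT4 range [-8, 7]; the per-tensor scaling scheme is an assumption made for the example.

```python
import numpy as np

def quantize_int4(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats to the INT4 range [-8, 7]."""
    scale = max(float(np.max(np.abs(x))) / 7.0, 1e-12)  # largest magnitude maps to +/-7
    q = np.clip(np.round(x / scale), -8, 7)
    return q.astype(np.int8), scale   # 4-bit values stored in int8 containers

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int4(x)
print(f"scale={s:.4f}  mean abs error={np.mean(np.abs(x - dequantize(q, s))):.4f}")
```

Halving the bit width from INT8 to INT4 halves weight storage and memory traffic and doubles the MACs a fixed datapath width can perform, which is where the power and performance gains the excerpt describes come from.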

Brute-Force Analysis Not Keeping Up With IC Complexity


Much of the current design and verification flow was built on brute-force analysis, a simple and direct approach. But that approach rarely scales, and as designs become larger and the number of interdependencies increases, ensuring the design always operates within spec is becoming a monumental task. Unless design teams want to keep adding ever-increasing amounts of margin, they have to locate th... » read more

What’s Next In AI, Chips And Masks


Aki Fujimura, chief executive of D2S, sat down with Semiconductor Engineering to talk about AI and Moore’s Law, lithography, and photomask technologies. What follows are excerpts of that conversation. SE: In the eBeam Initiative’s recent Luminary Survey, the participants had some interesting observations about the outlook for the photomask market. What were those observations? Fujimur... » read more

Artificial Intelligence For Sustainable And Energy Efficient Buildings


According to the goals of Europe’s Green Deal missions, the continent aims to become carbon neutral by 2050. Since buildings are a major contributor to overall energy consumption, improving their energy efficiency can be key to a more sustainable and greener Europe. On the way toward zero-emission buildings, several challenges have to be met: In modern energy systems, several ... » read more

The Benefits Of Using Embedded Sensing Fabrics In AI Devices


AI chips, regardless of the application, are not regular ASICs; they tend to be so large that they are reaching the reticle limit. They are also usually dominated by an array of regular structures, which helps mitigate yield issues by building in tolerance to defect density through the sheer number of processor blocks. The reason behind... » read more
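The yield argument can be made concrete with a toy model. The sketch below assumes a simple Poisson defect model (per-block yield = exp(-area x defect density)); the block area, defect density, and block counts are all invented for illustration, not data for any real process or chip.

```python
import math

def block_yield(area_cm2: float, d0_per_cm2: float) -> float:
    """Poisson yield of a single block of the given area."""
    return math.exp(-area_cm2 * d0_per_cm2)

def chip_yield(n_blocks: int, n_required: int, y_block: float) -> float:
    """Probability that at least n_required of n_blocks blocks are defect-free."""
    return sum(math.comb(n_blocks, k) * y_block**k * (1 - y_block)**(n_blocks - k)
               for k in range(n_required, n_blocks + 1))

y = block_yield(area_cm2=0.01, d0_per_cm2=5.0)          # ~0.95 per block
print(f"per-block yield:   {y:.3f}")
print(f"need all 64 of 64: {chip_yield(64, 64, y):.3f}")  # no redundancy
print(f"need 64 of 68:     {chip_yield(68, 64, y):.3f}")  # 4 spare blocks
```

With these toy numbers, requiring every block to be perfect yields only a few percent of dice, while a handful of spare blocks recovers most of the loss: the tolerance the excerpt attributes to the sheer number of processor blocks.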

AI Design In Korea


Like many in the semiconductor design business, Arteris IP is actively working with the Korean chip companies. This shouldn’t be a surprise. If a company is building an SoC of any reasonable size, it needs network-on-chip (NoC) interconnect for optimal QoS (bandwidth and latency regulation and system-level arbitration) and low routing congestion, even in application-centric designs such as ... » read more
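As a toy illustration of the bandwidth-regulation half of that QoS story, here is a minimal weighted round-robin arbiter sketch; the initiator names and weights are invented, and real NoC arbitration schemes are considerably more elaborate.

```python
from collections import deque

def weighted_round_robin(weights: dict):
    """Yield initiator names in proportion to their assigned weights."""
    credits = dict(weights)            # grants remaining in the current round
    order = deque(weights)             # rotate so no initiator is starved
    while True:
        name = order[0]
        order.rotate(-1)
        if credits[name] > 0:          # grant only if credit remains
            credits[name] -= 1
            yield name
        if all(c == 0 for c in credits.values()):
            credits = dict(weights)    # all credits spent: start a new round

arb = weighted_round_robin({"cpu": 3, "gpu": 2, "dma": 1})
print([next(arb) for _ in range(12)])  # cpu gets 3x the grants of dma per round
```

Credit-based schemes like this are one simple way an interconnect can enforce a bandwidth share per initiator while still rotating grants to bound worst-case latency.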

Compiling And Optimizing Neural Nets


Edge inference engines often run a slimmed-down real-time engine that interprets a neural-network model, invoking kernels as it goes. But higher performance can be achieved by pre-compiling the model and running it directly, with no interpretation — as long as the use case permits it. At compile time, optimizations are possible that wouldn’t be available when interpreting. By quantizing au... » read more
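One classic example of such a compile-time optimization is folding batch normalization into the preceding convolution’s weights and bias, so that no separate batch-norm kernel is ever invoked at run time. The sketch below is a generic illustration assuming an (out, in, kh, kw) weight layout; it is not tied to any particular compiler.

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Return (W', b') such that conv(x, W') + b' == BN(conv(x, W) + b)."""
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    W_folded = W * scale[:, None, None, None]   # scale each output filter
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded

# Toy layer: 8 output channels, 3 input channels, 3x3 kernels.
W = np.random.randn(8, 3, 3, 3).astype(np.float32)
b = np.zeros(8, dtype=np.float32)
gamma, beta = np.ones(8), np.zeros(8)
mean, var = np.random.randn(8) * 0.1, np.abs(np.random.randn(8)) + 0.5

W2, b2 = fold_batchnorm(W, b, gamma, beta, mean, var)
print(W2.shape, b2.shape)  # same shapes as the original conv parameters
```

Because the folded layer has exactly the shape of the original convolution, the transformation is free at inference time: one kernel launch instead of two, which is precisely the kind of win that is unavailable to a purely interpreting runtime.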
