A Collaborative Data Model For AI/ML In EDA


This work explores industry perspectives on:
- Machine Learning and IC Design
- Demand for Data
- Structure of a Data Model
- A Unified Data Model: Digital and Analog examples
- Definition and Characteristics of Derived Data for ML Applications
- Need for IP Protection
- Unique Requirements for Inferencing Models
- Key Analysis Domains
- Conclusions and Proposed Future Work
Abstra...

Power Models For Machine Learning


AI and machine learning are being designed into just about everything, but the chip industry lacks sufficient tools to gauge how much power and energy an algorithm is using when it runs on a particular hardware platform. The missing information is a serious limiter for energy-sensitive devices. As the old maxim goes, you can't optimize what you can't measure. Today, the focus is on functiona...

Low Power Still Leads, But Energy Emerges As Future Focus


In 2021 and beyond, chips used in smartphones, digital appliances, and nearly all major applications will need to go on a diet. As the amount of data being generated continues to swell, more processors are being added everywhere to sift through that data to determine what's useful, what isn't, and how to distribute it. All of that uses power, and not all of it is being done as efficiently as...

Standard Benchmarks For AI Innovation


There is no standard measurement for machine learning performance today, meaning there is no single answer for how companies build a processor for ML across all use cases while balancing compute and memory constraints. For the longest time, every group would pick a definition and test to suit their own needs. This lack of common understanding of performance hinders customers' buying decis...

Tapping Into Purpose-Built Neural Network Models For Even Bigger Efficiency Gains


Neural networks are a set of algorithms, modelled loosely after the human brain, that can 'learn' by incorporating new data. Indeed, many benefits can be derived from developing purpose-built, computationally efficient neural network models. However, to ensure your model is effective, there are several key requirements that need to be considered. One critical conside...

Infrastructure Impacts Data Analytics


Semiconductor data analytics relies upon timely, error-free data from the manufacturing processes, but the IT infrastructure investment and engineering effort needed to deliver that data is enormous, expensive, and still growing. The volume of data has ballooned at all points of data generation as equipment makers add more sensors into their tools, and as monitors are embedded into the chip...

Forward And Backward Compatibility In IC Designs


Future-proofing of designs is becoming more difficult due to the accelerating pace of innovation in architectures, end markets, and technologies such as AI and machine learning. Traditional approaches for maintaining market share and analyzing what should be in the next rev of a product are falling by the wayside. They are being replaced by best-guesses about market trends and a need to bala...

Brute-Force Analysis Not Keeping Up With IC Complexity


Much of the current design and verification flow was built on brute force analysis, a simple and direct approach. But that approach rarely scales, and as designs become larger and the number of interdependencies increases, ensuring the design always operates within spec is becoming a monumental task. Unless design teams want to keep adding increasing amounts of margin, they have to locate th...

What’s Next In AI, Chips And Masks


Aki Fujimura, chief executive of D2S, sat down with Semiconductor Engineering to talk about AI and Moore's Law, lithography, and photomask technologies. What follows are excerpts of that conversation. SE: In the eBeam Initiative's recent Luminary Survey, the participants had some interesting observations about the outlook for the photomask market. What were those observations? Fujimur...

Difficult Memory Choices In AI Systems


The number of memory choices and architectures is exploding, driven by the rapid evolution in AI and machine learning chips being designed for a wide range of very different end markets and systems. Models for some of these systems can range in size from 10 billion to 100 billion parameters, and they can vary greatly from one chip or application to the next. Neural network training and infer...
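To make the 10-billion-to-100-billion-parameter range concrete, a back-of-the-envelope sketch of the storage needed just for model weights helps explain why memory choices dominate these designs. This is an illustrative calculation, not from the article; the function name and precision table are assumptions, and real systems also need memory for activations, optimizer state, and buffers.

```python
# Illustrative sketch: rough memory footprint of model weights alone,
# assuming densely stored parameters at a given numeric precision.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weight_footprint_gb(num_params: int, precision: str) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# The article's 10B-100B parameter range at common precisions:
for params in (10_000_000_000, 100_000_000_000):
    for prec in ("fp32", "fp16", "int8"):
        gb = weight_footprint_gb(params, prec)
        print(f"{params // 10**9}B params @ {prec}: {gb:.0f} GB")
```

Even at int8, a 100B-parameter model needs on the order of 100 GB for weights alone, which is why training and inference systems at this scale turn to HBM stacks, large off-chip DRAM pools, or model partitioning across many chips.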
