AI Takes Aim At Chip Industry Workforce Training


When all the planned fabs become operational, the semiconductor industry is likely to face a worker shortage of 100,000 each in the U.S. and Europe, and more than 200,000 in Asia-Pacific, according to a McKinsey report. Since the dawn of technology, people have worried that robots, automation, and AI will steal their jobs, but these tools can also be put to use to help fill the chip industry ta...

Cadence Cerebrus In SaaS And Imagination Technologies Case Study


Artificial Intelligence (AI) has made noteworthy progress and is now ready and available for electronic design automation. The Cadence Cerebrus Intelligent Chip Explorer utilizes AI—specifically, reinforcement machine learning (ML) technology—combined with the industry-leading Cadence digital full flow to deliver better power, performance, and area (PPA) more quickly. However, this highl...

The Data Crisis Is Unfolding — Are We Ready?


The rapid advancement of technology has led to an unprecedented amount of data being generated, captured, and consumed globally. However, this reliance on data comes at a considerable cost. The widespread sharing and processing of data is necessary to navigate our everyday lives. Still, any disruption to this process can have severe consequences, threatening our ability to function as a society...

Thanks For The Memories!


“I want to maximize the MAC count in my AI/ML accelerator block because the TOPS rating is what sells, but I need to cut back on memory to save cost,” said no successful chip designer, ever. Emphasis on “successful” in the above quote. It’s not a purely hypothetical quotation. We’ve heard it many times. Chip architects — or their marketing teams — try to squeeze as much brag-...
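The TOPS rating the quote alludes to is commonly computed from MAC count and clock rate, with each MAC counted as two operations (one multiply plus one accumulate). A minimal sketch of that datasheet arithmetic, using a purely hypothetical accelerator configuration:

```python
def peak_tops(mac_count: int, clock_hz: float) -> float:
    """Peak throughput in TOPS, counting each MAC as two ops
    (one multiply + one accumulate), per the usual datasheet convention."""
    return 2 * mac_count * clock_hz / 1e12

# Hypothetical accelerator: 4,096 MACs clocked at 1 GHz.
print(peak_tops(4096, 1e9))  # 8.192 TOPS peak
```

The peak number says nothing about sustained throughput: if on-chip memory is cut back, the MACs stall waiting for data, which is the article's point.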

AI Races To The Edge


AI is becoming increasingly sophisticated and pervasive at the edge, pushing into new application areas and even taking on some of the algorithm training that has been done almost exclusively in large data centers using massive sets of data. There are several key changes behind this shift. The first involves new chip architectures that are focused on processing, moving, and storing data more...

Applications Of Large Language Models For Industrial Chip Design (NVIDIA)


A technical paper titled “ChipNeMo: Domain-Adapted LLMs for Chip Design” was published by researchers at NVIDIA. Abstract: "ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-ad...

Vision Transformers Change The AI Acceleration Rules


Transformers were first introduced by the team at Google Brain in 2017 in their paper, "Attention Is All You Need". Since their introduction, transformers have inspired a flurry of investment and research that has produced some of the most impactful model architectures and AI products to date, including ChatGPT, which is an acronym for Chat Generative Pre-trained Transformer. Transformers a...
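The core operation of the transformer, as defined in "Attention Is All You Need", is scaled dot-product attention: softmax(QKᵀ/√d_k)V. A minimal NumPy sketch with toy dimensions (3 tokens, d_k = 4 are arbitrary choices for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, per "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy example: 3 tokens, head dimension 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

It is this all-to-all score matrix, whose cost grows quadratically with sequence length, that drives the acceleration trade-offs vision transformers raise.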

CMOS-Based HW Topology For Single-Cycle In-Memory XOR/XNOR Operations


A technical paper titled “CMOS-based Single-Cycle In-Memory XOR/XNOR” was published by researchers at University of Tennessee, University of Virginia, and Oak Ridge National Laboratory (ORNL). Abstract: "Big data applications are on the rise, and so is the number of data centers. The ever-increasing massive data pool needs to be periodically backed up in a secure environment. Moreover, a ...
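For readers unfamiliar with why backup workloads want XOR/XNOR in particular: XOR of a word against its backup flags exactly the bits that changed, and XNOR flags the bits that match. A behavioral sketch of that comparison in software (this illustrates only the logical operation, not the paper's CMOS in-memory circuit):

```python
def xor_xnor(a: int, b: int, width: int = 8):
    """Bitwise XOR (1 where bits differ) and XNOR (1 where bits match)
    of two words of the given bit width."""
    mask = (1 << width) - 1
    x = (a ^ b) & mask
    xn = ~x & mask
    return x, xn

data, backup = 0b1011_0010, 0b1011_0110
diff, same = xor_xnor(data, backup)
print(f"{diff:08b}")  # 00000100 -> a single bit changed since the backup
```

Doing this inside the memory array, in one cycle, avoids shuttling both operands out to a processor, which is the data-movement cost the paper targets.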

Why A DSP Is Indispensable In The New World Of AI


Chips being designed today for the automotive, mobile handset, AI-IoT (artificial intelligence - Internet of things), and other AI applications will be fabricated in a year or two, designed into end products that will hit the market in three or more years, and then have a product lifecycle of at least five years. These chips will be used in systems with a large number and various types of senso...

Energy Usage in Layers Of Computing (SLAC)


A technical paper titled “Energy Estimates Across Layers of Computing: From Devices to Large-Scale Applications in Machine Learning for Natural Language Processing, Scientific Computing, and Cryptocurrency Mining” was published by researchers at SLAC National Accelerator Laboratory and Stanford University. Abstract: "Estimates of energy usage in layers of computing from devices to algorithms have bee...
