Dealing With AI/ML Uncertainty


Despite their widespread popularity, large language models (LLMs) have several well-known design issues, the most notorious being hallucinations, in which an LLM tries to pass off its statistics-based concoctions as real-world facts. Hallucinations are examples of a more fundamental underlying issue with LLMs. The inner workings of LLMs, as well as other deep neural nets (DNNs), are only partly kno... » read more

Research Bits: April 23


Probabilistic computer prototype
Researchers at Tohoku University and the University of California, Santa Barbara, created a prototype of a heterogeneous probabilistic computer that combines a CMOS circuit with a limited number of stochastic nanomagnets. It aims to improve the execution of probabilistic algorithms used to solve problems where uncertainty is inherent or where an exact solution... » read more
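The stochastic-nanomagnet building block in such machines is often modeled in software as a probabilistic bit (p-bit): a unit whose output fluctuates between +1 and -1 with a bias set by its input. A minimal sketch of that behavior follows; the update rule (output is +1 with probability (1 + tanh(I))/2) is the standard p-bit model from the literature, not a detail taken from this article.

```python
import math
import random

def p_bit(input_current: float, rng=random.random) -> int:
    """One update of a probabilistic bit.

    Returns +1 with probability (1 + tanh(I)) / 2, else -1.
    Zero input gives a fair coin; large |I| pins the output.
    """
    r = 2.0 * rng() - 1.0  # uniform in [-1, 1]
    return 1 if math.tanh(input_current) > r else -1

# Unbiased p-bit: the sample mean should hover near zero.
samples = [p_bit(0.0) for _ in range(10_000)]
mean = sum(samples) / len(samples)
```

A network of such units, with each input current computed from the states of its neighbors, is what lets probabilistic algorithms (e.g., sampling-based optimization) run natively on the hardware.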

Predicting And Preventing Process Drift


Increasingly tight tolerances and rigorous demands for quality are forcing chipmakers and equipment manufacturers to ferret out minor process variances, which can create significant anomalies in device behavior and render a device non-functional. In the past, many of these variances were ignored. But for a growing number of applications, that's no longer possible. Even minor fluctuations in ... » read more

Advanced Packaging Design For Heterogeneous Integration


As device scaling slows down, a key system functional integration technology is emerging: heterogeneous integration (HI). It leverages advanced packaging technology to achieve higher functional density and lower cost per function. With the continuous development of major semiconductor applications such as AI HPC, edge AI and autonomous electric vehicles, traditional chips are transforming i... » read more

How AI 2.0 Will Shape The Memory Landscape


AI is such a big part of our lives that we don’t even think about it as “AI”; it’s simply normal life these days. If you’ve asked your home assistant for the weather, used a search engine, or been recommended something to watch today, then that’s all been AI discreetly at work. While these AI-enabled applications represent notable advancements in incorporating intelligence into syst... » read more

MIPI In Next Generation Of AI IoT Devices At The Edge


The history of data processing begins in the 1960s with centralized on-site mainframes that later evolved into distributed client servers. At the beginning of this century, centralized cloud computing became attractive and began to gain momentum, becoming one of the most popular computing tools today. In recent years, however, we have seen an increase in the demand for processing at the edge or ... » read more

Hybrid Methodology To Extract Kinetic And Magnetic Inductances For Superconductor Technologies


Integrated circuits (ICs) using superconductors have emerged as the technology of choice for artificial intelligence (AI), data centers, and cloud computing. However, such an innovative technology requires equally innovative physical verification solutions to ensure that these superconductor ICs deliver the performance and reliability they promise. We introduce a hybrid methodology to extra... » read more

New Strategies For Interpreting Data Variability


Every measurement counts at the nanoscopic scale of modern semiconductor processes, but with each new process node the number of measurements and the need for accuracy escalate dramatically. Petabytes of new data are being generated and used in every aspect of the manufacturing process for informed decision-making, process optimization, and the continuous pursuit of quality and yield. Most f... » read more

Paradigms Of Large Language Model Applications In Functional Verification


This paper presents a comprehensive literature review for applying large language models (LLMs) in multiple aspects of functional verification. Despite the promising advancements offered by this new technology, it is essential to be aware of the inherent limitations of LLMs, especially hallucinations, which may lead to incorrect predictions. To ensure the quality of LLM outputs, four safeguarding p... » read more

Cache Coherency In Heterogeneous Systems


Until recently, coherency was something normally associated with DRAM. But as chip designs become increasingly heterogeneous, incorporating more and different types of compute elements, it becomes harder to maintain data coherency without taking a significant hit on performance and power. The basic problem is that not all compute elements fetch and share data at the same speed, and syst... » read more
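The speed mismatch described above is what a coherence protocol manages: each cached copy of a line carries a state, and a read or write by one compute element forces state transitions in all the others. A toy sketch of MESI-style transitions follows; the four states and their rules are the textbook protocol, while the function names and the flat list-of-states representation are purely illustrative.

```python
# Toy MESI state machine for one cache line shared by several caches.
# States: "M" (Modified), "E" (Exclusive), "S" (Shared), "I" (Invalid).

def write(states: list[str], writer: int) -> list[str]:
    """Writer gains Modified ownership; every other copy is invalidated."""
    return ["M" if i == writer else "I" for i in range(len(states))]

def read(states: list[str], reader: int) -> list[str]:
    """Reader obtains the line; any other valid copy downgrades to Shared."""
    new = []
    for i, s in enumerate(states):
        if i == reader:
            # Exclusive only if no other cache holds a valid copy.
            others_valid = any(t != "I" for j, t in enumerate(states) if j != i)
            new.append("S" if others_valid else "E")
        else:
            new.append("S" if s in ("M", "E", "S") else "I")
    return new

line = ["I", "I", "I"]   # three caches, line invalid everywhere
line = read(line, 0)     # cache 0 reads:  ["E", "I", "I"]
line = read(line, 1)     # cache 1 reads:  ["S", "S", "I"]
line = write(line, 2)    # cache 2 writes: ["I", "I", "M"]
```

The performance and power cost mentioned in the article comes from the invalidation and downgrade traffic these transitions generate across the interconnect, which grows as more, and slower, compute elements share the same lines.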
