Dealing With AI/ML Uncertainty


Despite their widespread popularity, large language models (LLMs) have several well-known design issues, the most notorious being hallucinations, in which an LLM tries to pass off its statistics-based concoctions as real-world facts. Hallucinations are examples of a fundamental, underlying issue with LLMs. The inner workings of LLMs, as well as other deep neural nets (DNNs), are only partly kno... » read more
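As a rough illustration of what quantifying that uncertainty can look like in practice (this is not a method described in the excerpt), the sketch below uses Monte Carlo dropout in PyTorch: dropout stays active at inference time, and disagreement across repeated forward passes serves as a crude confidence signal. The model, layer sizes, and threshold are illustrative assumptions.

# Minimal sketch: estimating predictive uncertainty with Monte Carlo dropout.
# The model, layer sizes, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, n_in=16, n_hidden=64, n_classes=4, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout enabled at inference and average the sampled softmax
    outputs; the spread across samples is a rough uncertainty estimate."""
    model.train()  # leaves dropout active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
mean, std = mc_dropout_predict(model, torch.randn(1, 16))
if std.max() > 0.1:  # arbitrary threshold for the sketch
    print("low confidence -- defer to a fallback or a human reviewer")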

Predicting And Preventing Process Drift


Increasingly tight tolerances and rigorous demands for quality are forcing chipmakers and equipment manufacturers to ferret out minor process variances, which can create significant anomalies in device behavior and render a device non-functional. In the past, many of these variances were ignored. But for a growing number of applications, that's no longer possible. Even minor fluctuations in ... » read more
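One simple, generic way to catch this kind of drift in a stream of sensor or metrology readings is an EWMA control chart. The sketch below is a minimal illustration, not the approach discussed in the article; the smoothing factor, baseline window, and control limits are assumed values.

# Minimal sketch: flagging process drift with an EWMA control chart.
import numpy as np

def ewma_drift_flags(measurements, lam=0.2, k=3.0):
    """Mark points where the EWMA of a process parameter drifts outside
    +/- k sigma control limits derived from an early baseline."""
    x = np.asarray(measurements, dtype=float)
    mu, sigma = x[:20].mean(), x[:20].std()   # baseline from early samples
    z = mu
    flags = []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        # variance of the EWMA statistic after i+1 observations
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (i + 1))) * sigma**2
        flags.append(abs(z - mu) > k * np.sqrt(var))
    return np.array(flags)

# Simulated sensor trace: stable, then a slow drift after sample 60.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(10.0, 0.05, 60),
                        10.0 + 0.002 * np.arange(80) + rng.normal(0, 0.05, 80)])
print("first drift alarm at sample:", int(np.argmax(ewma_drift_flags(trace))))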

Embrace The New!


The ResNet family of machine learning algorithms was introduced to the AI world in 2015. A slew of variations rapidly followed, pushing the accuracy of ResNets at the time close to the 80% threshold (78.57% top-1 accuracy for ResNet-152 on ImageNet). This state-of-the-art performance, coupled with the rather simple operator structure that was readily amenable to hardware ac... » read more
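For readers unfamiliar with the "simple operator structure" being referenced, the sketch below shows a basic ResNet residual block in PyTorch: two convolution/batch-norm stages plus an identity skip connection. Channel counts and input sizes are illustrative only.

# Minimal sketch of a ResNet residual block: conv, batch norm, ReLU, and a
# skip connection. Sizes are illustrative, not taken from the article.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Two conv/BN stages, then add the input back in (the "shortcut").
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

block = ResidualBlock()
y = block(torch.randn(1, 64, 32, 32))   # same shape in and out
print(y.shape)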

AI/ML Challenges In Test and Metrology


The integration of artificial intelligence and machine learning (AI/ML) into semiconductor test and metrology is redefining the landscape for chip fabrication, a shift that will be essential at advanced nodes and in increasingly dense advanced packages. Fabs today are inundated with vast amounts of data collected across multiple manufacturing processes, and AI/ML solutions are viewed as essential for... » read more
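As one illustrative example of the kind of AI/ML solution involved (not taken from the article), the sketch below screens synthetic parametric test results for outliers with scikit-learn's IsolationForest. The test names and data are assumptions.

# Minimal sketch: unsupervised outlier screening of parametric test results.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Columns stand in for parametric tests, e.g. Idd, Vth, ring-oscillator freq.
good = rng.normal([5.0, 0.45, 1.2], [0.05, 0.01, 0.02], size=(990, 3))
marginal = rng.normal([5.3, 0.48, 1.1], [0.05, 0.01, 0.02], size=(10, 3))
measurements = np.vstack([good, marginal])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(measurements)     # -1 flags suspected outliers
print("flagged die:", int((labels == -1).sum()), "of", len(measurements))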

A Hypermultiplexed Integrated Tensor Optical Processor (USC, MIT et al.)


A technical paper titled “Hypermultiplexed Integrated Tensor Optical Processor” was published by researchers at the University of Southern California, Massachusetts Institute of Technology (MIT), City University of Hong Kong, and NTT Research. Abstract: "The escalating data volume and complexity resulting from the rapid expansion of artificial intelligence (AI), internet of things (IoT) a... » read more

Tackling Variability With AI-based Process Control


Jon Herlocker, co-founder and CEO of Tignis, sat down with Semiconductor Engineering to talk about how AI in advanced process control reduces equipment variability and corrects for process drift. What follows are excerpts of that conversation. SE: How is AI being used in semiconductor manufacturing and what will the impact be? Herlocker: AI is going to create a completely different factor... » read more

Pressure Builds On Failure Analysis Labs


Failure analysis labs are becoming more fab-like, offering higher accuracy in locating failures and accelerating time-to-market of new devices. These labs historically have been used for deconstructing devices that failed in the field and were returned under return material authorizations (RMAs), but their role is expanding. They are now becoming instrumental in achieving first silicon and ramping yie... » read more

2023: A Good Year For Semiconductors


Looking back, 2023 has had more than its fair share of surprises, but who were the winners and losers? The good news is that by the end of the year, almost everyone was happy. That is not how we exited 2022, when there was overcapacity, inventories had built up in many parts of the industry, and few sectors — apart from data centers — were seeing much growth. The supposed new leaders we... » read more

Fabs Begin Ramping Up Machine Learning


Fabs are beginning to deploy machine learning models to drill deep into complex processes, leveraging both vast compute power and significant advances in ML. All of this is necessary as dimensions shrink and complexity increases with new materials, structures, processes, and packaging options, and as demand for reliability grows. Building robust models requires training the algorithms... » read more
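A toy example of what training such a model can look like (purely illustrative, not the fabs' actual approach) is a virtual-metrology regression: predicting a measured result from upstream tool sensor readings. The synthetic data and model choice below are assumptions.

# Minimal virtual-metrology sketch: predict a measured film thickness from
# upstream tool sensor readings. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n_wafers = 500
sensors = rng.normal(size=(n_wafers, 8))          # e.g. temps, pressures, RF power
thickness = (120.0 + 3.0 * sensors[:, 0]          # synthetic ground truth
             - 1.5 * sensors[:, 3] + rng.normal(0, 0.5, n_wafers))

X_train, X_test, y_train, y_test = train_test_split(
    sensors, thickness, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE (nm):", round(mean_absolute_error(y_test, model.predict(X_test)), 3))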

Data Formats For Inference On The Edge


AI/ML training traditionally has been performed using floating point data formats, primarily because that is what was available. But this usually isn't a viable option for inference on the edge, where more compact data formats are needed to reduce area and power. Compact data formats use less space, which is important in edge devices, but the bigger concern is the power needed to move around... » read more
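To make the space and data-movement argument concrete, the sketch below quantizes a float32 weight tensor to int8 with a single per-tensor scale, cutting its size by 4x at the cost of some rounding error. This is a generic illustration, not a specific format discussed in the excerpt.

# Minimal sketch: symmetric per-tensor int8 quantization of float32 weights.
import numpy as np

def quantize_int8(weights):
    """Map float32 values into int8 with a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(0, 0.1, size=4096).astype(np.float32)
q, scale = quantize_int8(w)
print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)   # 4x smaller
print("max abs error:", float(np.abs(w - dequantize(q, scale)).max()))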
