Chip Industry Week In Review


By Jesse Allen, Karen Heyman, and Liz Allan

Japan's Rapidus and the University of Tokyo are teaming up with France's Leti to meet Rapidus's previously announced goal of mass-producing 2nm chips by 2027, and chips in the 1nm range in the 2030s. Rapidus was formed in 2022 with the support of eight Japanese companies: Sony, Kioxia, Denso, NEC, NTT, SoftBank, Toyota, and Mitsubishi's banking arm, ... » read more

Center For Deep Learning In Electronics Manufacturing: Bringing Deep Learning To Production For Photomask Manufacturing


The Center for Deep Learning in Electronics Manufacturing (CDLe) was formed as an alliance between D2S, Mycronic, and NuFlare Technology in autumn 2018. Assignees from each alliance partner worked with deep learning (DL) experts under the leadership of Ajay Baranwal, director of CDLe. The CDLe's mission was to 1) turn DL into a core competency inside each of the companies, and 2) do DL projects... » read more

Maximizing Edge Intelligence Requires More Than Computing


By Toshi Nishida, Avik W. Ghosh, Swaminathan Rajaraman, and Mircea Stan

Commercial-off-the-shelf (COTS) components have enabled a commodity market for Wi-Fi-connected appliances, consumer products, infrastructure, manufacturing, vehicles, and wearables. However, the vast majority of connected systems today are deployed at the edge of the network, near the end user or end application, opening... » read more

Transforming Semiconductor Manufacturing: How AI And ML Boost Productivity And Beat The Skill Shortage


In the fast-paced world of semiconductor manufacturing, where innovation and efficiency are essential, there is a serious challenge: a persistent skilled labor shortage. As evidenced by billboards seeking workers along major highways, this shortage is not just a concern but a pressing reality. Semiconductor manufacturers in the United States face a multifaceted problem—tightened... » read more

DRAM Choices Are Suddenly Much More Complicated


Chipmakers are beginning to incorporate multiple types and flavors of DRAM in the same advanced package, setting the stage for increasingly distributed memory but significantly more complex designs. Despite years of predictions that DRAM would be replaced by other types of memory, it remains an essential component in nearly all computing. Rather than fading away, its footprint is increasing,... » read more

Applications Of Large Language Models For Industrial Chip Design (NVIDIA)


A technical paper titled “ChipNeMo: Domain-Adapted LLMs for Chip Design” was published by researchers at NVIDIA. Abstract: "ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-ad... » read more
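One of the ingredients the abstract names, a custom tokenizer trained on domain text, can be illustrated in isolation. The sketch below is a generic example using the Hugging Face tokenizers library on a hypothetical corpus of chip-design files; it is not ChipNeMo's actual pipeline, and the corpus path, vocabulary size, and special tokens are all illustrative assumptions.

```python
# Generic sketch of training a domain tokenizer (not NVIDIA's code).
# Assumes the Hugging Face `tokenizers` package and a hypothetical
# corpus of chip-design text files under ./design_corpus/.
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

files = [str(p) for p in Path("./design_corpus").glob("*.txt")]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=files,
    vocab_size=32000,          # illustrative size, not the paper's setting
    min_frequency=2,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
)

Path("chip_tokenizer").mkdir(exist_ok=True)
tokenizer.save_model("chip_tokenizer")

# With domain training, frequent design idioms ideally map to fewer tokens
# than a general-purpose tokenizer would produce.
print(tokenizer.encode("assign q = d & ~reset;").tokens)
```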

Neural Network Model Quantization On Mobile


In general, quantization is the process of mapping continuous infinite values to a smaller set of discrete finite values. In this blog, we will talk about quantization in the context of neural network (NN) models: the process of reducing the precision of the weights, biases, and activations. Moving from floating-point representations to low-precision fixed intege... » read more
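To make the idea concrete, here is a minimal sketch of symmetric post-training int8 quantization of a single weight tensor in NumPy. The per-tensor scale and the toy values are illustrative assumptions, not taken from the blog; real toolchains typically use finer-grained scales and calibration.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric int8 quantization: map float weights into [-127, 127]."""
    scale = np.abs(w).max() / 127.0        # one scale for the whole tensor (assumption)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs quantization error:", np.abs(w - w_hat).max())
```

The storage win is immediate: int8 weights take a quarter of the space of float32, at the cost of the small reconstruction error printed above.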

SRAM In AI: The Future Of Memory


Experts at the Table — Part 1: Semiconductor Engineering sat down to talk about AI and the latest issues in SRAM with Tony Chan Carusone, CTO at Alphawave Semi; Steve Roddy, chief marketing officer at Quadric; and Jongsin Yun, memory technologist at Siemens EDA. What follows are excerpts of that conversation. Part two of this conversation can be found here, and part three is here. » read more

The Power Of HBM3 Memory For AI Training Hardware


AI training data sets are constantly growing, driving the need for hardware accelerators capable of handling terabyte-scale bandwidth. Among the array of memory technologies available, High Bandwidth Memory (HBM) has emerged as the memory of choice for AI training hardware, with the most recent generation, HBM3, delivering unrivaled memory bandwidth. Let’s take a closer look at this important... » read more
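As a rough sense of scale, per-stack HBM3 bandwidth follows directly from the pin data rate and the 1024-bit interface width. The figures in the sketch below are commonly cited first-generation HBM3 numbers, used here as assumptions rather than taken from the article.

```python
# Back-of-envelope HBM3 bandwidth per stack (figures are assumptions,
# based on commonly cited JEDEC HBM3 specs, not on this article).
pin_rate_gbps = 6.4          # data rate per pin, Gb/s
bus_width_bits = 1024        # interface width per HBM3 stack

bandwidth_gbs = pin_rate_gbps * bus_width_bits / 8    # GB/s per stack
print(f"{bandwidth_gbs:.1f} GB/s per stack")          # -> 819.2 GB/s

stacks = 6                   # hypothetical accelerator with six stacks
print(f"{bandwidth_gbs * stacks / 1000:.2f} TB/s aggregate")
```

Multiplying out even a handful of stacks lands in the multi-terabyte-per-second range, which is where the "terabyte-scale bandwidth" demand of AI training hardware comes from.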

Vision Transformers Change The AI Acceleration Rules


Transformers were first introduced by the team at Google Brain in 2017 in their paper, "Attention is All You Need". Since their introduction, transformers have inspired a flurry of investment and research that has produced some of the most impactful model architectures and AI products to date, including ChatGPT (Chat Generative Pre-trained Transformer). Transformers a... » read more
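The core operation behind every transformer, as defined in that paper, is scaled dot-product attention. Below is a minimal NumPy rendering of Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; the shapes and values are illustrative, and real implementations add multiple heads, masking, and batching.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    per 'Attention is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query tokens, d_k = 8 (toy sizes)
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```

Note the two matrix multiplies and the softmax: unlike convolutions, this workload is dominated by large dense matrix math over all token pairs, which is why transformers change the rules for AI accelerators.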
