Using Deep Learning ADC For Defect Classification For Automatic Defect Inspection


In traditional semiconductor packaging, manual defect review after automated optical inspection (AOI) is an arduous task for operators and engineers, involving review of both good and bad die. It is hard to avoid human errors when reviewing millions of defect images every day, and as a result, underkill or overkill of die can occur. Automatic defect classification (ADC) can reduce the number of... » read more
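
To make the idea concrete, here is a minimal sketch, assuming PyTorch and a hypothetical four-class defect taxonomy (scratch, particle, discoloration, good), of the kind of small CNN classifier a deep learning ADC system could apply to AOI defect image crops; it is an illustration, not the actual model behind any ADC product.

```python
# Minimal sketch (not a vendor's actual ADC pipeline): a small CNN that
# classifies AOI defect crops into hypothetical classes such as
# "scratch", "particle", "discoloration", and "good".
import torch
import torch.nn as nn

NUM_CLASSES = 4  # hypothetical defect taxonomy

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One dummy batch of 64x64 RGB defect crops; real data would come from AOI.
images = torch.randn(8, 3, 64, 64)
logits = DefectClassifier()(images)
predicted_class = logits.argmax(dim=1)  # dispositions that would otherwise need manual review
print(predicted_class)
```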

Using Automatic Defect Classification To Reduce The Escape Rate Of Defects


Automated optical inspection (AOI) is a cornerstone in semiconductor manufacturing, assembly and testing facilities, and as such, it plays a crucial role in yield management and process control. Traditionally, AOI generates millions of defect images, all of which are manually reviewed by operators. This process is not only time-consuming but also error-prone due to human involvement and fatigue, whi... » read more
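
One way ADC systems commonly balance escapes (underkill) against overkill is to auto-disposition only high-confidence classifications and route the rest back to an operator. The threshold policy below is a hedged sketch of that idea, with made-up class names and scores, not the specific workflow described in the article.

```python
# Hedged sketch of a common ADC disposition policy (assumed, not from the
# article): accept high-confidence classifications automatically and send
# low-confidence images to a human reviewer.
def disposition(probabilities, threshold=0.9):
    """probabilities: dict of class -> softmax score for one defect image."""
    best_class = max(probabilities, key=probabilities.get)
    if probabilities[best_class] >= threshold:
        return best_class, "auto"        # classified without operator review
    return best_class, "manual_review"   # ambiguous image goes to an operator

print(disposition({"scratch": 0.97, "particle": 0.02, "good": 0.01}))
print(disposition({"scratch": 0.55, "particle": 0.40, "good": 0.05}))
```

Lowering the threshold reduces the manual-review load but raises the risk that a misclassified defect escapes; raising it does the opposite.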

Design Optimization Of Split-Gate NOR Flash For Compute-In-Memory


A technical paper titled “Design Strategies of 40 nm Split-Gate NOR Flash Memory Device for Low-Power Compute-in-Memory Applications” was published by researchers at Seoul National University of Science and Technology and University of Seoul. Abstract: "The existing von Neumann architecture for artificial intelligence (AI) computations suffers from excessive power consumption and memo... » read more
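
For readers unfamiliar with compute-in-memory, the sketch below (illustrative only, with made-up conductance values rather than the paper's device data) shows the basic operation such a flash array performs: weights stored as cell conductances, inputs applied as word-line voltages, and bit-line currents summing into a multiply-accumulate by Ohm's and Kirchhoff's laws.

```python
# Illustrative sketch of the compute-in-memory idea the paper targets.
# Weights are stored as cell conductances G, inputs are applied as voltages V,
# and each bit line sums currents I = V @ G, which is the multiply-accumulate
# a von Neumann machine would otherwise compute digitally.
import numpy as np

V = np.array([0.2, 0.0, 0.3, 0.1])    # input activations as voltages (V)
G = np.array([[1e-6, 5e-6],           # per-cell conductances (S); values are made up
              [2e-6, 1e-6],
              [4e-6, 3e-6],
              [1e-6, 2e-6]])

bitline_currents = V @ G              # Kirchhoff's current law performs the MAC
print(bitline_currents)               # analog result, later digitized by analog-to-digital converters
```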

(Vision) Transformers: Rise Of The Chimera


It’s 2023 and transformers are having a moment. No, I’m not talking about the latest installment of the Transformers movie franchise, "Transformers: Rise of the Beasts"; I’m talking about the deep learning model architecture class, transformers, that is fueling anticipation, excitement, fear, and investment in AI. Transformers are not so new in the world of AI anymore; they were first ... » read more

Nightmare Fuel: The Hazards Of ML Hardware Accelerators


A major design challenge facing numerous silicon design teams in 2023 is building the right amount of machine learning (ML) performance capability into today’s silicon tape-outs in anticipation of what state-of-the-art (SOTA) ML inference models will look like in 2026 and beyond, when that silicon will be used in devices in volume production. Given the continuing rapid rate of change in mac... » read more

Achieving Greater Accuracy In Real-Time Vision Processing With Transformers


Transformers, first proposed in a Google research paper in 2017, were initially designed for natural language processing (NLP) tasks. Recently, researchers applied transformers to vision applications and got interesting results. While vision tasks had previously been dominated by convolutional neural networks (CNNs), transformers have proven surprisingly adaptable to vision tasks like image cl... » read more
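
Below is a minimal sketch of the core vision-transformer idea, assuming PyTorch and standard 16x16 patches on a 224x224 image (details assumed, not taken from the article): the image is split into patch tokens, and scaled dot-product self-attention relates every patch to every other patch, rather than the local receptive fields a CNN uses.

```python
# Toy single-head attention over image patches; learned projections and
# positional embeddings are omitted for brevity.
import torch
import torch.nn.functional as F

image = torch.randn(1, 3, 224, 224)                    # one RGB image
patches = image.unfold(2, 16, 16).unfold(3, 16, 16)    # 14x14 grid of 16x16 patches
tokens = patches.contiguous().view(1, 3, 14 * 14, 16 * 16)
tokens = tokens.permute(0, 2, 1, 3).reshape(1, 196, 3 * 256)   # (batch, 196 tokens, 768)

q = k = v = tokens                                     # untrained, single head
attn = F.softmax(q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5), dim=-1)
out = attn @ v                                         # every patch attends to all 196 patches
print(out.shape)                                       # torch.Size([1, 196, 768])
```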

ML Architecture for Solving the Inverse Problem for Matter Wave Lithography: LACENET


A recent technical paper titled "Realistic mask generation for matter-wave lithography via machine learning" was published by researchers at the University of Bergen (Norway). Abstract: "Fast production of large area patterns with nanometre resolution is crucial for the established semiconductor industry and for enabling industrial-scale production of next-generation quantum devices. Metasta... » read more

SOT-MRAM-based CIM architecture for a CNN model


A new research paper, "In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM," was published by researchers from National Taiwan University, Feng Chia University, and Chung Yuan Christian University. Abstract: "Recently, numerous studies have investigated computing in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energ... » read more
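
As a hedged illustration of why a crossbar suits a CNN (this is not the paper's circuit), a convolution can be lowered to a vector-matrix product via im2col; that product is exactly what a SOT-MRAM array computes in place, so activations no longer shuttle between memory and a separate MAC unit.

```python
# Hedged illustration: lower a 3x3 convolution to the matrix multiply a
# crossbar would perform in place.
import numpy as np

x = np.random.rand(6, 6)            # one input feature map
w = np.random.rand(3, 3)            # one 3x3 convolution kernel

# Gather every 3x3 input patch into the rows of a matrix (im2col).
patches = np.array([x[i:i+3, j:j+3].ravel()
                    for i in range(4) for j in range(4)])   # (16, 9)

# In CIM, w.ravel() would be programmed as a column of cell conductances and
# the patch rows applied as input voltages; here we just do the math digitally.
out = patches @ w.ravel()           # 16 output activations
print(out.reshape(4, 4))
```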

New Neural Processors Address Emerging Neural Networks


It’s been ten years since AlexNet, a deep learning convolutional neural network (CNN) model running on GPUs, displaced more traditional vision processing algorithms to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet and its successors provided significant improvements in object classification accuracy at the cost of intense computational complexity and large da... » read more
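
A back-of-the-envelope calculation (using standard AlexNet layer dimensions, not figures from the article) shows the scale of that computational cost: the first convolution layer alone takes roughly 105 million multiply-accumulates per image.

```python
# Multiply-accumulate (MAC) count for AlexNet's first convolution layer:
# 96 filters of 11x11x3 applied over a 55x55 output grid (stride 4).
out_h = out_w = 55
filters = 96
kernel_macs = 11 * 11 * 3
layer_macs = out_h * out_w * filters * kernel_macs
print(f"{layer_macs:,} MACs")   # ~105 million MACs, for one layer of one image
```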

Artificial intelligence deep learning for 3D IC reliability prediction


New research from National Yang Ming Chiao Tung University, National Center for High-Performance Computing (Taiwan), Tunghai University, MA-Tek Inc., and UCLA. Abstract: "Three-dimensional integrated circuit (3D IC) technologies have been receiving much attention recently due to the near-ending of Moore’s law of minimization in 2D IC. However, the reliability of 3D IC, which is greatly infl... » read more
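
As a rough sketch of what deep learning for reliability prediction can look like in code (hypothetical feature names and a toy, untrained model, not the authors' network), a small regressor maps package and process parameters to a predicted reliability metric such as warpage:

```python
# Hedged sketch with invented inputs: map hypothetical 3D IC design/process
# parameters to a predicted reliability metric, standing in for the kind of
# surrogate model trained in place of full finite-element simulation.
import torch
import torch.nn as nn

# Hypothetical features: [TSV pitch (um), bump diameter (um), underfill CTE (ppm/K), stack height (um)]
features = torch.tensor([[40.0, 25.0, 30.0, 500.0],
                         [50.0, 20.0, 28.0, 300.0]])

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 1),               # predicted warpage (um); untrained here
)
print(model(features))
```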
