Comparing Analog and Digital SRAM In-Memory Computing Architectures (KU Leuven)


A technical paper titled "Benchmarking and modeling of analog and digital SRAM in-memory computing architectures" was published by researchers at KU Leuven. Abstract: "In-memory-computing is emerging as an efficient hardware paradigm for deep neural network accelerators at the edge, enabling to break the memory wall and exploit massive computational parallelism. Two design models have surge... » read more

Optimizing Projected PCM for Analog Computing-In-Memory Inferencing (IBM)


A new technical paper titled "Optimization of Projected Phase Change Memory for Analog In-Memory Computing Inference" was published by researchers at IBM Research. "A systematic study of the electrical properties-including resistance values, memory window, resistance drift, read noise, and their impact on the accuracy of large neural networks of various types and with tens of millions of wei... » read more

Performance Of Analog In-Memory Computing On Imaging Problems


A technical paper titled "Accelerating AI Using Next-Generation Hardware: Possibilities and Challenges With Analog In-Memory Computing" was published by researchers at Lund University and Ericsson Research. Abstract "Future generations of computing systems need to continue increasing processing speed and energy efficiency in order to meet the growing workload requirements under stringent en... » read more

Can Compute-In-Memory Bring New Benefits To Artificial Intelligence Inference?


Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift ... » read more
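
To make "the multiplication operation at the heart of AI model execution" concrete, the sketch below shows the dense-layer matrix-vector multiply that CIM arrays target. In a CIM system the weight matrix stays resident in the memory array and the multiply-accumulates happen where the weights are stored, rather than after shipping them to a separate compute unit. The layer sizes here are arbitrary.

```python
import numpy as np

# The workload CIM accelerates: one multiply-accumulate (MAC) per weight,
# repeated for every layer and every input vector.
def dense_layer(W, x, b):
    return W @ x + b          # rows x cols multiply-accumulates

W = np.random.randn(1024, 1024).astype(np.float32)   # weights held in the CIM array
x = np.random.randn(1024).astype(np.float32)          # input activations broadcast to it
b = np.zeros(1024, dtype=np.float32)

y = dense_layer(W, x, b)
print(f"{W.size:,} MACs for one layer and one input vector")   # ~1M MACs
```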

A Hierarchical And Tractable Mixed-Signal Verification Methodology For First-Generation Analog AI Processors


Artificial intelligence (AI) is now the key driving force behind advances in information technology, big data and the internet of things (IoT). It is a technology that is developing at a rapid pace, particularly when it comes to the field of deep learning. Researchers are continually creating new variants of deep learning that expand the capabilities of machine learning. But building systems th... » read more

Learning The AMS Circuit Representation From Layout Positions (UT Austin/NVIDIA)


A recent technical paper titled "TAG: Learning Circuit Spatial Embedding From Layouts" was published by researchers at UT Austin and NVIDIA. Abstract "Analog and mixed-signal (AMS) circuit designs still rely on human design expertise. Machine learning has been assisting circuit design automation by replacing human experience with artificial intelligence. This paper presents TAG, a new parad... » read more

Co-Design View of Cross-Bar Based Compute-In-Memory


A new review paper titled "Compute in-Memory with Non-Volatile Elements for Neural Networks: A Review from a Co-Design Perspective" was published by researchers at Argonne National Lab, Purdue University, and Indian Institute of Technology Madras. "With an over-arching co-design viewpoint, this review assesses the use of cross-bar based CIM for neural networks, connecting the material proper... » read more

Asynchronously Parallel Optimization Method For Sizing Analog Transistors Using Deep Neural Network Learning


A new technical paper titled "APOSTLE: Asynchronously Parallel Optimization for Sizing Analog Transistors Using DNN Learning" was published by researchers at UT Austin and Analog Devices. Abstract "Analog circuit sizing is a high-cost process in terms of the manual effort invested and the computation time spent. With rapidly developing technology and high market demand, bringing automated s... » read more

Safety, Security, And Reliability Of AI In Autos


Experts at the Table: Semiconductor Engineering sat down to talk about security, aging, and safety in automotive AI systems, with Geoff Tate, CEO of Flex Logix; Veerbhan Kheterpal, CEO of Quadric; Steve Teig, CEO of Perceive; and Kurt Busch, CEO of Syntiant. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon. Part one of this discussion is he... » read more
