Is In-Memory Compute Still Alive?


In-memory computing (IMC) has had a rough go, with the most visible attempt at commercialization falling short. And while some companies have pivoted to digital and others have outright abandoned the technology, developers are still trying to make analog IMC a success. There is disagreement regarding the benefits of IMC (also called compute-in-memory, or CIM). Some say it’s all about reduc... » read more
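
For readers new to the approach, here is a minimal Python sketch of the analog multiply-accumulate at the heart of IMC: weights stored as cell conductances, inputs applied as word-line voltages, and products summed on the bit lines. All values and the simple noise term are illustrative, not drawn from any particular device or vendor.

```python
import numpy as np

# Minimal model of the analog IMC idea: weights sit in the array as cell
# conductances, inputs arrive as word-line voltages, and bit-line currents
# sum the products in place (Ohm's law + Kirchhoff's current law), so the
# weight matrix never crosses a memory bus.  Values are illustrative only.
G = np.array([[ 0.2, -0.5,  0.1],
              [ 0.7,  0.3, -0.4],
              [-0.1,  0.6,  0.2],
              [ 0.4, -0.2,  0.5]])      # "conductances" of a 4x3 crossbar

def analog_mac(v, G, noise_sigma=0.01):
    """One analog multiply-accumulate per bit line, with a crude noise term
    standing in for device and read-out non-idealities."""
    currents = v @ G
    return currents + np.random.default_rng(0).normal(scale=noise_sigma,
                                                      size=currents.shape)

v = np.array([0.9, -0.3, 0.5, 0.1])     # input activations as voltages
print(analog_mac(v, G))                  # computed where the weights are stored
print(v @ G)                             # exact digital reference
```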

Energy Analysis: 2D and 3D Architectures with Systolic Arrays and CIM (Cornell)


A new technical paper titled "Energy-/Carbon-Aware Evaluation and Optimization of 3D IC Architecture with Digital Compute-in-Memory Designs" was published by researchers at Cornell University. "In this paper, we investigate digital CIM (DCIM) macros and various 3D architectures to find the opportunity of increased energy efficiency compared to 2D structures. Moreover, we also investigated th... » read more

In Memory, At Memory, Near Memory: What Would Goldilocks Choose?


The children’s fairy tale ‘Goldilocks and the Three Bears’ describes the adventures of Goldi as she chooses among three options for bedding, chairs, and bowls of porridge. One meal is “too hot,” another is “too cold,” and finally one is “just right.” If Goldi were faced with making architecture choices for AI processing in modern edge/device SoCs, she would also face... » read more

Hardware Acceleration Approach for KAN Via Algorithm-Hardware Co-Design


A new technical paper titled "Hardware Acceleration of Kolmogorov-Arnold Network (KAN) for Lightweight Edge Inference" was published by researchers at Georgia Tech, TSMC and National Tsing Hua University. Abstract "Recently, a novel model named Kolmogorov-Arnold Networks (KAN) has been proposed with the potential to achieve the functionality of traditional deep neural networks (DNNs) using ... » read more

Mixed Signal In-Memory Computing With Massively Parallel Gradient Calculations of High-Degree Polynomials


A new technical paper titled "Computing high-degree polynomial gradients in memory" was published by researchers at UCSB, HP Labs, Forschungszentrum Juelich GmbH, and RWTH Aachen University. Abstract "Specialized function gradient computing hardware could greatly improve the performance of state-of-the-art optimization algorithms. Prior work on such hardware, performed in the context of Isi... » read more

Intel Vs. Samsung Vs. TSMC


The three leading-edge foundries — Intel, Samsung, and TSMC — have started filling in some key pieces in their roadmaps, adding aggressive delivery dates for future generations of chip technology and setting the stage for significant improvements in performance with faster delivery time for custom designs. Unlike in the past, when a single industry roadmap dictated how to get to the next... » read more

Ultra-Low Power CiM Design For Practical Edge Scenarios


A technical paper titled “Low Power and Temperature-Resilient Compute-In-Memory Based on Subthreshold-FeFET” was published by researchers at Zhejiang University, University of Notre Dame, Technical University of Munich, Munich Institute of Robotics and Machine Intelligence, and the Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province. Abstract: "Compute... » read more

3D Integration Supports CIM Versatility And Accuracy


Compute-in-memory (CIM) is gaining attention due to its efficiency in limiting the movement of massive volumes of data, but it's not perfect. CIM modules can help reduce the cost of computation for AI workloads, and they can learn from the highly efficient approaches taken by biological brains. When it comes to versatility, scalability, and accuracy, however, significant tradeoffs are requir... » read more
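
One of those accuracy tradeoffs comes from reading analog bit-line results through a finite-resolution ADC. The sketch below quantizes an exact dot product at a few illustrative ADC resolutions to show the effect; the matrix sizes and bit widths are arbitrary.

```python
import numpy as np

# Minimal sketch of one accuracy tradeoff in analog CIM: the analog bit-line
# result must pass through a finite-resolution ADC, which quantizes the exact
# dot product.  Resolution values here are illustrative, not from any product.
rng = np.random.default_rng(1)
W = rng.normal(size=(256, 64))          # weights stored in the array
x = rng.normal(size=256)                # input activations

exact = x @ W                           # ideal (digital) result

def adc_quantize(y, bits):
    full_scale = np.abs(y).max()
    step = 2 * full_scale / (2 ** bits)
    return np.round(y / step) * step

for bits in (4, 6, 8):
    q = adc_quantize(exact, bits)
    rel_err = np.linalg.norm(q - exact) / np.linalg.norm(exact)
    print(f"{bits}-bit ADC read-out: relative error {rel_err:.3%}")
```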

CiM Integration For ML Inference Acceleration


A technical paper titled “WWW: What, When, Where to Compute-in-Memory” was published by researchers at Purdue University. Abstract: "Compute-in-memory (CiM) has emerged as a compelling solution to alleviate high data movement costs in von Neumann machines. CiM can perform massively parallel general matrix multiplication (GEMM) operations in memory, the dominant computation in Machine Lear... » read more
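
The reason GEMM maps so naturally onto CiM is the weight-stationary pattern: weight tiles stay resident in the array, activations stream through, and partial sums are accumulated digitally. The sketch below emulates that tiling in NumPy; the tile dimensions are an arbitrary stand-in for a real macro size, not a figure from the Purdue paper.

```python
import numpy as np

# Emulate a tiled, weight-stationary GEMM: the weight matrix is split into
# array-sized tiles that stay resident in a CiM macro, activations are
# streamed through each tile, and the per-tile outputs are partial sums
# that a digital accumulator combines.
TILE_ROWS, TILE_COLS = 128, 64                       # hypothetical CiM macro size

def cim_gemm(x, W):
    """y = x @ W, computed tile by tile as a CiM array would."""
    rows, cols = W.shape
    y = np.zeros(cols)
    for r in range(0, rows, TILE_ROWS):              # tiles along the input dim
        for c in range(0, cols, TILE_COLS):          # tiles along the output dim
            tile = W[r:r + TILE_ROWS, c:c + TILE_COLS]       # weights resident in one macro
            y[c:c + TILE_COLS] += x[r:r + TILE_ROWS] @ tile  # in-array MAC + digital accumulate
    return y

rng = np.random.default_rng(2)
W = rng.normal(size=(512, 256))
x = rng.normal(size=512)
assert np.allclose(cim_gemm(x, W), x @ W)            # matches a plain GEMM
```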

Modeling Compute In Memory With Biological Efficiency


The growing popularity of generative AI, which uses natural language to help users make sense of unstructured data, is forcing sweeping changes in how compute resources are designed and deployed. In a panel discussion on artificial intelligence at last week’s IEEE Electron Device Meeting, IBM’s Nicole Saulnier described it as a major breakthrough that should allow AI tools to assist huma... » read more
