Increasing AI Energy Efficiency With Compute In Memory


Skyrocketing AI compute workloads and fixed power budgets are forcing chip and system architects to take a much harder look at compute in memory (CIM), which until recently was considered little more than a science project. CIM solves two problems. First, it takes more energy to move data back and forth between memory and processor than to actually process it. And second, there is so much da... » read more
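The claim that moving data costs more energy than computing on it can be illustrated with a back-of-the-envelope estimate. The sketch below is a toy model, not a measurement: the per-operation energy figures are order-of-magnitude values widely cited from Horowitz's ISSCC 2014 survey, and the workload (streaming a weight matrix from off-chip DRAM for one multiply-accumulate per weight) is an assumed simplification.

```python
# Toy estimate of data-movement vs. compute energy for a matrix-vector
# multiply. Per-operation energies are order-of-magnitude assumptions
# (Horowitz, ISSCC 2014), not measured values.

DRAM_READ_PJ_PER_32BIT = 640.0   # off-chip DRAM access, ~pJ per 32-bit word
MAC_PJ_32BIT = 4.6               # 32-bit multiply (~3.7 pJ) + add (~0.9 pJ)

def energy_breakdown(rows: int, cols: int) -> dict:
    """Energy (microjoules) to stream a rows x cols weight matrix from
    DRAM and perform one MAC per weight, with no on-chip reuse."""
    n_weights = rows * cols
    movement_uj = n_weights * DRAM_READ_PJ_PER_32BIT * 1e-6
    compute_uj = n_weights * MAC_PJ_32BIT * 1e-6
    return {"movement_uJ": movement_uj,
            "compute_uJ": compute_uj,
            "ratio": movement_uj / compute_uj}

if __name__ == "__main__":
    b = energy_breakdown(4096, 4096)
    print(f"movement: {b['movement_uJ']:.0f} uJ, "
          f"compute: {b['compute_uJ']:.0f} uJ, "
          f"ratio: {b['ratio']:.0f}x")
```

Under these assumptions the DRAM traffic dominates by roughly two orders of magnitude, which is the gap CIM tries to close by performing the MACs inside the memory array instead of shuttling weights across the chip boundary.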

Energy Usage in Layers Of Computing (SLAC)


A technical paper titled “Energy Estimates Across Layers of Computing: From Devices to Large-Scale Applications in Machine Learning for Natural Language Processing, Scientific Computing, and Cryptocurrency Mining” was published by researchers at SLAC National Laboratory and Stanford University. Abstract: "Estimates of energy usage in layers of computing from devices to algorithms have bee... » read more

A Hierarchical Instruction Cache Tailored To Ultra-Low-Power Tightly-Coupled Processor Clusters


A technical paper titled “Scalable Hierarchical Instruction Cache for Ultra-Low-Power Processors Clusters” was published by researchers at University of Bologna, ETH Zurich, and GreenWaves Technologies. Abstract: "High Performance and Energy Efficiency are critical requirements for Internet of Things (IoT) end-nodes. Exploiting tightly-coupled clusters of programmable processors (CMPs) ha... » read more

Impact of Semiconductor Optical Amplifiers On Performance & Power Consumption in Data Centers (UCSB)


A new technical paper titled "Integrated SOAs enable energy-efficient intra-data center coherent links" was published by researchers at UC Santa Barbara. "In this work, we analyze the impact of integrated semiconductor optical amplifiers (SOAs) on link performance and power consumption, and describe the optimal design spaces for low-cost and energy-efficient coherent links. Placing SOAs afte... » read more

Digital Neuromorphic Processor: Algorithm-HW Co-design (imec / KU Leuven)


A technical paper titled "Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design" was published by researchers at imec and KU Leuven. "In this work, we open the black box of the digital neuromorphic processor for algorithm designers by presenting the neuron processing instruction set and detailed energy consumption of the SENeCA neuromorphic architect... » read more

Using More Germanium In Chips for Energy Efficiency & Achievable Clock Frequencies


A new technical paper titled "Composition Dependent Electrical Transport in Si1−xGex Nanosheets with Monolithic Single-Elementary Al Contacts" was published by researchers at TU Wien (Vienna University of Technology), Johannes Kepler University, CEA-LETI, and Swiss Federal Laboratories for Materials Science and Technology. Find the technical paper here. Published September 2022. Abstrac... » read more

Energy of Computing As A Key Design Aspect (SLAC/Stanford, MIT)


A technical paper titled "Trends in Energy Estimates for Computing in AI/Machine Learning Accelerators, Supercomputers, and Compute-Intensive Applications" was published by researchers at SLAC/Stanford University and MIT. Abstract: "We examine the computational energy requirements of different systems driven by the geometrical scaling law, and increasing use of Artificial Intelligence or Ma... » read more

How Climate Change Affects Data Centers


Data centers are hot, and they may get even hotter. As climate change impacts temperatures around the world, designers are changing the computing hubs that are tied to nearly every aspect of modern life to make them more efficient, more customized, and potentially more disaggregated. These shifts are taking on new urgency as the tech industry grapples with months of sweltering temperatures o... » read more

Efficient Neuromorphic AI Chip: “NeuroRRAM”


A new technical paper titled "A compute-in-memory chip based on resistive random-access memory" was published by an international team of researchers at Stanford, UCSD, University of Pittsburgh, University of Notre Dame, and Tsinghua University. The paper's abstract states "by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present ... » read more

Techniques For Improving Energy Efficiency of Training/Inference for NLP Applications, Including Power Capping & Energy-Aware Scheduling


This new technical paper titled "Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models" is from researchers at MIT and Northeastern University. Abstract: "The energy requirements of current natural language processing models continue to grow at a rapid, unsustainable pace. Recent works highlighting this problem conclude there is an urgent need ... » read more
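Power capping trades a modest increase in training time for a larger cut in energy draw. The sketch below illustrates that tradeoff with a hypothetical slowdown model; the cap values, baseline wattage, and the linear runtime penalty are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch of the power-capping tradeoff: lowering a GPU's power
# limit stretches runtime, but total energy (power x time) can still
# fall if the slowdown is sublinear in the cap reduction. The slowdown
# model here is an illustrative assumption, not measured data.

def training_cost(cap_watts: float, base_watts: float = 250.0,
                  base_hours: float = 10.0) -> tuple:
    """Return (hours, kWh) for a training run under a power cap.
    Assumes runtime grows by half the fractional cap reduction."""
    slowdown = 1.0 + 0.5 * max(0.0, (base_watts - cap_watts) / base_watts)
    hours = base_hours * slowdown
    kwh = cap_watts * hours / 1000.0
    return hours, kwh

if __name__ == "__main__":
    for cap in (250, 200, 150):
        h, e = training_cost(cap)
        print(f"cap={cap} W -> {h:.1f} h, {e:.2f} kWh")
```

With these assumed parameters, dropping the cap from 250 W to 150 W adds about 20% to the runtime while cutting total energy by roughly 28%, which is the shape of result that motivates energy-aware scheduling: run latency-insensitive jobs capped, and reserve full power for deadline-driven ones.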
