Week In Review: Design, Low Power


Renesas Electronics Corporation will acquire Dialog Semiconductor in an all-cash deal worth about US $5.9 billion. Dialog is a supplier of mixed-signal ICs targeting the IoT, consumer, automotive, and industrial markets, with primary focus areas in communications and power control. These products complement Renesas' existing embedded compute products. Dialog CEO Dr. Jalal Bagherli...

Manufacturing Bits: Feb. 2


Capacitor-less DRAM
At the recent 2020 International Electron Devices Meeting (IEDM), Imec presented a paper on a novel capacitor-less DRAM cell architecture. DRAM is used for main memory in systems, and today’s most advanced devices are based on roughly 18nm to 15nm processes. The physical limit for DRAM is somewhere around 10nm. DRAM itself is based on a one-transistor, one-capacitor...
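
For intuition on why the capacitor matters, here is a toy retention-time model of a conventional 1T1C cell. All device values below are illustrative assumptions, not figures from the Imec paper:

```python
# Toy 1T1C DRAM cell: the storage capacitor leaks, which is why DRAM
# needs periodic refresh -- and why a capacitor-less cell changes the
# scaling picture. All values are illustrative assumptions.
c_cell = 10e-15          # storage capacitance, ~10 fF
v_dd = 1.1               # voltage written for a '1', volts
i_leak = 100e-15         # assumed worst-case cell leakage, ~100 fA
v_margin = 0.5 * v_dd    # sense amplifier needs at least half the swing

# Constant-current leak approximation: t = C * dV / I
t_retention = c_cell * (v_dd - v_margin) / i_leak
print(f"retention ~ {t_retention * 1e3:.0f} ms")  # ~55 ms, the same order
                                                  # as a typical 64 ms refresh
```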

Manufacturing Bits: Dec. 29


Chiplet-based exascale computers
At the recent IEEE International Electron Devices Meeting (IEDM), CEA-Leti presented a paper on a 3D chiplet technology that enables exascale-level computing systems. The United States and other nations are working on exascale supercomputers. Today’s supercomputers are measured in floating point operations per second. The world’s fastest supercomputers c...
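
As a quick refresher on the metric, peak FLOPS is just a product of parallelism and clock rate. The machine below is hypothetical, not CEA-Leti's design:

```python
# Back-of-envelope peak throughput for a hypothetical chiplet-based system:
# nodes x chiplets x cores x SIMD lanes x 2 (fused multiply-add) x clock.
nodes, chiplets_per_node, cores_per_chiplet = 10_000, 8, 64
simd_lanes, ops_per_fma, clock_hz = 16, 2, 2.0e9

peak_flops = (nodes * chiplets_per_node * cores_per_chiplet
              * simd_lanes * ops_per_fma * clock_hz)
print(f"peak ~ {peak_flops:.2e} FLOP/s")  # ~3.3e17, about a third of exascale
```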

Week In Review: Design, Low Power


Tools
Mentor unveiled Tessent Streaming Scan Network software for its Tessent TestKompress tool. The new offering includes embedded infrastructure and automation that decouples core-level DFT requirements from chip-level test delivery resources for a simplified bottom-up DFT flow. The bus-based scan data distribution architecture enables simultaneous testing of any number of cores and...

Blog Review: Sept. 23


Arm's Matthew Mattina introduces a method to reduce the cost of neural network inference by combining low-precision arithmetic with the complexity-reducing Winograd transform while maintaining accuracy. Cadence's Paul McLellan checks out some of the biggest machine learning systems from Nvidia, Google, and Cerebras presented at the recent Hot Chips conference. Mentor's Robin Bornoff...
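
The Winograd trick Mattina combines with low precision is easiest to see in its smallest 1D form, F(2,3): two outputs of a 3-tap convolution from four multiplications instead of the naive six. A minimal sketch (the function name is ours, not Arm's):

```python
import numpy as np

def winograd_f23(d, g):
    """Two outputs of a 3-tap filter over 4 inputs, using 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])      # input samples
g = np.array([0.5, 1.0, -0.5])          # filter taps
assert np.allclose(winograd_f23(d, g),
                   np.correlate(d, g, mode="valid"))  # matches direct form
```

The savings compound in the tiled 2D version used for convolutional layers, which is also where the interaction with low-precision arithmetic gets delicate.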

Zeroing In On Biological Computing


Artificial spiking neural networks need to replicate both excitatory and inhibitory biological neurons in order to emulate the neural activation patterns seen in biological brains. Doing this with CMOS-based designs is challenging because of the large circuit footprint required. However, researchers at HP Labs observed that one biologically plausible model, the Hodgkin-Huxley model, is mathematically...
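
For reference, the Hodgkin-Huxley model itself is four coupled differential equations. A straightforward forward-Euler integration with standard textbook parameters (not HP Labs' circuit model) reproduces the spiking behavior:

```python
import numpy as np

# Classic Hodgkin-Huxley neuron, forward-Euler integration.
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2 (standard textbook values).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.387

def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T, I_ext = 0.01, 100.0, 10.0          # step (ms), duration (ms), drive
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting initial conditions
spikes, above = 0, False
for _ in range(int(T / dt)):
    I_Na = gNa * m**3 * h * (V - ENa)     # depolarizing sodium current
    I_K = gK * n**4 * (V - EK)            # repolarizing potassium current
    I_L = gL * (V - EL)                   # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0 and not above: spikes += 1   # count upward zero crossings
    above = V > 0
print(f"{spikes} spikes in {T:.0f} ms")   # regular spike train at this drive
```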

Spiking Neural Networks: Research Projects or Commercial Products?


Spiking neural networks (SNNs) are often touted as a way to get close to the power efficiency of the brain, but there is widespread confusion about what exactly that means. In fact, there is disagreement about how the brain actually works. Some SNN implementations are less brain-like than others. Depending on whom you talk to, SNNs are either a long way away or close to commercialization. Th...
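
At the other end of the complexity spectrum from Hodgkin-Huxley, SNN hardware typically implements something closer to a leaky integrate-and-fire neuron. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def lif_neuron(i_in, dt=1e-3, tau=20e-3, r_m=10e6,
               v_rest=-65e-3, v_reset=-70e-3, v_thresh=-50e-3):
    """Leaky integrate-and-fire: the membrane leaks toward rest,
    integrates input current, and emits a spike (then resets) at
    threshold. Units are SI; values are illustrative."""
    v, spike_times = v_rest, []
    for step, i in enumerate(i_in):
        v += dt / tau * (-(v - v_rest) + r_m * i)   # leak + integrate
        if v >= v_thresh:                           # fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# 200 ms of a constant 2 nA input yields a sparse, regular spike train --
# information is carried in spike timing rather than dense activations.
print(lif_neuron(np.full(200, 2e-9)))
```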

Week In Review: Auto, Security, Pervasive Computing


Automotive
Imagination Technologies and BAIC Capital have formed a joint venture to create a new automotive fabless semiconductor company focused on the Chinese market. The JV will be headquartered in the Zhongguancun Integrated Circuit Design Park in Beijing, China, with Bravo Lee serving as CEO. The JV will license IP and software from Imagination to create automotive-grade SoCs. ...

Power/Performance Bits: May 5


CMOS-compatible laser
Researchers at Forschungszentrum Jülich, the Center for Nanoscience and Nanotechnology (C2N), STMicroelectronics, and CEA-Leti Grenoble developed a CMOS-compatible laser for optical data transfer. Made of germanium and tin, the laser's efficiency is comparable to that of conventional GaAs semiconductor lasers on Si. Optical communications provide much higher data rates, and are be...
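
The emission wavelength follows directly from the alloy's bandgap via E = hc/λ. The gap value below is an assumption in the range reported for direct-gap GeSn alloys, not a number from the paper:

```python
# Photon wavelength from bandgap energy: lambda = h*c / E_g.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

E_g = 0.5 * eV       # assumed direct bandgap for a GeSn alloy, ~0.5 eV
wavelength = h * c / E_g
print(f"~{wavelength * 1e6:.1f} um")   # ~2.5 um, in the mid-infrared
```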

Scaling Up Compute-In-Memory Accelerators


Researchers are zeroing in on new architectures to boost performance by limiting the movement of data in a device, but this is proving to be much harder than it appears. The argument for memory-based computation is familiar by now. Many important computational workloads involve repetitive operations on large datasets. Moving data from memory to the processing unit and back — the so-called von Neumann bottleneck...
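
The core idea is easiest to see in a toy model of a resistive crossbar, where Ohm's law does the multiplies and Kirchhoff's current law does the sums in place. The sketch below illustrates the general technique, not any particular accelerator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights live in the array as programmable conductances (siemens);
# they never move during the computation.
G = rng.uniform(1e-6, 1e-4, size=(4, 8))   # 4 output rows x 8 input columns

# The input vector is applied as voltages on the columns.
v_in = rng.uniform(0.0, 0.2, size=8)       # volts

# Each cell contributes I = G*V (Ohm's law); each row wire sums its
# cells' currents (Kirchhoff) -- a full matrix-vector multiply in one step.
i_out = G @ v_in                           # amps, ideal noise-free model

# Real arrays add device variation and ADC quantization at the periphery,
# which is a large part of why scaling these accelerators is hard.
i_noisy = i_out * (1 + rng.normal(0.0, 0.01, size=i_out.shape))
print(i_out, i_noisy)
```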
