Power/Performance Bits: Jan. 28


Accelerator-on-chip
Researchers at Stanford University and SLAC National Accelerator Laboratory created an electron accelerator on a chip. While the device is much less powerful than standard particle accelerators, it is also much smaller. It relied on an infrared laser to deliver, in less than a hair’s width, the sort of energy boost that takes microwaves many feet. The team carved ... » read more

Thermal Guardbanding


Stephen Crosher, CEO of Moortec, looks at the causes of thermal runaway in racks of servers and explains why accurate temperature measurement is more critical in AI and advanced-node chips, and what impact this has on performance when temperatures approach acceptable limits. » read more
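
To make the guardbanding point concrete, here is a minimal Python sketch of a throttle-at-threshold policy: the worst-case sensor error has to be subtracted from the thermal limit, so a less accurate sensor forces throttling to start earlier and costs performance. The limit, margin, and accuracy figures are illustrative assumptions, not Moortec data.

```python
# Minimal sketch of thermal guardbanding: the less accurate the on-chip
# temperature sensor, the earlier the chip must throttle to stay safe.
# All names and figures below are illustrative assumptions, not Moortec data.

T_LIMIT_C = 105.0        # assumed maximum allowed junction temperature
DESIGN_MARGIN_C = 5.0    # assumed fixed safety margin

def throttle_threshold(sensor_error_c: float) -> float:
    """Sensor reading at which throttling must begin.

    The guardband has to absorb both the design margin and the worst-case
    sensor inaccuracy, so a less accurate sensor forces an earlier throttle.
    """
    return T_LIMIT_C - DESIGN_MARGIN_C - sensor_error_c

for error in (1.0, 3.0, 7.0):   # hypothetical sensor accuracies in +/- deg C
    print(f"sensor accuracy +/-{error} C -> throttle at {throttle_threshold(error)} C")
```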

Non-Volatile Memory Tradeoffs Intensify


Non-volatile memory is becoming more complicated at advanced nodes, where price, speed, power and utilization are feeding into some very application-specific tradeoffs about where to place that memory. NVM can be embedded into a chip, or it can be moved off chip with various types of interconnect technology. But that decision is more complicated than it might first appear. It depends on the ... » read more

Power/Performance Bits: Jan. 21


Two-layer MRAM
Scientists at Tokyo Institute of Technology propose a simpler MRAM construction that could perform faster with less power than conventional memories. The idea relies on unidirectional spin Hall magnetoresistance (USMR), a spin-related phenomenon that could be used to develop MRAM cells with an extremely simple structure. The spin Hall effect leads to the accumulation of elect... » read more

How Chips Age


Andre Lange, group manager for quality and reliability at Fraunhofer IIS’ Engineering of Adaptive Systems Division, talks about circuit aging, whether current methods of predicting reliability are accurate for chips developed at advanced process nodes, and where additional research is needed. » read more

More Knobs, Fewer Markers


The next big thing in chip design may be really big: the price tag. In the past, when chips got smaller, so did the cost per transistor. Now chips are getting more expensive to design and manufacture, and the cost per transistor is going up along with the number of transistors per unit of die area, and in many cases even the size of the die. That's not exactly a winning economic formula, which... » read more

Three Steps To Faster Low Power Coverage Using UPF 3.0 Information Models


Controlling power has its costs. The added power elements and their interactions make verification of low-power designs much more difficult and the engineer’s job overwhelmingly complex and tedious. Early versions of the Unified Power Format (UPF) provided some relief, but lacked provisions for a standardized methodology for low-power coverage. Ad hoc approaches are error prone and highly ... » read more
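
As a rough illustration of what low-power coverage means, the Python sketch below tracks which power states and state transitions of a power domain a simulation actually exercised. It is a conceptual model only, written in plain Python rather than UPF or SystemVerilog; the domain and state names are made up, and this is not the UPF 3.0 information-model API.

```python
# Conceptual sketch of low-power coverage: track which power states and
# state transitions of each power domain a simulation actually exercised.
# Plain Python for illustration; not the UPF 3.0 information-model API,
# and the domain/state names are hypothetical.
from collections import defaultdict
from itertools import permutations

class PowerStateCoverage:
    def __init__(self, domains):
        # domains: {domain_name: [legal power states]}
        self.domains = domains
        self.seen_states = defaultdict(set)
        self.seen_transitions = defaultdict(set)
        self.current = {d: None for d in domains}

    def observe(self, domain, state):
        """Call whenever the simulation changes a domain's power state."""
        prev = self.current[domain]
        self.seen_states[domain].add(state)
        if prev is not None and prev != state:
            self.seen_transitions[domain].add((prev, state))
        self.current[domain] = state

    def report(self):
        for d, states in self.domains.items():
            total_transitions = len(list(permutations(states, 2)))
            print(f"{d}: states {len(self.seen_states[d])}/{len(states)}, "
                  f"transitions {len(self.seen_transitions[d])}/{total_transitions}")

cov = PowerStateCoverage({"PD_CPU": ["ON", "RET", "OFF"]})
for s in ["ON", "RET", "ON", "OFF"]:
    cov.observe("PD_CPU", s)
cov.report()   # PD_CPU: states 3/3, transitions 3/6
```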

Dynamic CDC Jitter For Clock Domain Crossing (CDC) Signoff


By Himanshu Bhatt and Paras Mal Jain
Detecting and debugging deep sequential CDC convergences using structural CDC verification is extremely difficult: flat analysis on large designs runs into capacity limits, and even when verification tools can complete the analysis, debugging the violations through complex sequential logic becomes a nightmare. Thus arises the need for dyna... » read more
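
The simplified Python model below illustrates the underlying idea of CDC jitter: a value crossing into another clock domain may become visible a destination cycle earlier or later depending on how metastability resolves, so injecting that uncertainty during simulation can expose sequential convergence bugs that a fixed-latency model hides. It is an illustrative sketch under those assumptions, not the authors' tool flow.

```python
# Simplified model of dynamic CDC jitter: a value captured by a synchronizer
# in another clock domain arrives after a base latency plus a randomly chosen
# extra cycle, so repeated runs see different arrival times for the same
# source waveform. Illustrative sketch only, not a tool flow.
import random

def synchronize_with_jitter(src_values, base_latency=2, seed=0):
    """Return the destination-domain view of src_values, where each change
    arrives after base_latency or base_latency + 1 cycles, chosen at random."""
    rng = random.Random(seed)
    dst = []
    current = src_values[0]          # assume both domains start aligned
    pending = []                     # (arrival_cycle, value)
    for cycle, v in enumerate(src_values):
        if cycle > 0 and v != src_values[cycle - 1]:
            pending.append((cycle + base_latency + rng.randint(0, 1), v))
        while pending and pending[0][0] <= cycle:
            current = pending.pop(0)[1]
        dst.append(current)
    return dst

src = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1]
print(synchronize_with_jitter(src, seed=1))
print(synchronize_with_jitter(src, seed=2))   # same source, different arrivals
```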

Digital Twins In Automotive


The term “digital twin” refers to a principle that is gaining importance in the development of complex hardware/software systems: a virtual representation of the real system. This model simulates the functional interactions of the system's parts, saving time and money by avoiding unnecessary redesign cycles and enabling considerably better optimization of the ov... » read more
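
As a minimal sketch of the idea, the Python snippet below keeps a virtual thermal model in sync with readings from the real unit and then runs it forward to try out a "what if" scenario without touching the hardware. The first-order model and its coefficients are illustrative assumptions, not an automotive-grade twin.

```python
# Minimal digital-twin sketch: a virtual model synchronized with measurements
# from the real system, then simulated forward to evaluate changes virtually.
# The first-order thermal model and its coefficients are illustrative only.

class ThermalTwin:
    def __init__(self, ambient_c=25.0, heat_per_watt=0.8, cooling_rate=0.1):
        self.temp_c = ambient_c
        self.ambient_c = ambient_c
        self.heat_per_watt = heat_per_watt
        self.cooling_rate = cooling_rate

    def sync(self, measured_temp_c):
        """Align the twin with the latest sensor reading from the real unit."""
        self.temp_c = measured_temp_c

    def step(self, power_w, dt_s=1.0):
        """Advance the virtual model by one time step."""
        heating = self.heat_per_watt * power_w
        cooling = self.cooling_rate * (self.temp_c - self.ambient_c)
        self.temp_c += (heating - cooling) * dt_s
        return self.temp_c

twin = ThermalTwin()
twin.sync(measured_temp_c=68.0)      # latest reading from the vehicle
for _ in range(60):                  # "what if" run: 60 s at 12 W load
    twin.step(power_w=12.0)
print(f"predicted temperature after 60 s: {twin.temp_c:.1f} C")
```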

Accelerating AI And ML Applications With PCIe 5


The rapid adoption of sophisticated artificial intelligence/machine learning (AI/ML) applications and the shift to cloud-based workloads have significantly increased network traffic in recent years. Historically, the intensive use of virtualization ensured that server compute capacity adequately met the needs of heavy workloads. This was achieved by dividing or partitioning a single (physical) se... » read more
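
For context on why the jump to PCIe 5.0 matters for AI/ML accelerators, here is a back-of-envelope calculation of raw link bandwidth per direction, counting only the bit rate and 128b/130b line coding (packet and protocol overhead would reduce the usable figure further).

```python
# Back-of-envelope PCIe link bandwidth per direction (raw bit rate and
# 128b/130b encoding only; protocol overhead is ignored).
def pcie_bandwidth_gb_per_s(gt_per_s: float, lanes: int) -> float:
    encoding_efficiency = 128 / 130        # 128b/130b line coding (Gen 3+)
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e9            # bytes per second, per direction

print(f"PCIe 4.0 x16: {pcie_bandwidth_gb_per_s(16, 16):.1f} GB/s")   # ~31.5
print(f"PCIe 5.0 x16: {pcie_bandwidth_gb_per_s(32, 16):.1f} GB/s")   # ~63.0
```

Doubling the per-lane rate from 16 GT/s to 32 GT/s roughly doubles an x16 link from about 31.5 GB/s to about 63 GB/s in each direction.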

← Older posts