Research Bits: Jan. 28


Optical memory unit
Researchers from Nokia Bell Labs developed a new type of optical memory called a programmable photonic latch that enables temporary data storage in optical processing systems. It is modeled after a set-reset latch. The integrated programmable photonic latch is based on optical universal logic gates using silicon photonic micro-ring modulators and can be implemented in co... » read more
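The teaser notes the photonic latch is modeled after a set-reset (SR) latch. As a reminder of that behavior, here is a minimal behavioral sketch of a generic SR latch in Python; it illustrates only the textbook set/hold/reset semantics, not Nokia Bell Labs' photonic implementation.

```python
class SRLatch:
    """Behavioral model of a basic set-reset latch (generic, illustrative)."""

    def __init__(self):
        self.q = 0  # stored bit

    def step(self, s, r):
        # S = R = 1 is the forbidden input for a basic SR latch.
        if s and r:
            raise ValueError("S=R=1 is invalid for a basic SR latch")
        if s:
            self.q = 1   # set
        elif r:
            self.q = 0   # reset
        # S = R = 0: hold the previous state
        return self.q

latch = SRLatch()
latch.step(1, 0)  # set: q becomes 1
latch.step(0, 0)  # hold: q stays 1
latch.step(0, 1)  # reset: q becomes 0
```

The "hold" case is what makes the latch a memory element: with both inputs deasserted, the output depends on history rather than on the current inputs alone.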

The Road To Super Chips


Reticle size limitations are forcing chip design teams to look beyond a single SoC or processor in order to achieve the orders-of-magnitude improvements in processing required for AI. But moving data between more processing elements adds a whole new set of challenges that need to be addressed at multiple levels. Steve Woo, distinguished inventor and fellow at Rambus, examines the benefits ... » read more

Research Bits: Jan. 20


Self-correcting memristor array
Researchers at Korea Advanced Institute of Science and Technology (KAIST), Seoul National University, Sungkyunkwan University, Electronics and Telecommunications Research Institute (ETRI), and Yonsei University developed a memristor-based neuromorphic chip that can learn and correct errors, enabling it to adapt to immediate environmental changes. The system c... » read more

Advanced Packaging: A Curse Or A Blessing For Trustworthiness?


In recent years, the issue of trustworthiness in electronics has become increasingly important, especially in areas where security is of the essence such as the automotive sector, industry, and critical infrastructure. These sectors depend on electronic systems that are not only powerful but also absolutely reliable and, above all, secure. This represents a major challenge, as the increasing co... » read more

How Ultra Ethernet And UALink Enable High-Performance, Scalable AI Networks


By Ron Lowman and Jon Ames
AI workloads are significantly driving innovation in the interface IP market. The exponential increase in AI model parameters, doubling approximately every 4-6 months, stands in stark contrast to the slower pace of hardware advancements dictated by Moore's Law, which follows an 18-month cycle. This discrepancy demands hardware innovations to support AI workloads, c... » read more
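The gap between the two doubling periods cited above compounds quickly. A back-of-the-envelope sketch, assuming a 5-month doubling period for model parameters (the midpoint of the quoted 4-6 months) against Moore's Law's 18-month cycle:

```python
def growth_factor(months, doubling_period_months):
    """Compound growth after `months`, given a doubling period."""
    return 2 ** (months / doubling_period_months)

months = 3 * 12  # look three years out

ai_growth = growth_factor(months, 5)   # parameter count: ~147x
hw_growth = growth_factor(months, 18)  # transistor count: 4x

gap = ai_growth / hw_growth  # ~37x divergence in just three years
```

Even with generous assumptions, compute demand outruns per-chip hardware gains by more than an order of magnitude within a few years, which is the pressure driving scale-out interconnects like Ultra Ethernet and UALink.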

Choosing The Right Memory Solution For AI Accelerators


To meet the increasing demands of AI workloads, memory solutions must deliver ever-increasing performance in bandwidth, capacity, and efficiency. From the training of massive large language models (LLMs) to efficient inference on endpoint devices, choosing the right memory technology is critical for chip designers. This blog explores three leading memory solutions—HBM, LPDDR, and GDDR—and t... » read more

MACs Are Not Enough: Why “Offload” Fails


For the past half-decade, countless chip designers have approached the challenges of on-device machine learning inference with the simple idea of building a “MAC accelerator” – an array of high-performance multiply-accumulate circuits – paired with a legacy programmable core to tackle the ML inference compute problem. There are literally dozens of lookalike architectures in the market t... » read more

The When, Why, And How Of Waiting And Backoff In Multi-Threaded Applications On Arm


In multithreaded applications, there are situations where it is unavoidable, or even desirable, to wait for other threads. Implementing such wait instruction sequences correctly is important for both multithreaded scalability and power efficiency. Scalability is measured both in terms of aggregated throughput and fairness. Fairness means that all contending threads get an equal share of the contended ... » read more
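The article covers Arm-specific wait sequences (which this index cannot reproduce), but the general shape of a backoff policy is language-independent. A minimal sketch of capped exponential backoff, the common baseline such wait loops build on; the base and cap values here are illustrative, not from the article:

```python
def backoff_delays(max_attempts, base_ns=100, cap_ns=100_000):
    """Capped exponential backoff: double the wait on each failed
    attempt until it hits a ceiling. Real implementations often add
    random jitter so contending threads don't retry in lockstep."""
    return [min(cap_ns, base_ns * (2 ** i)) for i in range(max_attempts)]

delays = backoff_delays(12)
# 100, 200, 400, ... growing until capped at 100_000 ns
```

Doubling the delay spreads retries out under contention (helping throughput), while the cap bounds worst-case latency; the fairness concern the article raises is exactly why naive uncapped backoff can starve late-arriving threads.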

Is Liquid Cooling The Future Of Your Data Center?


The data center industry is facing unprecedented challenges. With chip densities skyrocketing, high-performance computing is being pushed to its limits, all while energy costs are soaring and environmental concerns are escalating. Securing approvals for new data center facilities has become more complex, often plagued by community objections and grid supply issues. However, amidst these hurd... » read more

Power Budgets Optimized By Managing Glitch Power


“Waste not, want not,” says the old adage, and in general, that’s good advice to live by. But in the realm of chip design, wasting power is a fact of physics. Glitch power – power that gets expended due to delays in gates and/or wires – can account for up to 40% of the power budget in advanced applications like data center servers. Even in less high-powered circuits, such as those fou... » read more
