The Next Big Chip Companies


Rambus’ Mike Noonen looks at why putting everything on a single die no longer works, what comes after Moore’s Law, and what the new business model looks like for chipmakers. https://youtu.be/X6Kca8Vm-wA » read more

Hybrid Memory


Gary Bronner, senior vice president of Rambus Labs, talks about the future of DRAM scaling, why one type of memory won’t solve all needs, and what the pros and cons are of different memories. https://youtu.be/R0hhDx2Fb7Q » read more

Intel’s Next Move


Gadi Singer, vice president and general manager of Intel's Artificial Intelligence Products Group, sat down with Semiconductor Engineering to talk about Intel's vision for deep learning and why the company is looking well beyond the x86 architecture and one-chip solutions. SE: What's changing on the processor side? Singer: The biggest change is the addition of deep learning and neural ne... » read more

A Primer On Last-Level Cache Memory For SoC Designs


System-on-chip (SoC) architects have a new memory technology, last level cache (LLC), to help overcome the design obstacles of bandwidth, latency and power consumption in megachips for advanced driver assistance systems (ADAS), machine learning, and data-center applications. LLC is a standalone memory that inserts cache between functional blocks and external memory to ease conflicting requireme... » read more
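The bandwidth/latency easing described above can be sketched with a toy expected-latency model: an LLC hit avoids the round trip to external memory. The latency figures and hit rate below are illustrative assumptions, not measured values from the article.

```python
# Toy model of average memory access latency with a last-level cache (LLC).
# All latency numbers and the hit rate are hypothetical, for illustration only.

def avg_access_latency(hit_rate: float, llc_ns: float, dram_ns: float) -> float:
    """Expected latency per access: LLC hits skip the trip to external DRAM."""
    return hit_rate * llc_ns + (1.0 - hit_rate) * dram_ns

# Without an LLC, every access pays the full external-memory cost.
no_llc = avg_access_latency(0.0, 0.0, 100.0)      # 100.0 ns per access
# With an LLC hitting half the time at 10 ns, the average drops sharply.
with_llc = avg_access_latency(0.5, 10.0, 100.0)   # 55.0 ns per access
print(no_llc, with_llc)
```

The same arithmetic also shows the bandwidth side of the argument: every hit is an access that never reaches the external DRAM interface, freeing that bandwidth for the functional blocks that genuinely need it.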

Energy-Efficient AI


Carlos Maciàn, senior director of innovation for eSilicon EMEA, talks about how to improve the efficiency of AI operations by focusing on the individual operations, including data transport, computation and memory. https://youtu.be/A3p_w7ENefs » read more

Carbon Nanotube DRAM


An IP design house has developed a scalable DRAM replacement using carbon nanotubes (CNTs) that eliminates refresh entirely, stores content persistently, and offers better timing than DRAM. And it retains data for somewhere between 300 and 12,000 years. “Carbon nanotube memory—it sounds so sexy that I could just shut up and not say anything,” said Bill Gervasi, principal syste... » read more

Architects Firmly In Control


Moore's Law isn't dead, but it certainly isn't what it used to be. While there may be three or four more generations of node shrinks ahead, the power/performance benefits of scaling are falling off. This is evident in new chip architectures that were introduced at this year's Hot Chips conference. Originally started to show off the latest CPUs and co-processors, in past years the focus has b... » read more

The New Deep Learning Memory Architectures You Should Know About


Artificial intelligence (AI) has come a long way. While our parents grew up with the dream of one day roaming with robots, today we are interviewing Sophia, a citizen of Saudi Arabia and the first humanoid robot to be granted citizenship by any country. Deep learning, a brain-inspired discipline of AI, has been around for a long time but has only recently taken off due to abundant data, ... » read more

AI Architectures Must Change


Using existing architectures to solve machine learning and artificial intelligence problems is becoming impractical. The total energy consumed by AI is rising significantly, and CPUs and GPUs increasingly look like the wrong tools for the job. Several roundtables have concluded that the best opportunity for significant change arises when there is no legacy IP. Most designs have evolved... » read more

High-Performance Memory At Low Cost Per Bit


Hardware developers of deep neural networks (DNNs) have a universal complaint: they need ever more memory capacity with high performance, low cost, and low power. As artificial intelligence (AI) techniques gain wider adoption, their complexity and training requirements also increase. Large and complex DNN models do not fit in the small on-chip SRAM caches near the processor. This ... » read more
