The Promise Of GDDR6 And 7nm


Research Nester, a market research and consulting firm, estimates that the “global market of computer graphics may witness a remarkable growth and reach at the valuation of $215.5 billion by the end of year 2024.” The firm also expects this market to grow at a significant compound annual growth rate (CAGR) of 6.1% over the 2017 to 2024 forecast period. Computer graphics is just the ... » read more
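As a back-of-the-envelope check, the compound growth behind these figures can be sketched in a few lines of Python. The 2017 base value below is derived from the quoted 2024 figure and CAGR, not stated in the report:

```python
def cagr_projection(start_value: float, cagr: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return start_value * (1.0 + cagr) ** years

# Working backwards: a $215.5B valuation in 2024 at a 6.1% CAGR over the
# 2017-2024 window implies a 2017 base of roughly $142B (our inference,
# not a figure from the report).
base_2017 = 215.5 / (1.061 ** 7)
print(f"Implied 2017 market size: ${base_2017:.1f}B")
print(f"Projected 2024 market size: ${cagr_projection(base_2017, 0.061, 7):.1f}B")
```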

PCIe 4.0 Hangs In, PCIe 5.0 Coming On Strong


First introduced in 2003 as a universal serial chip-to-chip interface running at 2.5 Gbps, PCI Express (Peripheral Component Interconnect Express), also known as PCIe, has advanced through several revisions, with significant improvements to performance and other features in each new generation. Through broad support, backwards compatibility, and a consistent cadence of upgrades that doubled lane sp... » read more
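The generation-over-generation doubling described above can be illustrated with a short sketch. The per-lane transfer rates and the encoding split (8b/10b for Gen1/2, 128b/130b from Gen3 on) are the published figures, but treat this as an illustration rather than a spec reference:

```python
# Published per-lane raw transfer rates by PCIe generation (GT/s).
# Note the Gen2-to-Gen3 step (5.0 -> 8.0) is slightly less than 2x; the
# more efficient 128b/130b encoding makes up most of the difference.
GEN_RATES_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_bandwidth_gbps(gen: int) -> float:
    """Effective per-lane, per-direction bandwidth after encoding overhead."""
    rate = GEN_RATES_GT_S[gen]
    efficiency = 8 / 10 if gen <= 2 else 128 / 130
    return rate * efficiency

for gen in GEN_RATES_GT_S:
    gbytes = 16 * lane_bandwidth_gbps(gen) / 8  # x16 link, bits -> bytes
    print(f"PCIe {gen}.0 x16: {gbytes:.1f} GB/s per direction")
```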

Taking Steps Toward Hybrid Memory


What is the memory subsystem of the future, and how do we get there? Since our Hybrid Memory research program began, Rambus Labs and its industry partners and collaborators have made significant progress under the banner of the OpenPOWER and OpenCAPI Foundations, open development communities based on the POWER microprocessor (mP) architecture. Rambus Labs is using the Wistron POWER9 systems’ Ope... » read more

Die-To-Die Interconnects For Chip Disaggregation


Today, data is growing at an unprecedented pace. We’re now looking at data volumes moving from petabytes into zettabytes. That translates into the need for considerably more compute power and much more bandwidth to process all that data. In networking, high-speed SerDes PHYs are the linchpin for blazing-fast, bidirectional transmission of data in data centers. In turn, demand is increa... » read more

ADAS Further Extends 7nm Challenges


As we discussed previously on the LPHP blog, 7nm nodes hold great promise for reducing power, improving performance, and increasing density in next-generation chips, but they also present a set of engineering challenges. When you factor in the standards set for autonomous vehicle (AV) and advanced driver assistance system (ADAS) systems-on-chip (SoCs), those challenges can more than double. Autom... » read more

High-Performance Memory At Low Cost Per Bit


Hardware developers of deep learning neural networks (DNNs) have a universal complaint: they need ever more memory capacity with high performance, low cost, and low power. As artificial intelligence (AI) techniques gain wider adoption, their complexity and training requirements also increase. Large and complex DNN models do not fit in the small on-chip SRAM caches near the processor. This ... » read more

5G Wireless Infrastructure Pushes High-Speed SerDes Protocols


5G is the 5th-generation wireless system standard that, through high speeds and increased accessibility, promises to change the way we stream, communicate, work, and travel. Boasting speeds of 20 Gbps and network densities of 1 million connected devices per square kilometer, 5G is the required technology for the implementation of highly anticipated technologies like autonomous vehicl... » read more

Deep Learning Neural Networks Drive Demands On Memory Bandwidth


A deep neural network (DNN) is a system designed to mirror our current understanding of biological neural networks in the brain. DNNs are finding use in many applications, advancing at a fast pace, pushing the limits of existing silicon, and shaping the design of new computing architectures. Figure 1 shows a very basic form of neural network that has several nodes in each layer that ... » read more
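The layered node structure described here can be sketched as a tiny forward pass in plain Python. The layer sizes and random weights below are illustrative assumptions, not taken from the article's Figure 1:

```python
import random

random.seed(0)

def relu(v):
    """Standard rectified-linear nonlinearity used between layers."""
    return max(0.0, v)

def make_layer(n_in, n_out):
    """Random weight matrix for a fully connected layer (illustrative init)."""
    return [[random.gauss(0, 0.1) for _ in range(n_out)] for _ in range(n_in)]

def dense(x, w, activation=relu):
    """One layer: weighted sum into each output node, then a nonlinearity."""
    return [activation(sum(xi * w[i][j] for i, xi in enumerate(x)))
            for j in range(len(w[0]))]

# Illustrative topology: 4 inputs, two hidden layers of 8 nodes, 2 outputs.
layers = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]

def forward(x):
    """Propagate an input vector through each layer in turn."""
    for w in layers[:-1]:
        x = dense(x, w)
    return dense(x, layers[-1], activation=lambda v: v)  # linear output layer

print(len(forward([1.0, 1.0, 1.0, 1.0])))  # one activation per output node
```

Every value crossing a layer boundary here is a memory access in a real accelerator, which is exactly why large models quickly outgrow on-chip caches and lean on external memory bandwidth.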

Navigating The Foggy Edge Of Computing


The National Institute of Standards and Technology (NIST) defines fog computing as a horizontal, physical or virtual resource paradigm that resides between smart end-devices and traditional cloud or data centers. This model supports vertically-isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated and distributed computing, storage and network connecti... » read more
