The incorporation of AI into game engines will push demand for a new class of PCs with higher memory bandwidth and capacity.
The global gaming market, spanning hardware, software and services, is on track to exceed $500B in annual revenue in 2025.¹ That’s an order of magnitude bigger than movies and music combined. At the cutting edge of that enormous market is open world gaming, where the driving goal is to give players the freedom to do anything they can imagine within a coherent and immersive environment. As the word “world” suggests, these environments are increasingly vast in scope and can even span multiple planets.
Concurrently, we are witnessing the meteoric rise of generative AI, a technology capable of creating multimodal content on demand. That content spans images, audio, video and even 3D models: all the building blocks needed to construct digital worlds. With typical development cycles for leading open world games from major studios running five years or longer, the promise of gen AI on the production side is enormous.
As such, we’re seeing game developers embrace gen AI across a wide range of disciplines, both to accelerate the creation process and to add greater detail for deeper immersion. Procedural generation can create the vast, highly detailed environments into which designers incorporate hand-crafted elements, as sketched below. Gen AI is streamlining the creation and rigging of 3D models, synthesizing character voices, and generating music and ambient soundscapes, greatly magnifying the productivity of game developers.
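To make the idea of procedural generation concrete, here is a minimal, self-contained sketch of a value-noise heightmap, the kind of primitive that terrain systems build upon. It is illustrative only: the grid size, cell size and seed are arbitrary choices, and production engines layer many octaves of noise, erosion and biome logic on top of such a base.

```python
# Minimal value-noise heightmap sketch using only the Python standard library.
# All parameters (size, cell spacing, seed) are illustrative assumptions.
import random


def value_noise(width, height, cell=16, seed=42):
    """Return a width x height heightmap in [0, 1] from smoothed random lattice values."""
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def smoothstep(t):
        # Ease curve so cell boundaries don't show as hard seams
        return t * t * (3 - 2 * t)

    heightmap = []
    for y in range(height):
        gy, fy = divmod(y / cell, 1)
        gy = int(gy)
        row = []
        for x in range(width):
            gx, fx = divmod(x / cell, 1)
            gx = int(gx)
            # Bilinear interpolation between the four surrounding lattice points
            sx, sy = smoothstep(fx), smoothstep(fy)
            top = lattice[gy][gx] * (1 - sx) + lattice[gy][gx + 1] * sx
            bottom = lattice[gy + 1][gx] * (1 - sx) + lattice[gy + 1][gx + 1] * sx
            row.append(top * (1 - sy) + bottom * sy)
        heightmap.append(row)
    return heightmap


terrain = value_noise(128, 128)
print(f"generated {len(terrain)}x{len(terrain[0])} heightmap, "
      f"sample height: {terrain[64][64]:.3f}")
```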
Now we’re on the threshold of gen AI being incorporated not just into the tools but into the engines that run the games themselves. For instance, rather than non-player character (NPC) dialogue being scripted and deterministic, it can be generated in real-time guided by the NPC’s programmed persona. Storylines can be non-linear in nature and adversaries can adapt to the player’s actions, all thanks to gen AI.
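As a hedged illustration of how persona-guided dialogue might be wired up, the sketch below assembles a prompt from an NPC’s programmed persona and the current game state, then hands it to a local inference model. The `NPCPersona` class, `build_prompt` helper and `generate_text` stub are hypothetical names, standing in for whatever on-device model interface a given engine exposes.

```python
# Sketch of persona-driven NPC dialogue. `generate_text` is a placeholder for a
# call into a local (e.g. NPU-accelerated) inference model; it is not a real API.
from dataclasses import dataclass
from typing import List


@dataclass
class NPCPersona:
    name: str
    role: str
    temperament: str
    knowledge: List[str]


def build_prompt(persona: NPCPersona, player_line: str, world_state: dict) -> str:
    """Assemble a prompt that keeps the model inside the NPC's programmed persona."""
    facts = "; ".join(persona.knowledge)
    return (
        f"You are {persona.name}, a {persona.temperament} {persona.role}. "
        f"You only know: {facts}. "
        f"Current world state: {world_state}. "
        f"Stay in character and answer in one or two sentences.\n"
        f"Player: {player_line}\n{persona.name}:"
    )


def generate_text(prompt: str) -> str:
    # Stand-in for the engine's local inference call
    return "[model-generated reply]"


blacksmith = NPCPersona(
    name="Edda", role="blacksmith", temperament="gruff",
    knowledge=["the mine to the north has collapsed", "iron prices have doubled"],
)
reply = generate_text(build_prompt(blacksmith, "Any work for a traveler?", {"time": "dusk"}))
print(reply)
```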
The implication is that the inference models powering gen AI will be increasingly incorporated into game engines. As demands for greater fidelity and capability grow, so too will the size of these models. Client CPUs with Neural Processing Units (NPUs) to run these inference models are being introduced as you read this. They will enable a new class of AI PCs that grows increasingly powerful with each generation.
The demand for memory will be for more, more, more… namely, higher bandwidth and greater capacity. For higher bandwidth in AI PCs, DDR5 main memory will push to data rates of 6,400 megatransfers per second (MT/s) and beyond. At 6,400 MT/s, we reach the point where on-module clocking is needed to close timing in the synchronous memory system. As such, the chipset for client DDR5 DIMMs at 6,400 MT/s and above will include a Client Clock Driver (CKD) IC. Client DIMMs, namely UDIMMs and small-form-factor SODIMMs, are rechristened CUDIMMs and CSODIMMs with the addition of the CKD.
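For a sense of what those data rates mean in practice, the back-of-the-envelope calculation below converts DDR5 transfer rates into theoretical peak bandwidth, assuming the 64-bit data width per channel and dual-channel configuration typical of client platforms.

```python
# Back-of-the-envelope peak bandwidth for DDR5 client memory.
# 64-bit channel width and dual-channel operation are assumptions typical of
# client platforms, used purely for illustration.
def peak_bandwidth_gbps(data_rate_mtps: int, bus_width_bits: int = 64, channels: int = 1) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) * channels / 1e9


for rate in (6400, 7200):
    print(f"DDR5-{rate}: {peak_bandwidth_gbps(rate):.1f} GB/s per channel, "
          f"{peak_bandwidth_gbps(rate, channels=2):.1f} GB/s dual-channel")
```

At DDR5-6400 this works out to roughly 51.2 GB/s per channel, or about 102.4 GB/s for a dual-channel client system, rising to about 115.2 GB/s at 7,200 MT/s.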
As part of our commitment to advancing performance across the computing landscape, Rambus has introduced a DDR5 CKD chip. Together with the previously introduced Rambus Serial Presence Detect (SPD) Hub IC, the CKD forms a DDR5 client DIMM chipset supporting CUDIMMs and CSODIMMs at data rates up to 7,200 MT/s. In addition to the CKD and SPD Hub, Rambus DDR5 memory interface chips include Gen1 to Gen4 Registering Clock Drivers (RCDs), Power Management ICs (PMICs) and Temperature Sensors for leading-edge servers. With over 30 years of high-performance memory experience, Rambus is renowned for chip solutions that deliver superior signal integrity and power efficiency at higher yield for server and client DIMMs.
Reference