All You Need Is Cache (Coherency) To Scale Next-Gen SoC Performance


Life on the SoC performance front can be a withering battle at times, because things can seem fairly bleak. As transistor scaling becomes more expensive below 10-nanometer feature sizes, it gets harder every day to double performance every 18 months or so and stay competitive. Nowhere is the pain of this battle more acute than in consumer and automotive systems, where low cost is the key t... » read more

New Architectures, Approaches To Speed Up Chips


The need for speed is back. An explosion in the amount of data that needs to be collected and processed is driving a new wave of change in hardware, software and overall system design. After years of emphasizing power reduction, performance has re-emerged as a top concern in a variety of applications such as smarter cars, wearable devices and cloud data centers. But how to get there has cha... » read more

How Cache Coherency Impacts Power, Performance


Managing how the processors in an SoC talk to one another is no small feat, because these chips often contain multiple processing units and caches. Bringing order to these communications is critical for improving performance and reducing power. But it also requires a detailed understanding of how data moves, the interaction between hardware and software, and what c... » read more
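
To give a feel for the bookkeeping a coherent fabric performs when caches talk to one another, here is a minimal sketch of MESI-style state transitions for a single cache line. The enum and function names are hypothetical and deliberately simplified: real SoC interconnects add snoop filters, directories and many more transition rules than shown here.

```c
#include <stdio.h>

/* Hypothetical, simplified MESI states for one cache line. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;

/* Local read: a miss must learn whether other caches hold copies
 * (abstracted as other_copies_exist) before choosing E or S. */
static mesi_t on_local_read(mesi_t s, int other_copies_exist) {
    if (s == INVALID)
        return other_copies_exist ? SHARED : EXCLUSIVE;
    return s;  /* M, E and S all satisfy the read locally */
}

/* Local write: the line must be owned exclusively, which forces
 * every other cache's copy to INVALID (invalidation not shown). */
static mesi_t on_local_write(mesi_t s) {
    (void)s;
    return MODIFIED;
}

/* Snooped write from another core: our copy is now stale. */
static mesi_t on_remote_write(mesi_t s) {
    (void)s;
    return INVALID;
}

int main(void) {
    mesi_t line = INVALID;
    line = on_local_read(line, /*other_copies_exist=*/0);  /* -> EXCLUSIVE */
    line = on_local_write(line);                           /* -> MODIFIED  */
    line = on_remote_write(line);                          /* -> INVALID   */
    printf("final state: %d\n", line);
    return 0;
}
```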

Heterogeneous Multi-Core Headaches


Cache coherency is becoming more pervasive—and more problematic—as the number of heterogeneous cores used in designs continues to rise. Cache coherency is an extension of caching, which has been around since the 1970s. Caches have long been used to speed up access to a computer's main memory without adding expensive new components. Cache coherency's introduction coi... » read more
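
For readers new to the underlying mechanism, the sketch below shows how a simple direct-mapped cache decides whether an access hits or misses and fills a line from main memory when it does not. The sizes, field names and the stand-in fetch_from_memory() routine are illustrative assumptions only; a real SoC cache adds associativity, write policies and the coherence traffic discussed above.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative direct-mapped cache: 64 lines of 64 bytes (4 KiB). */
#define LINE_BYTES  64u
#define NUM_LINES   64u

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Split an address into index and tag fields. */
static uint32_t line_index(uint32_t addr) { return (addr / LINE_BYTES) % NUM_LINES; }
static uint32_t line_tag(uint32_t addr)   { return addr / (LINE_BYTES * NUM_LINES); }

/* Hypothetical backing-store fetch standing in for a slow DRAM access. */
static void fetch_from_memory(uint32_t addr, uint8_t *dst) {
    memset(dst, 0, LINE_BYTES);
    (void)addr;
}

/* Read one byte; a hit is served from fast SRAM, a miss pays the DRAM latency. */
static uint8_t cache_read(uint32_t addr, bool *hit) {
    cache_line_t *line = &cache[line_index(addr)];
    *hit = line->valid && line->tag == line_tag(addr);
    if (!*hit) {
        fetch_from_memory(addr & ~(LINE_BYTES - 1), line->data);
        line->tag   = line_tag(addr);
        line->valid = true;
    }
    return line->data[addr % LINE_BYTES];
}

int main(void) {
    bool hit;
    (void)cache_read(0x1000, &hit);  /* first access: miss, line filled from memory */
    (void)cache_read(0x1004, &hit);  /* same line: hit, no memory access needed     */
    return hit ? 0 : 1;
}
```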