Towards Decarbonization: Keeping Electronics Energy Consumption In Check

Have advances in design and semiconductor technology managed to hold energy requirements to a reasonable level?


The International Technology Roadmap for Semiconductors (ITRS) famously warned in 2001 that the “cost of design is the greatest threat to the continuation of the semiconductor roadmap.” For years, the industry followed the ITRS updates on the productivity improvements provided by design and hardware automation to counteract that looming design cost. The discussion on decarbonization has some similar elements. While the capabilities supplied by electronics have grown by leaps and bounds, data shows that energy consumption has mostly been kept in check. Electronic design automation (EDA), in combination with advances in semiconductor technology, holds the energy consumption of electronics within acceptable levels while enabling significant performance increases. And, as it turns out, seemingly small things like optimizing memory transactions can have surprisingly large impacts because of the enormous scale at which they occur.

With the ITRS focus on design cost, UC San Diego Professor Andrew B. Kahng’s 2013 paper, “The ITRS Design Technology and System Drivers Roadmap: Process and Status,” put the design cost of a consumer portable system on chip (SoC) in 2011 at about $40M. Without the EDA technology advances made from 1993 to 2009, the same chip would have cost $7.7B (with a B as in billion) to design. All the performance improvements in electronics made me wonder about energy consumption. Is it getting out of hand?
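For a sense of scale, those two design-cost figures imply a productivity gain of roughly two orders of magnitude from EDA alone. A quick back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Back-of-the-envelope: the design-cost gap cited from Kahng's 2013 ITRS paper.
actual_design_cost_2011 = 40e6     # ~$40M with the 1993-2009 EDA advances
cost_without_eda_advances = 7.7e9  # ~$7.7B without them

gain = cost_without_eda_advances / actual_design_cost_2011
print(f"Implied EDA-driven design productivity gain: ~{gain:.0f}x")  # roughly 190x
```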

Let’s look at the hyperconnectivity of consumer devices and their impact on data centers.

According to the 2020 Science article, “Recalibrating global data center energy-use estimates,” global data center energy consumption “only” grew a surprising 6% from 2010 to 2018, to 205TWh, while workloads increased more than six-fold, internet traffic increased 10-fold, and storage capacity rose 25-fold. The Wired article, “Data Centers Aren’t Devouring the Planet’s Electricity—Yet,” offers further insight.
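Those figures imply a dramatic drop in energy per unit of work. Here is a rough sketch of the implied efficiency gain, using only the numbers quoted above; the 2010 baseline is back-calculated from the reported ~6% growth, so treat it as approximate:

```python
# Implied data center efficiency gain, 2010-2018, from the Science article's figures.
energy_2018_twh = 205.0
energy_2010_twh = energy_2018_twh / 1.06   # back-calculated from the reported ~6% growth

workload_growth = 6.0                      # workloads grew "more than six-fold"
energy_growth = energy_2018_twh / energy_2010_twh

print(f"Energy per unit of workload fell by at least {workload_growth / energy_growth:.1f}x")
# -> roughly 5.7x, and more if the six-fold figure is conservative
```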

Similarly, the paper “On Global Electricity Usage of Communication Technology: Trends to 2030” predicts that the global electricity demand of consumer devices will decrease to 670TWh in 2030. It projects the global electricity consumption of fixed wired networks growing to 2,641TWh and fixed Wi-Fi networks to 889TWh by 2030, wireless access networks (2G, 3G, 4G, and 5G combined) to 204TWh, and data centers to 1,137TWh. The global electricity demand to produce all of this will grow to 903TWh. Bottom line: communication technology is expected to account for 21% of global electricity usage.
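Summing the quoted 2030 projections shows how the 21% figure comes about; the implied global total below is my own back-calculation, not a number from the paper:

```python
# Summing the paper's 2030 projections quoted above (all values in TWh).
projections_twh = {
    "consumer devices":           670,
    "fixed wired networks":      2641,
    "fixed Wi-Fi networks":       889,
    "wireless access (2G-5G)":    204,
    "data centers":              1137,
    "production of the above":    903,
}

total = sum(projections_twh.values())
print(f"Communication technology total: ~{total:,} TWh")             # ~6,444 TWh
print(f"Implied 2030 global electricity: ~{total / 0.21:,.0f} TWh")  # if that is 21% of the total
```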

So, what about the impact of hardware and software design?

Improvements in computational software, EDA, and semiconductor technology for power optimization are critical. As with the ITRS design cost prediction, the global electricity usage of electronics would be unmanageable without continuing advances in EDA and semiconductor technology. A great example is the low-power optimization achieved by combining low-power-optimized semiconductor IP, such as DSPs and CPUs, with advanced-node digital implementation flows that use hardware/software activity data generated in simulation, emulation, and prototyping. And of course, this extends to system-level thermal analysis and eventually computational fluid dynamics (CFD) to design packaging for servers and consumer devices that allows the best airflow to keep them cool.
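The reason activity data matters so much is that dynamic power scales directly with switching activity; the classic first-order model is P_dyn ≈ α·C·V²·f. A minimal sketch with purely illustrative numbers, not from any particular chip or flow:

```python
# First-order dynamic power model: P_dyn = alpha * C * V^2 * f
#   alpha : switching activity factor (this is what simulation/emulation traces provide)
#   C     : effective switched capacitance, V : supply voltage, f : clock frequency

def dynamic_power_watts(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    return alpha * c_farads * v_volts ** 2 * f_hz

# Illustrative numbers only: a block switching 1 nF at 0.8 V and 1 GHz.
baseline  = dynamic_power_watts(alpha=0.20, c_farads=1e-9, v_volts=0.8, f_hz=1e9)
optimized = dynamic_power_watts(alpha=0.10, c_farads=1e-9, v_volts=0.8, f_hz=1e9)  # e.g., clock gating halves activity

print(f"Baseline:  {baseline:.3f} W")   # 0.128 W
print(f"Optimized: {optimized:.3f} W")  # 0.064 W
```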

Looking closer, one finds an entire ecosystem at work here. The recent Arm DevSummit showed some great examples, and Arm’s vision and mission of “Sparking the World’s Potential” resonates well in this context. In the session “High-Performance Computing That’s Earth-Friendly,” Unity’s Brett Bibby illustrated how seemingly small, individual changes can significantly affect overall compute performance and power consumption, with real environmental consequences. Memory accesses are critically important; a paper Bibby quoted found that “across all of the applications we study, 62.7% of the total system energy, on average, is spent on data movement between main memory and the compute units.”
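To make the data-movement point concrete, here is a small, hedged illustration of how access pattern alone changes memory traffic: copying the same matrix contiguously versus through a transposed (strided) view. Exact timings depend entirely on the machine and NumPy version:

```python
import time
import numpy as np

# Same useful data, different access pattern -> different memory traffic.
a = np.random.rand(4000, 4000)   # row-major (C-order) array, ~128 MB

t0 = time.perf_counter()
b = a.copy()                     # contiguous read and write
t1 = time.perf_counter()
c = a.T.copy()                   # strided read through the transposed view
t2 = time.perf_counter()

print(f"Contiguous copy: {t1 - t0:.3f} s")
print(f"Strided copy:    {t2 - t1:.3f} s")
# The payload is identical; only the traversal order, and therefore the data
# movement between DRAM, the caches, and the cores, differs.
```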

Where is all of this going?

Hyperconnectivity will only grow further and faster. According to IDC/Seagate estimates, the “Global DataSphere” will grow to 175ZB by 2025. According to IBS, endpoint data creation by the devices you and I use is growing at an 85% CAGR from 2019 to 2025. And Ericsson estimates 2.8B 5G mobile subscriptions by 2025, with 164EB of data transmitted over networks per month, 76% of it video.
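For a sense of what those growth rates compound to, here is a quick arithmetic sketch using only the figures quoted above; the multipliers are my own calculation:

```python
# Compounding the quoted growth rates (my arithmetic, not the analysts').
cagr = 0.85                            # endpoint data creation, 2019-2025 (IBS)
years = 2025 - 2019
print(f"Endpoint data creation, 2019 -> 2025: ~{(1 + cagr) ** years:.0f}x")        # ~40x

monthly_traffic_eb = 164               # Ericsson estimate for 2025
video_share = 0.76
print(f"Video share of 164 EB/month: ~{monthly_traffic_eb * video_share:.0f} EB")  # ~125 EB
```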

Our ecosystem will have to stay as creative as it has been to optimize energy consumption even further, and experience in computational software and intelligent system design will be critical.


