Running Out Of Energy?

A new report documents what happens if computing continues at its current rate of consumption.


If the current trajectory holds, the growing energy requirements of computing will hit a wall within the next 24 years. At that point, the world will not produce enough energy to power all of the devices expected to be drawing it.

A report issued by the Semiconductor Industry Association and Semiconductor Research Corp. bases its conclusions on system-level energy per bit operation, which is a combination of many components such as logic circuits, memory arrays, interfaces and I/Os. Each of those contributes to the total energy budget.

At the benchmark energy per bit, as shown in the chart below, computing will not be sustainable by 2040. That is when the energy required for computing is estimated to exceed the world’s total energy production. As such, significant improvement in the energy efficiency of computing is needed.

Fig. 1: Estimated total energy expenditure for computing, directly related to the number of raw bit transitions. Source: SIA/SRC
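To get a rough sense of how that kind of extrapolation works, the sketch below crosses an assumed exponential growth in raw bit transitions against a flat world energy supply and reports the year they meet. Every constant in it (energy per bit, baseline bit count, growth rate, world supply) is an illustrative assumption, not a number taken from the report.

```python
# Sketch of the kind of extrapolation behind Fig. 1: total computing energy vs.
# a flat world energy supply. All constants are illustrative assumptions, not
# values taken from the SIA/SRC report.

E_PER_BIT_J = 1e-14      # assumed system-level energy per bit transition (joules)
BITS_BASE = 1e24         # assumed raw bit transitions per year in the base year
GROWTH_PER_YEAR = 2.0    # assumed annual growth factor in bit transitions
WORLD_SUPPLY_J = 5e20    # assumed flat annual world energy production (joules)

year, bits = 2015, BITS_BASE
while year < 2100:
    compute_energy_j = bits * E_PER_BIT_J
    if compute_energy_j > WORLD_SUPPLY_J:
        print(f"Crossover in {year}: computing would need {compute_energy_j:.1e} J/yr")
        break
    bits *= GROWTH_PER_YEAR
    year += 1
```

Changing any of those assumed constants moves the crossover year, which is exactly the sensitivity discussed below.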

“[The chart] is obviously a back-of-the-envelope extrapolation that is meant to make a point,” said Celia Merzbacher, VP of Innovative Partnerships at Semiconductor Research Corp. “It’s not realistic to expect that the world’s energy production is going to be devoted 100% to computing, so the question is, how do we do more with less, and where are the opportunities for squeezing more out of the system?”

This leads back to the various areas of technology and research that are big buckets of opportunity and need. That includes everything from storage to sensing, security and manufacturing, all of the components that go into the business of creating computational systems. And it also points to some big problems that need to be tackled by government, industry and academia, Merzbacher said.

Whether those conclusions are accurate remains to be seen, but they do at least point out what might happen if nothing changes.

“Anytime someone looks at a growth rate in a relatively young segment and extrapolates it decades into the future, you run into the problem that nothing can continue to grow at an exponential rate forever,” said Chris Rowen, Cadence Fellow and CTO of the company’s IP Group. “By definition you’re going to run into some limits sooner or later. There may be hand-wringing over it, but very rarely does it mean the end of the world if you can’t extrapolate in the conventional fashion. In fact, the whole system tends to follow, in a sense, economic principles in that when something is really, really cheap you use a lot of it. As you start to consume a scarce resource it gets more expensive, and you find ways to shift the usage to something else.”

Case in point: Once upon a time gasoline cost 5 cents a gallon and cars were built that got 5 or 10 miles to the gallon. Extrapolating from there, one might have concluded that if the whole world drove as much as Americans do in cars that get 5 miles to the gallon, the oil would run out by 2020. But as time goes on, technology comes along that changes the usage patterns, and cars are no longer built to get 5 miles to the gallon. “While at the moment gasoline isn’t a lot more expensive on an inflation-adjusted basis than it was back then, given that we have seen what expensive gasoline looks like, we know how the whole economic system is going to respond to a new level of scarcity in what was once a nearly free good,” said Rowen. “And the free good has been computing horsepower.”

He noted that the core of the argument is really a formula about the relationship between computing performance and energy. It primarily tracks the evolution of CPUs, and it captures the correct observation that in order to go really, really fast it takes more than linear increases in power and complexity. “To push the envelope you have to do extraordinary things, and the core assumption of the whole report is that you will continue on that same curve as you ramp up computing capability further still. It tends to overlook, or insufficiently deals with, a couple of factors. One, in many cases the way that we are going to consume more computing is not making the individual processors run faster but using more of them, and that means not a super-linear but probably more like a linear increase in power as we go through the increase in demand. And since they are extrapolating so far out into the future, if you take that exponent in the equation, which is something like 1.56, it says it’s growing not as fast as the square but certainly much faster than linear. If you were to replace that with 1.0 you would find that the crisis is decades further out into the future.”

He did not say the core observation, that going faster requires a more than linear increase in power, is wrong. But he pointed out that the conclusion is highly sensitive to that exponent.
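One way to see that sensitivity is to model energy as performance raised to a power k, with demanded performance growing at a fixed annual rate, and solve for when energy crosses a fixed budget. The baseline energy, budget and growth rate below are assumptions chosen only to show the shape of the effect; the 1.56 exponent is the one cited above.

```python
import math

# Assumed model: annual computing energy E(t) = E0 * perf_growth**(k*t), i.e.
# energy scales as performance^k while demanded performance grows exponentially.
# All constants are illustrative, not taken from the report.

E0 = 1e10          # assumed computing energy in the base year (J/yr)
BUDGET = 5e20      # assumed fixed annual energy budget (J/yr)
PERF_GROWTH = 1.4  # assumed 40%/yr growth in demanded performance

def years_to_crossover(k):
    """Years until E(t) reaches BUDGET for energy-vs-performance exponent k."""
    return math.log(BUDGET / E0) / (k * math.log(PERF_GROWTH))

for k in (1.56, 1.0):
    print(f"exponent {k}: crossover in ~{years_to_crossover(k):.0f} years")
```

With these particular assumptions, dropping the exponent from 1.56 to 1.0 pushes the crossover out by decades, which is the point Rowen is making.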

Rowen added that there are also more specialized kinds of computing that are more efficient, in particular devices such as GPUs and DSPs, along with other kinds of special-purpose processors. All of those represent a growing fraction of the total compute. “Sometimes they are routinely one order of magnitude, and sometimes two orders of magnitude, more efficient than the general-purpose processors in question, so I think that factor of two may understate what the benefits are. That doesn’t necessarily change the exponent in that equation. It certainly does say that as we demand more computing we are going to consume more electricity, perhaps a little more slowly. But then we can still have the conversation: if doomsday is not 2040, is it 2050? Any way you construct a model, you will inevitably reach a crossover point just by doing this kind of extrapolation where an apparently world-ending crisis occurs. In reality we make shifts in where we make our investments, to put more emphasis on efficiency, to use the special-purpose processors more effectively, and to look for new computing modes.”
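The effect of that specialization can be illustrated with simple arithmetic: move a fraction of the workload onto accelerators that are one or two orders of magnitude more efficient, and the total energy drops accordingly. The workload splits and efficiency factors below are assumptions for illustration only.

```python
# Illustrative arithmetic, not from the report: energy relative to running
# everything on a general-purpose CPU when a fraction of the workload moves to
# accelerators (GPUs, DSPs, other special-purpose processors) that are assumed
# to be 10x to 100x more energy-efficient.

def relative_energy(offload_fraction, efficiency_factor):
    """Total energy relative to the all-CPU baseline."""
    cpu_part = 1.0 - offload_fraction
    return cpu_part + offload_fraction / efficiency_factor

for offload, factor in [(0.5, 10), (0.8, 10), (0.8, 100)]:
    print(f"offload {offload:.0%} at {factor}x efficiency -> "
          f"{relative_energy(offload, factor):.2f}x baseline energy")
```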

Drew Wingard, CTO of Sonics, likewise does not dispute that the report has a point. But he said the seven decimal orders of magnitude between current energy dissipation and the Landauer limit in the chart cited above is a lot of headroom. “There is a lot you can do between here and there, and we are nowhere near fundamental limits, which is good news of course. The people I’ve seen who have tried to do more analysis of this phenomenon drill down into where the energy goes in trying to do computation. They typically point quite a lot at the phenomenon that academia has been looking at for a long time, which is the huge amount of energy that is spent moving data as opposed to actually doing operations on the data.”

As a result, he said, they often end up going after the energy implications of the very deep memory hierarchies present in today’s systems, with disk drives at the bottom, then SSDs, DRAM, various flavors of on-chip SRAM, and finally the flip-flops inside the processor. “The two things that people talk about doing are trying to do the base computation in a more energy-efficient way, which typically means you slow down and you do it wider, but you can do it at a lower voltage and spend less energy per operation. The second thing that we are doing is trying to make sure that you spend less energy moving data to and from the places you’re going to use it. There is a lot of work and a lot of ideas around the notion that perhaps the days of moving memory closer to the processor are coming to an end, and we need to be moving the processor closer to the memory.”
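The data-movement point can be made concrete with rough per-access energy figures of the kind often quoted for older planar process nodes. The numbers below are ballpark assumptions used only to show the relative costs, not figures from the report.

```python
# Rough per-operation energy budget (picojoules) illustrating why data movement
# dominates compute. These are assumed, ballpark values for illustration only.
ENERGY_PJ = {
    "32-bit integer add":   0.1,
    "32-bit register read": 0.3,
    "32 KB SRAM read":      5.0,
    "off-chip DRAM read":   640.0,
}

alu_op = ENERGY_PJ["32-bit integer add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:22s} {pj:7.1f} pJ  (~{pj / alu_op:,.0f}x the cost of an add)")
```

The gap of several orders of magnitude between an arithmetic operation and an off-chip DRAM access is what drives the interest in moving computation closer to memory.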

There is also a methodology angle to the power problem. “We’ve been talking about hitting the wall on so many things all these years, and we always seem to go over or around it,” observed Aveek Sarkar, vice president of product engineering and support at Ansys. “From a methodology point of view, when people say, ‘If we keep on doing what we are doing, we are going to hit this wall,’ or even if we go down to the physics level … this is assuming we keep doing things the way we have been doing them.”

As far as methodology goes, Sarkar said there is much more that can be done to improve efficiency that is not being done today. “People have started to look at power reduction more seriously, and we have seen technologies like RTL power analysis become more and more interesting. But has it become standard in the design process? That still has not happened yet. While people use it, they do not use it to the full potential of the solution.”

Existing tools already provide early visibility into power and the ability to track it. While large mobile chip companies have done this for many years, only recently have other design teams recognized the impact of coding errors, and even then the focus has been primarily on logic bugs. Sarkar explained that when such an error increases power, it goes uncaught because most design teams have no mechanism in place to catch it. He suggests, for example, adopting power regression as a design discipline: looking at power on a daily or weekly basis and tracking the power number much the way a bug count is tracked.
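A power regression of the sort Sarkar describes could be as simple as the sketch below, which compares each new per-block power estimate against a stored baseline and fails the run when the increase exceeds a threshold. The file names, JSON format and 5% threshold are hypothetical and not tied to any particular tool.

```python
import json
import sys

# Hypothetical power-regression check: compare today's per-block power
# estimates (e.g. from RTL power analysis) against a committed baseline and
# flag regressions, the way a bug count or timing report would be tracked.

THRESHOLD = 0.05  # assumed: fail the run on a >5% power increase in any block

def load(path):
    with open(path) as f:
        return json.load(f)  # expected format: {"block_name": milliwatts, ...}

def check_power(baseline_path, current_path):
    baseline, current = load(baseline_path), load(current_path)
    return [(blk, base, current[blk])
            for blk, base in baseline.items()
            if blk in current and current[blk] > base * (1 + THRESHOLD)]

if __name__ == "__main__":
    regressions = check_power("power_baseline.json", "power_today.json")
    for blk, base, cur in regressions:
        print(f"POWER REGRESSION: {blk}: {base:.1f} mW -> {cur:.1f} mW")
    sys.exit(1 if regressions else 0)
```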

There is another problem with these kinds of reports, too: over such long ranges, it is difficult to give a conclusive opinion about the future. Nonetheless, it’s a good idea to think about the long-term impacts, because it forces everyone to think about what is incremental and evolutionary.

“Continuing with scaling is still incremental and evolutionary, and it is certainly becoming a lot more difficult,” said Juan Rey, senior director of engineering for Calibre at Mentor Graphics. “So we are seeing now that people are more seriously considering not just pure silicon for some of the devices of the future, but trying to think about how to incorporate more silicon germanium and more III-V types of devices. When we are looking to an even longer timeframe, people are becoming more serious about thinking about some form of radically different technology.”

That includes quantum computing and different types of circuit architectures, such as those inspired by neuromorphic computing, Rey said. “There are good reasons to believe that there are whole families of algorithmic problems that can be solved with much less computing power using those radically different types of architectures. So when looking to that longer timeframe, it’s likely that those types of things will find niches where they can be applied.”

The part that is much less certain today is how to use those architectures to solve more common, mainstream algorithmic problems, which will account for the bulk of the growth in application areas such as big data and the Internet of Things. “It’s trying to focus on what are going to be the requirements for that very long timeframe without losing perspective of the shorter timeframe, and in reality that is where most of the focus is today,” he continued.

And a key area of focus today is the interface between design and manufacturing, where there is a constant need to find ways to get more done with less power, and ultimately with less total energy consumed.

Part of that is the normal push to take Moore’s Law to its limit, which by itself requires a lot of computational power just to accomplish the activities needed for a smooth handoff of a design to the manufacturing floor.

That puts a tremendous amount of pressure on the manufacturing side, which even today needs a very large number of computer cores to tape out a design, Rey said. “We are talking about hundreds and thousands of computer cores per layer per design that need to be processed. It’s actually interesting to see that the report puts a lot of focus and emphasis on saying we need to continue pushing for the most advanced processes that can deliver on this. In addition to that, we need to put a lot of effort into the algorithms that are going to do the data processing. In our case, that translates into becoming smarter about how we process information, and not having to process it multiple times. Designs today still contain a sizeable amount of hierarchy, and that hierarchy can be exploited to get good performance and lower overall computational cost. We have to keep putting a lot of emphasis and focus on getting better algorithms, not processing the same types of things over and over again, so we are able to stay within the allocated amount of resources that are required node after node.”
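The hierarchy argument amounts to memoization: process each unique cell once and reuse the result for every instance, instead of flattening the design and repeating the work per placement. The sketch below is a generic illustration of that idea, not any vendor’s actual algorithm; the cell names and the “processing” step are placeholders.

```python
from functools import lru_cache

# Generic sketch of exploiting design hierarchy: each unique cell is processed
# once and the result is reused for every instance, instead of flattening the
# design and re-running the same work per placement. Cell names and the
# "process" step are placeholders, not any real tool's flow.

@lru_cache(maxsize=None)
def process_cell(cell_name):
    print(f"  processing {cell_name} once")
    return f"result({cell_name})"

def process_design(instances):
    """instances: list of (cell_name, placement) pairs from the hierarchy."""
    return [(placement, process_cell(cell)) for cell, placement in instances]

design = [("sram_64k", (0, 0)), ("sram_64k", (0, 1)), ("alu", (1, 0)), ("sram_64k", (2, 0))]
results = process_design(design)
print(f"{len(design)} instances, {process_cell.cache_info().misses} unique cells processed")
```

Running it shows four placed instances but only two unique cells actually processed, which is where the computational savings come from.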

That also requires adapting to new hardware architectures that bring more parallelism and more compute efficiency, and then working with very large distributed systems. Given the immense challenge of lowering the energy requirements of future computing, the task clearly will require all hands on deck. And given the impressive accomplishments of the semiconductor industry over the past 50 years, there is little doubt that further technological advances will emerge to hold back the threat of hitting the energy wall.




Comments

EnimKlaw says:

The energy plot above shows total world energy production constant at 2010 levels. Is that realistic? If this is true then a growing population has a lot more to worry about than joules/bit of computing power.
