Use this year to consider the efficiency of what we do, what we create, and how we do it, and whether we could make positive changes.
Every year I run a predictions article. It is a mashup of ideas from many people within the industry, and while many predictions are somewhat self-serving, there are others that come more from the heart — or perhaps they are dreams rather than expectations. I see hope in some of those, particularly the ones that look toward sustainability within our industry, and of our industry.
Verification offers a useful analogy, because two words describe its purpose: verification and validation. Verification is the act of showing that a design matches its specification, while validation is making sure the specification is what you actually wanted. One is inward-looking, the other more outward. The same is true for sustainability.
There are two aspects of sustainability — are we doing everything in the most sustainable way, and does what we create lead to a more sustainable future?
When I think about verification, I see huge amounts of wasted time and effort, and massive amounts of computation that should not be required. The methodology in use is, to be frank, childish. The industry's best minds have failed to come up with a methodology that has any notion of efficiency. We wave our arms in the air, saying it is an impossible task and that we can never reach closure. And yet the best the industry can come up with is a random methodology that drives stimulus, performs ad-hoc checking, and collects implied coverage data.
Constrained random test pattern methodology, as defined today, drives the sale of more simulator licenses, and increasing design sizes have shifted that demand to emulators. But coverage is defined in a way that makes it almost impossible to reason about true completeness, or an optimal stimulus set, and the same things get re-verified probably billions of times more than required.
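To make the redundancy argument concrete, here is a toy sketch in Python. It uses an invented 256-bin coverage model and unconstrained random stimulus rather than any real testbench or tool flow, and simply counts how many generated stimuli only re-hit bins that are already covered.

```python
import random

# Toy illustration (not any vendor's flow): pure random stimulus against a
# simple coverage model. Each "bin" represents a behavior we want exercised once.
NUM_BINS = 256               # hypothetical coverage bins, e.g. opcode values
covered = set()
redundant = 0                # stimuli that only re-hit already-covered bins
total = 0

while len(covered) < NUM_BINS:
    stimulus = random.randrange(NUM_BINS)   # one random "test"
    total += 1
    if stimulus in covered:
        redundant += 1       # same behavior re-verified yet again
    else:
        covered.add(stimulus)

print(f"{total} random stimuli to close {NUM_BINS} bins; "
      f"{redundant} ({redundant / total:.0%}) of them were redundant")
```

For N bins, pure random stimulus needs roughly N·ln N attempts to hit every bin once, so in this toy case well over 80% of the runs re-verify behavior that is already covered — and real coverage spaces are astronomically larger than 256 bins.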
I am very happy to see that some companies are beginning to think about true hierarchical approaches to a number of problems in the industry, and verification is one that must be rethought. The automatic generation of abstract models from detailed ones is a key element of this. Verification at the block level should create a higher-level model that can be used for integration verification, or for other higher forms of verification. Those generated models are specific to the purpose of the higher-level verification. For example, a higher-level model might pair an abstract function with a statistical model for timing, or it might just capture an I/O model that flags a warning when it sees a set of patterns and states that were not covered by block-level verification. There are so many possibilities.
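As a rough illustration of that last kind of generated I/O model, the sketch below wraps a block with the set of patterns its block-level verification actually covered and warns when integration-level activity strays outside that set. The class name, the pattern encoding, and the two-entry "envelope" are all hypothetical, not taken from any real tool.

```python
import warnings

class BlockCoverageEnvelope:
    """Abstract stand-in for a block at integration level: it does not model
    the block's function, only whether the observed activity stays within
    what block-level verification already covered."""

    def __init__(self, covered_patterns):
        # covered_patterns: states/inputs exercised during block-level verification
        self.covered = frozenset(covered_patterns)

    def observe(self, pattern):
        if pattern not in self.covered:
            warnings.warn(f"Pattern {pattern!r} was never covered at block level")

# Hypothetical usage during integration verification.
envelope = BlockCoverageEnvelope(covered_patterns={("IDLE", 0x00), ("RUN", 0x3F)})
envelope.observe(("IDLE", 0x00))   # fine: already verified at block level
envelope.observe(("RUN", 0xFF))    # warns: integration exercises unverified behavior
```

The point is not the few lines of code but the division of labor: detailed simulation happens once at the block level, and the integration level only checks that it stays within the envelope that was verified.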
Then there are efficiencies within the design. Judging by the number of chip failures related to this task, it is clear that companies are trying hard to reduce power consumption. The industry needs much better tools to help find efficiencies and to verify their impact.
Does what you are working on feed into a world that is more energy-efficient than it was before your product became available? In some cases that may be fairly easy to answer, such as producing a processor that does more operations per watt than the previous generation. But there are many levels to this.
One thought pattern has disturbed me for a long time. The software programming paradigm is so entrenched that the industry will do anything to preserve it, even when it is so inefficient that it should be scrapped and replaced with something else. That may mean more time spent developing software, but the product would end up being orders of magnitude more energy efficient. For example, who does ML on a general-purpose CPU? The industry did for a while, before finding more suitable alternatives, but many other tasks continue to use the wrong processing architecture.
Similarly, within AI/ML, researchers have been reducing the need for unnecessarily high precision. Full-precision floating point was used initially because there was nothing else, but it wastes an enormous amount of energy. Edge inference has improved faster, because without those gains the products would not be possible. But a lot more thought needs to be put into massive reductions in the energy consumed by learning.
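As a back-of-the-envelope illustration of the precision argument, the sketch below uses synthetic weights, a simple symmetric quantization scheme, and storage size as a crude proxy for the data movement that dominates energy. None of this reflects a particular framework or deployment flow.

```python
import numpy as np

# Quantize synthetic float32 "weights" to int8 and compare footprint and error.
weights = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # simple symmetric quantization
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(f"float32 size:  {weights.nbytes / 1e6:.1f} MB")
print(f"int8 size:     {quantized.nbytes / 1e6:.1f} MB")   # 4x less data to move
print(f"max abs error: {np.abs(weights - dequantized).max():.4f}")
```

Moving and multiplying a quarter of the bits is where much of the inference-side saving comes from; the harder, still largely open question is achieving comparable reductions on the training side.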
Then there are products that defy all notions of sustainability. Their only reason for existence is to make money at the expense of the environment. The example I always pick on is recommendation engines. Can we stop this stupidity? They don't work, and they serve no good purpose. If you are working on one of these products, please rethink where you are placing your talent, and if you have an opportunity to move to something that is for the good of society, then please do so.
Our industry has tremendous power to influence every aspect of society. While I think we have a reasonable track record, it is far from perfect. We have taken the easy path every time, and that means we are a long way from where we could be in terms of energy efficiency. We need to be thinking about it in every corner of what we do. COVID showed that even a change in working conditions can have a major impact. We need to find the balance between office work and utilizing ‘local’ resources. We need to stop thinking that compute power is infinite and concentrate more on how we reduce the amount of compute we need, or how to perform the compute more efficiently.
We all can make a difference. Please use the New Year to start thinking about it a little more. Individually, we cannot solve the problem, but every one of us can make a small contribution.