Discussions are shifting to what can be done with technology, not how to improve it.
Debate has been raging for years about whether software or hardware should be the starting point for improving power and performance, or whether the discussion should be elevated another notch by fusing hardware and software into a system-level approach. Now, groups like Leti, IEEE, SEMI, and a number of researchers in leading universities around the globe are beginning to talk about moving the starting point for technology up yet another step, targeting specific use models in specific markets.
This may sound like hair-splitting, but the implications of this approach are intriguing. This is not just market-driven design based upon complex SoCs and/or software. The first step here is to look beyond the technology to the problem that needs to be solved, and then to work downward to figure out how to achieve that goal. In effect, this is another level of abstraction, one that extends well beyond an individual system. The vision is a collection of systems that may or may not interact, or that may interact at some times and not at others.
How this all proceeds in various markets is unknown at this point. It may never materialize in the way these groups initially have conceived it, and it may not materialize anytime soon. So far, no time frames have been set. But it does represent the most ambitious view of technology to date, one that reaches well beyond current visions of the IoT and deep learning/machine learning/artificial intelligence.
For the past 50 years, the semiconductor industry has been focused on shrinking features to reduce cost and improve performance. And ever since the introduction of the smartphone, there has been a growing emphasis on doing more with a single battery charge. There are tweaks in every direction that will continue to improve performance and power, from better implementation to better design methodologies and approaches. Add in software improvements, beginning with earlier bring-up on virtual hardware prototypes, and the advances continue to be nothing short of astounding from an engineering perspective.
The next step is a bit more mind-bending than astounding. It's figuring out what can be done with technology, not how to improve the technology. After a half-century of development, with more advances on the way from research labs around the world, technology is firmly on track to keep making huge dents in performance, power, and cost. What hasn't advanced as far is an understanding of how this technology can be applied to harness, augment, and in some cases sidestep the physical world.
Computing certainly will be a key component in all of this. In fact, there is almost universal agreement there will be a massive increase in the amount of data that needs to be processed at every level, from the sensor to edge devices and ultimately out to the cloud. And there is work to change computing at a fundamental level, shifting from processors as we know them to spintronics, quantum devices, and perhaps even optical processing and storage.
But how we interact with those devices may change as radically as when people first were able to light up a building by flicking a switch, or to communicate instantly (telegraph/telephone/e-mail/Internet). Those were all technological improvements, but the changes that resulted extended well beyond the technology. Computing has reached the point where much more is possible than the computing itself, and as those possibilities begin to unfold, they could radically change the technology that supports them.
The starting point for modern technology is under scrutiny, and that is a significant departure from the past.