There are so many new twists and opportunities that the industry may not miss the waning cadence of feature shrinks every couple of years.
In the decades when Moore’s Law went unquestioned, the industry could migrate to the next smaller node and receive access to more devices that could be used for increased functionality and additional integration. While the most recent nodes have delivered smaller transistor-level power savings as leakage currents have increased, the additional levels of integration have reduced or eliminated many of the most power-hungry functions.
If Moore’s Law is slowing down, or even coming to an end for some companies and applications, what impact will this have on the design of systems? Will we have to spend more time refining the design itself so that it uses less area? Will we have to continue to find better ways to reduce power consumption? And are there better ways in which integration can be performed? Semiconductor Engineering has been asking the industry about the implications for an end to Moore’s Law, and in this article we will examine the effects it may have on semiconductor design companies and the IP industry.
The industry is divided about the general approach that should be taken for optimizing designs in the future. One option is increased specialization, while the other is more generalization. These two approaches would appear to be diametrically opposed to each other, but there are some common factors between them.
“The trend to create ever bigger and more powerful processors is going to dramatically slow in favor of smaller, more efficient designs that are good enough and that can be used as multi-function IP,” says George Janac, CEO of Chip Path Design Systems. “Designers will try to build the same IP in a smaller form factor. Folks with processing engines, such as GPUs and DSPs, will try to multi-task with video coding, vision and audio processing instead of using special dedicated hardware for every function.”
Another means to that end is a change in architecture. “The microarchitecture of systems has changed,” says Gene Matter, vice president of application engineering at Docea Power. “Today, people are looking to achieve lower power and higher performance using parallelism. This can be thought of as a slow and wide approach, such as a superscalar architecture using many cores. This is in contrast to the fast and narrow designs that use deep pipelines. This change moves the optimization point more toward software, and we have to concentrate on energy-efficient software design, scheduling and task assignment.”
Both of those approaches would move more functionality into software while making the hardware as flexible as possible. Anand Iyer, director of product marketing for Calypto, also sees the need for flexibility. “IP reuse is growing and IP blocks need to be designed for multiple usage conditions.”
This trend is partially supported by Bernard Murphy, chief technology officer at Atrenta, who notes, “There’s plenty of room to improve performance using clever parallelism and acceleration, and there is definitely area to be squeezed out, especially in bus-centric designs.” However, Murphy also believes much of the optimization still lies in the hardware. “This requires clever partitioning, splitting part of the bus and IPs between different physical units. Of course, these methods don’t offer the promise of indefinite scaling, but apparently neither does Moore’s Law. In addition, there is 2.5D/3D, which could extend the roadmap and the cost curve quite a bit.”
Changing the design paradigm
This isn’t necessarily bad news, though. Opportunities may actually grow, and for more companies.
“Smaller design teams will tackle more interesting system opportunities, carefully selecting where to optimize cost vs. feature vs. power-for-the-desired-performance,” says Drew Wingard, chief technology officer of Sonics. “If we get this right, there will be much more experimentation at the silicon level. Yes, I’m saying that total design starts will go up, because the form factor and power requirements of IoT applications are so stringent that integration is a requirement even to explore the market requirements.”
The strategies start to come together when higher-level architectural analysis is employed. Tooling can then be used to drive optimization. With this type of flow in place, a more generalized IP can be optimized for a specific application. “Use of higher levels of abstraction like C and SystemC are growing,” says Iyer. “We can also see increased use of automatic optimization across multiple usage conditions for IPs.”
High-level synthesis (HLS) enables exploration of different architectures. When operating on IP blocks defined using C++ or SystemC, it becomes possible to produce alternative implementations and estimate their power at RTL. Unfortunately, few IP companies are providing IP at this level of abstraction today, and it could be argued that this would decrease the value of the IP. In many cases, the input description may be an executable specification for the block or interface. IP providers invest significant resources finding the best implementation, given the intended usage scenarios.
At the RT-level, there are many more opportunities for optimization. Murphy uses dynamic voltage and frequency scaling (DVFS) as an example: “The application processor guys are pushing this really hard – down even to the sub-IP level.”
Iyer adds that once an architecture has been selected, there are tools that can produce optimized, low-power RTL. “Giving design engineers the tools to do low-power RTL design is still fairly new, and most of the tools provided in the past were basic analysis tools. This is hugely attractive to designers who can improve their design performance without impacting the functionality.”
In addition, there are optimizations that can be performed at the back end, and this is where IP may need to be cognizant of the process technology in which it is to be manufactured. “Focus areas, such as the need for better dynamic power optimization, parametric on-chip variation (POCV) and layer-awareness for critical nets and buffering will work for any process geometry,” says Mary Ann White, director of product marketing for the Galaxy Design Platform at Synopsys. “These are extremely important for the smaller process geometries.”
As with all change, huge opportunities are created, but the direction in which the IP industry should go is not particularly clear. While the path for the processor vendors has been somewhat forced on them, new applications, such as the IoT, may change the equation more than a slowdown in the growth of the transistor budget designers have to work with. As a result, for many the end of Moore’s Law will likely be a non-event.