After Moore’s Law: More With Less

There are so many new twists and opportunities that the industry may not miss the waning influence of shrinking features every couple of years.

In the decades when Moore’s Law went unquestioned, the industry could migrate to the next smaller node and gain access to more devices that could be used for increased functionality and additional integration. While more recent nodes have delivered smaller transistor-level power savings, as leakage currents have increased, the additional levels of integration have reduced or eliminated many of the most power-hungry functions.

If Moore’s Law is slowing down, or even coming to an end for some companies and applications, what impact will this have on the design of systems? Will we have to spend more time refining the design itself so that it uses less area? Will we have to continue to find better ways to reduce power consumption? And are there better ways in which integration can be performed? Semiconductor Engineering has been asking the industry about the implications of an end to Moore’s Law, and in this article we will examine the effects it may have on semiconductor design companies and the IP industry.

The industry is divided about the general approach that should be taken for optimizing designs in the future. One option is increased specialization, while the other is more generalization. These two approaches would appear to be diametrically opposed to each other, but there are some common factors between them.

“The trend to create ever bigger and more powerful processors is going to dramatically slow in favor of smaller, more efficient designs that are good enough and that can be used as multi-function IP,” says George Janac, CEO of Chip Path Design Systems. “Designers will try to build the same IP in a smaller form factor. Folks with processing engines, such as GPUs and DSPs, will try to multi-task with video coding, vision and audio processing instead of using special dedicated hardware for every function.”

Another means to that end is a change in architecture. “The microarchitecture of systems has changed,” says Gene Matter, vice president of application engineering at Docea Power. “Today, people are looking to achieve lower power and higher performance using parallelism. This can be thought of as a slow and wide approach, such as a superscalar architecture using many cores. This is in contrast to the fast and narrow designs that use deep pipelines. This change moves the optimization point more toward software, and we have to concentrate on energy-efficient software design, scheduling and task assignment.”
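
To make Matter’s “slow and wide” point concrete, here is a minimal sketch in plain C++; the workload, the scale_chunk kernel and the use of std::thread::hardware_concurrency() to size the pool are illustrative choices, not anyone’s production scheduler. The same work is spread across many threads so each core can run slower while aggregate throughput is preserved, and the energy outcome then hinges on scheduling and task assignment, exactly the software concern Matter describes.

```cpp
// Illustrative sketch only: spread a data-parallel workload across N worker
// threads so each core can run at a lower clock (the "slow and wide" idea)
// while total throughput is preserved. Workload and thread count are arbitrary.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void scale_chunk(std::vector<float>& data, std::size_t begin, std::size_t end, float k) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] *= k;                       // stand-in for the real per-element kernel
}

void scale_parallel(std::vector<float>& data, float k) {
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (data.size() + n - 1) / n;
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(data.size(), begin + chunk);
        if (begin < end)
            workers.emplace_back(scale_chunk, std::ref(data), begin, end, k);
    }
    for (auto& w : workers)
        w.join();   // overall energy now depends heavily on how work is scheduled
}
```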

Both of those approaches would move more functionality into software while making the hardware as flexible as possible. Anand Iyer, director of product marketing for Calypto, also sees the need for flexibility. “IP reuse is growing and IP blocks need to be designed for multiple usage conditions.”

This trend is partially supported by Bernard Murphy, chief technology officer at Atrenta, who notes, “There’s plenty of room to improve performance using clever parallelism and acceleration, and there is definitely area to be squeezed out, especially in bus-centric designs.” However, Murphy also believes much of the optimization still lies in the hardware. “This requires clever partitioning, splitting part of the bus and IPs between different physical units. Of course, these methods don’t offer the promise of indefinite scaling, but apparently neither does Moore’s law. In addition, there is 2.5D/3D, which could extend the roadmap and the cost-curve quite a bit.”

Changing the design paradigm
This isn’t necessarily bad news, though. Opportunities may actually grow, and for more companies.

“Smaller design teams will tackle more interesting system opportunities, carefully selecting where to optimize cost vs. feature vs. power-for-the-desired-performance,” says Drew Wingard, chief technology officer of Sonics. “If we get this right, there will be much more experimentation at the silicon level. Yes, I’m saying that total design starts will go up, because the form factor and power requirements of IoT applications are so stringent that integration is a requirement even to explore the market requirements.”

The strategies start to come together when higher-level architectural analysis is employed. Tooling can then be used to drive optimization. With this type of flow in place, a more generalized IP can be optimized for a specific application. “Use of higher levels of abstraction, such as C and SystemC, is growing,” says Iyer. “We can also see increased use of automatic optimization across multiple usage conditions for IPs.”

High-level synthesis (HLS) enables exploration of different architectures. When operating on IP blocks defined using C++ or SystemC, it becomes possible to produce alternative implementations and estimate their power at the RT level. Unfortunately, few IP companies provide IP at this level of abstraction today, and it could be argued that doing so would decrease the value of the IP: in many cases the input description is little more than an executable specification for the block or interface, while the real value lies in the significant resources IP providers invest in finding the best implementation for the intended usage scenarios.
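
As an illustration of what such a C++ input might look like, consider this hypothetical FIR filter in which the degree of parallelism is a compile-time parameter, so an HLS flow (or simple re-instantiation) could generate serial and partially parallel implementations from the same source and compare their area and power. The class and parameter names are invented for this sketch, and no vendor-specific pragmas are assumed.

```cpp
// Illustrative only: one C++ description of a FIR filter whose unroll factor
// is a template parameter, so different implementations (serial vs. partially
// parallel MACs) can be generated and compared without rewriting the algorithm.
#include <array>
#include <cstddef>
#include <cstdint>

template <std::size_t Taps, std::size_t Unroll>
class Fir {
public:
    explicit Fir(const std::array<int16_t, Taps>& coeffs) : coeffs_(coeffs) {
        history_.fill(0);
    }

    int32_t step(int16_t sample) {
        // Shift the delay line (an HLS tool would map this to registers).
        for (std::size_t i = Taps - 1; i > 0; --i)
            history_[i] = history_[i - 1];
        history_[0] = sample;

        // Multiply-accumulate; Unroll hints how many MACs could be
        // instantiated in parallel by the downstream tool.
        int32_t acc = 0;
        for (std::size_t i = 0; i < Taps; i += Unroll)
            for (std::size_t j = 0; j < Unroll && i + j < Taps; ++j)
                acc += static_cast<int32_t>(coeffs_[i + j]) * history_[i + j];
        return acc;
    }

private:
    std::array<int16_t, Taps> coeffs_;
    std::array<int16_t, Taps> history_;
};

// Two candidate implementations from the same source: fully serial vs. 4-wide.
using SerialFir    = Fir<16, 1>;
using Parallel4Fir = Fir<16, 4>;
```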

At the RT-level, there are many more opportunities for optimization. Murphy uses dynamic voltage and frequency scaling (DVFS) as an example: “The application processor guys are pushing this really hard – down even to the sub-IP level.”
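
As a rough sketch of the DVFS concept Murphy refers to, the fragment below implements a trivial utilization-based governor in C++. The operating-point table and the apply_opp() hook that would program clocks and regulators are hypothetical placeholders, not a real platform interface.

```cpp
// Minimal DVFS governor sketch: pick a voltage/frequency operating point from
// recent utilization. The table values and apply_opp() are hypothetical; a
// real driver would program PLLs and regulators through the platform's
// clock and power-management framework.
#include <cstdio>

struct OperatingPoint {
    unsigned mhz;         // clock frequency
    unsigned millivolts;  // supply voltage needed to sustain that frequency
};

// Hypothetical operating-point table for one IP block or core cluster.
static const OperatingPoint kOpps[] = {
    { 300,  800},   // idle / background work
    { 600,  900},   // moderate load
    {1200, 1050},   // peak load
};

// Placeholder for the hardware access a real governor would perform.
void apply_opp(const OperatingPoint& opp) {
    std::printf("set %u MHz @ %u mV\n", opp.mhz, opp.millivolts);
}

// Called periodically with the fraction of the last interval spent busy (0..1).
void dvfs_tick(double utilization) {
    if (utilization > 0.85)
        apply_opp(kOpps[2]);
    else if (utilization > 0.40)
        apply_opp(kOpps[1]);
    else
        apply_opp(kOpps[0]);
}
```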

Iyer adds that once an architecture has been selected, there are tools that can produce optimized, low-power RTL. “Giving design engineers the tools to do low-power RTL design is still fairly new, and most of the tools provided in the past were basic analysis tools. This is hugely attractive to designers who can improve their design performance without impacting the functionality.”
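
One of the classic transformations such a flow performs is sequential clock gating. The SystemC sketch below (assuming the SystemC library is available; the module and signal names are invented) models a register that only updates when an enable is asserted, which is exactly the pattern a power optimization tool can detect and map to a gated clock so the flops stop toggling when no new data arrives.

```cpp
// Illustrative SystemC model of an enable-qualified register. Because acc_reg
// only changes when 'enable' is high, a sequential clock-gating tool can gate
// the register clock with 'enable' instead of clocking it every cycle.
#include <systemc.h>

SC_MODULE(GatedAccumulator) {
    sc_in<bool>          clk;
    sc_in<bool>          enable;
    sc_in<sc_uint<16> >  din;
    sc_out<sc_uint<32> > acc;

    sc_uint<32> acc_reg;   // the state whose clock could be gated

    void update() {
        if (enable.read())
            acc_reg = acc_reg + din.read();   // only toggles under 'enable'
        acc.write(acc_reg);                   // otherwise the value simply holds
    }

    SC_CTOR(GatedAccumulator) : acc_reg(0) {
        SC_METHOD(update);
        sensitive << clk.pos();
        dont_initialize();
    }
};
```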

In addition, there are optimizations that can be performed at the back end, and this is where IP may need to be cognizant of the process technology in which it is to be manufactured. “Focus areas such as the need for better dynamic power optimization, parametric on-chip variation (POCV) and layer-awareness for critical nets and buffering will work for any process geometry,” says Mary Ann White, director of product marketing for the Galaxy Design Platform at Synopsys. “These are extremely important for the smaller process geometries.”

As with all change, huge opportunities are created, but the direction the IP industry should take is not particularly clear. While the path for the processor vendors has been somewhat forced on them, new applications, such as the IoT, may change the equation more than a slowdown in the growth of the number of transistors a designer has to work with. As a result, for many the end of Moore’s Law will likely be a non-event.



Comments

John Swan says:

From a design methodology perspective, I’ve been thinking for a long time about what will happen at the slowing, or end, of Moore’s Law. I have often thought that it would accentuate the need for better design methodology, and that is what I am hearing here. A significant part of the improvement will be in better tools and methodologies for higher levels of abstraction (HLS for IP, for example) in support of earlier architectural and power exploration. The design and product opportunities will be plentiful, and those who adapt their design methodologies will be a step ahead.

Brian Bailey says:

Thanks John – I couldn’t agree more. I feel that parts of the IP community will have to reinvent themselves. Some IP, such as processors and memory, has considerable tooling associated with it, which will protect it, but as you say, anything that can go through HLS is a different matter. This part of the IP market may disappear unless those providers can add additional value.

Donnacha O'Riordan says:

Couldn’t agree more, Brian. Post-Moore, architecture matters, and doing a custom ASIC with higher levels of integration, even high-performance mixed-signal, has never been more accessible for products that in the past had volumes too low to justify a custom approach.

http://www.s3group.com/silicon/resource-center/download/dlitem/104/

Rob Neff says:

There is also significant room for improvement in the software arena. For years we’ve been doing our best just trying to keep up with the changes in hardware. There’s really been no time to do things over, just add more code to what exists and keep going. If a system isn’t running as fast as desired, it’s easier to just update to a newer processor than it is to refactor the code. This might even be encouraged to reduce end-of-life supply issues.

I think back to the early days, when I was playing games on the Apple II. We had games like Space Invaders and Pac-Man that occupied a few tens of KB of code space and ran at 1 MHz. Yet they were responsive enough to keep us entertained for hours. Part of it was that systems were small then, so just a few engineers could wrap their heads around the entire project. Also, at least on the Apple II, the same processor was used for many years, allowing optimization to be done at the human level.

If we really had time to create a new OS with modern features but with code simplicity and speed of execution as top design goals, the result could be impressive.

Brian Bailey says:

I think you have hit on an important issue there. Because we can no longer conceptualize the whole system, we fail to see optimizations that could be made. But we need to do packaging, data hiding and other things to make it possible to build systems out of components. The cost of re-architecting could indeed be very high.

