The Growing Need For A Systems Approach

Analysis: At 14nm the whole industry will require a fundamental shift in thinking. Blame it on the laws of physics.

By Gabe Moretti
Electronic computing systems have gone through an evolutionary cycle since the invention of the mainframe, and the process is continuing. Semiconductor technology, mostly based on CMOS fabrication methods, has enabled an increase in design complexity and device functionality that has revolutionized the world.

But 20nm processes may be the last that obey Newtonian physics. The next process, 14nm, must deal not only with atomic effects but most likely with quantum effects as well. At those dimensions, the laws of physics are different and much more costly to obey.

The development of electronic computing devices
A brief history will help clarify this. Until the late ’60s, mainframes were the only way to execute computer programs. The hardware was built by electronic engineers, the operating system and device drivers by system programmers, and application programmers developed the programs that solved specific problems.

The introduction of microprocessors followed manufacturers' increased capacity to diffuse sufficient transistors on a die, and opened the opportunity to build a computing system on just a few printed circuit boards, and eventually on only one. The systems were still designed using standard parts from suppliers offering both digital and analog components, and the separation between hardware and software continued.

Synopsys’ introduction of the first commercially available digital logic synthesis tool marked a watershed in the design of computing systems. By 1992 Design Compiler was mature enough to understand the semantics of both VHDL and Verilog. It allowed designers to develop integrated circuits that replaced the functionality of standard components to such an extent that it made the standard-parts industry all but obsolete. Only analog components survived, because no one had found a way to synthesize an analog design in a cost-effective manner.

Today ICs containing hundreds of millions of transistors are built almost routinely. Analog functions can be easily integrated with digital functions using IP cores or, in some cases, can even take advantage of specific analog synthesis applications.

The EDA industry is focused on enabling engineers to do the same thing with every processing node. Integrating more functionality into an IC seems to be the “thing to do.”

Reality will force a new paradigm
But throughout the impressive growth in the computing capacity of an IC, the semiconductor industry has failed to manage costs. Financial realities are replacing engineering capabilities as the determining factor in deciding what to build and what markets to serve. The ITRS roadmap shows that unless a new development paradigm is found, the cost of developing an ASIC could reach the $1 billion mark by 2015. This, of course, is an unsustainable proposition. It just does not make sense.

The generally accepted architectural solution today is to design and build a family of devices. Starting from a common hardware architecture, designers count mostly on software to tailor each device to specific tasks or human interfaces. The aim is to keep manufacturing costs constant, or to slightly lower them by increasing yield, while producing end products that look different enough to entice consumers to purchase a new device in the same family every couple of years.

This strategy is enabled by the fact that CMOS fabrication technology has remained constant enough, from a product design point of view, to allow engineers to focus on integration as the primary goal. Increasingly complex design for manufacturing (DFM), meanwhile, has still followed the rules of Newtonian physics. The last process that supports this approach, the 20nm node, is now being readied for commercial release.

To design and develop such complex devices, engineers have had to face the daunting task of designing hardware/software systems in the same amount of time as was required for hardware-only systems. This has turned out to be impossible, because no one can debug a software system without executing it on the hardware it is destined to use. EDA vendors have used their creativity to provide virtual prototyping capabilities. Just as engineers once developed instruction set simulators of the CPU, today virtual prototyping aims to allow a model of the hardware system to be developed before the system prototype is available.
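
To make the idea concrete, here is a minimal sketch of what an instruction set simulator boils down to: a software loop that fetches, decodes, and executes instructions so that code can be run and debugged before any silicon exists. The four-opcode accumulator machine, its program, and every name below are hypothetical, invented purely for illustration; they do not describe any vendor's product or modeling format.

```c
/* Minimal instruction-set-simulator sketch (hypothetical machine).
 * A real virtual prototype would also model memories, buses and
 * peripherals; this only shows the fetch-decode-execute principle. */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };      /* invented opcodes */

typedef struct { uint8_t op; uint8_t addr; } Insn;

int main(void) {
    uint8_t mem[16] = { [0] = 5, [1] = 7 };       /* data memory */
    Insn prog[] = {                               /* "firmware" under test */
        { OP_LOAD, 0 }, { OP_ADD, 1 }, { OP_STORE, 2 }, { OP_HALT, 0 }
    };
    uint8_t acc = 0;                              /* accumulator register */

    for (size_t pc = 0; ; pc++) {                 /* fetch-decode-execute */
        Insn i = prog[pc];
        switch (i.op) {
        case OP_LOAD:  acc = mem[i.addr];  break;
        case OP_ADD:   acc += mem[i.addr]; break;
        case OP_STORE: mem[i.addr] = acc;  break;
        case OP_HALT:  printf("mem[2] = %u\n", mem[2]); return 0;
        }
    }
}
```

Running this prints mem[2] = 12, showing that the software's behavior can be observed and debugged entirely in simulation, which is the promise virtual prototyping extends from a single CPU to a whole system.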

This is not an easy task, and it is one that has yet to reach maturity. In the last couple of years Synopsys has focused on developing and supporting efficient virtual prototyping. After purchasing Virtio a few years ago, it discovered that having a virtual prototype of a CPU alone was not enough. It looked for complementary technology and eventually purchased both CoWare and VaST. Last week Synopsys announced the result of the integration of these technologies: the Virtualizer. The product is a big step forward, but it is still based on the common methodology of building a computing system in CMOS.

A possible future
The EDA industry was built to provide tools for hardware designers. It has now morphed into supporting system designers as well. But the tools still treat hardware and software as separate disciplines that require new methods to co-exist efficiently. In the future, designers will have to look at providing complete system solutions that erase this distinction. A system architecture should not be based on the characteristics of hardware or software, but on the characteristics of the system itself.

Companies still have to address four fundamental issues in deciding to develop a product: cost (time and resources), power use, chip size and packaging, and product family ROI. For a new ASIC program, these items may be impossible to resolve until development is so far along that the answers arrive too late to inform the decision to start the project at all. That means system tools will be needed not just to estimate, but to predict with enough accuracy, both the power consumption and the size of the device.
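
As a rough illustration of what such early prediction involves, the sketch below sums block-level area and power budgets the way a first-pass system estimate might. The block names, the numbers, and the purely additive model are all hypothetical; a real system-level tool would account for activity, voltage and frequency scaling, interconnect, and floorplan overhead.

```c
/* First-pass block-budget estimate (hypothetical blocks and numbers).
 * Illustrates the kind of early area/power prediction the text calls for;
 * it is not a model of any actual tool or technology node. */
#include <stdio.h>

typedef struct {
    const char *name;
    double area_mm2;      /* estimated silicon area  */
    double power_mw;      /* estimated average power */
} Block;

int main(void) {
    Block blocks[] = {
        { "CPU subsystem",  4.2, 350.0 },
        { "GPU",            6.5, 500.0 },
        { "Memory ctrl",    1.1,  90.0 },
        { "Analog/RF IP",   2.3, 120.0 },
    };
    double area = 0.0, power = 0.0;

    for (size_t i = 0; i < sizeof blocks / sizeof blocks[0]; i++) {
        area  += blocks[i].area_mm2;
        power += blocks[i].power_mw;
    }
    printf("Estimated die area: %.1f mm^2, average power: %.0f mW\n",
           area, power);
    return 0;
}
```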

Reusable IP is a partial solution to the problem, but it is still marketed as a replacement for the old component market. An insufficient number of standards makes the choice non-deterministic, at best. Hardware IP integration is still difficult and time consuming as busses become more and more complex. Even with advances in virtual prototyping, software development and integration will continue to increase in difficulty, especially given the lack of standards in software architecture. There also are no standard platforms: true computing systems tailored to a specific application market. Standard platforms could deliver solutions to both power consumption and packaging, thus simplifying the problem considerably.

This isn’t without risk, of course. It potentially puts EDA vendors in the position of competing with their customers. But if the customers fail, so will the EDA industry. EDA must still provide all of the tools necessary to develop a product-specific platform, should a customer choose to do so, but an increasing number of companies will require system solutions, not just an inventory of IP components. To grow successfully, EDA must be in the system business, not “just” in the tools business.


