Avoiding A $7.7B Chip Design Cost

Examples of the next productivity leap in verification.

For years, the story about semiconductor development cost and EDA’s contribution to containing it has been pretty simple. Cost has been, is, and will likely remain for a while the single biggest issue in scaling development to more complex designs. The next big leap in verification productivity will come from the close integration of verification and design engines, both vertically and horizontally, as I have written previously. At the recent CDNLive Silicon Valley, the presentations in the System Verification, IP Verification, and even Front-End Implementation tracks showed great examples of these improvements, with two emulation-related presentations winning best presentation awards.

But first, what’s up with cost? The ITRS roadmap famously stated in 2001 that the “cost of design is the greatest threat to continuation of the semiconductor roadmap.” Andrew Kahng, Professor of CSE and ECE at UC San Diego, has been involved with the ITRS semiconductor efforts for years, and at DAC 2013 he gave a presentation called “The ITRS Design Technology and System Drivers Roadmap: Process and Status.” In it, he reported that the design cost of a consumer portable SoC in 2011 was about $40M. Without the EDA technology advances made between 1993 and 2009, the same chip would have cost $7.7B to design.
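
For a quick back-of-the-envelope sense of scale (my arithmetic, not a figure from the talk), that is roughly a 200-fold cost reduction:

$$\frac{\$7.7\,\mathrm{B}}{\$40\,\mathrm{M}} = \frac{7700}{40} \approx 192\times$$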

That’s a huge productivity improvement! The associated graph, which focuses on IP re-use and the impact of compute platforms, looked like this:

[Figure: EDA Impact on IC Design Cost. Source: Andrew Kahng, 2013]

Since then, the International Roadmap for Devices and Systems (IRDS) has continued the work of the ITRS. Key improvements suggested in the past included items like the “Intelligent Testbench” and the “Concurrent Software Compiler.” In my mind, one of the next key items for productivity improvement is the close integration of verification engines, as we have been working on in the Cadence Verification Suite. Anirudh Devgan pushed the integration message for both the verification and the implementation flows during his keynote at CDNLive.

Yours truly chaired the System and Software Development track, one of the two verification tracks. In the parallel, simulation-focused track, we heard a lot about the recently introduced next-generation Xcelium simulator, which directly addresses the multiprocessor aspects shown in Andrew’s graphic above. Here are some highlights from the systems and software track presented during the two days, starting with the presentations that won best presentation awards:

  • Satyadev Muchukota and Rakesh Mehta from NVIDIA presented on “Emulation of Multi-GPU Configurations with VirtualBridge.” The integration between Palladium Z1 emulation and virtual I/O peripherals running in simulation gave NVIDIA debug flexibility, better scaling of the number of PCIe ports, support for multiple protocol generations, and an easy-to-replicate setup. In terms of productivity, NVIDIA concluded that the VirtualBridge and SpeedBridge interfaces are complementary and together shorten NVIDIA’s time to market: the SpeedBridge environment yields higher performance and traffic fidelity, while the VirtualBridge environment yields flexibility and can be used earlier in the development cycle.
  • Microsemi’s Theodore Wilson presented on “Rapid Turns with Palladium and Joules,” an example of what I call vertical integration. He described how his team reduced the turnaround time from Palladium platform runs to power estimation with the Joules solution to below 12 hours, significantly improving what he identified as the key metric: the throughput time from “activity trace to actionable power reports.”
  • Salma Mirza and Rajesh Vaidheeswarran from Netronome presented on a “Holistic, Multi-Platform, Production Software-Driven Methodology to Verify a 22nm Design.” While the core of the presentation was on emulation and the improvements it brought to their networking designs, they concluded that the multi-platform, holistic approach to verification with the Cadence Verification Suite works very well: it lets them meet the challenges of each phase of the product cycle with a suitable set of tools, maximizes the chance of success for the next generation through a closed-loop system, minimizes the need for ECOs caused by defects showing up “late in the game,” and optimizes re-use of elements across each phase of the life cycle.
  • Jerry Cao, Director, Platform Software, presented on “Pre-Silicon Software Development with Protium/Palladium.” The combination of the two hardware engines enabled their software to be ready when silicon returned: they had basic Android running within 30 minutes of silicon coming back and were able to give customers a full Android demo after three days, a 5X improvement over not having pre-silicon development systems for software.
  • Joshua Kiepert from Marvell presented on “Migrating Palladium Designs to Protium for Enhanced Performance.” He gave some real-world examples and concluded that the combination of the Protium and Palladium platforms has enabled his team to develop firmware and software months sooner than was previously possible with FPGA prototyping alone, while also providing improved design verification coverage.
  • Soummya Mallick and Pritam Chopda from Microsemi showed in “Transactor-Based Simulation Acceleration on Palladium” how they achieved simulation acceleration performance by evolving their randomization techniques into “intelligent” randomization: leaving data fields unconstrained while applying intelligent constraints to protocol headers (a minimal sketch of the idea follows this list). Combined with running simulation against emulation, what we call Verification Acceleration, this gave them up to a 40X speedup.
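
To make the “intelligent” randomization idea concrete, here is a minimal sketch of the concept. The actual Microsemi testbench would use SystemVerilog constrained-random sequences; Python is used here purely for illustration, and all field names and value sets below are hypothetical:

```python
import random

# Hypothetical constraints: restrict header fields to protocol-legal,
# corner-case values so every accelerated cycle exercises meaningful
# control logic instead of being rejected as illegal traffic.
VALID_PKT_TYPES = [0x1, 0x2, 0x4]        # legal opcodes only (illustrative)
INTERESTING_LENGTHS = [1, 64, 1500]       # corner-case sizes (illustrative)

def intelligent_random_packet():
    """Constrain the header intelligently; leave the payload unconstrained,
    since raw payload content rarely affects the protocol control path."""
    header = {
        "pkt_type": random.choice(VALID_PKT_TYPES),      # constrained
        "length": random.choice(INTERESTING_LENGTHS),    # constrained
    }
    # Fully unconstrained random payload bytes.
    payload = bytes(random.getrandbits(8) for _ in range(header["length"]))
    return header, payload

if __name__ == "__main__":
    hdr, data = intelligent_random_packet()
    print(hdr, len(data))
```

The design intent is to spend the constraint-solving and checking effort where it pays off (the headers that steer the design’s control logic) while keeping the bulk of the stimulus cheap to generate, which is what makes the approach a good fit for simulation acceleration on an emulator.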

And these are just the highlights. We also had ARM present on connections to fast models and cycle models, a combination of horizontal and vertical integration as it stretches to the transaction level, as well as Qualcomm on generating SoC verification cycles for all engines with the Perspec solution, enabling horizontal re-use. The parallel IP/Block Verification track added details on embedded software debug with Indago ESWD by Texas Instruments, as well as mixed-signal and formal verification examples from NXP and Arteris.

The bottom line? Integration of the verification engines will provide the next leap in productivity. And we are in the midst of its adoption. We have come a long way since we introduced the concept back in 2011.

I am looking forward to your thoughts!


