Design Complexity Drives New Automation

It now takes an entire ecosystem to build a chip—and lots of expensive tools.

As design complexity grows, so does the need for every piece in the design flow—hardware, software, IP, as well as the ecosystem—to be tied together more closely.

At one level, design flow capacity is simply getting bigger to accommodate massive finFET-class designs. But beyond sheer size, there are new interactions in the design flow that place much more emphasis on collaboration than in the past because it is viewed as the best way to realize efficiencies and optimize designs.

“The ability to share data between the end customer, the ASIC provider and key members of the supply chain (IP, foundry, packaging) is critical,” said Mike Gianfagna, vice president of marketing at eSilicon. “Advanced 2.5D designs typically will be doing at least one thing that has never been done before. It’s not a question of ‘if’ there will be design challenges, but ‘when’ they arise, how are they dealt with? Crisp, clear communication and collaborative problem-solving between the key players becomes the margin of victory. The ability to collaborate during design through cloud-based technologies helps. An open, transparent culture is required as well. The ASIC vendor typically takes the lead here.”

Beyond the mechanics of the process, new technologies are critical enablers, as well. For example, eSilicon has developed new thermal and mechanical stress analysis techniques to de-risk the design of complex 2.5D systems, Gianfagna said. “We’ve also developed new testability methods. A robust knowledge base coupled with optimization technology is also a critical piece of the flow. The knowledge base helps to find areas that are likely to present design challenges, and the optimization technology helps to guide the team in the right direction.”

eSilicon also used this technology to create a memory synthesis capability. “Logic synthesis is widely used to re-target logic as needed for an optimal result, but memories are usually pre-designed and static throughout the design process. Since half or more of advanced designs are occupied by memories, this approach doesn’t work very well. Using our knowledge base and optimization technology, we can build parameterized, virtual memory models for embedded memories that can be modified throughout the design process to achieve the best result,” he explained.
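To make the idea concrete, here is a minimal sketch of a parameterized, virtual memory model whose configuration is simply re-optimized as constraints tighten, rather than being frozen early. This is not eSilicon’s tool; the parameter names and cost numbers are invented for illustration.

```python
# Hypothetical sketch of a "virtual memory model" that stays parameterized
# during the design flow, so its configuration can be re-optimized as
# constraints change. The cost model below is invented for illustration;
# a real flow would query a characterized memory compiler or knowledge base.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class MemConfig:
    words: int      # depth
    bits: int       # width
    banks: int      # physical banking
    mux: int        # column mux factor

def estimate(cfg: MemConfig) -> dict:
    """Toy area/speed/leakage estimates (placeholder numbers, not silicon data)."""
    cells = cfg.words * cfg.bits
    area  = cells * 0.12 * (1 + 0.05 * cfg.banks)               # um^2, invented
    delay = 0.2 + 0.001 * (cfg.words / (cfg.banks * cfg.mux))   # ns, invented
    leak  = cells * 1e-6 * cfg.banks                            # mW, invented
    return {"area": area, "delay": delay, "leak": leak}

def optimize(words, bits, max_delay_ns):
    """Pick the lowest-area configuration that meets the current timing budget."""
    best = None
    for banks, mux in product([1, 2, 4, 8], [4, 8, 16]):
        cfg = MemConfig(words, bits, banks, mux)
        est = estimate(cfg)
        if est["delay"] <= max_delay_ns and (best is None or est["area"] < best[1]["area"]):
            best = (cfg, est)
    return best

# Early in the flow the timing budget is loose; later it tightens and the
# same virtual model is re-optimized instead of being swapped out by hand.
print(optimize(words=8192, bits=64, max_delay_ns=0.6))
print(optimize(words=8192, bits=64, max_delay_ns=0.35))
```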

Multi-patterning is impacting the design flow, as well. As a result, place-and-route and physical verification tools need to be color aware at 16/14nm and beyond, which adds to complexity. And timing closure now includes multi-parasitic corners that have been introduced by multi-patterning.

Fig. 1: Multi-patterning complexity grows. Source: Mentor Graphics
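The corner growth is easy to see with a back-of-the-envelope model. The sketch below, with invented delay and coupling numbers, simply sweeps the cross product of conventional PVT corners and mask-shift parasitic corners and reports the worst slack, which is the kind of enumeration multi-patterning forces on timing closure.

```python
# Illustrative sketch (not a sign-off tool): multi-patterning adds extraction
# corners because mask-to-mask shift changes the coupling capacitance a net
# sees, so timing must be closed across the cross product of PVT corners and
# these new parasitic corners. All numbers below are invented.
from itertools import product

PVT_CORNERS = {"ss_0p72v_125c": 1.15, "ff_0p88v_m40c": 0.85}                # delay scale
MP_CORNERS  = {"nominal": 1.00, "maskA_shift": 1.06, "maskB_shift": 1.04}   # coupling scale

def path_delay(base_ns, pvt_scale, cc_scale, coupling_fraction=0.3):
    """Toy model: only the coupling-dominated part of the delay scales with mask shift."""
    return base_ns * pvt_scale * (1 + coupling_fraction * (cc_scale - 1))

def worst_slack(paths, clock_ns):
    worst = None
    for (pvt, p_scale), (mp, c_scale) in product(PVT_CORNERS.items(), MP_CORNERS.items()):
        for name, base in paths.items():
            slack = clock_ns - path_delay(base, p_scale, c_scale)
            if worst is None or slack < worst[0]:
                worst = (slack, name, pvt, mp)
    return worst

paths = {"cpu_alu_bypass": 0.78, "ddr_phy_capture": 0.81}   # ns at nominal, invented
print(worst_slack(paths, clock_ns=0.95))
```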

IP choices becoming more difficult
Complexity is creating issues everywhere. IP selection has shifted from a simple process based on performance, power and cost, to an iterative one involving selection, configuration, integration and verification.

Different parts of the design flow are changing due to the need to greatly increase efficiency across what has become a highly iterative flow spanning IP selection, configuration, integration and verification. “The tight linkage across these aspects of the design flow is tooling, but it all starts with the right IP,” said Simon Rance, senior product manager, Systems and Software Group at ARM.

From this perspective, some of the key changes happening at each part of the design flow are:

• For IP selection, tools are being introduced to help architects and designers with the exploration and selection of highly configurable IP. Architects and designers need to select the right set of interoperable IP that also works well with tools across the design flow for configuration, integration, verification and implementation.
• For configuration, designers are leveraging tooling with embedded algorithms to manage the complexities of highly configurable IP. Configuration is no longer just IP-specific because many IPs are configurable based on their surrounding system context. The number of configuration options across IP is growing into the hundreds. Managing this at the RTL level is very difficult and error-prone, if not impossible.
• For integration, tools are being leveraged by system designers to integrate and assemble the system as quickly as possible so that verification can start sooner. Integration tools are being enhanced continually to utilize more IP and system design meta-data to drive the automation of system assembly (a sketch of this meta-data-driven approach follows this list).
• For verification, tools are now leveraging the same meta-data and are being enhanced to tightly link with the configuration and integration tools. This tight linkage is key in reducing the number of iterations and getting to design closure as quickly as possible.
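The sketch below illustrates the shared meta-data idea with a deliberately simple, hypothetical schema (real flows typically build on IP-XACT or vendor-specific formats): one description of the subsystem drives both RTL assembly stubs and a connectivity checklist that a verification environment could pick up, which is what keeps integration and verification in sync.

```python
# Hedged sketch: a single (hypothetical) meta-data description of the IP in a
# subsystem drives both assembly and the verification checklist, so the two
# stay in sync. Real flows use IP-XACT or vendor schemas; this is not that.
SUBSYSTEM = {
    "ips": {
        "cpu0": {"type": "cpu",          "config": {"cores": 2, "l2_kb": 512}},
        "noc0": {"type": "interconnect", "config": {"ports": 4, "width": 128}},
        "ddr0": {"type": "mem_ctrl",     "config": {"channels": 2}},
    },
    "connections": [
        ("cpu0.master0", "noc0.slave0"),
        ("noc0.master0", "ddr0.axi_in"),
    ],
}

def emit_instantiations(subsys):
    """Produce (very rough) RTL instantiation stubs from the meta-data."""
    for name, ip in subsys["ips"].items():
        params = ", ".join(f".{k.upper()}({v})" for k, v in ip["config"].items())
        yield f"{ip['type']} #({params}) u_{name} (/* ports from connections */);"

def emit_connectivity_checks(subsys):
    """Produce a checklist the verification environment can turn into tests."""
    for src, dst in subsys["connections"]:
        yield f"check: traffic from {src} reaches {dst}"

print("\n".join(emit_instantiations(SUBSYSTEM)))
print("\n".join(emit_connectivity_checks(SUBSYSTEM)))
```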

In addition to clearer communication between ecosystem partners, better access to information shared between tools in a flow is also taking shape.

Sharing data between tools
“The integration of the engines is probably the next big thing where looking back we’ll see that we’ve been able to reduce cost on the software and hardware side by 1.5X and 2X,” said Frank Schirrmeister, senior group director for product management in the System & Verification Group at Cadence. “It’s all about the correlation to the early estimates, and the way you do this is by making the data of other tools available even though you’re not using them yourself right now. If you don’t correlate well to what you can only know later in the flow, you may end up having to redo things which will be very painful the later you do it.”

Schirrmeister pointed to the old ITRS road map, which said cost is still the main driver. He said that without automation in general, and without EDA specifically, it would not be possible to design these chips because costs would get out of control. [The ITRS is being replaced by the IEEE’s International Roadmap For Devices And Systems.]

Looked at another way, it’s all about how simulation, emulation and prototyping work together. “If I do something in simulation where I get really fast turnaround, as in I’m running this at the block level and then going in and adding more, if I have to rebuild the verification environment again when I go to emulation, that’s a non-useful cycle,” he said. “It’s all about how do I integrate these engines as much as possible.”

If simulation and emulation are combined into what is referred to as verification acceleration (or simulation acceleration) where the testbench is run in the simulator, and the DUT is in the emulator, speedups can range between 10X and 100X. “It’s not like the 1 millionX or the 10,000X, which can be achieved [if the entire design is contained] in the emulation box due to the communication between the host and the hardware but still, there’s very exciting stuff happening there to bring these together and tightly integrate them including things like hot swap,” Schirrmeister said, noting that a common compile between the two makes this possible.
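A rough Amdahl-style calculation shows why the interface matters. In the toy model below (all numbers invented), the accelerated run time is the testbench portion plus the emulated DUT portion plus one host round trip per transaction, so a chatty interface erodes the gain and a transaction-level, batched interface recovers much of it.

```python
# Back-of-the-envelope model (not a vendor tool) of why "testbench on the host,
# DUT in the emulator" lands at roughly 10-100X rather than the speedups seen
# when everything runs inside the box: each testbench<->DUT exchange costs a
# host round trip, so the link dominates unless traffic is batched.
def accel_speedup(sim_time_s, dut_fraction, emu_speedup, n_transactions, roundtrip_s):
    """Accelerated time = testbench part + accelerated DUT part + link overhead."""
    tb_time  = sim_time_s * (1 - dut_fraction)
    dut_time = sim_time_s * dut_fraction / emu_speedup
    link     = n_transactions * roundtrip_s
    return sim_time_s / (tb_time + dut_time + link)

# Invented numbers: a 10-hour simulation where 95% of the time is in the DUT.
base = 10 * 3600
print(accel_speedup(base, 0.95, 10000, n_transactions=5_000_000, roundtrip_s=50e-6))  # chatty interface
print(accel_speedup(base, 0.95, 10000, n_transactions=50_000,   roundtrip_s=50e-6))   # batched transactions
```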

Ties between emulation and FPGA prototyping are viewed as the next big breakthrough, and FPGA projects are an indication of how design and verification flows are changing.

“SoC FPGAs with the embedded processors require new tools for the processor subsystem integrated with the programmable logic,” said Zibi Zalewski, general manager for Aldec’s Hardware Division. “Hardware engineers don’t want to learn the software, while software developers want to stay on the software ground, but in this case software and hardware teams need to cooperate very closely and end results need to be synchronized on both domains. Those requirements increase the complexity of the work environment and serve as triggers for new verification solutions that allow you to test the project at SoC level—not in separation or at the subsystem level only. That brings us to techniques and methodologies widely used in ASICs. Virtual platforms are a great example of the software modeling environment, which after connecting with an RTL simulator may bring a co-simulation mode for development and verification before going to the target hardware. Such an approach allows you to test the hardware modules with the target operating system and drivers, which increases coverage, the scope of testing, and enables complete debugging of an SoC project.”
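The split Zalewski describes can be sketched in a few lines: driver code written against a register read/write interface runs unchanged against a fast virtual model today, and against an RTL-simulated block once a transactor is wired in. Every class and register name below is hypothetical.

```python
# Conceptual sketch of virtual-platform/RTL co-simulation: the "software" under
# test talks to a register read/write interface and does not care whether the
# backend is a fast functional model or a bridge to an RTL simulator.
class VirtualTimerModel:
    """Fast functional model used on the virtual platform."""
    def __init__(self):
        self.regs = {0x00: 0, 0x04: 0}              # CTRL, COUNT (hypothetical map)
    def write(self, addr, value):
        self.regs[addr] = value
    def read(self, addr):
        if addr == 0x04 and self.regs[0x00] & 1:    # count only while enabled
            self.regs[0x04] += 1
        return self.regs[addr]

class RtlSimBridge:
    """Placeholder for a transactor that would forward the same reads/writes
    to an RTL simulator (e.g., over DPI or sockets); omitted here."""
    def write(self, addr, value): raise NotImplementedError
    def read(self, addr): raise NotImplementedError

def timer_driver(bus):
    """The driver code under test: identical no matter which backend it talks to."""
    bus.write(0x00, 1)                              # enable the timer
    return [bus.read(0x04) for _ in range(3)]

print(timer_driver(VirtualTimerModel()))            # runs today on the virtual platform
# timer_driver(RtlSimBridge())                      # same driver, RTL backend, once wired up
```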

The similarity to a hybrid co-emulation solution isn’t accidental. SoC-level testing comes from the ASIC world, where emulation tools became a standard and important part of the verification flow. “Complexity of today’s projects requires integration of multi-domain tools working across different teams,” Zalewski said. “It is no longer enough to test at the subsystem level. It doesn’t matter what the end device is. Integration of the tools at the SoC level shortens the verification stage by enabling higher coverage and complexity of testing.”

The changes aren’t limited to development and verification tools and methodologies, either. The certification of safety-critical applications based on SoC devices is another challenge. “In the past it was relatively easy to determine if that’s a software or hardware certification requirement, but now a project may require both or new specifications created by certification authorities. A good example is the DO-254 specification, which covers hardware requirements, while the DO-178 specification refers to software. If we use an FPGA with an embedded processor we will need to pass both certifications, which is an extremely time-consuming and expensive process due to the lack of regulations for modern SoC projects,” Zalewski said.

Dave Kelf, vice president of marketing at OneSpin Solutions, agreed that one of the most dramatic changes in the development flow relates to verification techniques. “Simulation has given way to the three-legged stool of simulation, emulation and formal verification, each with its own attributes and issues. Tying these technologies into one common methodology is complex, to say the least. Common coverage methods provide a cornerstone for evaluating progress across the three solutions, and indeed the Accellera UCIS (Unified Coverage Interoperability Standard) Working Group jumped on this idea to extend coverage across platforms and vendors.”

But consolidating coverage models between the techniques as well as a database remains a largely unsolved problem, even though end users demand it.
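The toy merge below (deliberately not the UCIS API) shows one reason the problem is hard: simulation and emulation contribute bins that were hit, formal contributes both covered and proven-unreachable bins, and a consolidated model has to keep those semantics distinct before it can report a single coverage number. The bin names are invented.

```python
# Toy illustration (not UCIS) of consolidating coverage across engines:
# hit bins from simulation/emulation, plus covered and proven-unreachable
# results from formal, merged into one view with one coverage percentage.
ALL_BINS = {"fifo.empty", "fifo.full", "fifo.half_full", "fifo.overflow",
            "arb.grant0", "arb.grant1"}

sim_hits = {"fifo.empty", "arb.grant0"}
emu_hits = {"fifo.full", "arb.grant0", "arb.grant1"}
formal   = {"proven_unreachable": {"fifo.overflow"}, "covered": {"fifo.empty"}}

def merge(all_bins, hit_sets, formal_results):
    covered     = set().union(*hit_sets) | formal_results["covered"]
    unreachable = formal_results["proven_unreachable"] - covered
    reachable   = all_bins - unreachable
    return {
        "coverage_pct": 100.0 * len(covered & reachable) / len(reachable),
        "still_to_hit": reachable - covered,
        "excluded_as_unreachable": unreachable,
    }

print(merge(ALL_BINS, [sim_hits, emu_hits], formal))
```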

Software efficiency
Along with hardware changes, much continues to evolve on the software side as electronic products have moved to a software-driven focus. But even with products being defined by a tremendous amount of software, Simon Davidmann, CEO of Imperas Software, observed that most programmers aren’t that good. “They might be artistically creative at the point of writing the code, but in terms of managing the processes, the data, the verification and the deployment, it always has been a bit ad hoc when it comes to software.”

Software programmers have always been able to rely on the next release, something that often isn’t possible in hardware. As a result, an entire verification methodology was built, along with a management methodology, release processes, sign-off and many other automation tools. This is because at $10 million for a chip spin, or now $50 million, it was too expensive not to do that. But now with so many electronic products being defined by the software, the software side has been forced to develop new ways of working, Davidmann said.

To this point, the software world has worked very hard to bring process to the philosophy and methodology of designing software, as well as to the tools, technologies and optimization that support it.

“We are finding that more and more people are trying to find solutions to the test problem,” he said. “That’s what we’re trying to focus on with our models and our simulators and our tools. Intriguing at the moment is how in the electronic product world people are adopting more efficient methodologies for software development based around these modern approaches of agile and continuous deployment and integration. There are tremendous benefits and gains to be had by all this automation. It’s a methodology change. It requires more efficient tools and high-speed simulation, among other things, and it allows you to do things that you couldn’t really do easily before, such as test for security holes, check the verification in terms of securities to see if things could be broken into. In the hardware world we test thoroughly to get the sort of coverage and look for faults from the manufacturing point of view almost, whereas in the software world you’re looking for vulnerabilities and checking that the code doesn’t crash if somebody put some random data in which you don’t recognize. In the electronics world, we’re doing a pretty good job in the hardware development, but as it comes to the software, there’s still a lot to be done.”
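The “random data shouldn’t crash it” style of check Davidmann alludes to is straightforward to automate once a fast simulated target exists. The sketch below fuzzes an invented packet parser and counts anything other than a clean rejection as a robustness bug, the kind of test that fits naturally into a continuous-integration run; the parser itself is purely illustrative.

```python
# Hedged example of a robustness/fuzz-style software test: feed random,
# unrecognized data and check the code rejects it cleanly instead of crashing.
# The packet parser stands in for real firmware code and is invented.
import random

def parse_packet(data: bytes) -> dict:
    """Deliberately simple parser standing in for real firmware code."""
    if len(data) < 2:
        raise ValueError("runt packet")
    length = data[1]
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return {"type": data[0], "payload": payload}

def fuzz(iterations=10_000, seed=0):
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 64)))
        try:
            parse_packet(blob)
        except ValueError:
            pass                      # rejecting bad input cleanly is fine
        except Exception:
            crashes += 1              # anything else is a robustness bug
    return crashes

print("unexpected crashes:", fuzz())
```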

But getting engineers to adopt new approaches, technologies, and methodologies isn’t always so simple.

“In all these different companies you have to fight the antibodies, but all of the consolidations in the semi space have led to the diffusion of ideas when one company acquires another, and acquires another,” said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “The guys who bubble up on top know what is effective from a management perspective, and what is effective for getting a product out sooner. They’re not just focused on what gives the best hardware, or what is squeezing the last megahertz out of the design. It really depends on who you talk to. One thing is for sure—lack of automation is the real dinosaur.”

Related Stories
Custom Hardware Thriving
Predictions about software-driven design with commoditized IoT hardware were wrong.
Gaps In The Verification Flow (Part 1)
The verification task is changing, and tools are struggling to keep up with those changes and the increases in complexity. More verification reuse is required.
Power State Switching Gets Tougher
Understanding and implementing power state switching delays can make or break a design.


