Speeding Up The Design Process

As time-to-market pressure mounts, companies are looking at different ways to reach their end goal.

A rush to plant a stake in new markets, coupled with uncertainty about how to generate a reasonable return on investment in those markets, is ratcheting up pressure on chipmakers. They now must come up with more customized solutions in less time, frequently in smaller volumes, and with the ability to modify them in shorter time spans if market opportunities shift in unexpected ways.

This affects how chips are architected, designed and verified, as well as how they are manufactured and packaged. But it also impacts the markets themselves, some of which, such as automotive and health care, until very recently were not major users of advanced semiconductor technology. Add in machine learning and different approaches to handle an explosion of data, with region-specific requirements or preferences, and that completes the circle to drive more designs even faster.

“There is very interesting growth at the fringes,” said Mike Gianfagna, vice president of marketing at eSilicon. “At the very tip of the pyramid, which is 16/14/7nm, companies are becoming much more flexible and collaborative. We’ve been invited into a lot of interesting discussions that we were never a part of before. At the opposite end, we’re seeing traction around universities and startups for multi-project wafer services operations. A foundry has a minimum die size of 12mm. Most universities are running at 2mm. This is a fast-turn business, and time to revenue is extremely short.”

While there is high value at the leading edge of design, the time to profit is much longer, and the effort required to root out bugs in a complex SoC at 16/14nm is enormous. Gianfagna said that some companies are mixing it up, so that short-term revenue fuels the ability to go after more complex tier-one deals.

Changing times
A telling signal of how widespread these changes have become was the subtext at Semicon West this year: “Definitely not business as usual.” But there is an upside here. This kind of change can be good for business, and companies able to take advantage of it expect the trend line will continue to grow up and to the right.

“It’s a more disruptive period than we normally have because there are more things changing,” said Wally Rhines, chairman and CEO of Mentor Graphics. “The good news about disruption is that’s the only way we grow our business. Established design tools have fairly flat markets—simulation, place and route, PCB. These markets don’t grow particularly. What grows are new capabilities. While the PCB design market hasn’t grown, all of the signal integrity products that go along with PCB have grown. And while the simulation market hasn’t grown, the things you add onto simulation to handle new problems have. Emulation has grown significantly. With every generation there’s something new. So with every node—10nm, 7nm, 5nm—there will be new tools because there will be new products. That’s all good.”

How those tools get used is changing, as well. The emphasis over the past 20 years has been on speeding up verification in complex designs. But Frank Schirrmeister, senior group director for product marketing at Cadence, pointed to some subtle changes that are underway even in this sector of EDA. One is a renewed interest in platform-based design. A second is less emphasis on hardware models in favor of hardware-assisted design.

“I’ve been a proponent of models for the majority of my working life, but people still haven’t moved up to the place where I thought they would be by now,” Schirrmeister said. “For the effort associated with building those models, there has to be a return. So for a company like ARM, it’s a no-brainer because they need models and they have the volume to drive them. But with bigger adoption of hardware-assisted technology, you can brute force it in RTL. For architectural analysis, the problem is that you need everything earlier and you need a certain level of accuracy. That makes models less important because you have a version of the real thing, but you can do a lot of assessment based on more accurate measurements because you do have the real thing.”

Creating a standard way for IP to talk to the AMBA ACE protocol based upon the Transaction Level Modeling 2.0 standard from Accellera could indeed be useful in many designs, according to multiple industry sources. TLM 2.0 provides a framework for how IP blocks can communicate, but there is no standard way to implement the protocols. As a result, a fair amount of manual porting and integration is required today.

TLM 2.0 and IP-XACT remain two of the cornerstones of a LEGO-like scheme for assembling IP. Experiences with these standards vary widely. However, there is renewed interest in both of them as time-to-market concerns continue to increase.
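To make the integration gap concrete, the sketch below shows the kind of communication TLM 2.0 does standardize: an initiator and a target exchanging generic-payload transactions over the blocking-transport interface in SystemC. The module names, the address map and the 10ns latency are hypothetical, and a real AMBA ACE connection would still need protocol-specific extensions layered on top, which is exactly the manual porting work described above.

```cpp
// Minimal sketch, assuming the standard Accellera SystemC/TLM 2.0 headers.
// All block names here are hypothetical and for illustration only.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// A toy target IP block exposing a 256-byte memory over TLM 2.0.
struct MemoryBlock : sc_core::sc_module {
    tlm_utils::simple_target_socket<MemoryBlock> socket;
    unsigned char storage[256] = {};

    SC_CTOR(MemoryBlock) : socket("socket") {
        socket.register_b_transport(this, &MemoryBlock::b_transport);
    }

    // The standard TLM 2.0 blocking-transport hook.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        if (addr >= sizeof(storage)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            storage[addr] = *trans.get_data_ptr();
        else if (trans.get_command() == tlm::TLM_READ_COMMAND)
            *trans.get_data_ptr() = storage[addr];
        delay += sc_core::sc_time(10, sc_core::SC_NS); // modeled access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// A toy initiator that writes one byte and reads it back.
struct Cpu : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        unsigned char data = 42;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(&data);
        trans.set_data_length(1);
        trans.set_streaming_width(1);
        socket->b_transport(trans, delay);  // blocking write

        unsigned char readback = 0;
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_data_ptr(&readback);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
        socket->b_transport(trans, delay);  // blocking read of the same byte
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    MemoryBlock mem("mem");
    cpu.socket.bind(mem.socket);  // the standardized part: socket binding
    sc_core::sc_start();
    return 0;
}
```

Because only the transport mechanism is standardized, two vendors can both be fully TLM 2.0-compliant and still require glue logic before their blocks agree on how protocol details ride inside the payload.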

Pre-fab bridges
One of the reasons why large commercial IP sales are up these days is that it’s quicker to use third-party IP blocks than to try to develop everything in-house. But it also has established another market for network-on-chip IP as a way of connecting everything together more quickly, and it is a major driver for the next piece of the puzzle—heterogeneous cache coherency architectures.

Both Arteris and NetSpeed Systems have approached this problem from different angles, but the over-arching goal is the same—to allow multiple processing elements to work together more efficiently while also providing enough flexibility for making changes and optimizing performance.

“You have to understand the customer application, where the user profiles are, the bandwidth, and the scalability and partitioning to find a solution,” said Jean-Philippe Loison, senior corporate application architect at Arteris, during a panel discussion at the Linley Mobile & Wearables Conference this week. “The challenges are scalability, heterogeneity and how many IPs you need to integrate that are not cache coherent. You also have to look at how to optimize power. Managing copies in hardware requires a lot of communication.”
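Loison’s point about managing copies in hardware can be illustrated with a toy directory-based coherence model: every write to a shared cache line forces invalidation messages to all other sharers. This is a deliberately simplified sketch, not any vendor’s protocol, and the message counter stands in for real NoC traffic.

```cpp
#include <cstdint>
#include <iostream>
#include <set>
#include <unordered_map>

// Toy coherence directory: tracks which agents hold each cache line and
// counts the messages needed to keep the copies consistent.
struct Directory {
    std::unordered_map<uint64_t, std::set<int>> sharers; // line -> agent IDs
    long messages = 0;                                   // coherence traffic

    void read(int agent, uint64_t line) {
        sharers[line].insert(agent);  // agent gains a shared copy
        ++messages;                   // data response to the reader
    }

    void write(int agent, uint64_t line) {
        for (int other : sharers[line])
            if (other != agent)
                ++messages;           // one invalidation per other sharer
        sharers[line] = {agent};      // writer becomes the sole owner
        ++messages;                   // ownership grant
    }
};

int main() {
    Directory dir;
    for (int agent = 0; agent < 4; ++agent)
        dir.read(agent, 0x1000);      // four IPs share one line
    dir.write(0, 0x1000);             // one write invalidates the other three
    std::cout << "coherence messages: " << dir.messages << "\n"; // prints 8
}
```

With N sharers, a single write costs roughly N coherence messages, so the partitioning decision, namely which IPs participate in hardware coherency at all, directly bounds both the traffic and the power Loison describes.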

Joe Rowlands, NetSpeed’s chief architect, added another problem to the list during a panel discussion at the conference: “One of the biggest concerns we have is that the divide-and-conquer approach used in many designs makes it difficult to detect deadlocks. You need to analyze all of the dependencies in a system. The risk is when you integrate two different devices.”
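The dependency analysis Rowlands describes boils down to cycle detection: model each channel or resource as a node, each waits-on relation as a directed edge, and flag any cycle as a potential deadlock. The sketch below is a bare-bones illustration with hypothetical names; production NoC tools additionally track message classes, virtual channels and protocol state.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

enum class Mark { Unvisited, InProgress, Done };

// Depth-first search: returns true if a cycle (potential deadlock)
// is reachable from `node`.
bool hasCycle(const std::string& node,
              const std::unordered_map<std::string, std::vector<std::string>>& waitsOn,
              std::unordered_map<std::string, Mark>& marks) {
    marks[node] = Mark::InProgress;
    if (auto it = waitsOn.find(node); it != waitsOn.end()) {
        for (const std::string& next : it->second) {
            if (marks[next] == Mark::InProgress)  // back edge: cycle found
                return true;
            if (marks[next] == Mark::Unvisited && hasCycle(next, waitsOn, marks))
                return true;
        }
    }
    marks[node] = Mark::Done;
    return false;
}

// Checks the whole "waits-on" graph of the integrated system.
bool potentialDeadlock(
        const std::unordered_map<std::string, std::vector<std::string>>& waitsOn) {
    std::unordered_map<std::string, Mark> marks;
    for (const auto& entry : waitsOn)
        if (marks[entry.first] == Mark::Unvisited &&
            hasCycle(entry.first, waitsOn, marks))
            return true;
    return false;
}

int main() {
    // Two integrated blocks, each waiting on the other's response channel.
    std::unordered_map<std::string, std::vector<std::string>> waitsOn = {
        {"A.req", {"B.rsp"}}, {"B.rsp", {"B.req"}},
        {"B.req", {"A.rsp"}}, {"A.rsp", {"A.req"}},
    };
    std::cout << (potentialDeadlock(waitsOn) ? "potential deadlock\n" : "clean\n");
}
```

The integration risk Rowlands mentions shows up naturally in this model: two independently verified blocks, each cycle-free on its own, can form a cycle the moment their request and response channels are wired together.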

The goal in all of these efforts is to customize quickly and efficiently by controlling on- and off-chip resources with much more granularity than in the past.

The unknowns
That becomes particularly important in advanced packaging, which is another piece of the puzzle for speeding up design. While most of the recent focus in 2.5D and fan-out packaging has been on performance, the initial driver for that technology—and one that is being talked about again in more industry conferences—is the ability to mix and match IP developed at different process nodes using a standardized interconnect between the die.

While that opens the door to new problems, particularly on the manufacturing side, it greatly simplifies the challenges for chips developed at older nodes. And that, in turn, can speed up the design process.

“There is huge momentum for 2.5D with HBM and increased bandwidth,” said Venky Sundaram, principal research engineer at the Georgia Institute of Technology’s Packaging Research Center. “This is not just about density and performance, though. It’s also about time to market and cost. With IoT and automotive applications, you need very fast time to market.”

The good news is that planar nodes are extremely well understood.

“If you take existing tools and apply them to 40nm, the problem is a piece of cake,” said Aart de Geus, chairman and co-CEO of Synopsys. “By definition, the problems are more difficult because they continue to increase in scale complexity and systemic complexity. On the other hand, the people working on these problems are actually a lot more experienced and have learned how to do multi-dimensional optimization for now 40 years. This is an incredible skill. Our industry delivers among the most sophisticated software ever built. We have 400 million lines of code, not a single bug in it, as far as I understand. This is a big endeavor, so managing this on an ongoing basis is very exciting because you have to periodically redo things to simplify them again.”

The bad news is that economics and logistics are not.

“There is a need for specialized IP and custom silicon, and a need to integrate at a higher level,” said eSilicon’s Gianfagna. “Deep learning and machine learning are coming together as an interesting segment. That’s a fundamentally different business than an ASIC, though. You can capitalize on the similarities, but you also have to let it be different. The Maker Faire companies were using FPGAs and Raspberry Pi in the past as standard platforms, but now we’re getting inquiries from companies that may want to look at custom silicon to integrate. That could mean huge volumes of smaller things, and it creates an interesting challenge of scale. Conventional wisdom is that you use a compute farm and a set of EDA tools, but with these designs the expansion and contraction can be astronomical. We may have to create a different kind of EDA license to deal with huge bubbles.”

Even the understanding of where the value will be in the future is not obvious at this point. “There is a shift to everything becoming more data-centric,” said Georgia Tech’s Sundaram. “There is value creation from moving data, not just the computing. This is moving data between devices and systems.”



