Pressure Builds To Revamp The Design Flow

Some contend it needs to be rebuilt from scratch, others believe it can be tweaked and fixed.


Without EDA there would be no Moore’s Law as we know it today, and without Moore’s Law there would be a much more limited need for EDA. But after more than three decades of developing design flows packed with sophisticated tools to automate semiconductor design through verification, and thereby enable the feature shrinks that are the basis of Moore’s Law, the existing approach isn’t working so well anymore. It takes too much time to get from design through tapeout on complex SoCs, and after 28nm the economic benefits of scaling disappear with the introduction of multi-patterning and 3D transistor structures.

Many companies are asking, “What’s next?” There is no simple answer to that question, however. Change is clearly necessary, but how fast and how deep? Abrupt change is risky. There is a lot riding on existing methodologies—$340 billion per year just for semiconductors, and trillions of dollars more in electronics that depend on semiconductors, which includes everything from toys to industrial machinery to data centers for banks to missile guidance systems. So whether the solution is a series of incremental “shifts left” or a total overhaul remains to be seen. There are proponents of both, often within the same company.

“The methodology we have now is outdated,” said Karim Arabi, vice president of engineering at Qualcomm. “It’s like an old house that we keep renovating. There are lots of things that are in there for good reason, but the industry is locked into certain steps to getting things done.”

Arabi said the shift may be as fundamental as moving away from standard cells and changing how memory is used. “Doing this in bits and pieces may be good because it’s safer, but at some point we need to revamp the whole thing. EDA tools need to be more aware of what’s going on in the design. If you look at a CPU, it’s a huge part of a design. So why isn’t there a CPU design tool that knows how to design a CPU and all the compilers and the best implementation?”

The great shift left
The need for change is echoed by EDA companies. The whole shift left concept, which began as a way of shortening the amount of time spent on verification—an estimated 60% to 75% of total design time—has now been extended to include everything across the flow. Everything needs to be integrated earlier and debugged earlier, and it needs to be checked and re-checked to improve reliability.

“There are two perspectives that need to be considered here,” said Wilbur Luo, senior group director for R&D at Cadence. “One is to shorten the design cycle for electrical, and the other is to shorten it for physical. On the physical design side, you need to do design rule checks while doing the design to catch problems earlier. On the electrical side, you need to do in-design electrical analysis while you’re working on the design. Electromigration is a big problem in automotive and high-reliability applications, for example. You also need to make sure these thin wires can handle the current.”
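Luo’s point about thin wires handling current comes down to current density. As a rough illustration of the kind of in-design screen he describes, the sketch below flags nets whose average current density exceeds an electromigration limit. The wire dimensions, currents, and the limit itself are invented for illustration and do not come from any real foundry rule deck.

```python
# Illustrative sketch only: a toy in-design electromigration screen.
# Wire dimensions, currents, and the current-density limit are invented
# placeholders, not values from any real foundry rule deck.

WIDTH_UM = {"net_a": 0.05, "net_b": 0.10}          # hypothetical wire widths (um)
THICKNESS_UM = 0.04                                # hypothetical metal thickness (um)
AVG_CURRENT_MA = {"net_a": 0.004, "net_b": 0.002}  # hypothetical average currents (mA)
J_LIMIT_MA_PER_UM2 = 1.0                           # hypothetical EM limit (mA/um^2)


def em_violations(widths, currents, thickness, j_limit):
    """Return (net, current density) pairs that exceed the assumed EM limit."""
    flagged = []
    for net, width in widths.items():
        area = width * thickness        # cross-sectional area in um^2
        j = currents[net] / area        # average current density in mA/um^2
        if j > j_limit:
            flagged.append((net, round(j, 3)))
    return flagged


print(em_violations(WIDTH_UM, AVG_CURRENT_MA, THICKNESS_UM, J_LIMIT_MA_PER_UM2))
# -> [('net_a', 2.0)]
```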

He noted that an estimated 50% of chipmakers have started initiatives to decrease design cycle time, and said the typical goal for those companies is a 30% reduction in overall time through the design flow.

There is no direct correlation between development time and the total cost of developing a chip at advanced nodes. However, time to market, not NRE, is the key metric for companies working at the most advanced process geometries. While chipmakers are always mindful of cost, the bigger concern with high-volume chips is getting them out the door on time with sufficient reliability. The typical verification scenario these days is to verify everything possible within a defined market window, then fix whatever is left in the field with software updates. But if more can be verified prior to tapeout, fewer software patches are required, total cost of ownership is lower, and customers are more satisfied.

Analog is one of the big bottlenecks for improving time to market. While digital design teams have always considered it part engineering, part black art, there was very little interaction between the two groups prior to 40nm. From that point on, synchronizing the schedules of analog and digital teams has proven to be a major and worsening headache, and there is little in existing flows to improve on that.

“Of all the places that need to shift left, it’s analog,” said Fred Sendig, a Synopsys fellow. “The challenge changed at advanced nodes for analog. They’ve taken a 3X productivity hit, and some have taken a 10X productivity hit. That’s compounded by several things: the physical design is taking longer, the impact of physics on circuit design is taking longer, and you can no longer just do an ECO. In some cases you have to start all over again. But analog is taking a double hit because pre-estimation is not adequate anymore and the physical cycle is longer.”

Sendig noted that bringing this back in line with analog development times at older nodes would be a good first step. All of the big EDA vendors are working on this problem.

Software isn’t far behind, either. At 65nm, the bulk of the software being developed by chipmakers was embedded code. At 28nm and beyond, they are developing much more embedded code and firmware, and they are providing hooks for higher-level software to make sure it can take advantage of all the hardware features.

“People are doing software debug earlier than before, in part because now they have tools to do it,” said Jean-Marie Brunet, director of marketing for the emulation division of Mentor Graphics. “There’s a fundamental problem and we are seeing a real shift in methodology. The software guys who are coming to us are very different people than in the past.”

Brunet said there are two classes of customers—those who say it makes sense to make changes but who aren’t doing anything, and those who are looking for solutions and seeking information about what can and should be changed. “This is a methodology and architecture shift because underneath it all the chip is no longer being measured against the spec, it’s being benchmarked. For that you need to run different OSes, firmware and applications.”

The other shift left
There are things happening well outside the flow that could impact time to market, as well. eSilicon has been building an online quoting system that can quickly identify which IP blocks work with which software, processors, and foundry processes.

“What we’re developing is the ability to optimize a design to improve predictability and decrease risk,” said Mike Gianfagna, vice president of marketing at eSilicon. “The only way to make this work is to use big data, machine learning, and analytics. You can’t just bet on the smartest guys in the room anymore. At advanced nodes, there are bigger bets and fewer facts. And at the other end of the spectrum with the IoT, you need really fast turnaround and the lowest power possible. These nodes are really well understood, which means that if you can hit your target the first time you don’t need to do two or three iterations.”

In a similar vein, Arteris is pushing companies to consider the on-chip network much further up in the design process—as far left as the PowerPoint stage. “Timing closure increasingly is related to the interconnect, not the IP blocks,” said Kurt Shuler, the company’s vice president of marketing. “You need to do analysis up front, and the only way to do that effectively is to visualize the system and automate timing closure. We’re finding that at the end of the design cycle companies aren’t meeting timing because of the long path on the interconnect. If you can parallelize everything, you can save three to six months on some designs.”

One big advantage there is that when changes are made to designs, it takes far less time to figure out if they’re manufacturable. “Basically you’re accelerating the physical design by feeding it better information,” Shuler said.
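Shuler’s observation that long interconnect paths dominate timing closure can be made concrete with a first-order Elmore (RC) estimate: an unbuffered wire’s delay grows roughly with the square of its length, which is why cross-chip routes blow up late in the flow. The per-micron resistance and capacitance values below are invented for illustration; real flows work from extracted parasitics.

```python
# Minimal sketch: first-order Elmore delay of an unbuffered distributed RC wire.
# The per-micron R and C values are invented for illustration only.

R_PER_UM = 2.0      # ohms per micron (hypothetical)
C_PER_UM = 0.2e-15  # farads per micron (hypothetical)


def elmore_wire_delay_ps(length_um):
    """0.5 * R_total * C_total delay of a distributed RC wire, in picoseconds."""
    r_total = R_PER_UM * length_um
    c_total = C_PER_UM * length_um
    return 0.5 * r_total * c_total * 1e12


for length in (100, 500, 2000):  # short block-level route vs. long cross-chip route
    print(f"{length:>5} um -> {elmore_wire_delay_ps(length):8.2f} ps")
# Delay grows quadratically with length: 2 ps, 50 ps, 800 ps.
```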

A totally different approach, which is being driven by Sonics and Extreme DA, is to apply Agile-style methodology to hardware design. In essence, the approach removes development from any fixed flow, leveraging creative problem solving rather than standardized solutions.

“The real advantage is that you can start projects sooner and adapt them more easily, so that when you get ECOs you have a process for dealing with them,” said Randy Smith, vice president of marketing at Sonics. “This is not common today in system-level flow. It makes deliverables more flexible. With Agile approaches, all the hooks are in the physical design and in the system side, so you can basically take some steps with an incomplete design such as QoS—who talks to who with what QoS.” (A meeting has been scheduled at DAC to discuss this subject further.)
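Smith’s “who talks to who with what QoS” can be written down long before the design is complete. Below is a minimal, hypothetical sketch of that idea: a table of initiator-to-target routes and their QoS requirements, plus a check for routes the architecture still needs but the table does not yet cover. All block names, priorities, and latency budgets are placeholders, not anything from Sonics’ tools.

```python
# Minimal sketch: an initiator-to-target QoS map that can be captured and
# sanity-checked while the design is still incomplete. Every block name,
# priority, and latency budget here is a hypothetical placeholder.

QOS_MAP = {
    ("cpu", "dram"): {"priority": "high",   "max_latency_ns": 100},
    ("gpu", "dram"): {"priority": "medium", "max_latency_ns": 400},
    ("dma", "sram"): {"priority": "low",    "max_latency_ns": 1000},
}


def undefined_routes(required_routes, qos_map):
    """Return routes the architecture requires but the QoS map does not yet cover."""
    return [route for route in required_routes if route not in qos_map]


# Routes the (still incomplete) architecture says must exist:
required = [("cpu", "dram"), ("gpu", "dram"), ("isp", "dram")]
print(undefined_routes(required, QOS_MAP))   # -> [('isp', 'dram')]
```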

The shift up and out
Finally, there are new architectural approaches that ultimately could greatly speed time to market—fanouts, 2.5D, multi-chip modules and system-in-package—all rather fluid terms with significant overlap. But the goal of all these approaches is the same—hook together multiple chips rather than build everything on the same die.

Qualcomm’s Arabi said full 3D IC integration is still about four years away, but at least one of the big problems has been solved: metal bonds will be used to join the die, and those bonds will act as vias. “Stacking and packaging is very important because 80% of performance degradation will be in the interconnect. But full 3D is still three to four years away before it is ready.”

He said what will be needed are tweaks to tools so they can place and route logic-on-logic across the die being bonded together, as well as a clear picture of how these approaches fit into the flow. He’s not alone here, but commitment levels from EDA vendors seem to vary greatly from one year to the next.

“The debate about mixed signal on one substrate or multiple substrates has been going on for a long time,” said Synopsys’ Sendig. “There are places where it will be expedient and going 3D will be perfectly fine. But the package can be half the cost of a part. You can get it to market quickly and for some things it will make a lot of sense, but we will continue to see mixed signal design at lower nodes, as well.”



1 comment

Kev says:

“Without EDA there would be no Moore’s Law as we know it today..”

Got to stop you right there, the physicists and chemists ran us down to 28nm, the software and EDA guys have not done anything terribly significant since the 80s. As far as I can tell Synopsys’s rewrite of IC compiler is just fixing the tool they wrote ~ 1990 to work on 2010’s hardware – I didn’t see any announcement of new methodology. They’re pretty clueless about mixed-signal too.

Revamp? – yes, about time, but don’t hold your breath.
