More Choices But Less Design Freedom

There are more variables in SoC designs these days than ever before, and that is imposing new limits on design freedom.


By Ed Sperling

“What if” is an indelible part of the lexicon of every SoC architect and design engineer, from the front end of the design flow all the way to manufacturing. But while the terminology will persist for years to come, the answers, and the value of those answers, are starting to change.

Complexity, cost and the need for better integration have simultaneously increased the number of variables and limited the amount of variation that is acceptable. It also has pushed more responsibility to the front of the design process where these kinds of tradeoffs can be made, while increasing the risk that something can go very wrong downstream.

“Complexity has tremendously affected the what-if scenarios for developing chips,” said Jon McDonald, technical marketing engineer for the design creation business unit at Mentor Graphics. “Historically, hardware development has been fairly straightforward at the high levels. An experienced designer could make an educated decision on what should be done in hardware, how much parallelism was required and what performance was required from each block. The number of choices was relatively small and the tradeoffs between the choices were relatively clear. Today the explosion of gates, performance points, IP and tradeoffs between software and hardware has made it much more difficult to make an educated decision on what should be done in hardware and what performance is required from each of the hardware blocks.”

McDonald said the only way to quantify architectural tradeoffs is to create a model and run a simulation. That’s a big departure from the old approach of creating the architecture and fiddling with the layout after the fact, which some companies continue to do even at 65nm. Even with advances like computational scaling for manufacturing, most foundries still believe new rules will be imposed by advanced processes, starting at 32nm and continuing with each node thereafter. The changes at the architectural level add rules at both ends of the design.

“This what-if analysis on the user’s application scenarios is even more important today than in the past simply because the number of choices has exceeded the ability to understand the interaction of those choices,” said McDonald. “Yes, figuring out the variables and deciding which variable combinations to run the simulations on is more difficult today, but it’s the best option for making the tradeoffs that need to be made.”
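To give a flavor of the kind of what-if sweep McDonald is describing, here is a minimal sketch that enumerates candidate architectures and scores each one with a coarse model instead of guessing the partitioning by intuition. All of the names, performance figures and power numbers below are illustrative assumptions, not real silicon data or any vendor’s tool flow.

```python
# Hypothetical what-if sweep over a few architectural knobs:
# core count, presence of a dedicated accelerator, and bus width.
from itertools import product

CORE_PERF = 1.0          # assumed work units per core per ms
ACCEL_SPEEDUP = 8.0      # assumed speedup for the offloaded kernel
WORKLOAD = 64.0          # assumed total work units per frame
OFFLOAD_FRACTION = 0.5   # assumed share of work the accelerator can absorb

def estimate(cores, use_accel, bus_width):
    """Very coarse latency/power model for one candidate architecture."""
    sw_work = WORKLOAD * (1 - OFFLOAD_FRACTION) if use_accel else WORKLOAD
    hw_work = WORKLOAD * OFFLOAD_FRACTION if use_accel else 0.0
    latency = max(sw_work / (cores * CORE_PERF),
                  hw_work / (CORE_PERF * ACCEL_SPEEDUP))
    latency += 16.0 / bus_width                      # crude interconnect penalty
    power = cores * 50 + (30 if use_accel else 0) + bus_width * 0.5  # mW, made up
    return latency, power

for cores, accel, bus in product([1, 2, 4], [False, True], [32, 64, 128]):
    lat, pwr = estimate(cores, accel, bus)
    print(f"cores={cores} accel={accel!s:5} bus={bus:3d} "
          f"latency={lat:6.2f}ms power={pwr:6.1f}mW")
```

Even a toy sweep like this makes McDonald’s point: the candidate space grows multiplicatively with each new knob, and the interactions between knobs are not obvious until the combinations are actually evaluated.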

More choices but less freedom
At 45nm and beyond, there are far more choices to be made. Smaller geometries mean more real estate, whether that’s taken up by additional CPU cores, more memory or different sizes of buses. In the future, that real estate is also likely to be split into multiple power domains, which so far has largely been a challenge for smart phone chip makers.

“We have a ton more gates available and that gives us a lot more wiggle-room in some ways,” said James Aldis, SoC architect at Texas Instruments. “But I suppose it is true that in other ways we are more restricted. Our power architecture is more complex and the constraints imposed by software are more severe, for example. So working out how to integrate that extra CPU is a more detailed and challenging task than it used to be.”

What that also means is the answer to “What if” has to be more accurate than in the past. Instead of rough estimates of possibilities, the answers need to be tightly defined earlier in the design process.

“There isn’t an explosion in technical capability, but there is an explosion in the complexity of what the tool users need to do,” said Bernard Murphy, chief technology officer at Atrenta. “This is being driven by handset applications where there is incredibly complex optimization of everything from the power domain to retention and clock and voltage states. If you’re using a cell phone and you flip to an MP3, how do you optimize the battery charge life?”

While in the past answers to “What if” often came in the form of estimations, they increasingly are looking like specifications. Murphy noted that the gap is growing between the architectural “What if” and the implementation side. “If you place all your trust at the architectural end vs. implementation, is that overly trusting of the architecture?”

The risks of rigidity
Murphy isn’t alone in asking that question. Frank Schirrmeister, director of product development in Synopsys’ solutions group, said that with each new solution to the complexity problem there are risks.

“You need more rigid data, but you also have to be more careful because the impact you have with your decisions early on is much bigger,” said Schirrmeister. “That makes it riskier. If you make a wrong decision and go down the wrong path, it’s much harder to reverse that.”

At least part of that has to do with the goals of a chip developer, which is yet another facet of what’s changed in the “What if” approach.

“You have to be very precise about the orders of importance of the properties you’re looking at,” Schirrmeister said. “Let’s assume you’re interested in power, time to market and cost. If you’re not clear about what’s most important, then you’re potentially making unwise decisions. If it’s time to market, people will always go toward a solution with more cores and more flexibility. You put more flexibility into the software. If cost is your biggest concern, you may have to put some effort into accelerators to get everything into a smaller footprint, but you may have to sacrifice on time to market. If power is your most important element, you will probably go away from too many flexible processors and do more things in a dedicated fashion. That means you have to create more.”
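Schirrmeister’s point is easy to see in miniature. The sketch below, with entirely made-up scores and hypothetical option names, shows how the “best” architecture flips depending on which property is ranked first.

```python
# Hypothetical illustration: the preferred architecture depends on
# which property (time to market, cost, power) is prioritized.
options = {
    "many flexible cores":     {"time_to_market": 9, "cost": 4, "power": 3},
    "dedicated accelerators":  {"time_to_market": 4, "cost": 8, "power": 9},
}

def pick(priority):
    """Return the option that scores highest on the chosen priority."""
    return max(options, key=lambda name: options[name][priority])

for priority in ("time_to_market", "cost", "power"):
    print(f"optimize for {priority:14s} -> {pick(priority)}")
```

If the ordering of priorities is left vague, the same data can justify opposite architectures, which is exactly the kind of unwise decision Schirrmeister warns about.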


