Keeping Up With Complexity

The gap between abstraction and granularity is impossible to close if you can’t do more exploration throughout the flow.

By Ed Sperling
There are two schools of thought in designing complex SoCs. One says that increasing complexity requires a higher level of abstraction. The other says providing enough detail to get the design right is the only effective way to do it.

There are staunch proponents of both approaches, but what has been missing is a bridge between the higher level of abstraction and the more laborious, and much slower, gate-level details. Tools that allow more exploration on both sides are beginning to emerge, along with a recognition that they are necessary to complete these designs.

These bridging approaches, often described as exploration or path finding, rely heavily on what-if questions. What if a certain piece of IP were placed next to another piece of IP, for example? Would a different IP block work better? What if the frequency of a core were changed, or a different I/O used? And what happens if the voltage in one area is lowered or raised?
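
To make this concrete, a design-space sweep of this kind can be sketched in a few lines of code. The snippet below is a minimal illustration rather than a real tool flow: the IP choices, frequency and voltage points, power budget, and the first-order dynamic power model (P = C·V²·f) are all invented for the example.

```python
from itertools import product

# Hypothetical candidate values for three what-if dimensions.
# None of these numbers come from a real IP library; they are placeholders.
ip_blocks = {"dsp_a": 0.9, "dsp_b": 1.2}   # block -> relative performance weight
frequencies_mhz = [400, 600, 800]
voltages_v = [0.81, 0.90, 0.99]

def dynamic_power_mw(freq_mhz, vdd, c_eff_nf=1.5):
    """First-order dynamic power: P = C * V^2 * f (nF * V^2 * MHz -> mW)."""
    return c_eff_nf * vdd ** 2 * freq_mhz

candidates = []
for (ip, weight), freq, vdd in product(ip_blocks.items(), frequencies_mhz, voltages_v):
    perf = weight * freq                    # toy performance metric
    power = dynamic_power_mw(freq, vdd)     # toy power metric
    candidates.append((ip, freq, vdd, perf, power))

# Keep configurations under a hypothetical 1W budget, best performance first.
feasible = sorted((c for c in candidates if c[4] <= 1000.0),
                  key=lambda c: c[3], reverse=True)
for ip, freq, vdd, perf, power in feasible[:3]:
    print(f"{ip} @ {freq} MHz, {vdd} V -> perf {perf:.0f}, power {power:.0f} mW")
```

Even this toy version hints at why exploration becomes so demanding: three dimensions already produce 18 candidates, and every added dimension, such as core count, memory configuration, or I/O choice, multiplies that number.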

These kinds of tradeoffs are common, but increasingly a decision made at one step can ripple through other pieces of the design flow. At 45nm and beyond there are many of these tradeoffs, and the number increases dramatically at 28nm and in 2.5D and 3D structures.

“There are some path-finding methodologies available today,” said Riko Radojcic, director of engineering at Qualcomm. “The part that is missing is incorporation of spatial awareness into these methods and tools, which is why it is especially important for 3D exploration. I started feeling the lack of this capability when we first looked at the tradeoffs between regular ‘complex’ design rules and ‘gridded’ design rules, and when we were looking at the tradeoffs associated with aggregating memories into a smaller set of larger instances and equipping them with redundancy versus having many small instances all over the place. In both cases we needed spatial awareness that was not easy to import up to the SoC level.”

Help is on the way
All of the major EDA vendors see opportunity in path finding at the architectural level, early in the design process. For one thing, there is already market demand for these kinds of tools among their largest customers. For another, these capabilities can be added into existing tools, so even though the development work is difficult, it’s not the same as developing and marketing entirely new products.

“The conflict on the design side is between rising design complexity and schedules that remain the same,” said Gal Hasson, senior director of marketing for RTL Synthesis at Synopsys. “To make matters worse, many designs are done by geographically dispersed teams. Once the RTL is complete the constraints might be way off. So you clean up the constraints, and based on this data you check whether the design is likely to meet the requirements for area and clock speed. Many times the answer is ‘no.’ This is the big challenge we’re wrestling with, and it’s causing designers to re-think the flow. You need early exploration that enables detailed analysis of both the RTL and the constraints.”
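
The early feasibility check Hasson describes can be pictured as a simple comparison of rough estimates against the target constraints. The sketch below is purely illustrative; the block names, area figures, and timing numbers are invented, and in a real flow they would come from early synthesis estimates rather than a hand-written table.

```python
# Hypothetical early feasibility check: compare rough per-block estimates
# against the targets before committing to full implementation.
blocks = {
    "cpu_cluster": {"area_mm2": 2.1, "critical_path_ns": 0.95},
    "gpu":         {"area_mm2": 3.4, "critical_path_ns": 1.10},
    "memory_ctrl": {"area_mm2": 0.8, "critical_path_ns": 0.70},
}
area_budget_mm2 = 6.0
target_clock_ns = 1.0   # 1GHz target

total_area = sum(b["area_mm2"] for b in blocks.values())
name, worst = max(blocks.items(), key=lambda kv: kv[1]["critical_path_ns"])

print(f"area: {total_area:.1f} of {area_budget_mm2} mm2 "
      f"({'OK' if total_area <= area_budget_mm2 else 'OVER BUDGET'})")
if worst["critical_path_ns"] > target_clock_ns:
    slack = target_clock_ns - worst["critical_path_ns"]
    print(f"timing: {name} misses the {target_clock_ns} ns target "
          f"(slack {slack:.2f} ns)")
```

With these made-up numbers both checks fail, which is exactly the point of running them early: the sooner the answer comes back ‘no,’ the cheaper it is to change course.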

The addition of software development only complicates things further.

“It used to be one architecture on a spreadsheet,” said Shabtay Matalon, Mentor’s ESL market development manager. “Now there’s an architecture team, and companies are bringing people from different disciplines into it.”

Matalon said it’s still possible to build everything using SystemC and TLM 2.0 models, but those models require additional tradeoffs to be analyzed. Rather than starting with functionality and then moving on to timing and performance, power has to be analyzed from the outset. And then it all has to be tweaked and reanalyzed as part of the exploration process.
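
That shift can be pictured with a small transaction-level sketch. The snippet below is plain Python standing in for a SystemC/TLM 2.0 model, and every latency and energy figure in it is invented; what it illustrates is simply that each transaction carries both a timing and an energy annotation, so power estimates fall out of the very first functional simulation instead of being bolted on later.

```python
class Memory:
    """Toy memory target; per-transaction costs are invented placeholders."""
    READ_LATENCY_NS, READ_ENERGY_PJ = 12.0, 35.0
    WRITE_LATENCY_NS, WRITE_ENERGY_PJ = 15.0, 50.0

    def __init__(self):
        self.data = {}

    def transport(self, cmd, addr, value=None):
        """Return (result, latency_ns, energy_pj) for one transaction."""
        if cmd == "write":
            self.data[addr] = value
            return None, self.WRITE_LATENCY_NS, self.WRITE_ENERGY_PJ
        return self.data.get(addr, 0), self.READ_LATENCY_NS, self.READ_ENERGY_PJ

class Initiator:
    """Toy initiator that accumulates time and energy in the same pass."""
    def __init__(self, target):
        self.target = target
        self.time_ns = 0.0
        self.energy_pj = 0.0

    def access(self, cmd, addr, value=None):
        result, latency, energy = self.target.transport(cmd, addr, value)
        self.time_ns += latency     # functional plus timing view
        self.energy_pj += energy    # power tracked from the outset
        return result

cpu = Initiator(Memory())
for addr in range(8):
    cpu.access("write", addr, addr * 2)
total = sum(cpu.access("read", addr) for addr in range(8))
avg_power_mw = cpu.energy_pj / cpu.time_ns   # pJ/ns is numerically mW
print(f"sum={total}, time={cpu.time_ns} ns, energy={cpu.energy_pj} pJ, "
      f"average power={avg_power_mw:.1f} mW")
```

Changing the memory configuration or the access pattern immediately changes all three numbers, which is the kind of tweak-and-reanalyze loop Matalon describes.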

“As you go through the refinement process in a design you have to recalibrate your initial assumptions,” Matalon said. “The number of ‘what ifs’ and the dimensions of exploration are exploding as we move to multicore. And then you have to deal with another dimension of how you achieve performance and power.”

Where bridges are needed
One of the most important areas where this kind of bridging is necessary is between hardware and software, which may also be one of the hardest to fix. While tools are available for prototyping software, real-time links between hardware and software design changes are sparse. From a chipmaker’s perspective, though, the two are part of the same thing.

“We don’t see hardware and software as separate worlds,” said Bill Bench, senior director of the WLAN Media Business unit at Broadcom, which is designing wireless video chips. “You can’t do hardware without the software.”

Bench said Broadcom also finds it important for engineers to have a deep understanding of cores, the electrical characteristics of a particular process node, and the availability of the right libraries. “What’s most important is that engineers are paying attention to these problems. We expect that EDA tools will mature over time,” he said.

Another trouble spot is the mixed-signal world, where the divide between engineering teams can lead to power problems and proximity effects that impact signal integrity.

“We need to break down the silos,” said John Stabenow, group director for custom/analog design management at Cadence. “At 28nm the SoC is still distinctly divided into digital and analog. We need to bring more and more intelligence into layout at the designer’s desk and be able to abstract an analog block. The challenge is that the schematic has to wait for the layout engineer to complete the job, but there are layout-dependent effects that can impact circuit performance.”

Starting at the beginning
A common theme running through current thinking in EDA and system-level design is that complexity is best addressed at the architectural level, very early in the flow, rather than later in the design.

“There is very limited opportunity to go back and change a decision,” said Ravi Varadarajan, an Atrenta fellow. “If you change a decision because the chip is not feasible, it’s a very long and painful process. The best way to avoid that is to explore more at the front end. And if it’s important in a 2D chip, it’s a necessity at 3D.”

This is easier said than done at advanced geometries, where process rules are constantly changing, process variation is higher, and even some of the device structures are likely to change. At 20nm, some double patterning almost certainly will be required. At 20nm or 16nm it’s also likely that designs will begin to incorporate new transistor structures such as FinFETs. How those will work with new types of memory and at a variety of voltages is unknown, but it’s almost certain that all of this will require far more exploration than existing designs.

“But even in the early stages the exploration has to involve fundamental techniques that correlate to the back end,” said Varadarajan. “You still need to set up constraints and test against those constraints.”

Conclusion
Simplifying a complex design process will probably never be finished. It will be a long-running effort. There are simply too many variables and tradeoffs, and at advanced nodes and in stacked die those tradeoffs can have much larger effects than at older nodes. Some of those effects are physical, some are electrical, and many are business-related.

But it’s also clear that attention at the front end can prevent many problems at the back end if there is a better understanding of all the factors that go into making a design work. At the very least, awareness of the problem is rising, and some of the tools that have hit the market over the past year reflect that. Still, far more work needs to be done for the industry to continue progressing along the Moore’s Law roadmap and into 2.5D and 3D designs.


