What’s Ahead For System-Level Design

Propelled by the need to estimate at the architectural level, plan for software earlier, and deal with more complexity in SoCs, the industry is focusing on a number of technologies to improve design at the system level.

By Ann Steffora Mutschler
Architecting an SoC today is incredibly difficult. Add in the sheer number of available transistors, the manufacturing effects of smaller process nodes, and the IP and software that must be integrated, and the challenges just keep mounting. The market segment the SoC is being designed for has a huge impact, as well.

“It is impossible to overestimate the power behind the move to mobile,” said John Chilton, senior VP of marketing and strategic development at Synopsys. “We are constantly surprised by how quickly that’s going. We all know mobile is important, etc., etc., but it just gets more important.”

An interesting dynamic that occurred during 2012 was centered on mobile advertising.

“Not only is there a lot of money being made in mobile on devices and apps, but remember earlier in the year there was a big controversy around mobile,” Chilton said. “Facebook was having problems because people couldn’t monetize advertising on mobile. Now people have figured that out and there is a huge amount of money coming into the space through advertising. So it’s just more and more money being poured in there. That’s really important because our whole value chain depends on mobile coming on either as an incremental stream to PCs, or displacing PCs to some extent. It’s very important to us that people continue to buy innovative, high-margin mobile devices and turn them over pretty quickly.”

Connected to this need to bring advanced mobile devices to market as quickly as possible, the design flow has changed more in the last two years than in the prior 15, he believes. “As engineers, there’s a natural bent toward conservatism: ‘It worked last time, so let’s do that again. I’m only going to do something different if I really have to.’ But everyone increasingly realizes that they have to do something different.” The design flow shifts were marked by strong adoption of architectural exploration tools, virtual prototyping and other technologies, he said.

From a high level, 2012 was a transition year for the semiconductor industry, with the move from 40nm to 28nm. “A lot more designs started on 28nm, and the 28nm collateral developed by the IP industry became available in a user-friendly manner,” observed Taher Madraswala, vice president of engineering at Open-Silicon.

“The other big change I have seen is the complexity of ASICs, which isn’t really a trend. Associated with that, as the dies become more complex we need more processing power and more optimized versions of tools to allow us to finish the design task within a reasonable time. Customers still expect us to finish a design on time because consumers demand it. If the chip complexity doubles, we don’t have the luxury of doubling our schedule, which means that even with the complexity doubling, we have to find ways to cut our design cycles in half to keep pace with the consumer cycle. The only way to make that happen is by improving our analysis times and runtimes. We are also running things in parallel,” Madraswala said.

Moving to the architectural planning space, Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, noted an interesting counter-trend. “A lot of decisions that we thought in the past needed to be made using system-level tools—the architecture decisions—are being pushed downstream. If you look at the architecture analysis tools [in the market], it’s not straightforward to make those decisions without actually going to the RTL. Think about a complex interconnect. We are seeing people making those decisions now at the RTL level—generating the RTL and then making the decisions there. If you look at something like a complex ARM interconnect, you have a lot of configuration options. Figuring out the right interconnect becomes so complex that you cannot do it with abstraction anymore. You really need to generate the RTL and do it at the RTL level.”
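
To get a rough sense of the scale Schirrmeister is describing, consider a back-of-the-envelope sketch. The parameters and counts below are invented for illustration, not drawn from any actual interconnect IP, but they show how a handful of independent per-port and fabric options multiplies into a design space no abstract model can sweep exhaustively.

```python
# Back-of-the-envelope sketch of why interconnect configuration resists
# abstraction: independent per-port and fabric options multiply quickly.
# All parameter names and counts are invented for illustration only.
from math import prod

ports = 16
per_port_options = {
    "data_width": 3,        # e.g. 64/128/256 bits
    "outstanding_txns": 4,  # allowed outstanding transactions
    "qos_levels": 4,        # quality-of-service settings
}
fabric_options = {
    "arbitration": 3,       # arbitration schemes
    "buffering": 5,         # buffer depth choices
}

per_port = prod(per_port_options.values())  # 48 combinations per port
total = per_port ** ports * prod(fabric_options.values())
print(f"{total:.3e} distinct configurations")  # astronomically many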

Jack Browne, vice president of marketing at Sonics, observed a similar, emerging problem with the need for time-domain accuracy, for memory system efficiency and performance, for the throughput necessary for the required user experience, and for power estimates to ensure meeting power specifications, e.g., battery life, in different usage scenarios. One approach is to divide and conquer with subsystems, where the subsystem addresses the detailed functionality and then abstracts the complexity for top-level integration. This provides better performance models, but only works to a point. An SoC, represented by a collection of subsystems, has points of contention where traffic is shared—such as the memory system and the on-chip communications network. “We do not see an alternative to cycle-accurate models here as the guarantee of end-user experience performance and power,” he explained.
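
Browne’s point about contention can be illustrated with a minimal cycle-by-cycle sketch, where all subsystem names and numbers are hypothetical: two traffic streams that each look fine in isolation interfere at a shared memory port, and only a model that resolves individual cycles sees the resulting worst-case latency.

```python
# Minimal cycle-by-cycle sketch of two subsystems sharing one memory port.
# All names and numbers are illustrative, not from any vendor tool.
from collections import deque

CYCLES = 1000
SERVICE_CYCLES = 4  # cycles the memory port is busy per request

# Each subsystem issues a request every `period` cycles.
subsystems = {"cpu": 5, "gpu": 7}

queue = deque()            # pending (issue_cycle, owner) requests
busy_until = 0             # cycle when the memory port frees up
worst = {name: 0 for name in subsystems}

for cycle in range(CYCLES):
    # Subsystems issue requests on their own schedules.
    for name, period in subsystems.items():
        if cycle % period == 0:
            queue.append((cycle, name))
    # The memory port serves one request at a time.
    if cycle >= busy_until and queue:
        issued, owner = queue.popleft()
        busy_until = cycle + SERVICE_CYCLES
        latency = (cycle - issued) + SERVICE_CYCLES
        worst[owner] = max(worst[owner], latency)

# In isolation each stream would always see SERVICE_CYCLES of latency;
# contention at the shared port makes the worst case visibly larger.
print("worst-case latency per subsystem:", worst)
```

An averaged, per-subsystem performance model would report the standalone service latency for both streams; only the cycle-resolved view exposes the queuing at the shared port that determines whether the user-experience and battery-life targets actually hold.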

This trend also is driving a lot of cycles on emulation boxes, because the accuracy needed can’t be gained at too high a level of abstraction, so many more cycles need to be run. Schirrmeister said this is especially true for interconnects because of the complexity and configuration, and the need to be able to quickly regenerate the environment, along with the verification environment, and run it fast. “To make architectural decisions requires a fair amount of detail. You have to build a fairly detailed model, and that decision making has become significantly more complex.”

Integration continues to be quite a challenge, he continued. “System-level design is figuring out how different pre-defined, already-designed components fit together, and how new components you design fit in with them. The whole integration piece is becoming more and more important and cannot be overlooked. That’s where standards like IP-XACT, and exchanging these pieces of information from the system level down, become increasingly important.”
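
As a rough illustration of what IP-XACT brings to integration, the sketch below reads bus-interface names out of a hand-written, heavily simplified fragment styled after an IEEE 1685 component description. Real IP-XACT files carry far more detail, and the component and interface names here are invented.

```python
# Hedged sketch: reading bus-interface metadata from a much-simplified,
# hand-written fragment in the spirit of an IP-XACT (IEEE 1685) component
# description. Real IP-XACT is far richer; this only shows the idea of
# machine-readable integration data.
import xml.etree.ElementTree as ET

NS = {"spirit": "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"}

COMPONENT_XML = """
<spirit:component xmlns:spirit="http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009">
  <spirit:vendor>example.com</spirit:vendor>
  <spirit:library>soc_ip</spirit:library>
  <spirit:name>dma_engine</spirit:name>
  <spirit:version>1.0</spirit:version>
  <spirit:busInterfaces>
    <spirit:busInterface>
      <spirit:name>axi_master</spirit:name>
    </spirit:busInterface>
    <spirit:busInterface>
      <spirit:name>apb_config</spirit:name>
    </spirit:busInterface>
  </spirit:busInterfaces>
</spirit:component>
"""

root = ET.fromstring(COMPONENT_XML)
name = root.find("spirit:name", NS).text
buses = [b.find("spirit:name", NS).text
         for b in root.findall(".//spirit:busInterface", NS)]
print(f"component {name} exposes interfaces: {buses}")
```

The point is that an integration tool can discover what a block offers from its description rather than from a datasheet, which is exactly the hand-off from system level down that Schirrmeister describes.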

PCBs are systems, too
Integration is the name of the game on the PCB side, as well. Dave Wiens, business development manager of the Systems Design Division at Mentor Graphics, said that with customers designing the most complex of systems, such as the Iron Dome system by Rafael Advanced Defense Systems, “People are designing systems today; they are not waiting for us to figure out how to help them.”

In the 2.5D realm, which falls in Wiens’ space at Mentor, when you put dies side by side you have to find a way to interconnect them other than with through-silicon vias, and that enters the world of the PCB. “It’s a small PCB. You could call it a multi-chip module, chip-on-package—you can call it all sorts of things—but it’s that intermediate space between ICs and PCBs. There are some technologies that are useful, or would be useful, such as trying to figure out the mapping of pins. I’m trying to partition this out. How would I break it down? How would I decide what’s connected to what? Breaking all of that down is something that’s frequently done manually today. We see opportunity in that space to create some automation to assist in that partitioning process, while at the same time leveraging technologies from the PCB side for placement and routing, analysis and other traditional tasks.”
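
A toy version of the partitioning automation Wiens describes might look like the sketch below, which brute-forces an assignment of blocks to two dies to minimize the number of nets that must cross between them. The block names, connectivity and brute-force search are all illustrative; production tools use far more capable heuristics.

```python
# Toy sketch of the 2.5D partitioning decision: assign blocks to two dies
# so that as few nets as possible must cross the interposer. Block names
# and connectivity are invented for illustration.
from itertools import combinations

blocks = ["cpu", "gpu", "ddr_phy", "serdes", "audio"]
# Each net connects two blocks; a net whose endpoints land on different
# dies needs a die-to-die route.
nets = [("cpu", "gpu"), ("cpu", "ddr_phy"), ("gpu", "ddr_phy"),
        ("cpu", "serdes"), ("audio", "cpu")]

def cut_count(die_a):
    """Count nets whose two endpoints fall on different dies."""
    return sum((u in die_a) != (v in die_a) for u, v in nets)

best = None
for k in range(1, len(blocks)):
    for die_a in combinations(blocks, k):
        cost = cut_count(set(die_a))
        if best is None or cost < best[0]:
            best = (cost, set(die_a))

cost, die_a = best
die_b = set(blocks) - die_a
print(f"{cost} die-to-die nets with die A = {sorted(die_a)}, "
      f"die B = {sorted(die_b)}")
```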

The issues there cross boundaries, he said. “The boundary between the element (the chip or SoC) and the unit (the PCB), and trying to find a way to enable the packaging that goes on the silicon before it gets placed on a board, whether it is an SoC or a multi-chip package—today it’s a challenge of size and making the most out of the space. It’s a challenge of performance and performance modeling. How do I effectively model a signal that goes from a die pin-out, through a bond wire in 3D, onto some sort of substrate or board, maybe through vias, and then all the way back up to the other piece of silicon? How do I model that in a timely way so that somebody can actually get results for signal integrity, power integrity and thermal? How do I enable the effective partitioning of that design?”

Another area being looked at in R&D today is the space between the multiple PCBs that together comprise that first level of systems, Wiens explained. Engineering teams today know they want to build multi-board systems, but they don’t have the tools to integrate them. The boards are all designed separately and often run into problems: boards that don’t line up, connectors where pins are swapped, or mechanical problems with how they fit together. These are time-consuming delays because of the manual synchronization that goes on between the boards and too many iterations. “It’s cycles of 10 and above to try to get these things built, so there are lots and lots of prototypes in the process.”
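
The kind of cross-board consistency check that could catch a swapped pin before a prototype spin can be sketched in a few lines. The connectors, pin maps and mating rules below are invented for illustration.

```python
# Hedged sketch of a cross-board check: verify that the signal each board
# assigns to a mating connector pin actually matches what the other board
# expects. Pin maps are invented for illustration.

board_a = {1: "UART_TX", 2: "UART_RX", 3: "GND", 4: "VCC_3V3"}
# The mating board should see TX and RX crossed over; here pins 1 and 2
# were wired straight through by mistake (a classic swapped-pin bug).
board_b = {1: "UART_TX", 2: "UART_RX", 3: "GND", 4: "VCC_3V3"}

EXPECTED_MATE = {"UART_TX": "UART_RX", "UART_RX": "UART_TX",
                 "GND": "GND", "VCC_3V3": "VCC_3V3"}

for pin, signal in board_a.items():
    mate = board_b.get(pin)
    if EXPECTED_MATE.get(signal) != mate:
        print(f"pin {pin}: {signal} on board A mates with {mate} on board B"
              f" - expected {EXPECTED_MATE.get(signal)}")
```

Run against these maps, the check flags pins 1 and 2, the sort of error that today is typically found only after the boards come back from fabrication.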

The next big thing
“This desire to assemble components on a single die became necessary for cost reasons, but what we’re finding out is that not everything on the die needs that high-speed technology. A lot of the die can run at lower speed. It can be done in a lower node. However, you still want small form factors because the world has moved on to miniaturization. System-in-package is going to be a big thing moving forward. That’s what I’m betting on,” Madraswala concluded.

Whether it is using system-in-package technology, leveraging cycle-accurate models for better architectural planning, optimizing the software for a mobile design using a virtual prototype, or a number of other choices, the system architect has many options to leverage in 2013.


