SoC Architects Face Big Challenges

While the geometries of advanced node processes may not change SoC architectures, they bring intense challenges.

By Ann Steffora Mutschler
While the geometries of advanced node processes such as 28nm and below may not greatly impact SoC architectures, the complexity enabled by the leading edge brings intense challenges all the same.

With the ability to put more transistors onto a chip come new possibilities, such as multi-core architectures and large numbers of integrated hardware engines. From an architectural perspective, observed Steve Carlson, group director for silicon realization at Cadence, more and more complicated memory subsystems are being used to make efficient use of all of that processing capability. “You see multi-level cache architectures and lots of local memories—that’s a functional complication and complexity associated with architecture.”

Correspondingly, he said, there is a topological complexity because all of these engines have to talk to the memories. The result is an inherently congested topology, which puts pressure on utilization and, in turn, on how much integration can be done.

Along with this comes power, Carlson continued. “When you’re trying to run a whole bunch of parallel processors, that puts pressure on the overall power dissipation. Even if you’re not using them you have leakage power, and if you are, you get the dynamic power.” The choice of power architecture is thus another layer of complexity that adds to the overall architectural planning of the system.
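
To put rough numbers on that tradeoff, the back-of-the-envelope sketch below applies the standard CMOS dynamic power relationship, P = αCV²f, plus a fixed leakage term per engine, and shows how idle engines still contribute leakage to the total budget. All capacitance, voltage, frequency, and leakage figures are invented for illustration.

```python
# Back-of-the-envelope only: the standard CMOS dynamic power term
# P_dyn = alpha * C * V^2 * f, plus a fixed leakage term per engine.
# All capacitance, voltage, frequency, and leakage numbers are invented.

def engine_power_mw(active, alpha=0.15, c_eff_f=2e-9, vdd=0.8, f_hz=1.0e9, leak_mw=15.0):
    """Dynamic power is paid only while the engine is active; leakage always is."""
    dyn_mw = alpha * c_eff_f * vdd ** 2 * f_hz * 1e3   # watts -> milliwatts
    return (dyn_mw if active else 0.0) + leak_mw

n_engines, n_active = 16, 4
idle_leak = (n_engines - n_active) * engine_power_mw(False)
total = n_active * engine_power_mw(True) + idle_leak
print(f"{n_engines} engines, {n_active} active: {total:.0f} mW total, "
      f"{idle_leak:.0f} mW of it leakage from idle engines")
```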

Jeff Scott, principal SoC architect at Open-Silicon, noted that in general, depending on the end-product requirements, if power is a concern and leakage needs to be managed, various power domains and power-down strategies must be built into the SoC architecture. And while leading-edge process geometries may not drive new architectural challenges in and of themselves, power is becoming an issue even in areas where it has not been one in the past.

“The things that we’ve been doing in mobile for years are now going to be applied in servers and storage controllers and computer systems,” Scott said. “Maybe not to the same extent, but the same types of practices, the same types of architectural considerations have to be made.”

Tallying the costs
Set against an economic backdrop, some key trends begin showing up at leading-edge process nodes. “As you go to these finer-grain geometries it’s a lot more expensive if you make a mistake with the architecture, so avoiding overdesign is really important,” said Pat Sheridan, senior staff product marketing manager on the system-level team at Synopsys. “You can’t just design an architecture by building in margin for error. Maybe that was okay in the past, but when you have the expense, the cost of the die, you really don’t want to do that. You want to be more efficient with the architecture.”

Hand in hand with this is avoiding underdesign, which also can cause huge issues. Learning about over- or underdesign late in the design cycle is a problem because the closer the design gets to GDS, the harder it is to change. “You are basically forcing yourself to fix a problem or go to market with it, and that can affect the competitiveness of the project or the schedule,” he said.

A good starting point for addressing these issues is better collaboration between the people who have the knowledge of the system-level issues and the people who are responsible for the architecture. One area in particular where better collaboration is needed is hardware-software partitioning.

“When you are contemplating these new SoCs, the multicore partitioning is really important to understand. The number of processors may be the easiest thing to think about, but perhaps it is secondary in importance to how the software is mapped or assigned to the available processors. This mapping task has a big impact on performance as well as power. There are a variety of different things you can do, and the relationship between the mapping and the system performance and power at the end is not always obvious. This partitioning is definitely something that you can do with earlier simulation. There are techniques to do that even before you have the software,” Sheridan explained.
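
As a rough illustration of why the mapping matters as much as the core count, the toy sketch below compares two hypothetical static task-to-core assignments. It is not any vendor's tool flow, and the cycle counts, frequencies, and power figures are invented.

```python
# Toy mapping comparison, not a real architecture tool: estimate latency and
# energy for two hypothetical assignments of software tasks to cores.
# Cycle counts, frequencies, and power figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    freq_mhz: float     # clock frequency in MHz
    active_mw: float    # dynamic power while busy, in mW

@dataclass
class Task:
    name: str
    mcycles: float      # work per frame, in millions of cycles

def evaluate(mapping, cores, tasks):
    """Return (latency_s, energy_j) for a static task-to-core mapping.
    Tasks on the same core run back to back; cores run in parallel."""
    core_by_name = {c.name: c for c in cores}
    busy_s = {c.name: 0.0 for c in cores}
    for task in tasks:
        core = core_by_name[mapping[task.name]]
        busy_s[core.name] += task.mcycles / core.freq_mhz   # Mcycles / MHz = seconds
    latency = max(busy_s.values())
    energy = sum(busy_s[c.name] * c.active_mw / 1000.0 for c in cores)  # mW -> W, W*s = J
    return latency, energy

cores = [Core("big0", 2000, 750), Core("little0", 1200, 200), Core("little1", 1200, 200)]
tasks = [Task("decode", 400), Task("render", 900), Task("audio", 120), Task("net", 150)]

mappings = {
    "all_on_big":  dict.fromkeys(["decode", "render", "audio", "net"], "big0"),
    "partitioned": {"decode": "little0", "render": "big0", "audio": "little1", "net": "little1"},
}

for name, mapping in mappings.items():
    lat, en = evaluate(mapping, cores, tasks)
    print(f"{name:12s} latency={lat*1e3:6.1f} ms  energy={en*1e3:6.1f} mJ")
```

In this made-up example the partitioned mapping cuts both latency and energy, which is the kind of non-obvious relationship early simulation is meant to expose.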

A second approach is the use of workload models instead of the actual software running on the processors, he continued. “Workload modeling is getting much more important to enable the earlier analysis, and [SoC architects] do that by focusing on the most important use cases. There are techniques available to create workload models that represent the performance workload but are not functional applications. It’s a way of representing the overhead from a performance point of view. The data gets moved around with intelligent traffic generators that you can set up to mimic different applications.”
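
A minimal sketch of that idea might look like the following: simple traffic generators stand in for the application and drive a single shared memory port, yielding utilization and average wait time for a chosen use case. The arrival rates, burst sizes, and service time are invented assumptions.

```python
# Minimal sketch of a performance workload model: traffic generators stand in
# for the real application and drive a single shared memory port. Arrival
# rates, burst sizes, and the service time are invented for illustration.

import random

random.seed(1)
SIM_TIME_S = 1e-3        # simulate one millisecond of traffic
SERVICE_S = 100e-9       # assumed memory service time per transaction

def traffic_generator(name, bursts_per_s, burst_len):
    """Emit (arrival_time, master) tuples: Poisson bursts of burst_len requests."""
    events, t = [], 0.0
    while True:
        t += random.expovariate(bursts_per_s)
        if t >= SIM_TIME_S:
            return events
        events.extend((t, name) for _ in range(burst_len))

# Hypothetical "video playback" use case with three bus masters.
arrivals = sorted(
    traffic_generator("display",   200_000, 4) +
    traffic_generator("video_dec", 300_000, 16) +
    traffic_generator("cpu",       400_000, 1)
)

busy_until, waits = 0.0, []
for t, master in arrivals:
    start = max(t, busy_until)          # queue behind the busy memory port
    waits.append(start - t)
    busy_until = start + SERVICE_S

utilization = len(arrivals) * SERVICE_S / SIM_TIME_S
print(f"transactions={len(arrivals)}  port_utilization={utilization:.0%}  "
      f"avg_wait={sum(waits) / len(waits) * 1e9:.0f} ns")
```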

A third technique is sensitivity analysis. “Business people know about this because spreadsheets are really helpful in doing sensitivity analysis. You can look at many factors, and you can understand the relationships and whether you have to optimize a certain thing. You can see which parameters the results are most sensitive to. You can apply that to system-level architecture analysis by taking the results of hundreds of simulations, where the configuration of the architecture is varied piece by piece for the parameters it supports, and bringing the simulation results back into a spreadsheet environment. This goes beyond looking at specific simulation results; it’s being able to do sensitivity analysis in charts or pivot tables. That’s really helpful when the design space is large and you want to be able to efficiently look at these multiple factors to narrow things down. That can help with performance, power and cost,” Sheridan added.
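
In that spirit, the sketch below sweeps a small hypothetical configuration space with a stand-in analytical model and uses pivot tables to see which knobs an efficiency metric is most sensitive to. Every parameter range and coefficient is invented for illustration; a real flow would feed in actual simulation results.

```python
# Sketch of sensitivity analysis over a configuration sweep. The simulate()
# function is a stand-in analytical model rather than a real simulator, and
# every parameter range and coefficient below is invented for illustration.

import itertools
import pandas as pd

def simulate(cores, l2_kb, bus_bits):
    """Toy model returning (performance in fps, power in mW) for one config."""
    fps = 30 * cores ** 0.8 * (1 + 0.1 * l2_kb / 256) * (bus_bits / 64) ** 0.3
    power_mw = 120 * cores + 0.05 * l2_kb + 0.4 * bus_bits
    return fps, power_mw

rows = []
for cores, l2_kb, bus_bits in itertools.product([2, 4, 8], [256, 512, 1024], [64, 128]):
    fps, power_mw = simulate(cores, l2_kb, bus_bits)
    rows.append({"cores": cores, "l2_kb": l2_kb, "bus_bits": bus_bits,
                 "fps": fps, "power_mw": power_mw, "fps_per_w": fps / (power_mw / 1000)})

df = pd.DataFrame(rows)

# Pivot tables make it easy to see which knobs efficiency is most sensitive to.
print(df.pivot_table(values="fps_per_w", index="cores", columns="l2_kb", aggfunc="mean").round(1))
print(df.pivot_table(values="fps_per_w", index="bus_bits", aggfunc="mean").round(1))
```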

With the move to smaller geometries, costs go up, there is more area on the die, more integration is required, and the range of IP being combined on a single SoC keeps getting larger. “Everybody knows this, but the implications are what are causing some significant changes,” said Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics. “The cost of getting something wrong, the cost of not verifying the system, is becoming really extreme. Hardware engineers have been doing extensive verification on the hardware forever, but now it’s not just the hardware. There is not a single SoC made today that is not heavily dependent on software. It’s the hardware and the software, and the system has to be done correctly and concurrently.”

Add to this IP obtained from a number of sources that must be integrated on the SoC and verified in both a system context and the application context it is targeted at, and the SoC architect’s job becomes intense indeed.

All of this has led to an uptick in the adoption of architectural design techniques. More engineering teams are leveraging technologies for basic functional verification, and also linking back to the assumptions made at the architectural level to determine whether what is being implemented will meet those assumptions, underpinned by standards such as UVM, McDonald concluded.

The work in this space will never be done as designs evolve and new challenges arise, but for now, at least, the SoC architect has a number of options for making architecture decisions and tradeoffs.


