Complexity, power and area constraints are forcing design engineers to reconsider the way things have been done in the past.
By Ann Steffora Mutschler
When you hear the words “block interface,” your ears may not perk up. But as system architects well understand, making the right choice between a bus or non-bus interface on an SoC is critical to a design’s success in terms of power efficiency, reusability and performance.
How many of the problems in new chip designs stem from the interconnect and the bus, as opposed to any functional block, is a matter of debate, in part because it is extremely difficult to re-architect a chip once it has been done one way. The design team could start over, but most teams don’t have that choice and are left trying to fix something that isn’t quite working right. The chip probably works, but it’s a whole lot slower or uses a lot more power than the spec calls for.
From a system design tool perspective, most designs today are bus-centric, but there are problems with that approach. Steve Roddy, vice president of marketing at Tensilica, says the traditional reliance on buses, where one master at a time performs one transaction at a time, doesn’t scale.
“We see more and more of our customers taking advantage of the ability to add what we call designer-defined ports and queues, which are interfaces on a processor specific to a given chip architecture or design,” Roddy says. “That has been a growing driver of our business as people really utilize that to get higher performance, to better balance bus traffic or to reduce power in their system. As a general proxy, if half or more of our customers are taking advantage of that, it would suggest that it’s a fairly widespread problem, and that half or more of complex designs need something other than just straight, traditional buses.”
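To make the idea concrete, here is a minimal C sketch, not Tensilica’s actual mechanism, of what a designer-defined queue buys: a point-to-point FIFO between two blocks whose transfers never compete for the shared bus. The hw_queue_t type and the block names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 8

/* Stands in for a dedicated hardware FIFO port between two blocks. */
typedef struct {
    uint32_t data[QUEUE_DEPTH];
    int head, tail, count;
} hw_queue_t;

static int queue_push(hw_queue_t *q, uint32_t word) {
    if (q->count == QUEUE_DEPTH) return 0;      /* full: back-pressure */
    q->data[q->tail] = word;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return 1;
}

static int queue_pop(hw_queue_t *q, uint32_t *word) {
    if (q->count == 0) return 0;                /* empty: nothing to read */
    *word = q->data[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return 1;
}

int main(void) {
    hw_queue_t dsp_to_codec = {0};
    /* The producer streams samples straight to the consumer; none of
     * these transfers ever arbitrates for the shared bus. */
    for (uint32_t s = 0; s < 4; s++)
        queue_push(&dsp_to_codec, s * 100);
    uint32_t v;
    while (queue_pop(&dsp_to_codec, &v))
        printf("codec received %u\n", (unsigned)v);
    return 0;
}
```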
One of the big challenges with block interfaces, in general, is reuse: the ability to quickly hook up to something that can accept a variety of protocols and standards, according to Charles Janac, chairman and CEO of Arteris. As a result, new approaches have entered the design arena. He notes that Arteris’ technology performs protocol conversion at the edge of the network, acting in effect as a protocol converter so that AXI and OCP IP blocks, for example, can run side by side without any modification whatsoever.
“We’ve gotten to the point where the network interface units are very low latency and they don’t cost very many gates,” Janac says. “It’s a pretty proven technology at this point, so it’s one of the key approaches to effective IP reuse. Once you start writing different interfaces and different wrappers, it just gets too costly and too complex in a hurry. Then you’re stuck with one kind of IP. As SoCs get more and more complex, no one provides all the IP. It has to come from internal sources, external IP vendors and legacy designs, or be designed from scratch, and all of it has to play together as easily as possible.”
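A rough illustration of what such a network interface unit does, in heavily simplified C: each adapter translates its native socket protocol into one common packet format, so the attached blocks interoperate untouched. The noc_packet_t format is invented for this sketch; only the AXI (AWADDR/WDATA) and OCP (MAddr/MData/MCmd) signal names come from the real protocols.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {          /* common on-chip network packet (invented) */
    uint32_t addr;
    uint32_t data;
    uint8_t  is_write;
} noc_packet_t;

typedef struct {          /* minimal stand-in for an AXI write */
    uint32_t awaddr;
    uint32_t wdata;
} axi_write_t;

typedef struct {          /* minimal stand-in for an OCP request */
    uint32_t MAddr;
    uint32_t MData;
    uint8_t  MCmd;        /* 1 = write in OCP's command encoding */
} ocp_req_t;

/* Each "network interface unit" is just a translation at the edge. */
static noc_packet_t from_axi(const axi_write_t *w) {
    noc_packet_t p = { w->awaddr, w->wdata, 1 };
    return p;
}

static noc_packet_t from_ocp(const ocp_req_t *r) {
    noc_packet_t p = { r->MAddr, r->MData, r->MCmd == 1 };
    return p;
}

int main(void) {
    axi_write_t a = { 0x40000000u, 0xCAFEu };
    ocp_req_t   o = { 0x50000000u, 0xBEEFu, 1 };
    noc_packet_t pa = from_axi(&a), po = from_ocp(&o);
    printf("AXI -> NoC: addr=0x%08X data=0x%X\n",
           (unsigned)pa.addr, (unsigned)pa.data);
    printf("OCP -> NoC: addr=0x%08X data=0x%X\n",
           (unsigned)po.addr, (unsigned)po.data);
    return 0;
}
```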
That’s not to say that bus-based architectures are always bad. But getting it right is becoming much more difficult.
“If you have a bus-based system, it may make reuse of a component easier because you plug it into a bus, you define a base address and then you program it, with all the drivers ported fairly easily,” says Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “But if you don’t get the right bandwidth to this component at the right time in your design because you have a scenario which you didn’t foresee, then your design won’t work. That’s why you have issues with cell phones not receiving a call while taking a picture and playing a video game at the same time. There’s simply too much going on. Those are the scenarios that are easily overlooked.”
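Schirrmeister’s first point, that bus-based reuse is mostly a matter of a base address, fits in a few lines of C. This is a host-runnable toy: fake_device stands in for the block’s registers, and the register offsets are hypothetical. On a real SoC, TIMER_BASE would be the physical address from the memory map.

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t fake_device[2];          /* stands in for the IP block */
#define TIMER_BASE ((uintptr_t)fake_device)
#define REG_LOAD   0u                    /* hypothetical register index */
#define REG_CTRL   1u                    /* hypothetical register index */

/* A plain memory-mapped bus write: address plus offset, nothing more. */
static inline void reg_write(uintptr_t base, uint32_t idx, uint32_t val) {
    ((volatile uint32_t *)base)[idx] = val;
}

int main(void) {
    /* Porting this driver to a new SoC means changing TIMER_BASE; the
     * hard part, per Schirrmeister, is bandwidth, not addressing. */
    reg_write(TIMER_BASE, REG_LOAD, 48000u);  /* load tick count */
    reg_write(TIMER_BASE, REG_CTRL, 1u);      /* set enable bit */
    printf("LOAD=%u CTRL=%u\n",
           (unsigned)fake_device[REG_LOAD], (unsigned)fake_device[REG_CTRL]);
    return 0;
}
```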
Still, complexity is driving the use of direct block interfaces.
“Two years ago, we were seeing 50 to 80 IP blocks,” says Jack Browne, senior vice president of sales and marketing at Sonics, an interconnect IP provider. “At 45nm and 32nm we are seeing up to 150 IP blocks, and there can be two dozen masters that want some share of the memory bandwidth.”
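The arithmetic behind Browne’s observation is easy to sketch. The C program below adds up made-up bandwidth demands for a handful of masters against a hypothetical DRAM peak; every number is illustrative, but the oversubscription it exposes is exactly the problem two dozen masters create.

```c
#include <stdio.h>

typedef struct { const char *name; double gbps; } master_t;

int main(void) {
    /* All figures below are invented for illustration only. */
    master_t masters[] = {
        { "CPU cluster", 3.2 }, { "GPU", 6.4 }, { "video decode", 1.8 },
        { "display", 2.1 },     { "camera ISP", 1.5 }, { "modem", 0.9 },
    };
    double dram_peak = 12.8;  /* hypothetical peak DRAM bandwidth, GB/s */
    double demand = 0.0;
    for (size_t i = 0; i < sizeof masters / sizeof masters[0]; i++)
        demand += masters[i].gbps;
    printf("aggregate demand %.1f GB/s vs. %.1f GB/s peak (%.0f%% of peak)\n",
           demand, dram_peak, 100.0 * demand / dram_peak);
    /* Demand near or above peak means someone misses deadlines, and
     * sustainable DRAM bandwidth is well below peak in practice. */
    return 0;
}
```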
Browne notes that design activity is starting to pick up as people reach the point where they have to build new platforms, not just derivatives of existing designs.
Who’s adopting non-bus approaches?
Early adopters of the network-on-chip approach are people designing mobility SoCs (complex application processors) that combine high complexity, low power, limited area and very high volumes, so they hit all the constraints at once. Digital televisions and set-top boxes are not far behind in complexity, and are good candidates for non-bus (or direct) block interface approaches such as network-on-chip.
“Once you get to 65nm and below, the network-on-chip has broad application in those kinds of SoCs,” says Janac.
Tensilica’s Roddy observes that the folks who do system modeling and iterative analysis tend to be more proactive about looking at the newer forms of on-chip interconnect. “Where people are going to make substantive changes, say they have had three successive and successful projects on older architectures and now they are entering a new market or adopting some new standard, they recognize they are going to have four times as much data flow and four times as much bus traffic, and they know they are going to have to do something different. It is those technology dislocations and new platform designs that will cause people to look at their bag of ingredients and determine if they need to add something new.”
But even with the benefits seemingly clear, there are still roadblocks to adoption of direct block interfaces.
“The No. 1 issue is unfamiliarity,” Roddy says. “The hardware designer and software programmer have been accustomed to a monolithic view of the world, one that stayed largely static for 20 or 30 years, even before the notion of integrating things together in a more complex SoC. Everything is memory mapped, the programmer’s view of the world is relatively simplified, and he doesn’t have to think about how things actually happen on chip. To the degree that changes in the underlying hardware architecture preserve that idealistic abstraction of the world for the programmer, all the better. If you can have a dedicated link in the hardware that recognizes a particular type of transfer is being requested and maps it to a specific hardware channel, as opposed to the common bus channel, that would certainly make life easy for the great mass of programmers. And there are always 10 or 100 programmers trying to write code for something versus one hardware designer trying to build something.”
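Roddy’s last point, one write API for the programmer with the routing decided underneath, might look like the following C sketch. The address ranges, function names and channel split are all hypothetical; a real interconnect would make this decision in hardware, not in a dispatch function.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical: writes into this range ride a dedicated channel. */
#define STREAM_BASE 0x80000000u
#define STREAM_END  0x8FFFFFFFu

/* Models the interconnect choosing a path based on the transfer. */
static void route(uint32_t addr, uint32_t data) {
    if (addr >= STREAM_BASE && addr <= STREAM_END)
        printf("dedicated channel: 0x%08X <- 0x%X\n",
               (unsigned)addr, (unsigned)data);
    else
        printf("common bus:        0x%08X <- 0x%X\n",
               (unsigned)addr, (unsigned)data);
}

/* The only call the programmer sees -- same as on the old shared bus. */
static void mmio_write(uint32_t addr, uint32_t data) { route(addr, data); }

int main(void) {
    mmio_write(0x40000000u, 7u);       /* control register: shared bus */
    mmio_write(0x80001000u, 0xABCDu);  /* streaming data: dedicated link */
    return 0;
}
```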