Faster IP Integration

Standards are making it easier to hook various components together, to make comparisons between different blocks, and to get to market more quickly.

By Ed Sperling
System-Level Design sat down with Laurent Moll, chief technology officer at Arteris, to talk about interoperability, complexity and integration issues. What follows are excerpts of that conversation.

SLD: What’s the big challenge with IP?
Moll: Interoperability is always a concern. Because of ARM’s dominance, a lot of people are moving to AMBA protocols, whether that’s APB or AXI. The bigger companies typically have something they’ve developed internally, or an existing protocol they’re using, so they tend to still have a sizeable legacy piece. They will move away from that eventually, but it will take time; anytime the entire environment is built around a protocol, it takes a while. The only other interoperability issue is port interfaces for memory controllers, and there’s a lot of legacy baggage there.

SLD: But there also are lots of little processors on an SoC, too. What impact does that have?
Moll: There are lots of things happening in a modern SoC. In mobile SoCs, there are subsystems for cameras, video and a lot of hardware acceleration. They started with the main CPUs from ARM and trickled down to the subsystems, and that’s taking over a lot of the SoC. From a verification perspective and an assembly perspective, people don’t want to deal with too many things. So if one of them is dominant, you might as well use it as often as possible.

SLD: Where does network-on-chip (NoC) technology fit in?
Moll: It’s everywhere. We like the fact that people are standardizing, because if IP comes in with a standard protocol like AMBA it means we can connect to it more easily. What happened before was people were essentially growing chips rather than assembling them. It meant that the interfaces to all the IP blocks were custom. It was hard to scale beyond a certain point.

SLD: So what you’re starting to see are the first signs of a maturation of the commercial IP business?
Moll: That’s correct. A lot of companies have moved to a model where, instead of having one flat organization, it’s more of a silo-type of organization where you have a lot of people building subsystems and a separate group assembling them. This whole process, as we see it, is the maturing of the industry. It’s possible to assemble a chip. It’s not easy, but it is the fastest and most efficient way.

SLD: Does that make it easier to choose one IP block versus another?
Moll: Absolutely. People can try different things. They can swap them in and out very easily. And we’re also starting to see virtual prototypes where the software vendors are building hardware that will never actually tape out. But they can test their software and how it works with different pieces from different vendors. This is the first time we’re seeing platform assembly bubbling up the food chain to people building software or systems. If you’re Microsoft, for example, for Windows 8 you can build a virtual platform with a NoC and test out how it operates on an ARM processor or a GPU. This platform will never exist in reality, but it can run tests, it can run software stacks and you can shop it around to vendors. It is part of the maturation.

SLD: If you’re comparing one piece of IP to another, what is it like today versus five years ago?
Moll: Five years ago, a lot of IP was internal rather than commercially available. Interoperability was a problem. You had this thing that you grew internally that didn’t connect to anything very easily. Then you had this other IP with a standard interface and you couldn’t connect them. People still build internal IP, but they build it in a way that it can be connected easily. Even now nothing connects seamlessly. AMBA helps because it’s a standard, but it’s more of a catalog of things you could do with the interface. So there is still a lot of tweaking. You can connect blocks in a basic fashion, but if you really want to take advantage of all of this IP, there are still some knobs to turn to make everything work together really well.
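
Moll’s point that AMBA is more a catalog of interface options than a plug-and-play connector can be sketched with a small, hypothetical compatibility check. The block names and parameters below (data width, ID width, optional QoS and USER signals) are illustrative assumptions, not any vendor’s actual configuration format; the sketch only shows the kind of mismatches an integrator still has to resolve with bridges, converters or tie-offs.

```python
# Toy illustration (hypothetical parameters): two IP blocks both claim an
# "AXI4" port, but the spec leaves width and optional-signal choices open,
# so the integrator still has to reconcile them.
from dataclasses import dataclass

@dataclass
class AxiPortConfig:
    data_width: int   # bits; AXI allows 32, 64, 128, ...
    id_width: int     # transaction ID width
    has_qos: bool     # optional AxQOS signals present?
    has_user: bool    # optional AxUSER signals present?

def integration_issues(master: AxiPortConfig, slave: AxiPortConfig) -> list[str]:
    """Return the adaptations needed to connect two nominally 'standard' ports."""
    issues = []
    if master.data_width != slave.data_width:
        issues.append(f"width converter needed: {master.data_width} -> {slave.data_width} bits")
    if master.id_width > slave.id_width:
        issues.append("ID remapping/compression needed at the slave side")
    if master.has_qos and not slave.has_qos:
        issues.append("QoS signals must be dropped or handled by the interconnect")
    if master.has_user != slave.has_user:
        issues.append("USER signals must be tied off or carried separately")
    return issues

# Example: both ports are 'AXI4', yet several adaptations are still required.
cpu_port = AxiPortConfig(data_width=128, id_width=8, has_qos=True,  has_user=False)
dma_port = AxiPortConfig(data_width=64,  id_width=4, has_qos=False, has_user=True)
for issue in integration_issues(cpu_port, dma_port):
    print(issue)
```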

SLD: Is the goal lower NRE or time-to-market improvements with the same NRE?
Moll: For the consumer markets, time to market is everything. And it’s time to market not just for the first chip and making sure it works, but also for the 10 derivatives they’re going to make. In the past, it was like a butcher shop. You had to cut things up carefully and make sure it all still worked. Our largest customers can just crank out derivatives where most of the work is on the back end. The interconnect is in place, and they just take one thing off and replace it, re-do all the performance regression and they’re done. NRE is less important for them, because when you’re doing large volumes that doesn’t show up. Missing one day on the market does.

SLD: That’s time to market in a small slice of a market, too, right?
Moll: Absolutely, and this is why platforms are so important. Making very big chips work well is still a difficult process. You still have to worry about performance, use cases, power, security, and all these types of things. So it takes a while to get a big platform together. But once you have one that works, you can create derivatives, shrink it, and customize it for all these niches very quickly. That’s where the time to market comes into play.

SLD: There has been a lot of talk about platforms over the years, but they’ve been slow to catch on. What’s changed?
Moll: With the most complex chips, people are moving to platforms. There is so much NRE invested into one thing that you want to be able to get your return. There is a lot of verification and checking the back end to make sure it works. You want to make sure this one thing works really well, and then you use it for two years or three years. You get quite a bit of use out of it. So it’s true that platforms aren’t universal yet, but we do see them with companies that need to build one really complex thing. They invest in a platform so they don’t have to invest a lot of money and time in derivatives.

SLD: We’re starting to see the rise of subsystems, which are a step in that direction.
Moll: Subsystems have been around for quite some time. The subsystem is, in many ways, a political entity within a company. It’s a silo issue. The guys in imaging are all the guys who know imaging. The guys in 3D are all the guys who know 3D. They tend to have their own requirements. If they need a microcontroller, they go around the company looking to see if there’s one available, or they contract with ARM for an M0. That has existed for a while. What’s newer is that the interface to that subsystem is becoming standard, and the components inside the subsystem are becoming standard. For the interconnect, we’re finding that companies are using our technology between the subsystems as well as inside the subsystems. When you assemble things that are built to work together, the probability is higher that they will work.

SLD: IP has always been a black box, and subsystems increasingly are collections of IP. What does that do for connectivity?
Moll: It depends on the subsystem; they can look completely different. For a security subsystem, where you have a CPU, an SRAM and a bunch of devices, it’s like a small system, so it’s better to have transparency. When you get into QoS and security, at the top level it’s easier to understand if it’s not a black box. Otherwise you may have to spend time looking at an interface and trying to understand what that interface does. We see that in subsystems where the interconnect is used for what look like small systems. For a CPU, there are a bunch of things that hang together in a very specialized way. They don’t look like subsystems. They’re just a big block and there’s an interface to them.

SLD: If you’re creating an SoC with a NoC, where are you seeing issues in connecting everything up?
Moll: In the past five years a new job description has emerged: the SoC architect. Before that there was just a chip architect. For this new job, you have a bunch of IPs. You know what’s inside some of them and some of them you don’t, and your job is to put them together in a way that works. The reason this job exists is that handling the whole thing flat is way too complicated, just as people realized 20 years ago that flattening the whole netlist to make a chip wasn’t going to work. At the architectural level, the problems cut across the whole design, such as power, performance, security and debug infrastructure. Something like a standard interconnect helps you solve a number of those issues. Then you just have to verify that it does what you want. The other thing involves all the stuff at the back end, which is also an assembly process. You also have subsystems, which may look different from one another. Some have hard macros, others don’t. There’s a whole top-level assembly process covering all your clock-domain crossings, power domains and DFT. The assembly issue is making these cross-functional topics work together. That’s where you spend a lot of time, along with verifying everything.
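
As a rough illustration of the cross-functional assembly checks Moll describes, the following sketch (with made-up block and domain names) flags connections that cross clock or power domains and therefore need synchronizers or isolation cells. A real flow would drive this from the design database and constraint files; this is only a toy model of the idea.

```python
# Toy top-level assembly check (hypothetical names): every connection that
# crosses a clock or power domain needs explicit CDC or isolation treatment.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    clock_domain: str
    power_domain: str

def assembly_report(connections: list[tuple[Block, Block]]) -> list[str]:
    report = []
    for src, dst in connections:
        if src.clock_domain != dst.clock_domain:
            report.append(f"{src.name} -> {dst.name}: clock-domain crossing "
                          f"({src.clock_domain} -> {dst.clock_domain}), add synchronizer")
        if src.power_domain != dst.power_domain:
            report.append(f"{src.name} -> {dst.name}: power-domain crossing "
                          f"({src.power_domain} -> {dst.power_domain}), add isolation/level shifters")
    return report

cpu   = Block("cpu_cluster",  clock_domain="clk_cpu_2g",   power_domain="PD_CPU")
noc   = Block("noc",          clock_domain="clk_noc_800m", power_domain="PD_SOC")
video = Block("video_subsys", clock_domain="clk_vid_600m", power_domain="PD_VIDEO")

for line in assembly_report([(cpu, noc), (noc, video)]):
    print(line)
```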

SLD: The new wrinkle in this is software, which can interact at many different levels. Does the network now have to account for the software?
Moll: When you’re building your own RTL, it means the design won’t be ready for software until late in the process. If you’re assembling parts, you can start assembling the chip early. You either have functional or RTL models, and you can create a virtual platform way before tapeout. We see a lot of people doing that. The advantage with the NoC is that you have its RTL right away. It may not be the final RTL, but functionally, for the software guys, it’s good enough.
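
The kind of early virtual platform Moll describes can be sketched, in spirit, as functional models plugged into a simple interconnect model so software bring-up can start before the final RTL exists. Everything below, including the interface, the address map and the block names, is a hypothetical toy, not Arteris’ tooling or any real NoC model.

```python
# Toy virtual-platform sketch (hypothetical): functional models respond to
# memory-mapped reads/writes through a minimal interconnect model, so
# software-facing tests can run long before final RTL is available.
class FunctionalModel:
    def read(self, offset: int) -> int: ...
    def write(self, offset: int, value: int) -> None: ...

class SramModel(FunctionalModel):
    def __init__(self, size: int):
        self.mem = bytearray(size)
    def read(self, offset: int) -> int:
        return int.from_bytes(self.mem[offset:offset + 4], "little")
    def write(self, offset: int, value: int) -> None:
        self.mem[offset:offset + 4] = value.to_bytes(4, "little")

class NocModel:
    """Routes transactions to whichever model owns the address range."""
    def __init__(self):
        self.address_map = []  # (base, size, model)
    def attach(self, base: int, size: int, model: FunctionalModel):
        self.address_map.append((base, size, model))
    def _decode(self, addr: int):
        for base, size, model in self.address_map:
            if base <= addr < base + size:
                return model, addr - base
        raise ValueError(f"no target at address {addr:#x}")
    def write(self, addr: int, value: int):
        model, offset = self._decode(addr)
        model.write(offset, value)
    def read(self, addr: int) -> int:
        model, offset = self._decode(addr)
        return model.read(offset)

# A software-side test can run against this platform today; swapping SramModel
# for an RTL-backed model later would not change the test code.
noc = NocModel()
noc.attach(0x8000_0000, 0x1000, SramModel(0x1000))
noc.write(0x8000_0010, 0xDEADBEEF)
assert noc.read(0x8000_0010) == 0xDEADBEEF
```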


