Experts At The Table: Next-Generation IP Landscape

Final of three parts: Software; subsystems; prototyping; the elephant in the room.


By Ann Steffora Mutschler
System-Level Design sat down to discuss predictions about the next generation design IP landscape with Robert Aitken, R&D fellow at ARM; Laurent Moll, chief technical officer at Arteris; Susan Peterson, group director, product marketing for verification IP & memory models in the system & software realization group at Cadence; and John Koeter, vice president of marketing and AEs for IP and systems at Synopsys. What follows are excerpts of that conversation. For Part One, click here. For Part Two, click here.

SLD: What are users looking for today in terms of software, especially for subsystems?
Koeter: Most semiconductor companies in my experience right now are…hiring more software people than hardware people, and the balance is just about switching. Right now, at the leading edge, about 45% to 55% of the manpower on a project is on the software side. The traditional method, by the way, has been to develop the chip and then develop the software on the chip, which is a disaster in the scenarios we’ve been talking about. One of the ways we’ve addressed this is by working very closely with ARM to come up with what we call virtual development kits. These are ARM-based processors with all the common peripherals around them that people can have literally nine or 12 months before silicon is available. It’s a virtualized model of the SoC, or at least of the key components of the SoC. That’s a technology that is evolving and becoming more common, especially with the leading Tier Ones. Another really important prototyping technology for software development is an FPGA-based prototype. You can’t get the same time-to-market advantage that you can with virtual prototyping, because your RTL has to be relatively stable at that point, but you can still do it six months in advance of when you have silicon.
Moll: I agree. Virtual platforms and FPGA-based platforms are really the key. One interesting thing we see happening, and that ARM has done really well in the past, is getting our customers’ customers to start virtual prototyping for themselves to figure out what they want. They create an RTL-less platform that they’re never going to instantiate anywhere, but that is assembled enough that they can have drivers and pieces and OSs running well before, or at the same time as, they engage with their customers; those platforms are then shared with our customers to make a real platform. Virtual platforms and FPGA platforms are absolutely key to our customers, but they’re also very useful to our customers’ customers to start doing design way ahead of time. You can build a generic platform to run Android: you take a core or a bunch of cores from ARM, you go fish for IP at a couple of places for the key things that you want to work, you put it together using something like an interconnect, and you get a virtual platform. You can have software guys with minimal expertise working on it. You still have to have a few key engineers who understand the hardware, but you can feed that to enough software people to make it really useful.
Koeter: The good thing about virtual prototypes is that you can plug them directly into these software guys’ environments. They use Lauterbach debuggers or ARM’s DS-5, for example, so you’re not trying to change their software flow and environment; what you’re trying to do is give them a virtualized model. One industry that’s really taken this up is automotive, exactly because of all the sensor data coming in and the high cost of failure once it gets out into the field. Automotive historically has done testing especially around fault injection and hardware-in-the-loop; now they are virtualizing hardware-in-the-loop.
Aitken: IP companies are definitely taking software much more seriously than they did many years ago.
Peterson: Emulation is another important vehicle for doing that kind of software modeling in advance of actually having the hardware. I would also say that providing IP with some sort of bare-metal software and firmware in the stack, which can go into any of these virtualized venues or which a software engineer can interact with in an environment more familiar to them than RTL, is a really important component.
Aitken: We found that when we acquired a graphics company a few years ago. The software guys there tended to view C as a low-level language. It’s been good learning about more of the system. Having the software view of things gain more traction inside the IP business has been good for everybody.

SLD: What’s the elephant in the room? What are the problems that don’t get talked about with IP integration?
Moll: I think system verification is what everybody’s talking about. It is still the elephant in the room, and we see this in particular with those Chinese customers you were talking about earlier who are just assembling a chip; it is possible to go from zero to functioning RTL in a few months. The only issue is how much of this is going to be verified, because as you’re assembling IP, unless you’re assembling the whole chip from IP from one provider, which is not possible today because there are a lot of competing parts of the ecosystem, you don’t know what you have. And so you have to spend a lot more time verifying that all the pieces work together than you do assembling the chip or even doing the back end. And obviously you can’t verify every single gate; you can’t redo 100% coverage of every single gate on the chip. It’s impossible. The people right now who have the toughest job are the system verification people who are trying to figure out what they need to cover and what they can cover in the time that’s allotted to them.
Peterson: I would agree, and one of my standing jokes as I’m out there talking to customers is that the most common method of verification at the system level is prayer. Everybody always nods their heads and says, ‘Oh my gosh, she’s kind of right. We hope it works.’ That’s a big issue, and I think we talk a bit cavalierly about, ‘OK, you just pull this IP together: a little bit from you, and a little bit from you, and a little bit from you, and a little bit from me, and that’s all going to work.’ But most of our mainline customers who’ve been in this business a long time have a large group of people, a couple hundred people, whose only job is to develop the scripts that pull that SoC or that system together. I think in the new world it will be different. I keep coming back to this because I saw that Facebook CTO on stage and he looked about my son’s age, and I just kept thinking, ‘Boy, they’re going to do things really, really differently. I don’t think they’re going to be willing to invest in those 200 people. They’re going to expect that it’s going to be easier to pull it together and make sure that it all works.’
Aitken: That’s going to be really hard because so far, every new verification technique, and we were talking about emulation and simulation and formal, has been an ‘and’ technique: whatever we were doing before ‘and’ this new thing. We need to get to some that are ‘or’: ‘We will do this new thing instead of what we were doing before.’ So far, that’s the elephant. We’ve never been able to get rid of anything. We just add new things.
Peterson: If our customers were really confident in the IP they were getting from us, then the necessity for doing block-level verification could become an ‘or.’ You could say, ‘I trust this. All I have to do is verify the traffic in and out of it. I don’t have to verify the block itself.’
Koeter: I was actually thinking one of the elephants in the room is IP quality. You have to have the IP to get your chips out, and the Chinese are a great example of this because they started with a greenfield design flow; they didn’t have legacy. Right from the get-go they adopted huge amounts of IP to build their chips and turn them very, very rapidly. But for us, when I think about IP, what’s the fundamental value? Processor IP has a different value than interface IP, but if you look at the fundamental value of the type of IP that we do, it’s risk reduction and schedule acceleration, both of which are absolutely and utterly useless if you don’t have quality. That’s another area you really have to focus on, and right now there are no good industry standards for quality. TSMC is doing TSMC 9000, and that’s a good initiative. To prove that you have high-quality IP and that it will work in their systems, you have to work very closely with your customers.
Aitken: A key piece is the trust angle. If you’re working with someone who’s got a zillion customers, there’s probably a good chance they know what they’re doing. It’s not guaranteed, obviously, but it’s certainly more likely than if they have zero customers. [In that case] you can be fairly confident that you’re going to find a lot of quality problems. Even with the best of intentions, some things are so complicated that issues only come out over time, as people develop best practices and learn.
