Not all IP will work as designs move to smaller geometries and more complex systems; new concerns include power, noise susceptibility and characterization for stacking.
By Ann Steffora Mutschler
Whether it is a smartphone, tablet, video game console with home networking features or any other digital device, each contains multiple subsystems built from a mix of IP blocks that are either developed in-house or licensed from third parties. Managing the subsystems, let alone the individual IP blocks and the interplay among all of them, is not getting any easier. In fact, with the move to smaller geometries, heightened sensitivity to all things power and characterization for stacking techniques, complexity continues to explode.
For chipmakers, the most important thing to keep in mind with IP management is to have a product roadmap and IP that’s derived from that roadmap.
“You define the features that you want in your product in the future and then you derive the IPs,” said Alex Haggenmiller, director of central R&D at Lantiq, an SoC developer for next generation networks and the digital home. “Maybe this is scheduled for about three years and then we know which IPs are needed. This is the same for internal and external.”
Currently at the 65nm HP node, Lantiq plans to make the move to 40nm sometime next year. Haggenmiller assumes the engineering teams will be able to re-use much of the 65nm digital RTL IP. “In our 65nm products it depends also on the throughput requirements and the complexity. We use USB 2.0, but for the next generation of our architecture we will use USB 3.0 and of course then you have a different RTL. But for those products that have the same standard—USB 2.0—maybe we will use the same IP due to a consistency of software.”
Whether that re-use holds up when power requirements change depends on where the changes land. “If it is on RTL and is related only to implementation, then you can do it anyway. If you need to modify the architecture of an IP, that’s pretty hard because you lose the quality of the IP and risk functional correctness. This we try to avoid. Then it is better to choose another IP provider than to modify an IP, because one additional advantage of re-using IPs where a vendor has several customers is the quality. If you are not an early adopter of an IP, maybe you get better quality than if you do it on your own,” he added.
Mike Gianfagna, vice president of marketing at Atrenta, observed that the IP consumption vector is moving more toward soft, or synthesizable, IP. “Hard IP is still important, but the complexity is demanding that a lot of IP is delivered in soft or synthesizable form because you need to retarget for different process technologies more easily, you need to change architectures slightly, you need to integrate more easily. And clearly all that is easier at the soft level than the hard level.”
He explained that this has created a very interesting problem because the completeness, robustness and integration risks associated with soft IP are a lot more insidious and a lot less obvious than they are with hard IP.
“With hard IP, you can run design rule checks, you can run DFM checks, you can put it on a cyber shuttle, you can see if it wiggles. You kind of know what you are getting. With soft IP you have the privilege of licensing this stuff and then going through the synthesis, place-and-route loop and saying, ‘Oh boy, this was weak from a power consumption point of view,’ or ‘This is a pain in the neck to test,’ or ‘I can’t route this block because it has these ridiculously wide MUXes,’ or ‘The interconnect in this IP is a disaster,’ and you find this out the hard way. We hear a lot of horror stories about that. At advanced nodes the problems become worse, because power consumption and routing congestion can bite you even more, so it’s even more important to see those problems coming earlier,” Gianfagna said.
Integration, power consumption and routing congestion are some of the roadblocks that SoC development teams are running into today just to get their subsystems out the door.
Simon Butler, CEO of Methodics, noted that many SoC developers struggle to release these hierarchical subsystems with confidence now that they contain so many components. With so many interdependencies between the components and subsystems, when a team actually makes a release of one of the IP components in a subsystem block, the regressions may have passed internally, “and the team involved for that particular deliverable may know exactly what is required of them. But what they don’t know is how the dependencies around them—the expectations on the other blocks around them in the subsystem—have changed. So they find that they spend a lot of time going back and forth beyond just making their block work according to spec, but making it work in the context of all this other stuff. It’s quite a chaotic environment.”
Butler said some customers have described the situation as ‘paralysis,’ and that they simply cannot release. “It’s just really hard to make releases because there’s no convergence between everyone else’s requirements and yours. They’re tweaking things. You’re tweaking things. You think you’re tweaking things to get in line with them and vice versa, but there’s really never been a consistency-checking and context-checking environment that gives you the confidence that what you are releasing is going to work up the food chain.”
Part of the problem is that some engineers are developing software, which is part of the IP within a block, while others are doing test. These are different skill sets, and there is no clear connection between them. “There’s no placeholder for everything,” Butler said. “Everything is kind of being shoe-horned in. What they are looking for is a way that basically gets them off the hook for having to watch all these moving parts and manually decide when an IP is ready. What they want is a system that lets them go off and focus on their individual deliverables, and as these things become ready, as they pass regressions and make releases, the system decides whether that subsystem is ready for promotion or not.”
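To make the idea concrete, the sketch below shows in Python the kind of dependency-gated promotion check Butler describes: a subsystem is only promoted when every block has passed its regressions and every block’s expectations about its neighbors match what those neighbors actually released. The block names, version strings and the verified_against field are placeholders invented for illustration; they are not Methodics’ actual data model or API.

```python
# Hypothetical sketch of dependency-gated subsystem promotion.
# All names and data structures are illustrative, not any vendor's API.

from dataclasses import dataclass, field

@dataclass
class IPRelease:
    name: str
    version: str
    regressions_passed: bool
    # Versions of neighboring blocks this release was verified against.
    verified_against: dict = field(default_factory=dict)

def ready_for_promotion(releases: list[IPRelease]) -> bool:
    """Promote only if every block passed regressions and every block's
    expectations match what its neighbors actually released."""
    current = {r.name: r.version for r in releases}
    for r in releases:
        if not r.regressions_passed:
            print(f"{r.name}: regressions failing")
            return False
        for dep, expected in r.verified_against.items():
            if current.get(dep) != expected:
                print(f"{r.name}: verified against {dep} {expected}, "
                      f"but subsystem has {current.get(dep)}")
                return False
    return True

# Example: the USB controller was verified against an older DMA release,
# so the subsystem is held back even though its own regressions pass.
subsystem = [
    IPRelease("usb_ctrl", "2.1", True, {"dma": "1.3"}),
    IPRelease("dma", "1.4", True, {}),
]
print("Promote:", ready_for_promotion(subsystem))
```

The point of such a check is that the promotion decision moves out of individual engineers’ heads and into a system that sees every block’s expectations at once.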
All of this chaos is taking a toll on the business. It becomes harder and harder to get designs out the door. Corners are cut, designs are done conservatively, and margin is added wherever possible.
Power issues
The power problem rears its head in a few ways in complex devices. “One is just the simple fact that because you’ve got so many more things happening in parallel, you need to be much more careful about the aggregate power of each block so you try and optimize each block by itself,” Atrenta’s Gianfagna said. Another thing that gives design engineers trouble is that because everything is controlled by software, they need to start thinking about how to define the power domains and what the metaphors are for ‘on’ and ‘off’ for the different blocks. Blocks need to be shut off in a coherent way and then brought back up when needed. A third issue that causes problems is the fact that with every new technology node, leakage gets worse.
“You can define your power domains, but you need a really good hardware-software interface to figure out if you’ve got it right,” Gianfagna said. “We hear from a lot of customers that they’ve developed a scenario and they really need a model that is fast enough to run some scenarios but accurate enough to reflect what’s really happening. I’ve decided I can shut down the browser when a call comes in, but what does that really do? How does that really work? What’s the real power profile of that if I run real data, a real scenario through that? Customers want to do that. That’s a somewhat under-served market today. There are not a lot of good tools, there are not a lot of good flows, yet that interface of the software emulation with the hardware design world is an absolute opportunity for growth in EDA.”
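As a rough illustration of what such a scenario model does, the Python sketch below sums per-block energy across a timeline of power-state changes, such as shutting down the browser’s block while a call is active. The block names, power numbers and states are assumptions made up for the example; a real hardware-software power model would be far more detailed and calibrated against silicon.

```python
# Minimal, hypothetical sketch of scenario-based power estimation.
# Block names and power numbers are illustrative placeholders only.

# Average power (mW) per block in each power state.
POWER_MW = {
    "cpu":         {"on": 300.0, "idle": 40.0, "off": 0.5},  # "off" models leakage
    "modem":       {"on": 250.0, "idle": 30.0, "off": 0.3},
    "browser_gpu": {"on": 400.0, "idle": 50.0, "off": 0.4},
}

def scenario_energy_mj(scenario):
    """Sum energy over a scenario: a list of (duration_s, {block: state}) steps."""
    total_mj = 0.0
    for duration_s, states in scenario:
        step_mw = sum(POWER_MW[block][state] for block, state in states.items())
        total_mj += step_mw * duration_s  # mW * s = mJ
    return total_mj

# "A call comes in while browsing": shut the browser's block down for the call.
call_during_browsing = [
    (5.0,  {"cpu": "on",   "modem": "idle", "browser_gpu": "on"}),   # browsing
    (30.0, {"cpu": "idle", "modem": "on",   "browser_gpu": "off"}),  # voice call
    (5.0,  {"cpu": "on",   "modem": "idle", "browser_gpu": "on"}),   # resume
]
print(f"Scenario energy: {scenario_energy_mj(call_during_browsing):.0f} mJ")
```

Even a toy model like this makes it obvious why the shut-off policy matters: most of the scenario’s energy is spent in whichever blocks are left on during the longest phase.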
The marriage of the hardware architecture and the software emulation world for architectural analysis still requires work to get it right, but it could well be a big opportunity for better management of IP in complex systems.