Experts At The Table: Who Takes Responsibility?

Second of three parts: Reducing risk; when is IP ready; what falls through the cracks; pre-verifying IP; additional challenges with analog IP and stacked die.


By Ed Sperling
Semiconductor Engineering sat down with John Koeter, vice president of marketing and AEs for IP and systems at Synopsys; Mike Stellfox, technical leader of the verification solutions architecture team at Cadence; Laurent Moll, CTO at Arteris; Gino Skulick, vice president and general manager of the SDMS business unit at eSilicon; Mike Gianfagna, vice president of corporate marketing at Atrenta; and Luigi Capodieci, director of DFM/CAD and an R&D fellow at GlobalFoundries. What follows are excerpts of that conversation.

SemiEngineering: How do you divide up the cost?
Capodieci: You have to look at total cost. Then you get to who shares the cost.
Even if the fab picks it up, it’s still a cost that prevents the system from being fast.

SemiEngineering: How do we reduce risk of something going wrong? Is it more restrictive design rules, better verification methodology?
Capodieci: Physical IP has to be designed with coverage in mind—where it’s going to go in the future.
Koeter: When we’re profiling a chip, we evaluate all the IP that’s going to go into it and determine whether that IP has been silicon-verified and then how much of that is fed back to tools so silicon is matching some of the verification results. When that has not happened, and we’re in the early design phase and some of the IP is not proven, we will run a test chip before we commit to full production. We have to.

SemiEngineering: That’s an expensive option though, isn’t it?
Koeter: Yes, but it may be only a little piece of the overall chip so you get on a shuttle for that. We’re by far the largest provider of hard IP and we always do test chips. Typically GlobalFoundries will run lots for us and we will test it over the different process corners. We’ll do that prior to customers going into production.
Capodieci: This model actually does work. It’s the only way to develop the IP jointly and to accept it. We have a good relationship with IP suppliers. But the problem is what falls through the cracks. Nobody offers you an extended warranty like they do at Fry’s.
Koeter: One of the things we struggle with as an IP provider is that it’s not uncommon to be using 0.1 PDKs and 0.5 PDKs. So when is it done? There are PDK updates all the time. Just because it’s version 1.0 doesn’t mean it’s done. There are versions 1.1 and 1.2 and 1.5. How many times do you go back and do silicon spins? It’s not practical to do all of them.
Stellfox: We’ve been focused on metric-driven verification. From an IP level, there’s a pretty good set of tools for verification, and people who have adopted those have seen fairly good quality IP. That includes UVM, people using SystemVerilog or e constrained-random languages for exhaustively verifying IP. What’s often missing, though, is some abstraction back to what features of that IP are verified, with clear metrics of what’s been verified. The approach we take is providing a verification plan which, at an abstract level, shows the intent of the IP provider as to what features they’re actually verifying in that IP. That helps eliminate assumptions. If you’re an integrator, you can see there was never a section verifying a particular feature. That doesn’t mean you won’t make mistakes in verification, but at least it helps remove some assumptions. Functional coverage—coverage-driven, or what we call metric-driven—is a proven approach that a lot of people have taken up to the IP level. The bigger problem is with the integrators. SoC teams are the Wild West. People are doing all kinds of ad hoc things. There is room for a systematic approach there, and that’s where my team is focused right now—trying to figure that out. It’s not just the functional issues. It’s also the performance issues. Providing tools to customers to analyze use cases of the system with those IPs, to make sure they meet performance very early, is key.
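The metric-driven flow Stellfox describes—plan the features, drive constrained-random stimulus, measure which coverage bins were actually hit, and report the gaps to the integrator—can be sketched in a few lines. This is a language-agnostic illustration in Python, not a real UVM flow; the feature names, transaction types, and class are invented for illustration:

```python
import random

class CoverageModel:
    """Tiny stand-in for a functional-coverage model: each planned
    feature maps to a hit count (the 'metric')."""
    def __init__(self, features):
        self.hits = {f: 0 for f in features}

    def sample(self, feature):
        # Record that stimulus exercised this feature.
        if feature in self.hits:
            self.hits[feature] += 1

    def unverified(self):
        # Features the verification plan promised but stimulus never reached.
        return [f for f, n in self.hits.items() if n == 0]

def random_stimulus():
    # Constrained-random stand-in: pick a transaction type at random.
    return random.choice(["read", "write", "burst_read"])

plan = CoverageModel(["read", "write", "burst_read", "error_response"])
random.seed(0)
for _ in range(1000):
    plan.sample(random_stimulus())

# The report shows "error_response" was never stimulated, so an integrator
# knows that feature carries an untested assumption.
print(plan.unverified())
```

The value is in the last step: the abstract plan makes the *absence* of coverage visible, which is exactly the assumption-removal Stellfox is describing.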
Gianfagna: There’s a whole issue of what you do at the IP level and how that propagates to the chip level. The question is how well IP blocks are verified, and whether you can capture that and re-use it and extend it at the chip level. With a better methodology that is do-able. People take assertions with a dynamic tool, look at simulation, and generate assertions based on simulation. You can decide whether that’s correct or not correct based on coverage, but you need more vectors. It would be nice if you could capture that information, ship the assertions with the IP, and then build from there. That’s what’s missing. You develop IP but you have no clue what the design intent is. Once you do the SoC, then the design intent comes into play. Wouldn’t it be nice to start with a set of known-good verification metrics and known-good quality, and build out your SoC verification strategy based on what you started with?
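The assertion-generation idea Gianfagna mentions—inferring likely design intent from simulation behavior and shipping it with the IP—can be illustrated with a toy invariant miner. This is a hypothetical sketch in Python, not any tool's actual method; the signal names and traces are invented, and as the quote notes, mined invariants are only as trustworthy as the vectors behind them:

```python
# Mine candidate assertions from recorded simulation traces: any property
# that held on every observed cycle becomes a candidate to ship with the IP.
traces = [
    {"req": 1, "ack": 1, "rst": 0},
    {"req": 0, "ack": 0, "rst": 0},
    {"req": 1, "ack": 1, "rst": 0},
]

def mine_assertions(traces):
    candidates = []
    signals = sorted(traces[0])
    # Constant-value candidates: a signal that never changed.
    for s in signals:
        vals = {t[s] for t in traces}
        if len(vals) == 1:
            candidates.append(f"assert {s} == {vals.pop()}")
    # Equality candidates: two signals that agreed on every cycle.
    for i, a in enumerate(signals):
        for b in signals[i + 1:]:
            if all(t[a] == t[b] for t in traces):
                candidates.append(f"assert {a} == {b}")
    return candidates

print(mine_assertions(traces))
```

With these three cycles the miner proposes `rst == 0` and `ack == req`; more vectors would either strengthen confidence in those candidates or kill them, which is the coverage dependence Gianfagna points out.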
Moll: The interesting thing is that today we have customers in Asia who do nothing but assemble chips. They go shopping, put stuff in their cart, and at checkout they assemble all of this. It’s so easy that the whole system part of verification has to be trailing that process. The first time these things are put together, you’re ready to go to the back end and tape out. But your system verification is nowhere at that point. So are you going to tell your customers that you have to spend nine months doing system verification? Their bosses don’t think so. There really is an issue there. It’s partly a methodology issue. It would be great if we could push a lot of what we know about the IP to the system level. But there’s also a more basic issue. If you’re doing USB 3.0, for example, half of your IP has to do with USB 3.0. The rest is the SoC-facing side, which is tougher because there is a lot of interaction with things you don’t control. But when you’re assembling an SoC, it’s worse. Someone has to verify this big piece of Verilog, which is millions of gates that come from a bunch of different places. There’s still nothing like a PowerPoint to the system verification environment, but this is what’s needed at the top level. It’s also intractable, because it’s so easy to assemble a chip but we’re nowhere near the ease of assembly on the verification side of things.

SemiEngineering: What happens with analog/mixed signal?
Moll: We’re seeing our customers trying to stay away from anything physical as much as possible. They know they have to have physical IP in places, but at the assembly level they’re highly reliant on tools. They know if they put their finger in there, between their circuit guys and layout guys, it’s going to backfire very quickly. At the assembly level, they don’t want to touch this stuff.
Stellfox: Because of the amount of integration and so much analog IP going in, that has been a killer for a lot of companies. You could have done a really nice job methodologically in verifying all of your digital parts, and then with a couple pieces of analog content they hook up the wiring wrong. We’ve been investing a lot in the last few years in being able to model analog in a digital context to get the performance you need, and then having a methodology by which you can verify the higher level of abstraction down to an analog SPICE model. You need to bring more analog methodology to the mixed signal flow. A lot of customers have had really good success doing that. Traditionally analog was considered black magic, but as it becomes integrated their methodologies have to be updated to deal with that.
Gianfagna: The mixed signal has introduced an unbelievable level of complexity. That’s an opportunity for traditional mainstream methodologies.
Skulick: We need domain experts. For guys like us, who are buying third-party IP, the quality is crucial—and so is the belief in the quality of the IP—for us to create a successful SoC. We don’t create any of that analog IP ourselves, so we’re tapping into that ecosystem. This is where the silicon validation has to back up what they’re seeing. When I was at HP, we had a lab for testing PHYs, because they had to ensure they would have uptime all the time. All the IC guys were coming in and asking HP to test their PHYs to make sure they were robust and reliable. We have to see that same kind of thing around physical analog IPs, especially high-speed stuff like SerDes, USB 3.0 and MIPI.
Koeter: We’ve spent $20 million of CapEx over the past several years on lab characterization equipment and compute infrastructure specifically to address these kinds of things. We also have to do reliability simulations and performance margin, which has nothing to do with megahertz. It has to do with how much margin you have, so that when you put your IP in a noisy SoC environment the noise doesn’t disrupt it. These are things we all have to face and solve.
Capodieci: We don’t see a difference between analog and digital. When integration occurs at the physical level, more attention is paid to issues that have been mentioned upstream. The problems are more where people think all of this stuff fits together like LEGO blocks. It doesn’t. It’s more in the glue. Maybe we need to take more responsibility. The hard part is the filling, which in the analog stuff needs to be extremely delicately balanced. When you replace something with something else, you run the risk of ruining something.

SemiEngineering: What happens when we start moving into stacked die, where we have issues of known good die and IP? Where does the blame go?
Koeter: Generally the issues that surface are more manufacturing or packaging issues.
Gianfagna: Assembly is a non-trivial issue. When you put them together, whether you’re using wirebonds or TSVs or interposers, there are a lot of things that can go wrong.
Skulick: We have R&D initiatives around manufacturability. Some of those are customer-funded programs. We are trying to get to the point where we can understand the manufacturability of 2.5D designs in mass production. It’s not clear yet whether you can control the process and repeat it over and over again. We have silicon interposer initiatives. We have organic initiatives. We’re finding that customers don’t even know the difference between known good die and sorted die. There’s a big difference. Yield profiles around using known good die versus sorted die are completely different. There’s a lot of learning that’s necessary. As an integrator, we’re operating on the assumption that if we do the package design and we’re buying all the components that go in there, the onus will be much higher than in the past.

To view part one of this roundtable, click here.


