High-Level Gaps Emerge

Experts at the table, part two: Hiding complexity; a balancing act; design goals; power modeling.


Semiconductor Engineering sat down to discuss the attributes of a high-level, front-end design flow, and why it is needed at present, with Leah Clark, associate technical director for digital video technology at Broadcom; Jon McDonald, technical marketing engineer at Mentor Graphics; Phil Bishop, vice president of the System-Level Design and Verification Group at Cadence; and Bernard Murphy, CTO at Atrenta. What follows are excerpts of that discussion. For part 1, click here.

SE: What are some other challenges coming in at a high level in the front end of the design flow?

Murphy: Another area that may be affected is test. What they’re saying is that if you have these multi-fin finFETs, you’re going to have much more complex failure models.

Clark: We may be able to find out which device is failing but maybe not how, initially.

Murphy: Yes. What you want to know is whether you’ve got respectable coverage. It might push a trend toward logic BIST, because maybe ATPG doesn’t work so well anymore. This is all guesswork.

SE: When we look at 20 and 14nm, what does it look like if we do not hide all of the complexity from the designer?

Clark: The EDA companies do their best to figure out all of our usage models, but they never get them all. And all of a sudden I want to do something a little bit differently, and then it doesn’t work, and I have no power to tweak the tool or do anything to make it work. Then I’m at the mercy of the EDA company to want to support my usage model, and that works great if another logo wants it too.

Bishop: It’s actually very dependent on the style of the customer and their style of design. There are some customers that we can hide certain details from and they like it; that’s what they prefer. And they’re typically customers who are really mostly concerned with the system aspects but not necessarily the advancement that a Broadcom or a Qualcomm or somebody like that would want.

Clark: It’s so frustrating. We’re looking at the UPF implementation, and the vendors on the implementation side, versus the verification side, have decided to follow UPF to the letter. To me, UPF is an evolving spec: it’s a baby. It hasn’t figured everything out, and we can’t do anything. It took me like three days once to figure out how to put a level shifter on a constant connection versus tying to VSS. I had to create an entire UPF file for one connection. That doesn’t make sense, but there was no way around it because the rules were so strict. On the verification side, they are much more free. But on the implementation side they don’t want to go outside of the box, and while 85% of our design fits in the box, the other 15% doesn’t.

Bishop: Verification needs to be very close. Implementation…

Clark: …needs to be exact. If you’re missing a level shifter, your circuit will still work, so you can take some leeway on that. But if you’re missing other things, it might break.
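
To give a flavor of the kind of one-off rule Clark describes, here is a rough sketch of what forcing a level shifter onto a single constant-driven connection might look like in UPF. The domain, instance, and port names are hypothetical, the supply-network setup is omitted, and the exact options vary by UPF version and tool:

```tcl
# Hypothetical power domains; supply nets and ports omitted for brevity.
create_power_domain PD_CORE -elements {u_core}
create_power_domain PD_IO   -elements {u_io}

# A level-shifter strategy scoped to one port -- the single-connection
# rule that otherwise takes an entire hand-written UPF file to express.
set_level_shifter ls_const_tie \
    -domain     PD_CORE \
    -elements   {u_core/const_mode_pin} \
    -applies_to inputs \
    -rule       low_to_high \
    -location   self
```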

SE: Is there a way to mitigate this concern?

McDonald: It’s all about working at different levels of abstraction because if you had to deal with all the details, all the time, you’d never get done with anything. So you need to be able to work as abstractly as possible for as much of the design as possible until you get to a point and then you need to be able to break in at that point.

Clark: You also have to be careful — I support a lot of users within Broadcom and people with a little bit of knowledge can be really scary so you don’t want to provide all the switches and buttons to just anybody because they’re going to start tweaking everything and say, ‘How come my design doesn’t work?’ Stop touching it.

Bishop: That’s our big concern. We can give people access to the database, and they can play with the physical implementation, but generally… [if they break something] then they blame us.

Clark: It’s a balancing game. Some people will be careful turning those knobs and take responsibility for, ‘Oh, I did that.’ And some people will say, ‘Well, you let me do it. It should still work.’ There’s no way you can test for every different complication.

McDonald: One of the customers we’ve worked with at the high level had performance issues at implementation, and their customer slammed them, blamed them up and down, complained, and threatened severe action. They finally came back and couldn’t figure out why the customer had this performance problem. Through some high-level modeling, they were able to find that it was actually the customer’s software, using their design in a poor way, that was causing the problem. But unless we have the tools that let people really identify where the problems are, it’s just pointing back and forth. You’ve got to have that visibility.

Murphy: I was on another roundtable where we were talking about finFETs and one of the things I heard as a concern was the variability — you’ve got a lot more corners and you’ve got a lot more uncertainty in performance, power and everything else. So, you have this concept of an IP, which is a black box that is fully characterized and you can just drop it into your design. Well, how well does that stand up?

Clark: And who the IP talks to, because you could have the same IP operating at the same voltage on two different chips, but if it’s talking to different voltages, it’s going to have different characteristics as they talk to each other.

Murphy: I raised the question at this panel and they said, ‘Don’t worry. We’ll take care of that through services.’ Is this the end of IP? Are you really providing a value-added service and you’re going to figure out how to fit it into that design somehow? Everybody is saying, ‘No, it’s still an IP business.’ But nobody answered the question about how to handle that variability.

Bishop: It seems like it pushes it back into the court of the TSMCs of the world, where they’ve got to work on the variability, because that will affect them whether there’s an IP market and an IP that you’re selling, or whether it’s across the chip. It’s something they’re going to have to deal with.

Murphy: Here’s another wacko idea. How about you start thinking about putting a boundary around the IP? And in that boundary you’re going to absorb or manage all of that variability so you do source-synchronous clocks and everything else you need to do to…

Clark: We already do that in our internal IP: we implement the localization of the IP in a wrapper.

McDonald: The problem is how much performance you’re willing to give up to do that. For some things that will work.

Clark: It depends on your goal. Sometimes it’s more important to get it out the door than to make it perfect, so if it’s outside of your power margin but you need to sample it to customers, ship it if it works. You care less about the power for a customer sample than about being on the path to your final product. We have different design goals at different stages of development. We have taped out chips that are only closed at ‘typical,’ because we need to sample to customers and we need their feedback before we invest more time in that product.

Murphy: That goes to the myth of the first silicon, by the way.

Clark: We’ll do first silicon on a derivative product but we don’t plan on it. In fact, the Broadcom methodology is we tape out multiple chips on a reticle.

Murphy: I think almost everybody does in one way or another.

SE: When we think about modeling, is it possible to have a high level of confidence in power models?

Clark: It’s so contextual that no one model is accurate in all contexts. Your timing models have a target voltage, and the timing in them assumes that everything they are talking to is at that same target voltage. Say I have this IP and the baseline timing assumes all of its I/O is talking to the same voltage, but I have AVS (adaptive voltage scaling) outside. How do I characterize it across that whole range? Do I do the endpoints and interpolate, or do I do 20 points? And in what combination, because one interface, one bus might be talking to one voltage and a different bus to a different voltage. How do they interact? If you think too deeply about it, you get overwhelmed. I don’t know how to turn all those knobs to get the sweet spot of all the information. I don’t know that it’s going to work. I mean, I don’t know that there is a sweet spot where it’s all going to work across voltage and process and temperature.
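
As a concrete (and entirely invented) illustration of the ‘endpoints and interpolate, or 20 points?’ question Clark raises, here is a minimal sketch of estimating one timing arc’s delay at an AVS-driven voltage by linear interpolation between two characterized endpoints:

```cpp
#include <cstdio>

// Hypothetical characterization data: delay (ns) of one timing arc
// measured at one supply-voltage endpoint.
struct CharPoint { double voltage_v; double delay_ns; };

// Linear interpolation between two characterized endpoints. With only
// two points, any curvature in the real delay-versus-voltage behavior
// becomes estimation error -- the "2 points or 20?" tradeoff.
double interp_delay(const CharPoint& lo, const CharPoint& hi, double v) {
    double t = (v - lo.voltage_v) / (hi.voltage_v - lo.voltage_v);
    return lo.delay_ns + t * (hi.delay_ns - lo.delay_ns);
}

int main() {
    CharPoint lo{0.81, 1.42};  // low-voltage endpoint: longer delay
    CharPoint hi{0.99, 0.95};  // high-voltage endpoint: shorter delay
    // AVS might put the rail anywhere in between, e.g. 0.90 V.
    std::printf("estimated delay at 0.90 V: %.3f ns\n",
                interp_delay(lo, hi, 0.90));
}
```

And that is one arc on one rail. With multiple independently scaling rails talking to each other, the characterization grid multiplies combinatorially, which is the overwhelm Clark describes.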

SE: The promise of doing the modeling is that you can share this data.

Bishop: I think the promise of modeling is levels of abstraction: being able to share that parasitic information, or whatever it is that you modeled, all the way up through the design flow, so that at the different levels you can use a certain amount of that information and the tradeoffs you make won’t make the design fail. I’m not so sure that as we dig deeper into finFETs and some of these other areas, the modeling is going to provide the information all the way up the design flow. I’m waiting to see. We haven’t gotten enough real data on parts being built to know that we’re verifying something that’s actually going to work.

Murphy: One of the problems is we don’t have power models that we can use at the system level and we don’t even know what those would look like.

Bishop: How do you do a power model with SystemC?

Murphy: How do you build a power model for a video codec?

McDonald: That’s something that we’re doing. We do power modeling in SystemC, but it’s at a transaction level and it’s based on estimates. One of the challenges is that all of this is just guesses: at the high level you’re guessing, and then you get down to an implementation level and now you know. One of the things we need to do in the modeling is be careful about what question we ask at what level; we can’t make a low-level decision with an architectural model. What we see a lot of our customers doing is a sensitivity analysis with the architectural model. They’re trying to say, ‘I guess it’s going to be this. Now let me start introducing varying assumptions to see which assumption is going to cause it to break if it’s a little bit off.’ We see a lot of trying to get more information about the implications you’re going to face in implementation before you get there. The modeling helps in doing that. It’s not going to answer all the questions, but it’s going to help you at the next level make a more informed decision about how you go about doing something.
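
To give a flavor of what McDonald describes (this is a generic sketch, not Mentor’s tooling; the module name, energy numbers, and transaction mix are invented), a transaction-level power estimate in SystemC can be as simple as attaching an estimated energy cost to each transaction type and accumulating it during simulation:

```cpp
#include <systemc>
#include <iostream>
using namespace sc_core;

// Hypothetical transaction-level power model: each transaction type
// carries an *estimated* energy cost. At this level the numbers are
// guesses, to be refined against implementation data later.
struct PowerModel {
    double read_nj  = 0.8;   // estimated energy per read  (nJ)
    double write_nj = 1.2;   // estimated energy per write (nJ)
    double total_nj = 0.0;
    void log_read()  { total_nj += read_nj; }
    void log_write() { total_nj += write_nj; }
};

SC_MODULE(Dma) {
    PowerModel pwr;
    SC_CTOR(Dma) { SC_THREAD(run); }
    void run() {
        for (int i = 0; i < 1000; ++i) {  // a burst of transactions
            pwr.log_read();
            pwr.log_write();
            wait(10, SC_NS);
        }
        std::cout << "estimated energy: " << pwr.total_nj << " nJ\n";
    }
};

int sc_main(int, char*[]) {
    Dma dma("dma");
    sc_start();
    return 0;
}
```

The sensitivity analysis McDonald mentions then amounts to re-running with read_nj and write_nj perturbed up and down, to see which guess the architectural conclusion is most fragile to.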

Murphy: The challenge there still is how you connect that to some kind of implementation reality, because if you say it goes this way and it really goes another way, then you’re in trouble.

Bishop: We’ve got to get to the feedback loop. The research we’ve been doing is trying to take place-and-route information all the way up through the flow, and it’s very challenging. Typically, for me at least, my interface point is physical synthesis, because at least some of that information makes sense to me up in the clouds where I exist.

Clark: It’s not accurate until GDSII.

Bishop: And that’s when you really know.

Clark: So every experiment you do, you can’t even decide if you’ve got good data until you get all the way through your P&R and extract the real data.

Murphy: Even at the transaction level, if you’ve got multiple things happening in an end-point IP, you can have multiple levels of power consumption, and you can tie that to software and say, if these are the right numbers, then this is what it’s going to do.
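
Murphy’s point can be sketched the same way, again with invented names and numbers: an end-point IP with several power states, where the time software keeps it in each state accumulates into an energy estimate:

```cpp
#include <cstdio>

// Hypothetical state-based power model for one end-point IP: each
// state burns a different estimated power, and software activity
// determines how long the IP sits in each state.
enum class State { Off, Idle, Active };

double state_mw(State s) {
    switch (s) {
        case State::Off:    return 0.0;   // powered down
        case State::Idle:   return 2.0;   // clocked, waiting
        case State::Active: return 45.0;  // processing transactions
    }
    return 0.0;
}

int main() {
    // State residency from a (hypothetical) software-driven trace:
    // how many microseconds the IP spent in each state.
    struct Segment { State s; double us; };
    Segment trace[] = {{State::Idle, 500.0},
                       {State::Active, 120.0},
                       {State::Idle, 380.0}};
    double energy_nj = 0.0;
    for (const auto& seg : trace)
        energy_nj += state_mw(seg.s) * seg.us;  // mW * us = nJ
    std::printf("estimated energy: %.1f nJ\n", energy_nj);
}
```

If the per-state estimates hold up, that is the ‘if these are the right numbers, then this is what it’s going to do’ link between software behavior and power.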


