Experts At The Table: Timing Constraints

First of three parts: Defining the issue, identifying problems and figuring out who’s responsible for fixing them; power complicates everything.

Low-Power Engineering sat down to discuss timing constraints with ARM Fellow David Flynn; Robert Hoogenstryd, director of marketing for design analysis and signoff at Synopsys; Michael Carrell, product marketing for front-end design at Cadence; Ron Craig, senior marketing manager at Atrenta; and Himanshu Bhatnagar, executive director of VLSI design at Mindspeed Technologies. What follows are excerpts of that conversation.

LPE: Let’s define timing constraints.
Carrell: Timing constraints seem pretty simple. It’s an SDC file, but it rapidly grows into a script that has to work across all the other tools in the flow. First-party, second-party and third-party IP are all included in that as well. First-party is something you’re writing. Second-party is what some guy down the hall writes. Third-party is written by someone outside, and you’re never sure whether it’s good or not. It includes basic timing constraints, but also the exceptions that an expert user would apply.
Craig: You can split design into two halves: what it is supposed to do and what the performance is supposed to be. The functional side is the RTL and all the standard verification techniques. The performance side is where the timing constraints come into play: what speeds the clocks are supposed to run at, the I/O constraints, the timing exceptions where you can ignore things. What gets tricky is that the designers are the ones who understand best what the timing constraints should be, but they’re not the ones who are responsible for them. You seesaw back and forth between back-end and front-end. The back-end owns it, but the front-end understands it very well. It’s a tricky one to close.
Hoogenstryd: Timing constraints define the performance and help define the context. It’s an intent to define the context so a design, or a piece of a design, will work in its environment. It’s not only the clock frequency, but what the inputs look like. In terms of challenges, one of the big ones we see, especially for very large designs, is that the timing constraints are split among different pieces of the design. You need to get timing convergence between blocks in the chip. Very often the timing constraints for a block are the best estimate of what the block is going to see from the chip, and often the estimate and reality are quite far apart. That leads to lots of iterations back and forth between block designers and chip integrators.
Bhatnagar: The chip is only as good as your timing constraints. If one line in your timing constraints is off, your chip could be as good as dead. Functionally you may have workarounds, but if the clock frequencies are off you’re dead. Over the years everything became timing-driven. Now there are multiple modes, so understanding and managing constraints is very problematic. On top of that, different tools interpret constraints differently. Even though there’s a standard, no tool follows it precisely.
Flynn: From an IP point of view, timing constraints don’t sound too bad. You have internal challenges building these processors up. Processors always get driven by performance. But people always push it too hard. Constraints are not handled well hierarchically. We can provide constraints here, but then some poor person has to roll it all up at the global level. That’s a massive challenge. A few simple exceptions become much more complex. It’s going to get even harder when we get to 3D.
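
For reference, a minimal SDC sketch of the constraint types the panelists describe above: a clock definition, I/O constraints, timing exceptions and a mode setting. The clock, port and instance names here are hypothetical.

    # Clock definitions: target frequencies expressed as periods (ns)
    create_clock -name core_clk -period 5.0  [get_ports core_clk]
    create_clock -name jtag_clk -period 20.0 [get_ports tck]

    # I/O constraints: how much of the cycle the outside world is assumed to use
    set_input_delay  1.5 -clock core_clk [get_ports data_in*]
    set_output_delay 2.0 -clock core_clk [get_ports data_out*]

    # Timing exceptions: paths the tools are allowed to ignore or relax
    set_false_path -from [get_clocks core_clk] -to [get_clocks jtag_clk]
    set_multicycle_path 2 -setup -from [get_ports cfg_static*]
    set_multicycle_path 1 -hold  -from [get_ports cfg_static*]

    # Mode setting: tie off a control pin for one of several analysis modes
    set_case_analysis 0 [get_ports scan_enable]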

LPE: Is there a lot of finger pointing going on as front-end design meets back-end design and there is more IP being integrated?
Bhatnagar: Big time. I’ve never had a problem with ARM IP. But with other IP vendors it’s unbelievable what they turn out.
Carrell: The amount of time people spend integrating IP is massive. A lot of that is probably timing constraints, not just squeezing it into a certain amount of area.
Bhatnagar: You’re absolutely right. I personally spent the last few months taping out a 40nm chip. We had DDR PHY IP from which I had to get the DFI (DDR PHY interface) clock. I had to ask the vendor for the clock latency because my interface depends on it, and the deliverables should have contained that. It’s not just one IP vendor, either. They build very good functionality, but they don’t understand the whole concept of integration. Within the company there is a gap between front-end and back-end. That’s why every company I know of has Excel sheets.
Craig: I talk to the front-end teams and the back-end teams and try to understand their methodology for timing constraints. In at least 50% of the cases the back-end guys throw away what the front-end guys give them and do it from scratch. They don’t even try. Even between vendors and their customers, pretty large ASIC vendors complain that their customers give them garbage in terms of constraints. I’ve seen vendors put their engineers on site with the customer to clean up their timing constraints.
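
The DFI clock issue described above typically comes down to a couple of lines of SDC that only the PHY vendor can fill in accurately. A sketch, with hypothetical pin names and latency values:

    # Clock handed off by the hard DDR PHY at its DFI boundary (hypothetical names)
    create_clock -name ddr_ref_clk -period 2.5 [get_ports ddr_ref_clk]
    create_generated_clock -name dfi_clk -source [get_ports ddr_ref_clk] \
        -divide_by 2 [get_pins u_ddr_phy/dfi_clk_out]

    # Insertion delay through the macro: the number the integrator had to chase
    # the vendor for, because the interface timing depends on it
    set_clock_latency -source 1.2 [get_clocks dfi_clk]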

LPE: Does it help if more IP comes from a single vendor instead of multiple vendors?
Craig: Some of the soft IP vendors really don’t understand constraints anyway. You’re never going to get something different from them.
Carrell: It’s almost like we don’t have a common understanding of whether IP is ready to be integrated or not. There used to be a standard checklist that got absorbed into a standards body.
Bhatnagar: That’s the SPIRIT Consortium.

LPE: That was IP-XACT?
Bhatnagar: Yes, same thing.
Carrell: Every field in that sheet had to be filled out. But that was all voluntary compliance. It’s faded into the background because it wasn’t reliable.

LPE: So who becomes the gatekeeper of IP? Is it the vendors, the foundries, or the customers?
Carrell: If it’s on the shelf and it shows very clearly that it has been validated, then it’s more commercially viable.
Bhatnagar: It should be silicon-proven.
Carrell: Yes, but if it’s soft IP and it hasn’t been silicon-proven, that should be on the label, too. It’s like your food. If it doesn’t tell you what’s in it you put it back on the shelf and you buy something else.
Flynn: We used to do hard macros, but it was a challenge we couldn’t keep up with because of the porting needs of the technology. A lot of the complexity depends on the timing constraints in memory. What methodology are you using to handle these constraints? You want to be able to see far enough inside to make your tools work properly.
Hoogenstryd: With soft macros, the constraints provided with them serve a different purpose. They’re not necessarily constraints for defining the block’s behavior at the chip level. For chip integration, those constraints are for mapping a particular implementation of that piece of IP. You don’t know what your latency is until you map it to a particular process, run it through a particular place-and-route tool, and get a block. It’s hard to assume that the constraints passed along with the IP will match what you actually get when you harden the IP yourself.
Flynn: You have to have that internal view, but you also need to care about the integration and how it affects your clock.
Hoogenstryd: But it’s hard to get that until you harden it.
Craig: Or you take an extremely pessimistic approach. A lot of people do that. ARM does that sometimes. That’s good from the standpoint of a provider because you have extra certainty that it’s going to work. But it can make the integration harder because everyone talking to that IP is consuming so much of that budget.
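
One way that budget pressure shows up in the chip-level constraints, sketched with hypothetical names and illustrative numbers: if the IP’s published boundary numbers are conservative, the block talking to it inherits whatever is left of the cycle.

    # Chip-level view of a block that communicates with conservatively
    # constrained IP. 5 ns clock; names and numbers are illustrative only.
    create_clock -name sys_clk -period 5.0 [get_ports sys_clk]

    # The IP's published numbers: it wants data 3.0 ns before the clock edge
    # at its inputs, and guarantees its outputs only 3.0 ns after the edge.
    # Budgeted onto the neighboring block as I/O delays:
    set_output_delay 3.0 -clock sys_clk [get_ports to_ip_req*]
    set_input_delay  3.0 -clock sys_clk [get_ports from_ip_ack*]

    # That leaves roughly 2 ns per cycle for this block's own logic and routing
    # on those paths, before clock skew and uncertainty are even considered.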

LPE: Does disaggregation of the industry make it harder?
Craig: Absolutely. At a company like IBM, they can control ownership and tell everyone exactly what they do and do not own. Otherwise, timing constraints are always someone else’s problem.
Carrell: There are much more rigid methodologies at the IDMs.

LPE: So who really is responsible?
Hoogenstryd: Everyone.
Craig: The person who ultimately has a problem is the one who’s responsible.
Bhatnagar: Ultimately it’s the back-end guy who’s driving the chips. They get the blame all the time.

LPE: How are you defining front-end and back-end?
Bhatnagar: Design, simulation and RTL verification are the front-end. Synthesis all the way down is the back-end.

LPE: Do models help? There are TLM models, power models and software models. And then there is timing.
Bhatnagar: A chip goes through many cycles. To avoid doing everything flat and then taking a hit on run time, you want to do as much with models as possible. In the first round, models have to be accurate. When you go to full-flat mode you toss the models.

LPE: How about when you add in power domains and on/off?
Carrell: You replace spreadsheets by pulling things out of a catalog, instantiating them in a design, and then looking at a timing view and the power modes. Then you figure out where your problems are and go from there.
Flynn: It is possible to integrate models pretty well. A lot of external interfaces can be handled with the timing. When it comes to voltage, you’ve got a problem where the supplies don’t track together. You may have two power supplies, with timing closure in the middle being incredibly tight. You can look at it from a high level and estimate what you think it will be, but timing closure is even more difficult than you would expect.
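
A sketch of how non-tracking supplies can be reflected in the constraints, with hypothetical clock names and an illustrative margin value: paths that cross between the two domains carry extra uncertainty, which is exactly where closure gets tight.

    # Two clock domains on independently varying supplies (hypothetical names)
    create_clock -name clk_vdd_a -period 5.0 [get_ports clk_vdd_a]
    create_clock -name clk_vdd_b -period 5.0 [get_ports clk_vdd_b]

    # Extra margin on the domain-crossing paths, because derating for two
    # supplies that move independently stacks up on exactly those paths
    set_clock_uncertainty -setup -from [get_clocks clk_vdd_a] -to [get_clocks clk_vdd_b] 0.8
    set_clock_uncertainty -setup -from [get_clocks clk_vdd_b] -to [get_clocks clk_vdd_a] 0.8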


