Experts At The Table: Timing Constraints

Second of three parts: The impact of software and 3D, dealing effectively with assertions, creating a new language.

By Ed Sperling
Low-Power Engineering sat down to discuss timing constraints with ARM Fellow David Flynn; Robert Hoogenstryd, director of marketing for design analysis and signoff at Synopsys; Michael Carrell, product marketing for front-end design at Cadence; Ron Craig, senior marketing manager at Atrenta; and Himanshu Bhatnagar, executive director of VLSI design at Mindspeed Technologies. What follows are excerpts of that conversation.

LPE: Do models address timing constraints effectively?
Bhatnagar: No. There’s another problem in the models that we haven’t discussed: the syntax. Let’s say you’ve extracted the timing model and it has a generated clock inside. How do you model that at the chip level? It really depends on how the extraction was done. You have to have visibility into the model itself and into how you’re going to define things from the top level. Most failures happen because someone didn’t understand what was happening inside the model.
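For illustration, here is roughly what that looks like in SDC, with hypothetical block, pin, and register names. A divided clock defined inside a block has to be re-declared at the top level against the real chip clock, and if the integrator cannot see how the extracted model defined it, the chip-level declaration is guesswork.

    # Block-level SDC (hypothetical names): a divide-by-2 clock created
    # inside the block from the block's clock input port.
    create_clock -name blk_clk -period 2.0 [get_ports clk_in]
    create_generated_clock -name blk_clk_div2 \
        -source [get_ports clk_in] -divide_by 2 \
        [get_pins u_div/q_reg/Q]

    # Chip-level SDC: the same divider must be re-declared against the
    # real top-level clock, or every path through it is timed against
    # the wrong waveform.
    create_clock -name sys_clk -period 2.0 [get_ports sys_clk_pad]
    create_generated_clock -name blk_clk_div2 \
        -source [get_ports sys_clk_pad] -divide_by 2 \
        [get_pins u_core/u_div/q_reg/Q]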
Hoogenstryd: There is a challenge with models. Bringing models into the top-level analysis gives you a performance and capacity benefit, because those models typically run faster. But those models are only as good as the context in which you extract or define them. Rather than extracting models, you’re better off using timing data from a block-level timing run. Being able to promote that up to the chip level, so you have full access to every path, is important. Along with the timing data, it saves the constraints. So now you get to see, in your real chip context, where your block is out of scope. What timing constraint did you use in the block that you’re violating because of the chip-level context? Based on that, you can push in and look in more detail. You may have violated the scope, but that may be okay. If not, you know where to make a directed change in your design.
Bhatnagar: So you propagate your constraints from the bottom to the top?
Hoogenstryd: They’re available at the top level, yes. You have top-level constraints, timing constraints, and you can determine where they are in violation.
Carrell: That’s in PrimeTime, right?
Hoogenstryd: Yes.
Bhatnagar: So you have full visibility even with hardened blocks?
Hoogenstryd: Yes. You try to address this problem with modeling. Even if it’s hard IP, you’re estimating what the context of that thing will be, when it may be used in many different contexts. We’ve seen design teams that know the spec of the IP but try to push it to the limit anyway. To meet timing, they violate the spec.
Craig: They get out of sync at the last minute. You thought you were okay at the block level but you’re not okay at the chip level.
Hoogenstryd: Exactly.

LPE: Which side of the design process is responsible for all of this?
Carrell: We were talking earlier about different tools behaving in different ways. Independent verification is important precisely because these tools behave differently. Right now PrimeTime is counting the money in the register and doing the books at the same time. With constraints, there should be independent verification.
Hoogenstryd: The way I look at it is that PrimeTime is the arbitrator. With its usage in the market as the tool they trust for timing, it’s the arbitrator.
Carrell: But people could spend time in place-and-route and find the problem there. They can find it in static timing and synthesis. They can run and run and run with the implementation tools. But isn’t it better to find these problems with a tool designed for that?
Hoogenstryd: That’s why one of the tools we rolled out last year was Galaxy Constraint Analyzer. The front-end team would be responsible for coming up with constraints without knowledge of what synthesis or place-and-route could or could not do. We saw a lot of customers struggling. They were using their place-and-route tool as a constraints scrubber.
Bhatnagar: Out of six months of timing closure, you spend two months scrubbing the constraints.
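A minimal sketch of the kind of constraint bugs being scrubbed here, using hypothetical names. Both examples are legal SDC, and both can sit silently in a constraints file until an implementation run finally exposes them.

    # 1. Exception on an object that no longer exists. If the register
    #    was renamed in the RTL, the tool just warns and moves on, and
    #    the path is timed as if the exception were never written.
    set_false_path -from [get_cells u_sync/meta_reg]

    # 2. Conflicting exceptions on the same path. SDC precedence rules
    #    say the false path wins, silently masking the intended
    #    multicycle behavior.
    set_false_path -from [get_clocks cfg_clk] -to [get_clocks core_clk]
    set_multicycle_path 2 -setup -from [get_clocks cfg_clk] -to [get_clocks core_clk]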

LPE: Now that the system is no longer just the hardware, how much does software impact this?
Craig: We see it come into play with timing exceptions sometimes, because people will set false paths from configuration registers and things like that. When you go through the process of trying to verify those independently, the hardware guy says, ‘I don’t know whether it’s a configuration register or not.’ He then has to find the software guy, and two weeks later he gets an answer back about whether that timing exception is safe to use. In the meantime he’s stuck with a timing violation he’s unable to fix. The software guys then get sucked into that part of the process.
Bhatnagar: It’s the application.
Flynn: The software may be involved in driving whatever the power state specs are. You need to go back and really validate. It’s very hard to interpret what you have there. There has to be some level of abstraction, though.
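Craig’s configuration-register example can be sketched in SDC, with hypothetical register names. The false path is only safe if software never toggles the register while the chip is running, which is exactly the question the hardware team cannot answer on its own.

    # Hypothetical: cfg_mode_reg is believed to be quasi-static,
    # written once at boot, so all paths from it are waived.
    # If software actually changes it at run time, this exception
    # hides a real timing violation.
    set_false_path -from [get_cells u_cfg/cfg_mode_reg*]

    # A more defensive alternative when the register changes rarely
    # but not never: bound the path with a multicycle exception
    # instead of waiving it entirely.
    set_multicycle_path 4 -setup -from [get_cells u_cfg/cfg_mode_reg*]
    set_multicycle_path 3 -hold  -from [get_cells u_cfg/cfg_mode_reg*]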

LPE: What happens with 3D stacking?
Craig: You have to be pessimistic. You have to isolate one layer from the next; otherwise it becomes too unwieldy. What happens when I tweak one layer? What effect does that have on another one?
Flynn: There’s a brave new hope here, because memory PHYs are really tight. That’s one of the big problems. The idea that memory I/O is going to be a lot simpler has lots of attraction. There is an upside in all this, but it’s still a long way away.
Carrell: When the handoff happens from the front end to the back end, the constraints start again. How many people are even handing off a placement today? Let’s start with that. You can get physical precision in a tighter cone, which will help you with timing closure. You then can carry 3D modeling forward in the flow and keep some of that precision. But we’re having trouble even getting people on board with the physical handoff.
Bhatnagar: We are looking at piece parts. We are looking at timing closure by itself, SystemC by itself, and we are trying to solve these problems separately. All the fragmentation in the industry could be solved with language. Verilog represented functionality. SystemVerilog incorporated assertions, but again, that’s all about functionality. Why not assertions for timing? You should be coding timing along with functionality. That would get rid of SDC files and handoff problems.
Flynn: But timing is very technology-dependent.

LPE: Don’t you still have to create that language?
Bhatnagar: Yes, and that is the Holy Grail.
Craig: You introduce another variable into the process. Today, if the design doesn’t work, the bug is either in the RTL or in the testbench. If you introduce assertions, you have the same problem in a third place.
Bhatnagar: But when you look at the verification space, a lot of innovation has happened. In timing closure, name one innovation over the past 10 years.
Craig: There is too much trust that the back-end tools know what you’re doing with your timing constraints. That’s the big issue. And the only way to get better is to get more experience, which means making more mistakes.
Flynn: Timing is technology-dependent, but functionality is pretty portable. Do you want to encode something that technology-dependent deep inside the functionality?
Bhatnagar: About 90% of the constraints can be part of the language itself—especially the exceptions.
Flynn: You can definitely synchronize the clocks.
Bhatnagar: I can code this clock is synchronous or asynchronous from this clock. All these problems can be solved.
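As a rough sketch of what Bhatnagar is describing, today that clock relationship lives in a separate SDC file rather than in the design language itself (hypothetical clock names):

    # Two unrelated clocks entering the design.
    create_clock -name cpu_clk -period 1.25 [get_ports cpu_clk]
    create_clock -name usb_clk -period 4.80 [get_ports usb_clk]

    # Declare the domains asynchronous: every path between them is
    # excluded from timing analysis and must instead be covered by
    # synchronizers in the RTL.
    set_clock_groups -asynchronous -group {cpu_clk} -group {usb_clk}

That set_clock_groups line is exactly the kind of intent he argues belongs next to the logic that crosses the domains, not in a side file.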
Hoogenstryd: It sounds really interesting. If something were started today, we might have something in five years. UPF/CPF has taken how long? But the other challenge is that design approaches are very varied. Some companies are top-down while others are bottom-up. Some look at the IP they have and build from there. A language like that may work top-down, but it doesn’t work bottom-up. Is the front-end guy going to do a better job with this capability?
Carrell: Writing the design at a higher level of abstraction and synthesizing out different flavors of it—this is my low-power on this process, this is my high-speed version on this process.
Hoogenstryd: If we all went back to 350nm this might be possible.
Bhatnagar: I think the industry is heavily fragmented.

LPE: What happens with design for variability? Is it made worse or better?
Craig: It makes it tougher.
Hoogenstryd: There are two camps out there. Some customers are pushing every bit of performance they can get out of their design, whether it’s timing or power. They want all the analysis tools to take into account the third-order variation effects, which puts a lot more pressure on verification. Then there’s the other camp that recognizes there’s variability and they want to design it out. A year or two ago the big discussion was stress. How do you model it? Can you do statistical analysis? One way was to design it out so that filler cells look like dummy cells. They all sort of have the same stress. Then it’s a wash. Either you have to design it out or you analyze the heck out of it.


