The Essential Tool Kit

What’s required for chip design these days? Here’s the list of must-have tools and what kinds of choices need to be considered.

By Ann Steffora Mutschler
Is there an essential chip design tool kit today that contains only the ‘must haves’? Sure, this sounds like a straightforward question, but the answer really depends on what process node the design will be manufactured at.

According to Jon McDonald, technical marketing engineer for the design and creation business at Mentor Graphics, there’s actually nothing that is a ‘must have.’ “How do you define ‘must have?’ You could go back to paper and pencil; you could do it all on a spreadsheet. Don’t use any simulation tools or any design tools. Is it going to be effective? Is it going to be cost effective? Is it going to be efficient? Of course not, but you could do that. It is possible.”

What it really comes down to is what the ROI is on doing something, he said. “What’s the cost of doing it and what’s the return on doing it? And if the return is more than the cost, it’s ‘must have.’ You could almost say ‘nice to have’ is really something you shouldn’t have because ‘nice to have’ implies that the return isn’t as much as the cost of investment. So why are you doing it at all? The tools have to pay for themselves, they have to be worth the investment and worth the return. And if they are, then it’s ‘must have.’ You have to do it.”

Steve Carlson, group director for silicon realization at Cadence, asserted: “Everything we make is essential. I said that tongue in cheek, but in the end we built every tool we have because somebody really needed it. It gets to the question of what design are you talking about. If we talk about an advanced-node 20nm/14nm SoC running at 1.5GHz, that’s one set of tools. If we talk about a 130nm mixed-signal design with sensors incorporated, there’s going to be a different set of tools required.”

Fundamentally, the design engineer needs to be able to capture the design, including integrating IP, and be able to validate or verify the functionality of the design, he explained. They need to add the test structures, go through the implementation process and then get through the signoff. “Even the signoff part is going to be different. There’s a signoff process for both. In advanced node, there are 60 to 100 different corners that people sign off on, and you can’t go through the usual place and route, into signoff, find problems, go back to place and route. You need an environment in signoff that has the optimizations integrated into it. That’s something that’s new for 20nm.”
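To make the corner explosion concrete, here is a minimal sketch, not tied to any particular signoff tool and using made-up corner axes, showing how process, voltage, temperature and extraction conditions multiply into the kinds of counts Carlson mentions:

```python
from itertools import product

# Hypothetical corner axes for illustration only; real advanced-node corner
# lists come from the foundry and the project's signoff methodology.
process     = ["ss", "tt", "ff", "sf", "fs"]           # device process corners
voltage     = ["0.81V", "0.90V", "0.99V"]              # supply corners
temperature = ["-40C", "25C", "125C"]                  # temperature corners
extraction  = ["cworst", "cbest", "rcworst", "rcbest"] # parasitic extraction corners

corners = list(product(process, voltage, temperature, extraction))
print(f"{len(corners)} corner combinations")   # 5 * 3 * 3 * 4 = 180
```

In practice the cross-product is pruned to the combinations that actually have to be closed, which is how the quoted 60-to-100 corner counts arise rather than the full 180.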

Smaller technology nodes change the rules of the game
Noting the serious challenges and benefits that come with smaller manufacturing nodes, Taher Madraswala, vice president of engineering for physical design at Open-Silicon, pointed out, “As the technology nodes become smaller and as fab rules become more challenging, we are now creating our own specific methodology to handle these lower technologies.”

For example, at the lower nodes—28nm, 22nm and possibly 14nm—the current drawn by these ASICs is going to be large and spiky because the transistors switch very fast. They perform at several GHz. As they switch, they draw current that spikes, and at the same time die sizes are growing, he said. “If you draw current that spikes and it’s a large die, traveling from the edge of the die to the center of the die, you’re going to heat up quite a bit.”

Specific techniques need to be created to keep the die cool or measure its temperature, and maybe even lower the frequency in certain sections of the die. Work already is under way in this area. Libraries are being developed to take care of the impact of these effects at the lower technology nodes. “If you don’t do anything, the problems will be there and the architect will be forced to reduce the die size – you can’t take advantage of the lower technologies,” Madraswala added.
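As a rough illustration of the edge-to-center problem Madraswala describes, the back-of-envelope sketch below estimates IR drop and grid self-heating for a current spike crossing a large die. Every number in it (sheet resistance, die size, spike current) is an assumed example, not data from Open-Silicon:

```python
# Back-of-envelope IR-drop estimate for a current spike delivered from the
# die edge to the die center. All numbers are illustrative assumptions.

sheet_res_ohm_per_sq = 0.02   # assumed effective sheet resistance of the power grid
die_edge_mm          = 12.0   # assumed large die, 12 mm on a side
grid_width_mm        = 6.0    # assumed effective width of the grid feeding the center
spike_current_a      = 5.0    # assumed instantaneous current spike

# Distance from the edge to the center is half the die edge.
squares   = (die_edge_mm / 2) / grid_width_mm   # length / width in "squares"
r_grid    = sheet_res_ohm_per_sq * squares      # effective grid resistance
ir_drop_v = spike_current_a * r_grid            # V = I * R
heat_w    = spike_current_a ** 2 * r_grid       # power dissipated in the grid itself

print(f"IR drop ~ {ir_drop_v * 1000:.0f} mV, grid dissipation ~ {heat_w:.2f} W")
```

In this simple model, doubling the die edge or halving the effective grid width doubles both the drop and the local heating, which is exactly the scaling the new libraries and techniques are meant to manage.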

Fortunately for designers, many of the new challenges are addressed within the tools. However, sometimes shortcuts will be made if the design manager deems it necessary.

“A lot of people were doing this without looking carefully at power analysis,” said Carlson. “There’s a signoff for power, but during design you can’t just wait until the end to see what happens with power and try to fix it. You need to use power analysis early in the process. That’s a good example of something that people have forgone and will have to have.”
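The kind of estimate early power analysis starts from can be as simple as the classic CMOS dynamic-power formula. The sketch below is an illustrative example with assumed activity, capacitance, voltage and frequency values, not output from any signoff-grade power tool:

```python
# Rough dynamic-power estimate of the kind early power analysis later refines.
# All parameters are illustrative assumptions for P = a * C * V^2 * f.

def dynamic_power_w(switching_activity, switched_cap_f, supply_v, freq_hz):
    """Classic CMOS dynamic power: activity * switched capacitance * V^2 * frequency."""
    return switching_activity * switched_cap_f * supply_v ** 2 * freq_hz

# Example: 15% average activity, 2 nF of total switched capacitance,
# 0.9 V supply, 1.5 GHz clock (the class of SoC mentioned above).
p = dynamic_power_w(0.15, 2e-9, 0.9, 1.5e9)
print(f"Estimated dynamic power: {p:.2f} W")   # ~0.36 W for these assumptions
```

Running numbers like these at the architecture stage is what lets a team catch a power problem before it becomes a late signoff surprise.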

Similarly, when you look at test, “you’re going to have to incorporate the design for test (DFT), particularly in these complex CPU, memory architecture kinds of designs and where you have a lot of different memories,” he added. “The test strategy, the architecture for test in those has a big effect on the congestion in the design—the routing—because those structures need to be connected to everything and you’ve already got a highly connected kind of architecture in these multi-CPU designs. You have something like a crossbar switch architecturally in about all of them, and then on top of that you have to overlay a highly integrated test network. So the congestion issues get amplified and need to be taken care of earlier in the process. And that again needs to be part of the inter-relationship between the logical and physical architecture that you use.”
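A toy wire count makes Carlson’s congestion point concrete. The numbers of CPUs, memories, bus widths and scan signals below are assumptions chosen only to show how a full crossbar plus an overlaid test network adds up:

```python
# Toy illustration of routing demand: a full crossbar between initiators and
# targets, plus an overlaid scan test network reaching every block.
# All counts are assumed for illustration; this is not a real congestion metric.

cpus, memories = 8, 16
bus_width_bits = 128

# A full crossbar gives every CPU a full-width path to every memory.
crossbar_wires = cpus * memories * bus_width_bits

# The test network must also be stitched through every block.
blocks = cpus + memories
scan_signals_per_chain = 4   # assumed: scan-in, scan-out, scan-enable, test clock
chains_per_block = 8         # assumed parallel scan chains per block
test_wires = blocks * chains_per_block * scan_signals_per_chain

print(f"crossbar wires: {crossbar_wires}")    # 16,384 data wires
print(f"test overlay wires: {test_wires}")    # 768 more wires crossing the same regions
```

The absolute numbers matter less than the fact that the test wiring lands on top of regions the crossbar has already filled, which is why the test architecture has to be planned alongside the logical and physical architecture rather than bolted on at the end.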

Another consideration, McDonald said, is to determine the minimum set of tools that are generally accepted as reasonable engineering practice.

“I think that’s really what you need to come down to because it’s not ‘what do you have to have,’ it’s not ‘must have’ or ‘nice to have.’ It’s whether this set of tools has been proven to give a high enough return on investment that it’s worth investing in, and whether it’s proven on a range of designs broad enough that people don’t have to think too hard about the ROI – you don’t have to do too much to prove it’s fairly generally accepted,” he said.

Specifically, no design team would think of doing a design without RTL simulation tools. The same goes for static timing analysis and most of the back-end tools.

An argument may come up with virtual platforms. “People have been doing virtual platforms for a long time, but the challenge is that they have been decoupled from the hardware, so the return on investment has been a little bit sketchy. There have been times when it’s worth it, and there are times when it’s not worth it, because now you’ve got a totally separate modeling effort, decoupled models that have nothing to do with the hardware, and it’s only being done for software,” McDonald observed.

At the end of the day, determining your ‘essential tool kit’ requires a careful understanding of what the task is and what the tools can do, combined with the ROI analysis.


