Executive Briefing: Wally Rhines

Mentor Graphics’ CEO talks about the future of verification, what impact the Internet of Things will have on design, and where the future pain points will be.


By Ed Sperling
System-Level Design, as part of its ongoing executive briefing series, sat down with Wally Rhines, Mentor Graphics’ chairman and CEO, to talk about future problems, opportunities, and the gray areas that could go either way. What follows are excerpts of that conversation.

SLD: Is the amount of time spent on verification increasing?
Rhines: It depends on how you define who spends most of their time on design, who spends most of their time on verification, and who spends most of their time on both. For example, what is an architect? An architect is creating an architecture for the design, but is also looking at tradeoffs for performance vs. power vs. die size. Is that design? Or is that verification? You’re trying to verify that the architecture will deliver acceptable power dissipation and achieve the required performance. That sounds like verification. On the other hand, you’re moving blocks around. That sounds like design. So there are gray areas. But on really big designs, the lines do get sharper. There are IP characterizers and there are developers, there are people who do nothing but physical routing and others who do nothing but simulation and verification. The lines get cleaner the bigger and more complex the design task gets.

SLD: Is that changing?
Rhines: Traditionally, complexity brings specialization. So the bigger the design, the more advanced the technology, the more specialized people have to be in their expertise. The counter trend is the more they will need information from other domains to achieve what they specialize in. So it goes both ways. The specialists are becoming more specialized, but they are more generalists in terms of importing information. There is probably a thermal analysis expert at the chip or board level. That’s a specialist who does nothing but thermal design. But if you wait until your design gets to that person, then you risk having to start all over again. More and more the people who are doing things that feed the design want to get a quick look at the thermal analysis. They take a lightweight thermal analysis tool and they look at things before they release them so they don’t get blindsided.

SLD: As we move up in abstraction, is it easier to pick out potential problems or does it become more difficult?
Rhines: It gets more difficult. The architect or the high-level designer is doing a whole combination of things, both design and verification. The lower you get, the more specific it is. Once you’re in physical verification and tapeout you have physical verification people and routing people. So people tend to be in either design verification or design creation.

SLD: Is there more convergence on approaches, at least?
Rhines: The trend in languages has continued, so SystemVerilog is becoming dominant for verification. The same can be said for base class libraries. UVM will be the dominant standard.

SLD: But that will take some time, right?
Rhines: People are still learning it, but almost half the people in our survey have some activity there. Another 20% expect to use UVM on their next design in the next 12 months. It’s pretty substantial.

SLD: Does that depend on the size of the design?
Rhines: The larger the design size, the more likely they are to use SystemVerilog and the methodology to go with it. They’re also more likely to use emulation, to have embedded cores, and a bunch of other things.

SLD: So the world has split?
Rhines: There are people who do components and people who do systems, and then there are systems on a chip that are more like systems than components. System usually means hardware and software. It may mean multiple technologies. And it means embedded CPUs.

SLD: With the Internet of Things, where is the opportunity for EDA?
Rhines: A lot of people would look at it and say, ‘The Internet of Things means low-cost sensors.’ The world of complex systems says you have to be able to design and simulate large numbers of things tied together and interacting. Once you count the interactions, the complexity increases at least as the square of the number of components. As long as you have a limited number of air-pressure, light, and motion sensors, you can have dedicated signal processing for each of those sensors. But as they start interacting with each other, those interactions have to be verified and analyzed. When you think about the Internet of Things, we now have more Internet nodes than people. The amount of verification required will go up as the square of the number of nodes.
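As a rough illustration of that scaling argument, the sketch below counts the distinct pairs among n nodes that could in principle interact, which grows as n(n-1)/2, on the order of n squared. The node counts are arbitrary and purely for illustration.

```python
# Rough illustration: the number of distinct pairs among n nodes that could
# interact grows roughly as n^2. Node counts below are arbitrary examples.
def pairwise_interactions(n: int) -> int:
    """Distinct node pairs that could interact: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes -> {pairwise_interactions(n):>12,} potential pairwise interactions")
```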

SLD: The interaction gets more complex as we go forward, too, right? You want to do more things with those things.
Rhines: That’s another dynamic. If you put out a network of 100 sensors, and you’re worried about the interaction between them and the host, that adds to the complexity.

SLD: Are we keeping up with time to market pressures with verification, particularly in light of complexity, or are we slipping behind?
Rhines: There’s been remarkable consistency. One possibility is that tools are increasing at the same rate as complexity. Another possibility is that the schedules are growing at exactly the right rate, even though the complexity is growing. But the third and most likely possibility is that in responding to questionnaires, the same percentage are telling the truth.

SLD: Where are the pain points going forward?
Rhines: There was a panel in Europe where the leaders of companies highlighted the most pressing challenges in verification. Traditionally, productivity and performance overwhelm all the others, while debug shows up less frequently. This time we saw the same interest in debug as in hardware-software verification, low-power verification, coverage, complexity, and IP re-use. The challenges are an expanding set of things, no longer just performance and power.

SLD: And not only internally developed parts, right?
Rhines: You’re hooking blocks together, but it takes a long time to verify that even the IP you bought is correct in your application. And then you have to verify that your design incorporates it properly, and that multiple IP blocks are hooked up together. More and more, people are evolving from ad hoc verification solutions to systematic verification processes. Once you’ve done all the things you normally do to find the bugs at the block level (and the blocks incorporate IP), then you have to verify connectivity. In the past, how hard could that be? It wasn’t that difficult. But if you put in 100 blocks, each with 100 connect points, and you allow them to run in different modes, so that you have to verify the signal connections are correct in one mode and correct in another mode, it turns out you can do all of this formally. You can load all of the numbers into a spreadsheet and automatically generate the assertions, and the formal verification will tell you if the connectivity is correct.
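The connectivity flow described here amounts to generating assertions from a table and handing them to a formal tool. Below is a minimal sketch of that idea in Python, assuming a made-up CSV with mode, source, and destination columns; the assertion template is likewise an illustration, not the syntax of any particular product. A formal tool would then prove or disprove each generated property across the design’s modes.

```python
# Hypothetical sketch of the spreadsheet-driven connectivity flow: read a
# connectivity table and emit one SystemVerilog assertion per row. The CSV
# columns (mode, source, destination) and the assertion template are
# assumptions, not any particular tool's format.
import csv

ASSERT_TEMPLATE = (
    "assert property (@(posedge clk) "
    "(mode == {mode}) |-> ({dst} == {src}));  // auto-generated"
)

def generate_connectivity_assertions(csv_path: str) -> list[str]:
    """Emit one connectivity assertion per (mode, source, destination) row."""
    with open(csv_path, newline="") as f:
        return [
            ASSERT_TEMPLATE.format(mode=row["mode"],
                                   src=row["source"],
                                   dst=row["destination"])
            for row in csv.DictReader(f)
        ]

if __name__ == "__main__":
    for assertion in generate_connectivity_assertions("connectivity.csv"):
        print(assertion)
```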

SLD: How about the IP integration data path?
Rhines: Now you have to see if all the IP you’ve put together actually works. So you have to try to write to all the control and status registers—or at least set them—and look at data that’s going through them. You have to look at data buses between the IP blocks. And you have to verify all of that inter-block or inter-IP communication, and then you have to put real software on it to do system-level tests. System level without software can be power and timing closure, but more and more it’s running a whole bunch of software starting at a low level but very quickly going into operating systems and applications.
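To make that concrete, here is a minimal, hypothetical sketch of the kind of integration checks described above: writing and reading back control/status registers, then pushing data between blocks. The register map, addresses, and bus API are assumptions standing in for whatever testbench environment is actually in use.

```python
# Hypothetical sketch: exercise control/status registers, then push data
# across an inter-block path. Addresses and masks are invented examples.

REGISTER_MAP = {          # address -> writable-bit mask (assumed values)
    0x0000: 0xFFFFFFFF,   # block A control
    0x0004: 0x000000FF,   # block A enable bits
    0x1000: 0xFFFFFFFF,   # block B control
}
SRC_FIFO, DST_FIFO = 0x2000, 0x3000  # assumed inter-block data-path ports

def check_registers(bus) -> None:
    """Write a pattern to each register and confirm the writable bits stick."""
    for addr, mask in REGISTER_MAP.items():
        pattern = 0xA5A5A5A5 & mask
        bus.write(addr, pattern)
        readback = bus.read(addr) & mask
        assert readback == pattern, f"CSR mismatch at {addr:#06x}: {readback:#x}"

def check_datapath(bus, words: int = 16) -> None:
    """Push known data into one block and confirm it emerges from the next."""
    sent = [3 * i + 1 for i in range(words)]
    for w in sent:
        bus.write(SRC_FIFO, w)
    received = [bus.read(DST_FIFO) for _ in range(words)]
    assert received == sent, "inter-block data path corrupted"

class FakeBus:
    """In-memory stand-in so the sketch runs by itself; a real flow would
    drive an actual bus interface or simulation instead."""
    def __init__(self):
        self.regs, self.fifo = {}, []
    def write(self, addr, value):
        if addr == SRC_FIFO:
            self.fifo.append(value)
        else:
            self.regs[addr] = value
    def read(self, addr):
        if addr == DST_FIFO:
            return self.fifo.pop(0)
        return self.regs.get(addr, 0)

if __name__ == "__main__":
    bus = FakeBus()
    check_registers(bus)
    check_datapath(bus)
    print("register and data-path checks passed")
```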

SLD: How do you deal with power domains?
Rhines: You may have dozens of them. The question is do they all turn on and off right?

SLD: Is the number of things you have to verify going up?
Rhines: Yes. Some of that may be done under the hood so they don’t require as much conscious effort as in the past. You’re verifying more things. Every generation of technology brings at least one thing you didn’t have to verify before. Nobody worried about embedded software until we started putting embedded processors on the chip. Nobody worried about electromigration until leads got to such a width that it became a critical issue. Reliability testing is a big change at 28nm, and that’s just for electrostatic discharge and electromigration. Now, all of a sudden, people are verifying all sorts of other things they never thought of before, such as voltage differentials. You can have a whole different set of design rules if you impose upon yourself that for any two voltage domains you route the leads in such a way that you minimize the voltage differential. If you have voltages of 1 volt, 1.5 volts, 2 volts, and so on, you’ll never have a 3 volt next to a 1 volt if you can avoid it. For most of my life, there has been a given spacing that was the same everywhere. Now, people are varying spaces based on voltage differentials, and at some point I anticipate they’ll be varying it based upon topography.
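To illustrate the voltage-differential spacing idea, here is a toy check in which the required spacing between two adjacent leads grows with the difference in their voltages. The voltages, base spacing, and per-volt margin are invented numbers meant only to show the shape of such a rule, not actual design rules.

```python
# Toy voltage-aware spacing rule, in contrast to a single fixed spacing.
# All numbers below are made up for illustration.

NET_VOLTAGE = {"vdd_core": 1.0, "vdd_io": 1.5, "vdd_aux": 2.0}  # volts

def required_spacing_nm(v_a: float, v_b: float) -> float:
    """Assumed rule: base spacing plus an extra margin per volt of differential."""
    base_nm, extra_nm_per_volt = 50.0, 40.0
    return base_nm + extra_nm_per_volt * abs(v_a - v_b)

def check_adjacency(net_a: str, net_b: str, actual_spacing_nm: float) -> bool:
    """True if two adjacent routed leads meet the voltage-dependent spacing."""
    needed = required_spacing_nm(NET_VOLTAGE[net_a], NET_VOLTAGE[net_b])
    return actual_spacing_nm >= needed

# A 1.0 V lead next to a 2.0 V lead needs 50 + 40*1.0 = 90 nm under this rule,
# while two 1.0 V leads only need the 50 nm base spacing.
print(check_adjacency("vdd_core", "vdd_aux", 80.0))   # False: too close
print(check_adjacency("vdd_core", "vdd_core", 60.0))  # True
```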

SLD: Are we getting to the point where it’s so complex that the number of companies moving forward to the next node will slow down?
Rhines: The absolute number always increases. You can figure that out just by looking at the employment in the industry.

SLD: Are they moving forward at the same rate, though?
Rhines: They move forward slowly, as a distribution. Designs at greater than 0.5 micron still account for more than 8% of all design starts. That includes power devices.

SLD: How about at the advanced process nodes?
Rhines: The people doing the leading edge 20nm and going to 14nm—there can’t be a whole lot of those. It’s a high-level club. You’re spending enough money developing the chip that you’d better get a lot of revenue for it. You don’t just toss one out in the market that costs you $100 million.

SLD: Are we getting the next wave at the same rate as in the past?
Rhines: That’s hard to determine.

SLD: What about the wires and the interconnects? They don’t scale like transistors anymore.
Rhines: That’s one reason for doing multi-die stacks and packages. If you look at a processor with a flipped memory on top, you’ve just eliminated an enormous delay and power problem associated with the interconnect. It’s a big reduction in power. But when did interconnects stop keeping up? They haven’t scaled at the same rate as transistors since about 1990. The ITRS said that for DRAM the node would be defined by the half pitch of the densest-level metal, and for logic they made it the polysilicon pitch. There was a minority, led by IBM, that wanted to use source-to-drain spacing. The metal pitch in a 28nm process is 90nm; the half pitch is 45nm. The metal has not kept up for logic. For a 20nm process the pitch only goes from 90 to 80, and at 14nm it doesn’t decrease at all. So metal is not scaling, but we’re making up for that in other ways, with finFETs, for example.
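Taking the pitch figures quoted above at face value (90nm at 28nm, 80nm at 20nm, unchanged at 14nm), a quick calculation shows how far metal scaling falls short of the roughly 0.7x-per-node shrink transistors have historically followed.

```python
# Metal pitch figures quoted above, versus an idealized ~0.7x shrink per node.
metal_pitch_nm = {28: 90, 20: 80, 14: 80}  # process node (nm) -> metal pitch (nm)

nodes = sorted(metal_pitch_nm, reverse=True)  # [28, 20, 14]
for prev, nxt in zip(nodes, nodes[1:]):
    actual = metal_pitch_nm[nxt] / metal_pitch_nm[prev]
    print(f"{prev}nm -> {nxt}nm: metal pitch scales by {actual:.2f}x "
          f"(an ideal full-node shrink would be ~0.70x)")
```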


