Experts at the table, part 3: Can the same techniques used in software apply at the system level, or are they more effective for blocks and subsystems?
Semiconductor Engineering sat down to discuss whether changes are needed in hardware design methodology with Philip Gutierrez, ASIC/FPGA design manager in IBM's FlashSystems Storage Group; Dennis Brophy, director of strategic business development at Mentor Graphics; Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence; Tom De Schutter, senior product marketing manager for the Virtualizer Solutions at Synopsys; and Randy Smith, vice president of marketing at Sonics. What follows are excerpts of that conversation. To view part one, click here. To view part two, click here.
SE: For Agile to work in hardware, we need a methodology that covers all systems and deals with how all the different pieces fit together and how the people developing those systems work together. We also may need to change out some of the tools. Is that realistic?
Brophy: Good question. It may even be right brain versus left brain. I look at the rigor and precision of steps one, two and three, and question how you can toss out that rigor for something that might be a bit more chaotic and collaborative. What it's going to take is some examination of the project results of groups that have introduced this into their work life, to see what they've been able to accomplish. We're not going to have a four- or five-to-one ratio of verification engineers to design engineers forever.
Smith: We all have some level of concern about whether the Agile software manifesto applies to IC design. We probably need some group to get together and work on that problem to figure out if we need an Agile IC manifesto or Agile electronics systems manifesto. We have a couple natural standards organizations. Does it become reasonable for any of them to examine this? We still need the precursors of what we’re going to define in a solution before we figure out how we’re going to do them.
Schirrmeister: Sadly, a lot of things don't happen until something has gone really wrong. The people who are making changes are the ones who have had these bad experiences. But the complexity is so big that it's mind-boggling. One of the issues is that you don't necessarily know at the beginning where to look. Some people found out the hard way. We always knew power was a big issue in mobile, but now we realize it's extending into thermal. There we were, happily adding cores, and then the chip shuts itself down because it's not supposed to burn through the package.
Brophy: The good news is that it did shut itself down. In the past we would run programs that would fry computers.
Schirrmeister: You don't always know what you don't know. Some of the tools we're working on to make verification smarter fit into that. Software-driven verification, where you can re-use software scenarios horizontally across different engines, plays into that. The coherency guy needs to talk to the power guy to see whether the shutdown messed up coherency. And the power guy needs to talk to the thermal guy to tell him that if it runs too hot it will slow down the processors, so make sure the benchmarks are still in sync.
Brophy: The real issues may be solved with what we have today, but there's resistance, and a role for others to help eliminate that resistance. There's ISO/IEC/IEEE 26515, which covers software user documentation in Agile environments. On the hardware side, the methodology we've all come to agree is globally used, UVM, is from Accellera. But is there a way for some of these groups to come together? What is our 2025 vision? What do we expect to happen in five years? Should we be thinking a little further out? Most of the standards groups don't. They're more immediate. IEEE's work on Agile is based on success and things people are doing today. This almost has to be driven through user success, and then institutionalized as a standard and more proactively pushed throughout the world. Maybe we can take a different way of looking at it and foster discussions.
De Schutter: There is a lot of emphasis on what people are already designing. One thing that is changing—and UVM is addressing this on the power side—is where you look at early architecture design, how you try to influence it, and what kinds of algorithms you will have. What kinds of user scenarios are there? What is the power/performance tradeoff? Thermal is starting to come into play there, and early architecture prototyping is starting to feed into it. We're seeing more and more people looking at that as a way to get to something that can be adjusted, rather than just a spreadsheet that assumes you already understand the system. When you're looking at all the different pieces of Agile, one of the items is how to incorporate all the things your customer wants to do. What we're seeing is that if you have your architecture prototype, you get early feedback, and that helps close the feedback loop very early on. It's not just a single, monolithic task you do up front. It's more of an integrated task. It's not just the N+2 design. It's what the impacts are on the power, performance and thermal sides, so you can make adjustments there and revisit the hardware and the software. Merging that upfront piece into the methodology will help, rather than just looking at it from a verification perspective or a pure hardware side of things.
SE: Is it possible to change out everything that’s been done in the past and put in a broader and different approach?
Gutierrez: We've been talking about Agile being used in designs that are somewhat chaotic. That doesn't mean it's the only thing Agile can do, though. If you have your specs up front and you do have everything planned out, there's nothing preventing you from using Agile there, too. We're breaking up the tasks into smaller chunks, which in our case is two-week sprints. That works whether your goal is fixed or moving. It also helps that we bring together the RTL and the DV and the software guys up front, so we get immediate feedback from the software guys, as opposed to first developing the IP block, integrating it into the top level, and then getting feedback from the software guys. We're able to get that feedback much earlier. We do meet every morning. It's a very brief meeting, but we do bring everyone together.
Schirrmeister: That’s for what kind of design?
Gutierrez: It’s an embedded processor with half a dozen IP blocks.
Schirrmeister: So this is processor, IP, communication to the outside with all the interfaces. This is equivalent to an ARM subsystem in a mobile SoC, right?
Gutierrez: Yes.
Schirrmeister: That’s where scope comes in. These processes are very different, depending on the scope of what you’re designing. As EDA vendors we tend to gravitate to the most complex things out there, like mobile or complex servers with 60 cores. But if you look out to 2020 or 2025, there’s clearly a bifurcation. You have these complex things, which are very important and valuable, but if the predictions are true there will be a lot of things in the middle, such as subsystems, and a lot of smaller designs. My willingness to accept Agile becomes larger the smaller the design gets.
SE: You're obviously hinting at the IoT world. In addition to being smaller, these devices need to be highly customized for very specific applications. The need to get them to market quickly and at low cost, though, points to a particular methodology. Is Agile the right one?
Smith: On the hardware side we are way behind software in a couple of areas. One is the use of requirements management. Software teams are used to keeping track of all the different software requirements. They put together the spec, they often can have linkages between one feature and another, and the specs are completely integrated. In the hardware space, we don't have full requirements management, so the first linkage is hardware architecture. For hardware architecture we don't even have an executable language. Architects use Word and Excel to specify what a device will look like and what might be included. They might reference IP-XACT for what some of those blocks might be. And they have tables about who has to talk to what. We're missing the executable languages. Everyone in the software world has an executable language for talking to the next group. Hardware architects don't have anything but tables, and going down the other way they don't have anything except IP-XACT, which is incomplete. When you talk about how to get this to work with a structured methodology, we have to complete the methodology if we want it to be automated. We lack that automation today.
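For readers unfamiliar with what architects do have, IP-XACT (IEEE 1685) gives a machine-readable description of a block's identity and interfaces. A minimal hypothetical sketch is below — the vendor, library, block name and bus-interface names are invented for illustration — and it shows the gap Smith describes: the component's wiring is captured, but not its architectural intent or behavior.

```xml
<!-- Hypothetical IP-XACT (IEEE 1685-2014) component stub. It names the
     block and one bus interface, but says nothing about behavior,
     performance, or requirements traceability. -->
<ipxact:component
    xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>soc_lib</ipxact:library>
  <ipxact:name>dma_ctrl</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <ipxact:busInterface>
      <ipxact:name>axi_master</ipxact:name>
      <ipxact:busType vendor="amba.com" library="AMBA4"
                      name="AXI4" version="r0p0_0"/>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
```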
De Schutter: The automation flow isn’t ready yet. SystemC has offered an executable spec, in a sense, for architects. You can describe the most important components. If you look at it from a QoS point of view—latency, throughput, power, thermal—you can extract a lot of pieces and still figure out what your executable spec will be and even exchange that with your customers. In that sense, some pieces are in play. It’s not an automated flow. And then with UVM you have your power overlay. You can get to an environment where you can try things out, and then feed that back on the software side. And then you can figure out what happens if you over-design or under-design, how can you mitigate some of the power issues. It becomes more of a simulated spreadsheet environment, but at least you’re simulating it in an executable spec.
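The "simulated spreadsheet" idea can be made concrete with a toy coarse-grain model. The sketch below is plain Python rather than SystemC, and every formula and number in it is hypothetical — the point is only the workflow: score candidate configurations against latency and power budgets before any RTL exists.

```python
# Toy architecture-prototype "simulated spreadsheet" (hypothetical model,
# not SystemC): score candidate SoC configurations against QoS budgets.

def evaluate(cores, clock_mhz, bus_width_bits, workload_mb):
    """Return (throughput MB/s, latency ms, power W) for one candidate.

    All three formulas are illustrative stand-ins for real models.
    """
    # MHz * bytes per cycle ~ MB/s for this toy model.
    throughput = cores * clock_mhz * (bus_width_bits / 8)
    latency = workload_mb / throughput * 1000          # ms to move the workload
    power = 0.5 + 0.3 * cores * (clock_mhz / 1000) ** 2  # static + dynamic term

    return throughput, latency, power

if __name__ == "__main__":
    candidates = {
        "2-core @ 800 MHz": dict(cores=2, clock_mhz=800, bus_width_bits=64),
        "4-core @ 600 MHz": dict(cores=4, clock_mhz=600, bus_width_bits=64),
    }
    for name, cfg in candidates.items():
        tput, lat, pwr = evaluate(workload_mb=256, **cfg)
        meets = lat <= 50 and pwr <= 2.0  # hypothetical QoS budgets
        print(f"{name}: {tput:.0f} MB/s, {lat:.1f} ms, "
              f"{pwr:.2f} W, meets budget: {meets}")
```

Iterating over such a model with real workload traces is what lets the architecture feed back into hardware and software decisions early, as De Schutter describes.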
Smith: SystemC is at the beginning of the implementation of the specification.
Schirrmeister: Let’s define executable spec. To me, it’s the piece from which you derive everything and which is golden. The challenge is that RTL doesn’t seem to be there for everything yet. You have the system environment and some analog/mixed signal you might not be able to capture. Looking back to my first design, gate-schematic entry and RTL seemed like the system level to integrate six chips. In EDA, we did everything in synthesis for RTL and we had a semi-golden spec for RTL. I’m close to taking a bet that by the time I retire we will not have gotten to the point where there is something above RTL. I agree with virtual platforms and some level of architectural analysis at the coarse-grain level. But then at the end, all the decisions tend to be delayed until you have the more detailed bus analysis at the interconnect level. Having something above RTL that is golden was something that should have happened 20 years ago, and we don’t seem to be getting there. The question is why not.
De Schutter: It would have been nice to get there. I’ve given up on the dream of one spec that drives everything else. That doesn’t mean there is no value in different types of descriptions. But for software and hardware you need different architects. There is no one single golden reference. There are different kinds of executable tasks.
Schirrmeister: I agree. That makes it localized, and I’m a proponent of agility at a localized level. It’s much more difficult at an SoC level. The challenge in all those localized pieces is they’re not talking to each other, and they’re not being kept up to date when you’re going further down.
The discussion got really interesting at the end, where proper system-level design would help with the RTL implementation downstream (performance, power, thermal). The system level (i.e., ESL) may also be the bridge to model-based engineering upstream. In recent coverage of the 2014 Wilson Research Group study on functional verification, Harry Foster of Mentor Graphics mentioned that implementation and verification engineers complained of the lack of a specification to reference. Being able to go back upstream could provide that for them. And at the system level, they can work with systems and software engineers to re-examine and modify the hardware-software partition if necessary.