Experts at the Table, part 3: Shifting left, extending right; using machine learning and data mining to find new bugs and open up new usage options.
Semiconductor Engineering sat down to discuss what impact the IoT will have on the design cycle, with Christopher Lawless, director of external customer acceleration in Intel's Software Services Group; David Lacey, design and verification technologist at Hewlett Packard Enterprise; Jim Hogan, managing partner at Vista Ventures; and Frank Schirrmeister, senior group director for product management in the System & Verification Group at Cadence. What follows are excerpts of that conversation. To view part one, click here. Part two is here.
SE: Can we really say for sure that an IoT device will work as planned?
Lacey: It’s really about coverage ranking. You can do a coverage ranking on a regression suite and find out there are 20 tests that are not providing any incremental coverage. And you may decide you don’t need to run those anymore. But then you look at your bug data and find out half of those tests are finding 90% of your bugs. So you don’t want to just look at one aspect. That’s where deep learning and machine learning are going to come into play with verification. You want to look across all of the data sets you have, aggregate them, and see how that can improve your results. One area where I would like to see more data is how to improve the efficiency of our cycles. I don’t want to run cycles just to run them. I want effective and efficient cycles out of simulation.
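As a rough illustration of combining those two data sets, here is a minimal sketch that keeps a test if it either adds coverage or has a track record of finding bugs. All test names, coverage points, and bug counts below are made up; real ranking tools operate on far richer data.

```python
# Rank regression tests on two axes at once: incremental coverage
# AND historical bug yield. A test that adds no new coverage can
# still earn its keep if it has been finding bugs.

tests = {
    # name: (coverage_points_hit, bugs_found_last_quarter) -- illustrative
    "smoke_basic":    ({1, 2, 3},       0),
    "random_alu":     ({2, 3, 4, 5, 6}, 7),
    "pwr_wake":       ({6, 7},          3),
    "legacy_regress": ({1, 2},          0),   # candidate to retire
}

covered, keep = set(), []
# Greedy pass, highest bug yield first, then broadest coverage.
for name, (points, bugs) in sorted(
        tests.items(), key=lambda kv: (-kv[1][1], -len(kv[1][0]))):
    new_points = points - covered
    if new_points or bugs:          # keep if it adds coverage OR finds bugs
        keep.append(name)
        covered |= points

print("run:", keep)
print("retire:", sorted(set(tests) - set(keep)))
```

Run as written, this keeps random_alu, pwr_wake, and smoke_basic, and flags legacy_regress for retirement: it adds no coverage and has found no bugs.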
Hogan: That might seem threatening to people who sell simulators. My background is analog, and my best friend was my SPICE engine. My marketing guy would ask, ‘How fast can you simulate this?’ I would tell him it would be an eternity. Then he would tell me I had six weeks and that was the best he could do. It’s going beyond that. We’ll find things that are not even detectable by humans—relationships that don’t even exist today. We’ll find ways to improve power, reliability and security. We’re on the cusp of some exciting times.
Lawless: No one has come around saying they’d like to lower their budget for simulation and emulation and FPGA prototyping. It’s the opposite. The reason is that we’re learning and understanding as we go. There are complex flows for power management. If you build UPF into every single IP and device, it becomes extremely complex. You’re not just running random cycles. You’re focusing in on higher-level flows. We’re learning where to put our efforts to ensure the highest quality silicon comes out the door.
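To see why power-management flows get “extremely complex” once UPF descriptions span every IP block, consider a back-of-the-envelope sketch. The block names, power states, and legality rule here are hypothetical; the point is that system-level power states grow as the cross product of per-block states.

```python
from itertools import product

# Hypothetical power states per IP block (not from any real UPF file).
ip_power_states = {
    "cpu":   ["ON", "RETENTION", "OFF"],
    "gpu":   ["ON", "OFF"],
    "modem": ["ON", "SLEEP", "OFF"],
    "io":    ["ON", "OFF"],
}

# Cross product of per-block states = raw system-level power states.
all_states = list(product(*ip_power_states.values()))
print(f"{len(all_states)} raw system power-state combinations")

# Assumed legality rule: no other block may be ON unless the CPU is ON.
def legal(state):
    named = dict(zip(ip_power_states, state))
    if named["cpu"] != "ON":
        return all(s != "ON" for b, s in named.items() if b != "cpu")
    return True

legal_states = [s for s in all_states if legal(s)]
print(f"{len(legal_states)} legal combinations to verify")
```

Even this toy example yields 36 raw combinations, 16 of them legal; add a few dozen IP blocks and the state space that verification must target grows far beyond what random cycles can cover.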
Schirrmeister: It doesn’t have to be that deep in deep learning to find initial things in verification, but you can definitely find things that in the past you could not. You don’t know how things behave. That’s where machine learning will be crucial—to find those items a team of humans wouldn’t ordinarily see. If you look at FPGA-based prototyping and simulation, the amount of data you get out of them is huge. There are cycles that have never been looked at. You generate a couple terabytes of data, and then you home in on the point of interest. But there might be something happening in the other data that you’re not aware of. That will open up completely new opportunities. But at the same time, the complexity grows for single chips, and for multiple chips that are connected together. That means we all have a fair amount of job security. It won’t all be automated.
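The panel doesn’t name a specific algorithm, but one stand-in for mining those never-looked-at cycles is simple outlier detection over features extracted from waveform dumps. A minimal sketch, assuming the terabytes of trace data have already been reduced to a per-window activity matrix; the data below is synthetic, with one oddball window planted deliberately.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix: one row per simulation window,
# columns are signal-activity counts pulled from waveform dumps.
rng = np.random.default_rng(0)
activity = rng.poisson(lam=20, size=(10_000, 8)).astype(float)
activity[42] = [500, 0, 0, 480, 0, 0, 0, 510]  # planted anomalous window

# Isolation forest flags windows whose activity pattern is unusual,
# i.e., cycles nobody wrote a directed check for.
model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(activity)   # -1 marks outliers

suspect_windows = np.flatnonzero(labels == -1)
print("windows worth a human look:", suspect_windows)
```

The value is triage: instead of a human homing in on one point of interest and discarding the rest, the model surfaces a short list of windows from the “other data” that merit a look.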
Lawless: I’ve got a list of 10 projects that I can’t staff today. The need for tools is not going away.
Hogan: We’re going through a big societal change as people move from a manufacturing base to knowledge work. That’s an overriding theme for the nation. There will be the need for verification engineers, as always. But there also will be opportunities to mine this data and find these relationships. Bots can’t do that. That’s where we have to go as a society, and particularly as verification engineers. We need to take that data and exploit it.
SE: A couple years ago the buzz phrase was “shift left.” Are we now moving to “extend right,” where we have to follow chips once they’re out the door?
Lawless: Yes. It was pretty easy when you had a couple years between releases. Many times you make assumptions during the verification cycle, but what you release is something different. It really comes down to planning for flexibility and building that into the entire process. From an Intel perspective, you can count on new releases of OSes coming fairly quickly after silicon is released. We have to work very closely with our partners on this. We have to make sure a lot of the interfaces are being maintained and stabilized. You want that silicon to have longevity over time, and to be able to accommodate those changes without any hiccups.
Lacey: A lot of that involves the architecture. You have to think further out. We have to start architecture planning well ahead of our partners, who are still trying to get out the current generation of product while we’re already starting to look at the next one. But now we have to think further down the road about which features to include in order to provide a piece of silicon that will last even longer.
Schirrmeister: From a verification perspective, you need to extend to the right into the testing domain. We’ve been looking at how to bring the testing flow in earlier and how to re-use the things we did in verification in the testing. It’s getting wider to the left and the right.
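A rough illustration of what re-using verification content in test can look like: write a scenario and its check once, then point it at either a simulation testbench or a lab board. Everything in this sketch (class names, register address, expected value, device path) is hypothetical; real flows use portable stimulus and tester formats rather than Python, but the write-once, run-left-and-right idea is the same.

```python
class ResetScenario:
    """One reusable scenario: reset the block, then read its ID register."""
    ID_ADDR = 0x0000          # assumed register address
    EXPECTED_ID = 0xCAFE      # assumed reset value

    def run(self, target):
        # `target` only needs reset() and read() -- it can be a
        # simulator testbench proxy or a lab-board driver.
        target.reset()
        got = target.read(self.ID_ADDR)
        assert got == self.EXPECTED_ID, f"ID mismatch: {got:#06x}"

# Pre-silicon:  ResetScenario().run(SimTestbench(dut))        # hypothetical
# Post-silicon: ResetScenario().run(LabBoard("/dev/ttyUSB0")) # hypothetical
```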
Hogan: One of the problems is ultra-low power consumption. That’s going to be very application-specific. But can you really give design customers an EDA platform that is an application-specific software stack and hardware stack? That’s going to be a hard problem to solve, but it has to happen.
SE: So does EDA have to do something different?
Lawless: We have to do something fairly dramatic. You have to plan more and you have to think further ahead. But the bottom line is that several years ago everything was self-contained. Now, you’re part of a system. And if you think about IoT, you’re part of an even broader system. You don’t quite know the interactions that are going to happen. One of the ways we’re dealing with this is making sure we’re going to school on the usages. If you look at those usages from a validation and test plan perspective, and even an architecture perspective, how do we then build something that operates properly within those particular environments? That’s a dramatic change for Intel. Where are we focused? Are we trying to find every potential random or synthetic bug (we still do that kind of testing), or are we more focused on how this is going to be used? That has changed the whole validation and verification process. We’ve done a lot of things differently over the past several years.
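One way to picture that usage-driven focus is to weight the validation budget by how the silicon will actually be used, rather than spreading cycles uniformly over random stimulus. A minimal sketch; the flow names, weights, and cycle budget are all assumptions for illustration.

```python
# Hypothetical usage profile: how often each high-level flow is
# exercised in the field, normalized to sum to 1.0.
usage_profile = {
    "video_playback":  0.35,
    "web_browsing":    0.30,
    "standby_resume":  0.20,
    "firmware_update": 0.10,
    "corner_stress":   0.05,   # keep some pure random/synthetic testing
}

total_cycles = 2_000_000  # assumed nightly emulation budget

# Allocate cycles in proportion to real-world usage.
plan = {flow: int(total_cycles * w) for flow, w in usage_profile.items()}
for flow, cycles in sorted(plan.items(), key=lambda kv: -kv[1]):
    print(f"{flow:16s} {cycles:>9,d} cycles")
```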
Related Stories
Verification And The IoT
Experts at the Table, part 2: What is good enough, and when do you know you’re there?
Verification And The IoT (Part 1)
Application-specific verification, and why quality may vary from one market to the next; why different models are ready at different times.
Verification Unification (Part 2)
Strategies for using Portable Stimulus to drive formal and simulation, as well as the common ground with coverage.
Rethinking Verification For Cars
Second of two parts: Why economies of scale don’t work in safety-critical markets.
2017: Tool And Methodology Shifts (Part 2)
System definition to drive tool development, with big changes expected in functional verification.
System-Level Verification Tackles New Role (Part 2)
Panelists discuss mixed requirements for different types of systems, model discontinuity and the needs for common stimulus and debug.