Experts at the table, part two: Standards are helping address some issues in concurrently designing hardware and software. More challenges are ahead as automotive electronics and cybersecurity issues enter into the equation.
Semiconductor Engineering sat down to discuss parallel hardware/software design with Johannes Stahl, director of product marketing, prototyping and FPGA, Synopsys; Bill Neifert, director of models technology, ARM; Hemant Kumar, director of ASIC design, Nvidia; and Scott Constable, senior member of the technical staff, NXP Semiconductors. Part one addresses the overall issue of hardware-software co-design. Parts two and three will address automotive and security impacts. To view part one, click here.
SE: Where isn’t co-design working?
Kumar: One area where hardware/software co-design is not working so well is around power optimizations and clock optimizations — for example, clock networks that change between prototyping and emulation versus real silicon. Virtual prototyping can actually model these much better than emulation and FPGA prototyping. If we bring things like UVM into the virtual prototyping world, some of the problems that software faces today can get solved. It would be a much better direction for hardware/software co-design. We do have VCS and things like that, but they are too slow for software to run on. There is no way for software to ask, "Hey, does my clock routine, my clock switching, my power gating, or my coming up from some sleep state actually work?" I want all those routines to work fully. There are always workarounds that people make today. But if you can run that software in a virtual prototype, with all those things in place, and then make sure the virtual prototype matches the RTL through verification with simulators like VCS, then we have a single standard, UVM, driving the verification of the SystemC model as well as the RTL.
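To make the point concrete: the low-level routines Kumar describes (power gating, waking from a sleep state) are hard to exercise before silicon because software has no model to run against. A minimal sketch of the kind of power-state model a virtual prototype could expose is below. All names and the transition table are purely illustrative assumptions, not taken from any real tool, IP, or standard.

```cpp
#include <stdexcept>

// Hypothetical sketch: a power-domain model a virtual prototype could
// expose so that low-level software routines (power gating, wake-from-
// sleep) can be exercised long before real silicon exists.
enum class PowerState { Off, Sleep, Retention, Active };

class PowerDomainModel {
public:
    PowerState state() const { return state_; }

    // Only legal transitions are accepted; an illegal request is exactly
    // the kind of software bug this sort of modeling can catch early.
    void request(PowerState next) {
        if (!legal(state_, next))
            throw std::logic_error("illegal power-state transition");
        state_ = next;
    }

private:
    // Illustrative transition table; a real SoC's rules would differ.
    static bool legal(PowerState from, PowerState to) {
        switch (from) {
            case PowerState::Off:       return to == PowerState::Active;
            case PowerState::Active:    return to == PowerState::Sleep ||
                                               to == PowerState::Retention ||
                                               to == PowerState::Off;
            case PowerState::Sleep:     return to == PowerState::Active;
            case PowerState::Retention: return to == PowerState::Active;
        }
        return false;
    }

    PowerState state_ = PowerState::Off;
};
```

A wake-from-sleep routine under test would simply call `request(PowerState::Active)`; if its sequencing is wrong, the model rejects the transition instead of the bug surfacing on first silicon.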
Neifert: When I was at Carbon [Design Systems], I always saw a need for more power awareness inside the virtual space. I spoke at several ARM TechCons about bringing power into that space. It was always a well-attended talk, but it never got much traction afterward with customers. I'm finally starting to see it again now, where people are starting to push on that. They've eked enough out of the virtual methodologies in other areas and they're revisiting this. From ARM's perspective, the cycle models that we have are derived from RTL, so they can do a lot of this. And they can run fast enough to run your real software and instrument it properly. We've had super-advanced users doing that for years, but the mainstream hasn't caught up with it. We're just now starting to see it get a lot more traction, led a bit by the folks who have been doing it at a higher level at Synopsys. You can expect to see a lot more advancements there. Obviously, I'm not in EDA anymore, but we're going to partner with our EDA companies to get more of that out there.
Stahl: A lot of the software stack is common; whatever sits on top of it all just runs. The real issues are in the lower-level software, which is all unique to the SoC architecture. It's no wonder attention is now shifting back to this increasingly complex power management software at the bottom, where there really are a lot of issues. We hear exactly the same message from many customers: We cannot afford to change the software. It needs to run the same on all the platforms we use.
Constable: Yeah, it’s too much work.
Stahl: Yes, and it just doesn't provide a good enough solution to tweak the software all the time. It used to be that, okay, I want to debug this piece of the software and I'll accept some simplifications on some other piece. But the timeframes are getting so small that there's not enough time for these modifications.
Kumar: It would be good if the EDA Consortium looked at UVM, expanded it into the SystemC modeling space, enabled hooks there for software to use, and took that all the way to silicon.
Neifert: Hasn't UPF done some of that? I mean, you guys are doing some of this stuff with UPF now.
Stahl: Maybe I can comment a little bit. At the beginning of this year, the IEEE released a new standard that doesn't deal with design intent at the RTL level; it deals with the modeling of power states and power conditions. And this is now standardized into semantics and syntax. So you will see more and more companies providing models for their IP, running in different EDA environments, that can actually model state transitions and have power values assigned. Sometimes, for the software developer, it's not so important to actually measure the estimated power. What matters is to go through the transitions and make sure the mechanics work.
Stahl: This mechanics aspect is now completely standardized. Defining these power models, compared to writing a functional model of an IP block, is relatively straightforward; it's not as much effort. We're seeing customers that have started to experiment with that. They're getting a lot more serious because now they can see there's a standard, so the investments they have made will continue to live. They're not throwing away these models. It's an exact parallel to 10 or 15 years ago, when the functional modeling standard was established: until there was a standard for the interfaces, nothing really moved. Once a standard was there, people started to describe IP in that modeling standard. So we believe this will now finally start to become more mainstream among more users. It still starts with the top guys.
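The idea Stahl describes — named power states with power values assigned, where software mainly needs to walk the transitions — can be sketched in a few lines. This is a hedged illustration only; a real model would follow the standardized semantics and syntax the panel refers to, and every name and number here is made up.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of a power-state model with estimated power values
// assigned per state, replayed against a trace of state transitions.
// Values and state names are invented for the example.
struct PowerStateModel {
    std::map<std::string, double> mw;  // state name -> estimated power (mW)

    // Average power over a trace of (state, duration_ms) pairs.
    // Unknown states count as 0 mW; an empty trace yields 0.
    double average_mw(
        const std::vector<std::pair<std::string, double>>& trace) const {
        double energy = 0.0, time = 0.0;
        for (const auto& [state, ms] : trace) {
            auto it = mw.find(state);
            energy += (it != mw.end() ? it->second : 0.0) * ms;
            time += ms;
        }
        return time > 0.0 ? energy / time : 0.0;
    }
};
```

Even this toy version shows the point made above: the software developer can check that the state machine moves through the right transitions, with rough power numbers attached, without needing a cycle-accurate power measurement.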
Constable: Low power is very important to us. We put a lot of design effort into low power, and the software effort needs to be there, too. Having more tools to enable that is definitely the way to go.
Neifert: Security seems to be driving a lot nowadays. It has to be designed in from the ground up. If there is a weak point anywhere, hackers will find it. We saw that when they took control of the Jeep remotely and drove it into a ditch, not because they controlled the engine, but because they controlled the wipers. They're always looking for that weak link. That aspect is huge.
SE: How much interest is there in automotive these days?
Stahl: Automotive semiconductor content will be growing at the fastest rate compared to everything else. Every company today is trying to get into the best spot, from various starting points in the semiconductor industry. From an EDA context, it's important to understand what's relevant for these verticals.
Neifert: The compute requirements are going up dramatically inside cars. If you look at autonomous driving, just the computers needed to do it well — and that's before I'm going to trust my car to drive me anywhere — will have to be a lot smarter than the ones Google and Tesla are using today. There's a lot that needs to be brought into that. As Johannes rightfully pointed out, it's the big new growth area and we're all clamoring for it at this point. Cellphones have kind of peaked. Everyone wants to get into servers, but there are only so many chips that go into those. Automotive is sexy. Most of us drive cars, and we all want the cool ones.
Stahl: Every couple of years, I attend the top automotive conference in Germany. All the OEMs come to talk about their plans for the future. I remember very clearly, two years ago, the top three got up and said, "Autonomous driving? We don't know how to validate that. There are so many test cases." They were looking at how they do validation today, with huge hardware simulation setups, boxes that run an entire car, and they could not imagine how to scale that for an autonomous driving application. It's very clear to me today, two years later, that they're pushing this down the supply chain to their suppliers and the chip providers. They're saying, "Give me a solution where I can see exactly that you've run through all these test scenarios for me." That has a dramatic impact on what we need to provide as IP providers, and then as semiconductor companies up the food chain.
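The OEM request Stahl quotes — "show me that you've run through all these test scenarios" — amounts to mechanically checkable scenario coverage. A minimal sketch of that bookkeeping is below; the class and scenario names are hypothetical, invented only to illustrate the idea.

```cpp
#include <set>
#include <string>
#include <utility>

// Hypothetical sketch: track which required validation scenarios have
// actually been run, so a supplier can demonstrate coverage to an OEM.
class ScenarioCoverage {
public:
    explicit ScenarioCoverage(std::set<std::string> required)
        : required_(std::move(required)) {}

    // Record that a scenario was exercised at least once.
    void record_run(const std::string& scenario) { run_.insert(scenario); }

    // True once every required scenario has been run.
    bool complete() const {
        for (const auto& s : required_)
            if (!run_.count(s)) return false;
        return true;
    }

private:
    std::set<std::string> required_;
    std::set<std::string> run_;
};
```

The hard part the OEMs describe is, of course, the size of the required set, not the bookkeeping; the sketch only shows the shape of the evidence being asked for.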
Constable: There are also social barriers to get over before that's a mainstream thing. We do have driver-assist safety inputs, like radar and lidar, mirrors that tell you when somebody is close to you, and automatic braking to prevent a crash. So there are all kinds of safety features that aren't quite fully autonomous driving, but still require a lot of hardware and compute power. There's plenty of work without even considering autonomous driving. And layered on top of that, there's security. All these devices in your car will be connected to smartphones and networks, so your car had better be secure.
SE: Google has worked out something with its self-driving cars where they will actually blow the horn occasionally if someone doesn't go through a green light or takes too long to get somewhere.
Neifert: Make them a little more aggressive, like a human driver would be. I wonder how long before they drive as aggressively as a Bostonian driver would.
Constable: Maybe they could flip a switch: California driver, New York driver.