Cars, Security, and HW/SW Co-Design

Experts at the table, part three: Hardware and software engineers are discovering the advantages of close collaboration, something that is critical in the automotive industry and other areas.


Semiconductor Engineering sat down to discuss parallel hardware/software design with Johannes Stahl, director of product marketing, prototyping and FPGA, Synopsys; Bill Neifert, director of models technology, ARM; Hemant Kumar, director of ASIC design, Nvidia; and Scott Constable, senior member of the technical staff, NXP Semiconductors. Part one addresses the overall issue of hardware/software co-design. Parts two and three address automotive and security impacts. To view part one, click here. Part two is here.

SE: What other trends in hardware and software development are becoming key?

Neifert: A few years ago, the term ‘shift left’ came out, and it was basically just a rehash of the concurrent design approach that we’ve been doing for the longest time. It just had a clever new marketing term applied to it – at least, from my perspective. I’m sure someone can interject and say how I’m wrong about that. What is interesting is that they are pulling more and more of that work forward. Things are getting more complex, and every time you introduce a new level of complexity, you’ve got to pull it forward along with everything else. That, to me, is the thing. It used to be that you could do it with just an emulator, or just a single type of virtual prototype, or just a single FPGA. The leading-edge companies are using all of them on the same project in order to pull all of the various aspects forward, so that the second they’ve got real silicon they have a real shipping product – or at least one good enough that a download later on can get the wireless on my TV going. That, to me, has probably been the big thing – the eternal race to keep up with software complexity, and the need to get yet another layer of things shifted left along with the rest of the schedules.

Kumar: I agree. One of the other things happening, as the software becomes more and more complex and more and more lines of code are being written, is that you have to validate all of it. Emulation is not fast enough. Virtual prototyping is fast, but it does not give you the right hardware/software co-validation coverage. So FPGA prototyping is being used more and more. I also see this on the EDA side. Synopsys got there perhaps a couple of years back, Mentor has made some acquisitions in the last few years, and Cadence is also developing rapid prototyping. So all three big EDA companies are playing in that space. I see a lot more need for faster hardware/software co-simulation to validate the complex software along with the hardware. That extra speed and dependability is the direction where we’ll see more and more innovation.

SE: Hardware designers and software developers may be more familiar with each other these days. Does that mean they have a conflict-free relationship? Or does it get to the point where finger-pointing goes on – you’re holding up the contract, or the product?

Constable: That comes with growing pains in developing a good hardware/software relationship. At first, it’s ‘throw things over the wall, and we’re waiting for you,’ and that’s not a good way to work or to develop a project. When we’re doing things concurrently, and all the development is happening on top of each other, you’d better be collaborative, because it goes for more than just hardware and software. It goes to your documentation process. The software guys are used to getting not only a fully functional silicon model, but a fully formed document they can look at and run with. So when we shift left, one of the pain points is, ‘Where’s the documentation? Where’s the spec?’ And the hardware guys are like, ‘Well, we haven’t written it yet.’ So we have to shift the documentation left, too, because now we have users – the software guys, even if they’re internal users – using the hardware sooner, and we have to have the documentation sooner. We’re going through the process of shifting our documentation left so we can have people start developing on our hardware immediately. After you’ve done a couple of these projects, and you get past the finger-pointing, everyone realizes that we’re developing the hardware, the documentation and the software all together, and it’s going to be a little bit messy. Then it starts to work and to gel once everybody realizes the benefits of where we’re going, and once they see the benefits of getting that emulation and starting the work earlier. When they get silicon and have their full software stack running in a week – when they’re making a phone call on a cellphone in a couple of days versus six months – that’s when they see the benefit of it. That’s when the software guys say, okay, now we need this collaborative environment. What I actually find is that once they embrace that, and they start getting involved earlier, then they want to be involved even earlier, in the product definition, and they should. And that all ends up working even better.

Kumar: There’s a need for education, because it requires a mindshift on the software side. Some people have that, some people don’t. I do see a mix of both. Some people love accelerators because they can use them and leverage them, and live with the pain they have along the way. The software is done early, it is ready and available, and when the silicon comes back, it all works. At the same time, there are other software engineers who are more used to developing software on much more mature RTL, or developing later. Now they have to deal with bugs on the hardware side, especially corner-case bugs. When we start prototyping, the RTL is reasonably ready, but all the corner-case testing is not finished. Then the software runs on the emulator or FPGA prototype, we are running millions of cycles, and we hit those corner cases right away. The moment that happens, some of the engineers get frustrated and say those are all hardware problems, the hardware is not ready, and make a big thing of it. It requires education on the software engineers’ part. I do see both types in the mix. It’s a matter of educating them and working with them.

Neifert: The hardware guy is giving up some control, right? He’s got the software guys meddling around in his cycle and finding some bugs in there, too. I remember they were building a house next to mine, and while they were putting it up I was trying to get friendly with the builder because I didn’t want them to create too much noise. He mentioned that he already had a buyer lined up, and he was only halfway done with the house. I said, ‘Oh, that must be a great thing! You’re taken care of, because you’ve already got somebody lined up to buy.’ He said, ‘No, it’s horrible! I’m giving up control now. They’re going to come in and try to change all these various aspects.’ I said, ‘Well, won’t that make them happier at the end, because they get a better house?’ He said, ‘Yeah, but before, they would have just had to deal with my choices.’ And it’s the same thing in getting the software guys involved now. The end product is a better product, and more suited to the overall goals, but the builder – in this case, the hardware guy – has to be willing to give up some of that control to get there. Conflicts will arise, but you get a better end product out of it. I still don’t like the color they painted the house, though.

Kumar: It does require scheduling – working with the software team to identify which pieces are risky for software, and planning those ahead – rather than planning based just on the FPGA prototyping, the newest capabilities, who’s ramping up, and so on. It does require more work.

Stahl: The other aspect, which I see happening now—and I wish I had seen it 10 years ago, but it’s finally happening—is when the software and the hardware guys are not sitting in the same company, when the hardware guys are at the semiconductor company and the software guys are at the end-user company. How do the teams work together? Since it’s such a long cycle until they actually get silicon, they might need to cooperate on what the silicon needs to look like. They are finally starting to realize the benefit of really high-level modeling of the architecture and really high-level modeling of the software, so they have a talking point about software that will eventually execute on a completely new architecture. That’s the next level of shifting the thinking to the left – the semiconductor company and the end user working together on software and hardware, but in very abstract terms, because the software is not ready and the hardware is not even fully defined. That’s happening, and the prime example right now is automotive. Automotive has one characteristic: it’s a very tightly controlled supply chain of OEMs, Tier Ones, and semiconductor companies. To be good in that supply chain today requires extremely well thought-through collaboration. I think the companies that will be successful in that market will be the ones that are good at collaborating. The OEMs will call it ‘the pressure,’ but I think it is collaboration.

SE: With the international standards for automotive, military/aerospace, and health care, documentation has become a key issue.

Kumar: It is much more than documentation. It is being able to identify where the failure happened. It requires a lot more validation and testing – knowing, if something goes wrong, what consequences you can observe, so you can reason back and say, ‘If this is what you’re observing, this is what could have happened,’ and then go solve that. For each failure, you need to go and look at why it happened. These are some of the requirements for being in automotive. If a car collided, you need to know whether it was the fault of the self-driving car, the fault of somebody else, or the environment.

SE: That’s a pretty serious fault.

Kumar: You need to get down all the way to the component level. Which component failed? You can’t just say this module or this board failed. You have to go down to the supplier’s component. And this is not just coming from the self-driving side. Overall – mobile smartphones, more applications, IoT – all of these require more and more collaboration between the hardware and the software, so you’ll see a lot more innovation. Synopsys changed its tagline to ‘Silicon to Software.’ You see a shift in the EDA companies’ direction. Synopsys made some acquisitions that are more about traditional software – how to find bugs in regular software, whether it’s memory leaks or bad pointers – and improving the quality of the software on its own. Some of those capabilities used to exist, but not in the EDA space. Now I do see the EDA space looking after software more, because of hardware/software complexity and integration. That’s one of the reasons all this is happening.


