Bridging Hardware And Software

Part 1: Different goals and methodologies have long divided hardware and software engineering teams. Some companies have solved these issues; others are still working on them.


Since the advent of embedded systems there has been a struggle between hardware engineers trying to understand the mindset of their software counterparts, and vice versa. That struggle is alive and well today—and it’s costing everyone money.

This divide is rife with passion, territoriality and misunderstanding. It has delayed tapeouts, created errors and inefficiencies that take time and effort to fix, and it has made chip development much harder than it should be. The solution in the past has been to give each side tools that at least recognize the other’s contribution, but the debate continues about whether common ground can ever be firmly established between hardware and software engineers.

Chris Rowen, fellow and CTO of the IP Group at Cadence, believes it not only can be done, but must be done. Given the rising cost of adding any new feature, and the even higher cost of getting it wrong the first time and having to fix it, these two worlds need to be bridged much more effectively.

“This is particularly true in chip design, but a lot of it is also true even in more malleable things like building boards or FPGAs,” Rowen said. “The hardware guys sort of have to say, ‘Well, if I have to do it once, it has to be perfectly right. It’s going to cost me a lot of money to prototype it, and to test any one version of it so I’m going to have to be both very careful and rather general in what I do. I’m going to be measured on some metrics like square millimeters, milliwatts, manufacturing dollars — these sorts of implementation costs.’ It’s not entirely true because after all, software costs storage, and there are constraints there, as well. But usually it’s the hardware guy who directly faces those constraints and the hardware guy who faces the death penalty if they get it wrong. There is a lot at stake and it makes them careful paranoids.”

Software engineers, on the other hand, usually work in a much more malleable environment. “Short of designing something for the Space Shuttle, there is a sense that you can be continuously prototyping,” he said. “You can make a change in minutes. If you have to send out a patch to the field, it’s not the end of the world. You can have a much softer approach to the quality issues, and a much greater premium is put on innovation and building truly complex systems rather quickly. These things then echo through all the other issues that you deal with in terms of reuse, decision-making in the process, and what the metrics are of a better design. To hardware guys a better design is something that is smaller, faster in some very generic terms like megahertz, and higher quality as measured by the absence of bugs or a very long interval between when you have to revise it. But they also are building it on top of something of a moving platform because for their metrics, cost and speed up, those are going to change every two years as new underlying semiconductor technologies come along. So they know they have a revision cycle, but their revision cycle is measured in years, whereas the software guy says, ‘How many builds did you get done today?’”

This is exacerbated by the divide-and-conquer approach to design, which is as true in software as in hardware. Simon Davidmann, CEO of Imperas Software, observed that all too often software engineers focus narrowly on one small piece of what the software does and fail to see it as part of the whole system.

“An example of that view is the way they do testing,” he said. “If you look at the way people do stuff around a GUI, for example, they’ll do unit testing where the whole methodology is continuous integration and testing. In the Java world they do JUnit testing. As they compile something, it gets run against some tests. In each module, they build a little test harness, which tests that module. So they have this concept of unit testing, and whenever you check anything in, it goes off and does that. But what I’ve found is they don’t really do system-level testing. An example of that is stuff with a GUI. They don’t build into the design a way of automating the testing of the GUI, so they end up with people having to push buttons on simulations of it, or the product, or whatever physical manifestations of it.”
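To make the contrast concrete, the snippet below is a minimal sketch of the module-level testing Davidmann describes, written against JUnit 5. The class and method names are hypothetical, not taken from any real codebase; the point is that the test exercises one module in isolation and runs automatically on every check-in, while nothing equivalent exercises the system as a whole.

```java
// Minimal sketch of module-level unit testing with JUnit 5; the class and
// method names are hypothetical. The test exercises one module in isolation
// and runs on every check-in as part of continuous integration.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TemperatureConverter {
    static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

class TemperatureConverterTest {
    @Test
    void convertsBoilingPoint() {
        // Checks this module alone; nothing here exercises the whole system.
        assertEquals(212.0, TemperatureConverter.toFahrenheit(100.0), 1e-9);
    }
}
```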

Hardware engineers, by contrast, have been very good about understanding how their devices sit in the whole world. “If it’s a chip, they can test it point to point, but if it’s a product, they do something in-the-loop, such as when it’s in automotive, they do software-in-the-loop, system-in-the-loop or hardware-in-the-loop,” Davidmann said. “They [start with] big test benches; they have models of this — physical models of bits of cars that they can plug their bits into and test them. From the hardware world, they tend to have a bigger view of the system, whereas in the software world, they tend to be focused on the module, the units, and things like that.”
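As a rough illustration of that software-in-the-loop idea, the sketch below closes a controller under test against a crude plant model instead of real hardware. The first-order vehicle model, the gains, and all names are assumptions made for this example, not drawn from any actual automotive test bench.

```java
// Hedged sketch of a software-in-the-loop setup: the controller code under
// test runs in a closed loop against a crude plant model instead of real
// hardware. The first-order model and all gains are illustrative assumptions.
public class SilDemo {
    // Plant model: vehicle speed responds to throttle with a simple lag.
    static double plantStep(double speed, double throttle, double dt) {
        return speed + dt * (5.0 * throttle - 0.1 * speed);
    }

    // Controller under test: proportional cruise control, throttle in [0, 1].
    static double controller(double target, double measured) {
        double cmd = 0.5 * (target - measured);
        return Math.max(0.0, Math.min(1.0, cmd));
    }

    public static void main(String[] args) {
        double speed = 0.0, target = 30.0, dt = 0.01;
        for (int i = 0; i < 100_000; i++) {
            speed = plantStep(speed, controller(target, speed), dt);
        }
        // System-level check: the closed loop should settle near the target
        // (proportional control leaves a small steady-state offset).
        System.out.printf("final speed = %.2f m/s (target %.1f)%n", speed, target);
    }
}
```

The check at the end is the kind of whole-loop, system-level test that module-level unit testing alone would miss.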

Successful companies have recognized these problems and created bridges wherever possible. “There’s a systems team which is responsible for bringing it all together, because nowadays all hardware blocks have software in them, too, so things aren’t so isolated in hardware or software. Things are becoming more integrated,” he added.

But it still isn’t a natural meshing of ideas. From the outset, there are different things each team needs to worry about.

“If you are designing hardware, in the end you only have one, maybe two shots at doing it right for a specific project,” said Tom De Schutter, director of product marketing for hardware prototyping at Synopsys. “If your SoC doesn’t have the right performance, it’s too big, or it doesn’t have the right power consumption, that’s it for that generation of your SoC. Now you need to try to get it right with the next one. If you lose a couple of sockets in mobile, which is so competitive, it has a tremendous impact. The mindset from that comes into very rigorous design processes, specifying things, having very specific ways of doing things that have an advantage of leading toward a specific target. But it also has the disadvantage that they are not very flexible just because you know that you cannot mess up any way down that design path because the impact is gigantic.”

On the software side, it’s different. “You have this more agile approach,” he said. “In the end if something doesn’t work you just update it, you upload some new software. To me, the bases start there where the software guys look at the hardware guys and say they are too rigid and inflexible, and not willing to take their ideas onboard. The hardware guys look at the software guys and say, ‘You guys don’t understand what it is to design hardware. This is a hard thing. If we have to respin something, if we don’t get it right, that’s it. We invested tens of millions of dollars and we’re not getting any return.’ Coming from that background, that’s why they don’t get each other. It’s a completely different mindset and each one of them is right in their own context, but if you look at the pieces together now there is one rigid piece, one agile piece, and you need to try to map them onto each other and it clashes.”

In other words, software engineers look to add features and functionality that take full advantage of the hardware’s capabilities, while hardware developers aim to include only what is necessary. Their focus is on reducing cost and providing the minimum system resources required, often without fully considering the complexity, breadth, or amount of software the design will have to support, noted Andrew Caples, senior product line manager for embedded products at Mentor Graphics. “You have this balance that’s not always in balance from a hardware/software perspective.”

Flexibility on the software side definitely has its advantages. “On the hardware side, when hardware engineers have to design an SoC they have a specific market that they are trying to address or they are trying to make it generalized,” said Felix Baum, product manager for embedded virtualization at Mentor Graphics. “Then they have to be bounded by the physical limitations of the chip, meaning they can only fit so much IP in it. They have to prioritize and decide how many cores they can put in the chip, how many serial ports, how many Ethernets, how many IP blocks they can stuff in this device to make it generic enough and make it interesting enough and competitive enough for customers to select and use. When you look from the software perspective, we have a very specific opportunity. We can pick the same device and use it as an automotive application, use it as a medical device, use it as an industrial device. Each one of those markets will have its own requirements, its own configurations, its own use cases, so we have to take this static thing that hardware folks created and mold it to match our particular set of requirements and use cases. Some people take a specific part and use it in safety-critical devices, while others just create some general embedded devices that monitor temperature in the house, like a thermostat. The stringency of requirements varies drastically.”

Check, please
Still, how is all of this — software, hardware, the interaction of both within a system while changes are being made on both sides — verified?

“Even within a single domain, the pain point is really about integration,” said Sudhir Sharma, global marketing and strategy director at Ansys. “When you bring subsystems together and these things don’t work, you wonder why. ‘I created this block and I tested it,’ but now you put it together with something else and it doesn’t work. That’s where the importance of virtual prototyping and simulation plays a critical role. What simulation tools allow you to do is take a higher abstraction level that you’re not really beholden to, whether it is the software code or the hardware code. Just imagine you’ve got a canvas, and you’ve got all these subsystems. In a hardware/software example, you’ve got this piece working in software, that piece working in hardware, but you don’t want to have to think about that partitioning if you’re a system designer. What you want to do as a system designer is first get the algorithm right. ‘Do I have the right idea, and will this thing actually work?’ Before you even go about partitioning what belongs in software and what belongs in hardware, you just want the concept to work. A simulation environment allows you to do that. Then, depending on the maturity of the tools that you are looking at, you could find a platform that allows you to now segment a design to say what needs to go in software, and what needs to go in hardware.”
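A minimal sketch of the “get the algorithm right first” step Sharma describes might look like the following: the whole function is modeled behaviorally, with no notion yet of which part will become hardware and which software. The moving-average filter is an illustrative stand-in, not from any real design.

```java
// Rough sketch of modeling the algorithm behaviorally before any
// hardware/software partitioning. The moving-average filter is an
// illustrative stand-in, not taken from any real design.
import java.util.Arrays;

public class AlgorithmModel {
    // Candidate algorithm, expressed purely as behavior.
    static double[] movingAverage(double[] in, int window) {
        double[] out = new double[in.length];
        double sum = 0.0;
        for (int i = 0; i < in.length; i++) {
            sum += in[i];
            if (i >= window) sum -= in[i - window];  // drop the oldest sample
            out[i] = sum / Math.min(i + 1, window);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] noisy = {1.0, 9.0, 2.0, 8.0, 3.0, 7.0};
        // Validate the concept first; partition into HW and SW later.
        System.out.println(Arrays.toString(movingAverage(noisy, 3)));
    }
}
```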

All of the above also assumes that most silicon hardware designs today accept input from the software team, a premise Drew Wingard, CTO of Sonics, disputes. “While everyone says we should be co-designing the hardware and the software, that still doesn’t happen very much. It’s still the case that unless the end customer comes back and demands it, most silicon providers treat their software teams almost like they are a necessary evil.”

He said this point has always frustrated him about SoCs because the exception to this rule comes when the SoC is being designed by a systems company instead of a silicon company. “Then, you see a much more enlightened viewpoint, when the people who are paying for the chip design are the people who want to ship the end device that has to merge both the software and hardware. They see an enormously larger value in the software. In fact, many of them have a very different viewpoint that the hardware is a necessary evil to get their software to run the way they want it to. The perspectives are almost yin and yang, but they are not that nice to each other. They are almost diametrically opposed.”

Gracefully bridging these two worlds is rare. “The one place you see it is in the companies that can afford to spend the least on each iteration of their platform — the microcontroller companies,” Wingard said. “The microcontroller companies have to provide a high level of backwards compatibility at the software level. Traditionally they provided little software to their customers, but they got into the requirement of having to protect customers’ existing software investments when they went to the new version. Plus, they had much smaller hardware design teams, so they can’t really afford to tinker as much. As they grew up and started to provide much more expansive software offerings, things like IDEs, libraries, and interesting reference boards, all the enabling technologies behind a lot of the maker movement, they’ve kept forward with this idea of backwards compatibility to protect these APIs. If I turn the other way and say, what if I were a big systems company that thought designing chips was an important element, either because the chips weren’t available or I had some differentiated technology, then you see a much more enlightened view. Then either the software team is much more integrated with the hardware team, so many of these decisions are taken collectively, or sometimes the software guys even dictate to the hardware guys what they’re going to have to do.”
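The backward-compatibility discipline Wingard attributes to microcontroller vendors can be sketched as a stable API with swappable implementations, one per hardware generation. All of the names below are hypothetical; the pattern, not the particulars, is the point.

```java
// Sketch of a stable API protected across hardware generations. All names
// are hypothetical; each new part ships a new implementation behind the
// same interface, so existing customer code keeps working.
interface GpioPort {
    void setPin(int pin, boolean high);   // the protected, stable API
}

class GpioGen1 implements GpioPort {
    public void setPin(int pin, boolean high) {
        System.out.println("gen1 pin " + pin + " -> " + high);  // simulated register poke
    }
}

class GpioGen2 implements GpioPort {
    public void setPin(int pin, boolean high) {
        System.out.println("gen2 pin " + pin + " -> " + high);  // different silicon, same contract
    }
}

public class CustomerApp {
    public static void main(String[] args) {
        GpioPort port = new GpioGen2();  // vendor swaps in the new generation
        port.setPin(13, true);           // customer code is unchanged
    }
}
```

Customer code written against GpioPort keeps working when the vendor ships the next-generation part, which is exactly the existing software investment the vendors are protecting.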

Given all of the anecdotal evidence above, it is clear there is still a sizable battle being fought between hardware and software teams. The successful companies are the ones that master bridging the gap while maintaining an intimate understanding of how best to approach the market opportunity they are pursuing.


The second part in this series will examine how hardware and software engineering teams look at and approach IP reuse given that this is a critical part of system design today.


