Bridging Hardware And Software

Part 2: Different teams have completely different views about interfaces, reuse, and who’s responsible for making changes.


Methodology and reuse are two fairly standard concepts when it comes to semiconductor design, but they’re viewed completely differently by hardware and software teams.

It’s a given that hardware and software have different goals and opinions about how best to do design. And while all agree that a single methodology can pay dividends in future chips, there is disagreement over who should shape that methodology.

“The hardware IP engineers would like the software to be easy to use, meaning, lots of reuse of the same code base so that there are minimal changes,” said Navraj Nandra, senior director of marketing for DesignWare analog and mixed-signal IP in the Solutions Group at Synopsys. “The software engineers, when they look at hardware, expect the hardware not to change. They expect lots of reuse. But it’s easy for them to write their code.”

Hardware engineers assume it’s easier to adapt the software than the hardware because the code can be changed pretty quickly on the fly. Software engineers, in contrast, believe it’s the responsibility of hardware engineers to update the hardware or firmware.

Nandra said he regularly moderates disagreements between teams. One team wants to throw the problem or challenge onto the other because they believe it’s easier for the other team to accommodate the change. “If you probe into it on the software side, for example, the idea is to re-use the blocks that do a certain function, or the algorithms that formulate the particular block function, as much as possible because it’s not only the code base that you’re talking about. But you’ve got all the verification around the code base — the testbenches, which would have to be changed also if you’re not using 100% of the code.”

The problem stems from the fact that when a change is made, it impacts many things outside of the software or hardware reuse. “Testbenches are so complicated now — you’re talking about millions and millions of lines of code for very, very complex hardware with maybe 20 or 30 different CPUs.”

As such, the concept of reuse is really important, Nandra stressed. “Everybody understands why it’s important, but the implementation is something you’ve got to trust both teams to be able to do in a way that can make the reuse job easier.”

That’s easier said than done. On the hardware side, it was accepted early on that assembling systems would go more smoothly if pre-designed blocks could be integrated, so hardware engineers started defining standard interfaces.

“Then, a small part of the industry evolved, which was to do with the verification IP where not only would you get a block, but you’d get ways of testing it,” said Simon Davidmann, CEO of Imperas. “In the software world, there aren’t the sort of standard interfaces in the same way. There are the public standard interfaces. But when people are building their own systems, they don’t tend to architect them with these sort of quality of interfaces and APIs for reusability. The challenge is that if you design an interface for yourself, it’s relatively easy and doesn’t need to be well structured and documented because it will all sort of work. But if you’re building an interface which has got to last five years and be used in 10 or 20 different places, then you have to spend a lot on designing the interface and getting it right so it can be used in different ways. The hardware guys have put much more emphasis on the testing of their modules and their IP than the software guys have.”
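Davidmann’s point — that an interface built to last must ship with its own tests, much the way hardware IP ships with verification IP — can be sketched in a few lines. This is an illustrative sketch only; the `Fifo` interface, its contract, and the conformance checker are all hypothetical names invented for the example.

```python
# Sketch: a small documented interface contract plus a reusable set of
# conformance checks -- the software analogue of shipping verification IP
# alongside a hardware block. All names here are hypothetical.

class Fifo:
    """Interface contract: push() then pop() returns items in FIFO order;
    pop() on an empty Fifo raises IndexError."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Fifo")
        return self._items.pop(0)

def check_fifo_conformance(fifo_cls):
    """Reusable 'verification IP': any implementation claiming the Fifo
    interface can be run against the same contract checks."""
    f = fifo_cls()
    f.push(1)
    f.push(2)
    assert f.pop() == 1 and f.pop() == 2   # FIFO ordering holds
    try:
        f.pop()                            # empty pop must raise
        return False
    except IndexError:
        return True

ok = check_fifo_conformance(Fifo)
```

The design choice mirrors the quote: the tests travel with the interface, so a second team reusing the block in a different system gets the documentation and the checks, not just the code.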

In the software world, the whole idea of reuse and verification is less formalized and structured, he said. “Part of this is the fact that the software can change after shipment, where hardware very rarely can. So there isn’t the rigor and formalism in the design methodologies in software as there is in the hardware world. In hardware, you just can’t change it once it’s shipped, but in the software world, you look at the Tesla car, and every week you can upload the software because they’ve designed it in. Software can be updated and changed, so people assume that it’s going to change and are less strict on the first release.”

Davidmann does see engineering teams at the leading edge beginning to use more modern technologies. But in the traditional embedded world the development of the software is still a lot more primitive than the development of the hardware from a system point of view. While methodologies are changing, it is a slow process.

Reuse definitions vary
Even the definition of reuse varies between hardware and software teams.

“When talking about software IP, the software guys have been using what they call libraries for bazillions of years so at that level, they have been fully committed to that kind of IP reuse for many, many, many years,” said Drew Wingard, CTO of Sonics. “Going back to the original C programming language, if you don’t use any of the standard libraries that come along with C that are so standardized they are described in the ANSI specification for C, you’ve got a very incapable language. But you can’t even print out stuff to the screen without using the library.”

In the software world, that kind of code reuse is an incredibly common occurrence, and the automated technologies built to make it easy, repeatable, and safe are impressive, he observed. “These days, when you access many of the open source or commercial libraries, you’re allowed to specify a version number, and if the version number isn’t on your disk, the user is going to get an error that says this isn’t compatible. Most of the hardware integration schemes we have today don’t even have that most basic level of checking. That’s captured in writing somewhere; it’s not normally part of the automatic deliverables associated with hardware. There are very interesting technologies around what is called dynamic binding. The reason they don’t do that check until runtime is because the code that relies upon that library doesn’t include that library. It just assumes that library is going to be spinning somewhere on a disk and they’re going to find it when the program loads. That changes all kinds of stuff about how you distribute stuff, whereas by the time we distribute a chip all that IP had better already be there.”
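The dynamic binding and version checking Wingard describes can be sketched as follows. This is a toy illustration, not any particular package manager: the caller never bundles the library, it resolves the name at load time and rejects the library if its version is too old. The `bind_library` helper and the minimum-version convention are assumptions made for the example; Python’s `json` module just happens to expose a `__version__` string we can check.

```python
import importlib

def bind_library(name, min_version):
    """Toy dynamic binding with a version gate. The calling code does not
    ship the library; it assumes the loader can find it at runtime --
    the behavior described in the quote above."""
    mod = importlib.import_module(name)   # resolved at load time, not build time
    ver_str = getattr(mod, "__version__", "0.0")
    version = tuple(int(x) for x in ver_str.split("."))
    if version < min_version:
        # The runtime check most hardware integration flows lack.
        raise ImportError(f"{name} {version} is older than required {min_version}")
    return mod

# 'json' ships with CPython and exposes __version__, so this binding succeeds.
json_mod = bind_library("json", (1, 0))
```

The contrast with a chip is exactly Wingard’s point: the check can fail gracefully at program load, whereas by tape-out every block “had better already be there.”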

Wingard believes the software side has an enlightened view of this, with an interesting history around which libraries are open source and which aren’t, and he expects that history will start to repeat on the chip design side. “We don’t run with a lot of open source hardware these days. There’s some open source verification technology—Verilog assertions that originated with the Open Verification Library, and there are SystemVerilog versions of all of that. You could consider that verification IP that’s distributed open source. And that’s valuable and practical stuff, but if you compare that to what your average Java programmer expects to have access to when they are writing their code, it’s zero in comparison with what the Java guy has to work with. It makes his environment richer and easier for him to get his work done. The attempts at open source hardware have not proceeded very far, largely because any open source technology that ends up on a chip is considered a liability. That’s the worry.”

Still, Felix Baum, product manager for embedded virtualization at Mentor Graphics, maintains that for the most part, there is a lot of reuse on the hardware side. “If a hardware engineer designs a brand new SoC, they are not starting from scratch. They are going to grab an IP block for the serial port. These days nobody invents their own Ethernet from scratch. Nobody designs their serial ports from scratch, and other IP blocks that the silicon guys put in the SoC.”

This was evidenced at ARM TechCon in November, where ARM showed a new part and highlighted how many of its blocks were reused from the previous generation of hardware, in the hope that developers wouldn’t have to write software from scratch to enable the new devices.

And on the hardware side, he asserted, it’s actually very well sorted out. “On the software side, we have a lot of reuse but it’s not as formal. I can’t go to a repository and say I want to take a particular UART or piece of code to enable certain features and assume, like in the hardware world, that it’s going to work. I have to validate it, I have to port it, I have to modify something — either memory or adjust registers before it would actually work on a part, but things are getting more and more reused once you start applying formal methods to it.”
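The porting step Baum describes — adjusting base addresses and registers before reused driver code works on a new part — can be illustrated with a toy sketch. Everything here is hypothetical: the register offsets and base addresses are invented, and a Python dict stands in for memory-mapped I/O that real driver code would touch directly.

```python
# Toy sketch of reusing one UART driver across two parts by re-parameterizing
# the register map -- the 'adjust registers' porting step described above.
# Offsets, base addresses, and the dict-based 'memory' are all made up.

class UartDriver:
    def __init__(self, memory, base, data_off, status_off):
        self.mem = memory              # stands in for memory-mapped I/O
        self.base = base               # part-specific base address
        self.data_off = data_off       # part-specific register offsets
        self.status_off = status_off

    def write_byte(self, b):
        self.mem[self.base + self.status_off] = 1   # mark TX busy
        self.mem[self.base + self.data_off] = b     # write the data register
        self.mem[self.base + self.status_off] = 0   # mark TX done

# The same driver code, 'ported' to two parts with different register maps.
mem_a, mem_b = {}, {}
uart_a = UartDriver(mem_a, base=0x4000_0000, data_off=0x00, status_off=0x04)
uart_b = UartDriver(mem_b, base=0x1001_3000, data_off=0x08, status_off=0x0C)
uart_a.write_byte(0x41)
uart_b.write_byte(0x41)
```

The driver logic is reused verbatim; only the parameters change per part. Validation still has to happen on each target, which is exactly why the software side calls this reuse less formal than dropping in a verified hardware block.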

Stick to the plan
On the whole, each engineering team or company has its own internal methodologies for IP reuse. It’s a matter of discipline and making sure those are followed, Nandra said.

On the software side, linting tools can help, along with those that check the quality and efficiency of the algorithms, including code verification tools like Synopsys’ Coverity technology, among others.

This all feeds back into software reuse. If the engineering team did a good job at the lower levels with the algorithms, that quality carries over to the platform the software is being designed into, where some of the blocks and some of the code already written can be reused, he said.

“It basically comes down to foresight and planning right at the beginning of the project, and disciplining yourself to say, ‘We’re going to structure the hardware and software in a way we can re-use as much as possible, so we’re building hardware and software platforms.’ But sometimes with time-to-market pressure people start taking shortcuts, and that’s where you see it becomes much harder to re-use the hardware and the software,” Nandra said.



1 comment

Kev says:

There’s an artificial divide between hardware design and software development. Hardware design is really an exercise in writing software in a particular style in special languages. You could ditch the special languages and use C++ for everything (e.g. http://parallel.cc), but the RTL style (synchronous FSM) doesn’t appeal much to software engineers, so the hardware guys would need to switch up to an asynchronous FSM methodology to get on common ground. The software guys likewise need to shift to an a-FSM methodology for programming things like FPGAs, GP-GPUs, and heterogeneous systems that don’t support the SMP programming model.

http://www.ee.washington.edu/faculty/hauck/publications/AsynchArt.pdf
