Mythbusting: Co-Design

By Ann Steffora Mutschler
It turns out that while there needs to be understanding between hardware and software engineers, the people doing the programming don’t actually want or need to interact. There is not, nor probably ever will be, one single team with hardware and software engineers happily working together on a project.

But it’s not a total disconnect. There are a number of system-level design technologies and approaches that allow for interaction when it is necessary.

“When we are saying there is no one team, what we are really referring to is that there is no one person who knows everything and has all of the information, all of the knowledge, to actually do efficient hardware/software debug,” said Frank Schirrmeister, group director of product marketing for system development in the system and software realization group at Cadence.

Within that context the tools become important, because what you are enabling is analogous to looking into the same house from different windows, he said. “It’s all the same house, it’s all the same design. But the software developer looks at it from the software side, the hardware developer looks at it from a pure hardware side, and they see different aspects of it. The software guy can see some of the registers and can access them. He has a specification of what the effect of programming the registers will be, and he has an executable version of the virtual platform, but there is an effect going on which he doesn’t understand. He will not be the one having the knowledge to dig deep and look into the RTL. Just like in the house, you wouldn’t let the plumber do the electrical. You are looking at it from different perspectives.”

The tools really offer the ability to provide different design perspectives and allow those people to efficiently interact.

Realistically, it’s not all roses and sunshine. Drew Wingard, chief technology officer at Sonics, pointed out: “There are software people who are doing the work that every semiconductor company believes has to be done and given away for free so they can sell their chips. Unfortunately, our industry puts almost zero value into that bucket. It’s an enabler, but it’s a giveaway. Then there are the rest of the software guys, who are trying to add value that shows up in a device somewhere, for someone or something in the case of ‘The Internet of Things.’ The lower-level software guys that are part of the semiconductor companies get the least respect of anybody in our industry. They really do enable the sale of those chips. Without them the chip is not a viable product in the market, but fundamentally, in most SoC design environments I’m familiar with, they are completely ignored. They are not listened to. When it’s time to build the next chip, almost nobody listens to them when they say, ‘Hey, listen, if you did this, this and this, it would make my job a whole lot easier. Or the whole system would end up better if….’ It’s only when those words come from the end customer that they carry any weight.”

For that reason, the message that filters down to the low-level code writers is almost a snub. It’s a different story, though, with the higher-level software developers.

“Do the software teams at the OEMs actually have real conversations with the hardware architects on the SoC teams?” asked Wingard. “For the critical OEMs, you bet they do. That’s a very different kind of relationship because it does end up being a customer-supplier relationship. It’s in that domain that we’ve seen some of the deliverables that advanced SoC teams can give their customers, like the virtual platform models for early software development and things like that. Those deliverables can be helpful in that, but of course those deliverables are only ever focused on the function of the system so everything around performance always ends up being left for the lab.”

Technology bridges the gap
How are system-level design technologies being used today given the different worlds that hardware and software people function in?

In the case of emulation, Jim Kenney, director of marketing for the emulation division at Mentor Graphics, noted that yes, there is still some separation. But the technology does allow the teams to meet in the middle.

The vast majority of emulation users are running embedded software on the emulator against the hardware, he said, and the way it goes is the hardware engineers get it working, they get it running some rudimentary software like a boot loader or booting the OS, and then they turn it over to the firmware guys. So yes, it still is two separate worlds. There are a few people who know both but they are mostly setting the environment up.

“A lot of what is driving this on the emulator is that the FPGA prototypes are coming later and later in the project because they are getting a lot more complex,” Kenney said. “People are still doing them. We are not replacing FPGA prototypes with emulation. We are definitely augmenting them. They might be able to run at 10 MHz on their FPGA prototype and they can run at a megahertz or two on the emulator, but it is ready much sooner. And if you run into any hardware-software issues, it is a much better environment for debugging. The firmware guys get handed an environment, and they demand one that looks a lot like what they are already using. So you have to cater to their choice of debugger and their choice of operating system, because that’s what all of their tools run on. Once you provide that to them, they overload the emulator pretty quickly running code.”

As far as closer interaction between the teams with the ultimate goal of one big cohesive hardware/software team, Kenney believes there will always be just software guys and just hardware guys. “The only thing that will shift over time is how many can make the crossover—how many can bridge both. One side can bridge better than the other. The guys that are just doing computer science really don’t have much understanding of hardware at all unless they happen to be personally curious about it. In terms of hardware design…if you think about it, the hardware guys are programming in languages, meaning SystemVerilog. It’s different but it’s a language, so at least they understand writing code, writing good code and bad code, compiling code and debugging code. The software guys don’t understand very much about the hardware except, ‘This is my register set. When I wiggle this register it’s going to do that. If it doesn’t do that, somebody come fix it.’”
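That “register set” contract, the only view of the hardware many software developers ever see, can be sketched as a tiny model of the kind a virtual platform exposes. Everything here is invented for illustration (the address, the bit name, and the side effect come from no real device spec):

```python
# Toy model of the software developer's register-level view. The address,
# bit name, and "specified" effect below are all hypothetical.
CTRL_ADDR = 0x40001000
CTRL_ENABLE = 1 << 0          # "wiggling" this bit is specified to start the block

class VirtualDevice:
    """Minimal register model: writes are stored, and the one effect the
    spec promises (ENABLE starts the device) is applied as a side effect."""
    def __init__(self):
        self.regs = {CTRL_ADDR: 0}
        self.running = False

    def write(self, addr, value):
        self.regs[addr] = value
        if addr == CTRL_ADDR and value & CTRL_ENABLE:
            self.running = True   # the documented effect of programming CTRL

    def read(self, addr):
        return self.regs[addr]

dev = VirtualDevice()
# The software side's whole contract: read-modify-write the register set.
dev.write(CTRL_ADDR, dev.read(CTRL_ADDR) | CTRL_ENABLE)
```

If the device does not start after that write, the software developer has exhausted his window into the design; digging into the RTL to find out why is the hardware team’s side of the house.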

In a related example, Chad Spackman, design manager at Open-Silicon, explained that his team is working on a network co-processor, which is meant to unburden the core processors (the application-layer processors) of TCP/IP protocol processing. The network co-processor presents actual application data to the application processors, and it does that by interrupting those processors. There is some configurability, called interrupt coalescing, around how to interrupt the main processors, usually defined by a couple of parameters. “You actually have to have not only hardware knowledge but system knowledge, and I don’t quite know how to just take a completely blind approach between those two domains and get it right. It took us quite a bit of time to get interrupt coalescing right, and it’s kind of an art rather than a science that requires an innate understanding of the hardware and the system.”
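As a rough illustration of the tuning Spackman describes, interrupt coalescing is commonly parameterized by a frame-count threshold and a timeout: the co-processor interrupts the application processor only when enough packets are pending or the oldest pending packet has waited long enough. The sketch below is a toy model under those assumptions; the class and parameter names are invented, not Open-Silicon’s implementation:

```python
class InterruptCoalescer:
    """Toy interrupt-coalescing model: fire an interrupt only when
    `max_frames` packets are pending, or `max_usecs` have elapsed since
    the first un-signalled packet, whichever comes first."""
    def __init__(self, max_frames=8, max_usecs=100):
        self.max_frames = max_frames
        self.max_usecs = max_usecs
        self.pending = 0          # packets received but not yet signalled
        self.first_ts = None      # arrival time of the oldest pending packet
        self.interrupts = 0       # interrupts delivered to the app processor

    def packet(self, ts_usec):
        """A packet arrives at time ts_usec (microseconds)."""
        self.pending += 1
        if self.first_ts is None:
            self.first_ts = ts_usec
        self._maybe_fire(ts_usec)

    def tick(self, ts_usec):
        """Periodic timer check, so the timeout can fire with no new traffic."""
        self._maybe_fire(ts_usec)

    def _maybe_fire(self, ts_usec):
        if self.pending == 0:
            return
        if (self.pending >= self.max_frames
                or ts_usec - self.first_ts >= self.max_usecs):
            self.interrupts += 1  # signal the application processor
            self.pending = 0
            self.first_ts = None

# Four back-to-back packets with max_frames=4 cost one interrupt, not four.
nic = InterruptCoalescer(max_frames=4, max_usecs=100)
for t in range(4):
    nic.packet(t)
```

The art Spackman points to lives in picking those two numbers: too aggressive and latency to the application suffers; too timid and the interrupt load defeats the purpose of the co-processor.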

“When one domain has a strong understanding of the other, whether it’s software and hardware or an analog person integrating a DRAM into the digital world, the deeper that understanding, the more apt it is to result in a better system,” he concluded.

Thankfully, system-level design tools are, and have been, an area of significant focus and investment for the EDA community. Broader adoption of standards, tools and methodologies will allow hardware and software engineers to meet on common footing, bring all parties a deeper understanding of the system and, ideally, yield better, more optimized systems.