Experts At The Table: Hardware-Software Co-Design

First of three parts: Exploding complexity; bridging the language and engineering cultural gap; what’s really driving this market; who needs it and who doesn’t.


By Ed Sperling
System-Level Design sat down to discuss hardware-software co-design with Frank Schirrmeister, group marketing director for Cadence’s System and Software Realization Group; Shabtay Matalon, ESL market development manager at Mentor Graphics; Kurt Shuler, vice president of marketing at Arteris; Narendra Konda, director of hardware engineering at Nvidia; and Jack Greenbaum, director of engineering for advanced products at Green Hills Software. What follows are excerpts of that conversation.

SLD: We’ve been hearing about co-design for a long time. What are the problems that haven’t been resolved?
Matalon: The industry started moving to co-design 15 to 20 years ago with technologies such as emulation, but the big change is that emulation alone is too late. The RTL needs to be quite solid at that stage. Co-design today means above RTL. Co-design alone is not even enough because you’ve already defined your architecture. There is one level ahead of it where you need to really validate that your assumptions are being met. There is a major role for ESL in co-design, and we’re seeing that part of the industry take off. This is the fastest-growing segment because you need to start co-design before the RTL is implemented and validate the assumptions regarding functionality, performance, power and area before key implementation decisions are made.
Shuler: There is still a tendency to come up with some requirements and hack away at RTL. A lot of companies are getting smarter about that now. They’re starting at higher levels of abstraction and working their way toward more detail. But it still doesn’t happen all the time. When a company is purchasing IP from different vendors, there is still a question about where they get their models. That’s a common issue that slows the adoption of this new way of working. There also are disparate cockpits that people use. There are commercial tools you can use for SystemC and TLM simulation that are really good and really help with ease of use. But some of the biggest companies don’t use commercial tools. What slows us down as an IP provider is that we have to make sure our interfaces and IP-XACT information work with internally developed SystemC cockpits. Those things are not specified. There are only one, two or three people within those companies who know how it all works.
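
To make the interoperability point concrete, the sketch below shows roughly what a vendor-delivered SystemC/TLM-2.0 model and the cockpit code that drives it can look like. The register block, addresses, latency and test driver are hypothetical, not any vendor’s actual deliverable; the point is that both sides must agree on these socket and payload conventions for the hand-off to work.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical IP block: four 32-bit registers behind a loosely timed
// TLM-2.0 target socket.
struct DummyRegBlock : sc_core::sc_module {
    tlm_utils::simple_target_socket<DummyRegBlock> socket;
    uint32_t regs[4] = {0, 0, 0, 0};

    SC_CTOR(DummyRegBlock) : socket("socket") {
        socket.register_b_transport(this, &DummyRegBlock::b_transport);
    }

    // Blocking transport: decode the address, read or write one register,
    // and add a nominal access latency.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint32_t idx = static_cast<uint32_t>(trans.get_address() >> 2) & 0x3;
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            std::memcpy(&regs[idx], trans.get_data_ptr(), sizeof(uint32_t));
        else if (trans.get_command() == tlm::TLM_READ_COMMAND)
            std::memcpy(trans.get_data_ptr(), &regs[idx], sizeof(uint32_t));
        delay += sc_core::sc_time(10, sc_core::SC_NS);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Stand-in for whatever drives the model inside a company's cockpit,
// for example a processor model running software.
struct TestInitiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<TestInitiator> socket;

    SC_CTOR(TestInitiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t wdata = 0xCAFE, rdata = 0;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        tlm::tlm_generic_payload trans;
        trans.set_address(0x4);
        trans.set_data_length(4);
        trans.set_streaming_width(4);

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&wdata));
        socket->b_transport(trans, delay);

        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&rdata));
        socket->b_transport(trans, delay);

        std::cout << "read back 0x" << std::hex << rdata
                  << " after " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    TestInitiator init("init");
    DummyRegBlock dut("dut");
    init.socket.bind(dut.socket);  // the binding vendor and cockpit must agree on
    sc_core::sc_start();
    return 0;
}
```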
Konda: From a need point of view, design sizes have been exploding and chip sizes are doubling. On top of that, especially at the SoC level, we see a number of processors and 20 to 25 different interfaces such as USB, Ethernet controllers and SATA, so co-design has become an absolute necessity. The second piece of the puzzle is software. If we follow the traditional model, where the design is finished and only then do you start developing the software, it’s way too late. We are building our own design environment with RTL models and C models, and simulating all of this at the SoC level. What we are doing is homegrown, but we also are looking at commercial tools to see if they would make our life easier. What we would like to see is mixing and matching various models. Some parts of the design might be in an FPGA, some might be in an emulator, some might be C models and others could be RTL models. It’s a complex problem and I don’t see a clean solution at the moment. The end goal is to realize an SoC as early as possible.
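
A minimal sketch of that mix-and-match idea, assuming a hypothetical common transaction interface: the same test code can drive a fast C functional model today and an emulator- or FPGA-backed transactor later. The class names and the stubbed emulator link are illustrative, not any company’s in-house environment.

```cpp
#include <cstdint>
#include <cstdio>
#include <memory>

// One SoC block seen through a single abstract interface, so the testbench
// does not care which kind of model sits behind it.
struct BlockModel {
    virtual uint32_t read(uint32_t addr) = 0;
    virtual void write(uint32_t addr, uint32_t data) = 0;
    virtual ~BlockModel() = default;
};

// Fast, untimed C/C++ functional model used early, before RTL exists.
struct CFunctionalModel : BlockModel {
    uint32_t mem[256] = {0};
    uint32_t read(uint32_t addr) override { return mem[addr & 0xFF]; }
    void write(uint32_t addr, uint32_t data) override { mem[addr & 0xFF] = data; }
};

// Placeholder for an RTL-backed path: in a real flow this would package the
// access into a transaction, ship it to an emulator or FPGA, and wait.
struct EmulatorTransactor : BlockModel {
    uint32_t read(uint32_t) override { return 0; }
    void write(uint32_t, uint32_t) override {}
};

int main() {
    // Swap in EmulatorTransactor once RTL is available; the test code
    // below runs unchanged.
    std::unique_ptr<BlockModel> dut = std::make_unique<CFunctionalModel>();
    dut->write(0x10, 0xCAFE);
    std::printf("read back 0x%X\n", dut->read(0x10));
    return 0;
}
```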
Greenbaum: There are two sides to this. One is that we’re frequently asked to fix it in software when the chip is already in production. We’ve seen mistakes that make it all the way through to several revs of silicon. We also look at this as a tool provider, where there is a continual cultural divide between hardware engineers and software engineers. From the point of view of silicon that’s already in production, the biggest problem is memory bandwidth. We have a board in our lab right now where doing something as pedestrian as video capture while the CPU and GPU are busy causes time-outs on the PCI bus. You can’t stream video to memory while you have the processor and GPU busy. This was a function of the system architecture that the SoC was supposed to handle. The arbiter doesn’t arbitrate properly. There are no knobs or gauges on it. And the problem wasn’t discovered until very late. There’s nothing that can be done in software. This is the most common error I see. Memory bandwidth problems aren’t something organizations can tackle today.
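
A back-of-the-envelope check of the kind implied here: add up the bandwidth each client needs and compare it against what the memory system can realistically deliver. Every number below is hypothetical, not a measurement from the board Greenbaum describes; the point is that aggregate demand can exceed usable bandwidth long before RTL exists.

```cpp
#include <cstdio>

int main() {
    // Hypothetical figures, all in GB/s.
    const double dram_peak_gbs  = 12.8;  // e.g. a 64-bit LPDDR3-1600 interface
    const double usable_frac    = 0.70;  // efficiency after refresh, bank
                                         // conflicts and arbitration overhead
    const double cpu_demand     = 3.5;   // CPU traffic while busy
    const double gpu_demand     = 5.0;   // GPU traffic while busy
    const double capture_demand = 1.0;   // video capture write stream

    double usable = dram_peak_gbs * usable_frac;
    double demand = cpu_demand + gpu_demand + capture_demand;

    std::printf("usable %.2f GB/s, demanded %.2f GB/s -> %s\n", usable, demand,
                demand > usable ? "oversubscribed: expect stalls or time-outs"
                                : "within budget");
    return 0;
}
```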
Schirrmeister: There are problems on the tool side, as well, right?
Greenbaum: You would hope that to do hardware-software co-design you could get system architects, software architects and processor architects in the same room and solve the problems up front. Unfortunately, they don’t speak the same language. The implementation folks, especially here in the United States, speak Verilog. The software engineers speak C. The system architects know how to draw boxes and lines. When I first learned about SystemC I thought it would let everyone speak the same language, but it’s not happening quickly enough. Language is just the easiest place to see this cultural divide, and that divide will always prevent us from shortening our cycles.
Schirrmeister: The whole notion of becoming independent of hardware and software increasingly will be adopted. The engineers who know boxes and lines may expect to use UML (Unified Modeling Language) and SysML (Systems Modeling Language), once they know functionally what the system will do, to attach requirements or whatever else they need on top of it. SystemC models are great, but how do you get above that? There will need to be more automation. When I did my first design I drew gates and connected them by hand. Then we began connecting bigger blocks and the assembly was automated. Automated scripts will become more common in the future. And as we spread out toward hardware-software independence, the underlying automation that brings them back together will have to be created, because it’s too complex for human beings.

SLD: What’s the starting point? Is it hardware, software, or both?
Matalon: The world has changed. It used to be primarily hardware-centric. Now there are two challenges. One is hardware-software co-design. The other is that, in many products, there is no custom hardware at all. The majority of designs are based on standard or embedded processors that can do everything. Custom hardware is not required unless the product is really addressing a niche in the market where power and performance are important. Not every design cares about those factors. If I want to control a refrigerator, all the computation and control can be done in software. Co-design is not for everything. There are a lot of designs getting out the door where the challenge is writing the software. When you get into co-design, it implies you have a problem that cannot be solved in software alone. You may need higher performance. You may need a network on chip to implement a design. You need a specific architecture or arbitration or an accelerator to implement graphics, and you need RTL for that because a standard processor won’t do the job. Then you head toward multiple processors, multiple cores combined with hardware, and that’s the sweet spot for co-design. There’s no doubt that models have been the drag on this. When the design needs to meet performance and power targets, and there is a software part where certain tasks will be done with processors and others with network devices, you get to the problem of partitioning between hardware and software. You have to do it right, so that you account for both hardware and software.
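
As a toy illustration of that partitioning decision, the sketch below compares a software-only mapping of a task against a hypothetical accelerator before any RTL exists. Every workload, clock and overhead figure is assumed for illustration only.

```cpp
#include <cstdio>

int main() {
    // Hypothetical workload: 2 Mpixel image filter, 50 ops per pixel, 30 fps target.
    const double ops_per_frame  = 2.0e6 * 50;
    const double frame_budget_s = 1.0 / 30.0;

    // Software mapping: embedded CPU at 1 GHz, ~0.8 useful ops per cycle.
    const double cpu_ops_per_s = 1.0e9 * 0.8;
    double sw_time = ops_per_frame / cpu_ops_per_s;

    // Hardware mapping: accelerator doing 16 ops per cycle at 400 MHz,
    // plus a fixed DMA/setup overhead per frame.
    const double acc_ops_per_s  = 400.0e6 * 16;
    const double dma_overhead_s = 0.5e-3;
    double hw_time = ops_per_frame / acc_ops_per_s + dma_overhead_s;

    std::printf("budget %.1f ms, software %.1f ms, accelerator %.1f ms\n",
                frame_budget_s * 1e3, sw_time * 1e3, hw_time * 1e3);
    std::printf("software alone %s the frame budget\n",
                sw_time > frame_budget_s ? "misses" : "meets");
    return 0;
}
```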

SLD: You’re talking about complete optimization?
Matalon: Yes. And here’s where we need to focus.
Schirrmeister: The challenge lies in not knowing what you don’t know. When I’m writing the software, I may not know that the memory talking back to the fridge doesn’t know the grocery store where I’m going to buy my milk has run out. I may not have foreseen that scenario. To me, things always start with the functional requirements of the user, whether it’s a graphics design or memory bandwidth. People don’t get the requirements right in the first place, before marketing changes them. And then how to transfer those requirements into co-design is a challenge we haven’t solved. That’s where the models come in. If you just do this at the LT (loosely timed) level you may not see the memory problem. If you go down to the cycle-accurate level, you may not be able to run enough cases to figure out the configurations.
Matalon: From a practical perspective, you may not need co-design for all designs. In an SoC, where you need to access memory and peripherals and sensors, the ratio is probably 100:1. But the pain level and the complexity aren’t such that everyone needs it.
Shuler: It’s more up-front work.


