Experts At The Table: Hardware-Software Co-Design

Last of three parts: Cost of models; synchronization problems; growing sense of optimism about co-design; differences between hardware and software; who’s in charge, and who’s being blamed for power problems.


By Ed Sperling
System-Level Design sat down to discuss hardware-software co-design with Frank Schirrmeister, group marketing director for Cadence’s System and Software Realization Group; Shabtay Matalon, ESL market development manager at Mentor Graphics; Kurt Shuler, vice president of marketing at Arteris; Narendra Konda, director of hardware engineering at Nvidia; and Jack Greenbaum, director of engineering for advanced products at Green Hills Software. What follows are excerpts of that conversation.

SLD: How much does the business side enter into this equation?
Greenbaum: The cost of the models, balanced against the benefits you get from having those models, and the lack of well-defined interfaces conspire to make people just build the chip and do the best they can. Think about the effort to build drivers for the actual silicon. If you’re going to push real workloads through and connect to the actual Facebook servers, for example, you need a driver that can talk to the model. Did you define the interface between the CPU execution model and the approximately timed video codecs sitting on the other side of a bus in such a way that you can still write a driver quickly enough to derive a benefit from it? Will it result in fewer chip spins to get a design right? It’s a challenge just to bring a model together. Then you have to make it economically useful.
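To make that concrete: in a SystemC/TLM-2.0 flow, a driver “talking to the model” usually comes down to issuing generic-payload transactions through a socket that stands in for the bus. The following is a minimal hypothetical sketch, not code from any panelist’s flow; the module names, register address and 20 ns latency are invented for illustration.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Hypothetical approximately timed codec model: one status register, with an
    // invented 20 ns access latency annotated on the blocking transport call.
    struct CodecModel : sc_core::sc_module {
        tlm_utils::simple_target_socket<CodecModel> socket;
        SC_CTOR(CodecModel) : socket("socket") {
            socket.register_b_transport(this, &CodecModel::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            delay += sc_core::sc_time(20, sc_core::SC_NS);       // assumed latency
            if (trans.is_read()) {
                std::uint32_t status = 0x1;                      // "codec ready"
                std::memcpy(trans.get_data_ptr(), &status, sizeof(status));
            }
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    // Driver stub on the CPU side of the "bus": this is the code that has to be
    // written against the model's interface rather than the silicon.
    struct DriverStub : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<DriverStub> socket;
        SC_CTOR(DriverStub) : socket("socket") { SC_THREAD(run); }
        void run() {
            std::uint32_t data = 0;
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(0x1000);                           // invented register offset
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            trans.set_streaming_width(sizeof(data));
            trans.set_byte_enable_ptr(nullptr);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
            socket->b_transport(trans, delay);                   // talk to the model
            std::cout << "status=0x" << std::hex << data
                      << " after " << delay << std::endl;
        }
    };

    int sc_main(int, char*[]) {
        DriverStub driver("driver");
        CodecModel codec("codec");
        driver.socket.bind(codec.socket);
        sc_core::sc_start();
        return 0;
    }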
Shuler: It is starting to get better. There is more fixed-cost work up front, but when you’re creating one of these chips you need to develop a platform so you can amortize the cost over multiple chips. Companies build these chips and then make four or five derivatives that may go all over the world. For them, the cost of putting together the infrastructure up front is paid for in the end, because they can get multiple chips predictably from that one platform. The whole SoC trend is helping the adoption of software-hardware co-design.

SLD: No matter how close we bring the hardware and software, they’re still out of sync. On the front end you’re working off something started by a hardware team, and on the back end you’re working off something that hasn’t been finished by the software team. How does that affect everything?
Schirrmeister: Things are out of sync. The question is whether a driver has been developed to push traffic to a server via the model rather than the actual hardware. I’m convinced we’ll get there, and we’ll know we’re there when model generation becomes a natural byproduct of the mainstream design flow. So a company like Arteris creates the fabric and models for different needs. Then it’s up to the user to decide which one to use. Maybe they only need the LT (loosely timed) model because they don’t need to plug in a more accurate model that will slow things down. But sometimes you don’t know what you don’t know. Your requirements may mean you’re demanding more memory bandwidth than the hardware can deliver. You might have seen that if you had run it against an AT (approximately timed) model, and you certainly would have seen it if you had run it against an RTL model. It all goes back to having the models available. Sometimes it isn’t commercially feasible, but it is getting better. We are going in the right direction.
Matalon: I’m more optimistic about that. For model creation, we need companies to provide models at all levels of abstraction. People are working in isolation. Some of that is a natural consequence of teams working in India, China and North America. But sometimes they need to see the solution. The solution is enabled by providers of models, but you cannot see how, for example, a network on chip will be utilized unless it’s put in the context of an overall platform. To do that, we need models for everything, and we need automation to create the AT models and the analysis tools to do power and performance; all of this is something the EDA vendors can provide. There will be some groups that hold back on moving forward, but there are solutions out there that are not coming from a single vendor. They’re coming from the collaboration of IP providers, tool providers, semiconductor companies and embedded software companies. Now we need to get users to accept them—to break the hardware-software barriers between engineers and architects and software teams.
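The LT-versus-AT tradeoff Schirrmeister describes shows up directly in how a model’s blocking transport is written. As a rough illustration, here is a hypothetical variant of the codec model from the earlier sketch that can be built either loosely timed (purely functional, fastest simulation) or AT-style with annotated delays, which is what would expose a bandwidth or contention problem; the 20 ns and 80 ns figures are invented.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical target that can be built as a loosely timed (LT) model or an
    // AT-style, delay-annotated model. All latencies are assumed values.
    struct ScalableCodecModel : sc_core::sc_module {
        tlm_utils::simple_target_socket<ScalableCodecModel> socket;
        bool annotate_timing;   // false = LT, true = AT-style

        ScalableCodecModel(sc_core::sc_module_name name, bool at)
            : sc_core::sc_module(name), socket("socket"), annotate_timing(at) {
            socket.register_b_transport(this, &ScalableCodecModel::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            if (annotate_timing) {
                // AT-style path: annotate access and contention delays so memory
                // bandwidth limits become visible in the simulation results.
                delay += sc_core::sc_time(20, sc_core::SC_NS);   // register access
                delay += sc_core::sc_time(80, sc_core::SC_NS);   // assumed bus contention
            }
            // LT path: no timing cost at all, so the same problem stays invisible.
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };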

SLD: Are the models being updated by everyone and in all places?
Schirrmeister: That goes back to making them a natural byproduct of the design flow. If they’re not, there is no way of synchronizing everything.
Shuler: You can’t do it manually.
Schirrmeister: In the past we had models for a certain platform, but in the next revision do you really go back and update the model to be in sync with it? Only when it’s absolutely necessary. But I do think it is becoming a more natural byproduct of the flow even though we’re not there yet.
Greenbaum: Yes, it is getting better. There are a number of open-source platforms for which you can download a QEMU (quick emulator) model. But it’s still the minority, and the opportunities there are being squandered. If you look at a virtual prototype as just a development vehicle that is there pre-silicon, and you get no more value out of it than you would from the silicon, then you’re squandering the real value of a software development virtual platform, which is visibility. With a virtual platform you don’t have to pay the cost to pin out the ETM (Embedded Trace Macrocell) on your ARM core to get trace.
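The visibility argument is easy to illustrate in the same hypothetical SystemC setting used above (none of this is specific to QEMU): a pass-through monitor inserted between the driver stub and the device model can log every transaction with a simulation timestamp, the kind of trace that on silicon would require dedicating ETM pins.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <iostream>

    // Hypothetical pass-through trace monitor. Bound between an initiator and a
    // target (e.g. driver.socket to monitor.in, monitor.out to codec.socket in the
    // earlier sketch), it logs every transaction without touching either model.
    struct TraceMonitor : sc_core::sc_module {
        tlm_utils::simple_target_socket<TraceMonitor>    in;
        tlm_utils::simple_initiator_socket<TraceMonitor> out;

        SC_CTOR(TraceMonitor) : in("in"), out("out") {
            in.register_b_transport(this, &TraceMonitor::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            std::cout << sc_core::sc_time_stamp()
                      << (trans.is_write() ? " WR 0x" : " RD 0x")
                      << std::hex << trans.get_address() << std::dec << std::endl;
            out->b_transport(trans, delay);   // forward unchanged
        }
    };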
Shuler: The software guys are discussing the same kinds of problems the hardware guys discuss. The way I look at it, hardware is no different from software. It just has to become fixed at a certain time. RTL is software. If you’re a chip vendor, you’re in software development. It’s parallel code, but half of it becomes fixed.
Greenbaum: As long as you have timing closure you’re correct.
Konda: It was the case that the hardware and software teams were doing their own things. But what we see now, because of the market pressure and the increasing software content, is that the software team is taking a much more proactive approach. If we are designing a GPU and a CPU, the hardware team and the architecture team will be developing the C model and the functional model of this. But now the software team is knocking on the door and saying, ‘Hey, give us that model. We want to run our software code on it.’ We are seeing a lot more collaboration.
Schirrmeister: The reason that’s happening is that someone who owns both teams says that if they don’t do it, software will be too late.

SLD: But what happens if the models get out of sync?
Konda: If you look at an SD (secure digital) card, the specs are changing from 3.0 to 4.0. The focus of the hardware team—two or three engineers, one verification guy and two RTL guys—is to write the spec, look at the RTL, verify it’s working, and it’s done. They don’t care about software, system integration or anything else. But now you’re trying to realize an SoC and you’re searching for an SD 4.0 model. You’re searching the globe to find models for these interfaces.
Greenbaum: And they still don’t match your RTL because the DMA (direct memory access) is not a standard part of that SD interface and you lose. Even for standard interfaces there are no models.
Konda: So yes, models do get out of sync. The C model probably doesn’t agree with the RTL we are developing. The other problem is trying to find a valid model from somewhere. The audio guy with two engineers doesn’t have time to develop a fast model. It’s not his job description.
Greenbaum: And we haven’t reached the point where we can generate those models, either.
Matalon: In the past, the hardware engineers owned power and performance. Today, performance and power are controlled by software. The hardware can put in hooks to control voltage and scaling, but the hardware engineers have no clue when that will happen. The first big change is that co-design will become a bigger concern for the software guys, because the top manager will start blaming the software team if a smartphone runs out of battery in four hours. It will be because the software guys have not used all the resources correctly. We’re seeing a shift of responsibility from the hardware guys to the software guys, who have the overall system view. A second change involves the interoperability of models. You can no longer build a complex SoC based on proprietary models that are created ad hoc. They need to be TLM 2.0, they need to be standard, and they need to be re-usable in the next project. If you use standard models and make this investment, the payoff is huge. If you don’t, you’re stuck. A lot of software, hardware and architecture teams are aware of that. It’s happening.
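One concrete example of the hooks Matalon mentions: dynamic voltage and frequency scaling is usually exposed to software as a memory-mapped register, and only a driver or OS governor decides when to use it. The sketch below is purely illustrative; the address, register layout and operating points are invented, and the code is written bare-metal style.

    #include <cstdint>

    // Hypothetical memory-mapped DVFS hook. The hardware implements the knob and
    // sequences the voltage/frequency change; software decides when to turn it.
    namespace dvfs {
        constexpr std::uintptr_t PWR_CTRL_REG = 0x40002000;  // invented address
        constexpr std::uint32_t  OPP_LOW  = 0x0;             // e.g. low voltage/clock (assumed)
        constexpr std::uint32_t  OPP_HIGH = 0x3;             // e.g. full voltage/clock (assumed)

        inline void set_operating_point(std::uint32_t opp) {
            auto* reg = reinterpret_cast<volatile std::uint32_t*>(PWR_CTRL_REG);
            *reg = opp;   // hardware handles the actual transition sequence
        }
    }

    // A driver or OS governor would call dvfs::set_operating_point(dvfs::OPP_LOW)
    // when, say, the display pipeline goes idle. The timing of that decision, and
    // therefore the battery life, is owned by software.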
Shuler: The dirty little secret is that hardware companies put all of this capability into a chip, but when they create the boards and packages they use a fraction of it. The end device manufacturers use even less. They’re using 10% to 20% of the capability.
Matalon: That’s why you can reduce 80% to 90% of your power if you evaluate power in the early stages of the architecture.


