Whatever Happened to High-Level Synthesis?

Experts at the table, part 2: Playing in an IP integration world, defining verification flows and compatibility with a virtual prototype.


A few years ago, High-Level Synthesis (HLS) was probably the most talked-about emerging technology, expected to be the heart of a new Electronic System Level (ESL) flow. Today, we hear much less about the progress being made in this area. Semiconductor Engineering sat down to discuss this with Bryan Bowyer, director of engineering for high-level design and verification at Mentor, a Siemens Business; Dave Kelf, vice president of marketing for OneSpin Solutions; and Dave Pursley, product manager for HLS at Cadence. Part one can be found here. What follows are excerpts from the conversation.

SE: During the period of emergence of HLS, IP reuse and integration became a very popular methodology. What is changing in terms of the architecture of the chip and is that impacting HLS? Isn’t IP integration one of the biggest pressures?

Bowyer: Standard interfaces are really good for high-level synthesis because a tool can always get the interface right. As long as you stay with standard interfaces, you are taking away a lot of the RTL integration problem. If you take C code, without any buses, you can see if it functions and then you tell the HLS tool to do the communications over AXI or whatever. It alleviates a lot of the integration verification work and the debug work. You will still verify it, but it has become a lot simpler. You should eliminate a whole class of exotic system-level bugs.
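The separation Bowyer describes, an algorithm written without any bus detail, with the interface chosen later by the tool, can be sketched in plain C++. This is an illustrative example, not output from any particular HLS tool; the function name and the directive mentioned in the comment are assumptions.

```cpp
#include <cstdint>
#include <cstddef>

// Untimed, bus-free algorithm: a 4-tap moving-average filter.
// In an HLS flow the tool, not the C code, decides how 'in' and 'out'
// are transported (e.g. mapped to AXI via a tool directive); the
// function body only describes the computation, so it can be verified
// functionally before any interface exists.
void moving_average4(const int32_t* in, int32_t* out, std::size_t n) {
    for (std::size_t i = 0; i + 3 < n; ++i) {
        // Sum four consecutive samples and divide; no protocol detail here.
        out[i] = (in[i] + in[i + 1] + in[i + 2] + in[i + 3]) / 4;
    }
}
```

Because the source carries no interface logic, the same function body can be retargeted to different buses without touching the verified algorithm.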

Pursley: We have also seen a few places, especially in automotive, where they have looked at the overall chip-level interconnect. ‘Does it make more sense to have delay in the interconnect if it improves quality of service, or does it make more sense to have that in the IP blocks?’ You can do those kinds of tradeoffs, which are impossible in the traditional RTL design flow and integration.

Kelf: Functional property specs enable you to test these things at a much higher level. Now you have a spec and you can write a property based on that and figure out if the design can match that property without getting into the minutiae of the interface. So you do a lot more verification at that level before you decompose it. These are the advantages of abstraction.

SE: When HLS was first introduced, verification was a roadblock to adoption. Where are we today?

Kelf: We are much better off. Simulation, which is still the mainstay, has improved dramatically. The OSCI open-source implementation has been replaced by mainstream simulation, and customers have accepted that. To run real simulation with debugging means you have to use a commercial simulator that is fully supported. When you layer formal verification on top of that, there are so many things you can do with it. We are still finding new things we can do at the algorithmic level. For example, testing the precision of number systems. You cannot do this at the RT level, but formal can do it fairly easily. By layering formal we can eliminate some problems and then add in new apps and properties at the specification level which were not possible before. There are different people doing verification at the SystemC level compared to RTL. RTL verification goes to the UVM team, but at SystemC you see more of the architects and algorithm specialists doing the verification. That may change over time as the UVM guys move up and do more system-level verification.
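The precision question Kelf raises can be made concrete. The sketch below models a round-to-nearest fixed-point quantization and its half-LSB error bound; a formal tool would prove such a bound for all inputs, whereas this illustration only models the rounding step. The function names and the specific bound are assumptions for illustration, not any tool's API.

```cpp
#include <cmath>

// Quantize x to a fixed-point value with 'frac_bits' fractional bits,
// using round-to-nearest. std::ldexp(1.0, n) computes 2^n.
double quantize(double x, int frac_bits) {
    const double scale = std::ldexp(1.0, frac_bits);
    return std::round(x * scale) / scale;
}

// Worst-case round-to-nearest error is half an LSB: 2^-(frac_bits+1).
// A formal precision check would establish this for every input;
// here it is just a predicate we can evaluate on sample values.
bool within_half_lsb(double x, int frac_bits) {
    const double lsb = std::ldexp(1.0, -frac_bits);
    return std::fabs(x - quantize(x, frac_bits)) <= lsb / 2.0;
}
```

This is the kind of algorithmic-level property that has no natural expression at RTL, where the bit widths are already baked in.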

Bowyer: We are seeing people bringing the algorithm into the hardware verification process. If you look at 5G or the Alliance for Open Media's AV1, the hardware is being built while the standard itself is still being developed. A lot of the change in verification is that you are getting more people involved at the same time. We have the foundation done. There is formal, there is coverage, there is linting, there is simulation, and we have been able to lay out methodologies and how you connect that to RTL verification. Now that is in place, you can go into the C world and say I will move my hardware development earlier and I will design the hardware and the algorithm at the same time. We will see how that works with 5G. It is the biggest thing going on today in the development world.

Pursley: At one point, verification took away from the value of the HLS flow. That is no longer the case. Through methodologies, we have enabled teams to use their verification assets throughout the flow. With HLS you have SystemC and you can simulate the functionality. It can be done in C, too. Then you take it down and simulate it with the interconnect and estimated delays. Then you can do the actual RTL. So you are separating the three phases: the algorithmic, the protocol-accurate, and then the fully timed hardware. With a good methodology, you can structure the verification flow to pick up the right problems at the right time. If you just get to the point that the RTL doesn't work and you have to look at the functionality, the interface, and the timing all at once, it becomes too difficult.

Kelf: Shifting to the C level has been critical.

Bowyer: Knowing that your algorithm works before you start protocol verification is incredibly valuable.

Kelf: And you know there is nothing strange going on in the SystemC code that nobody spotted, such as uninitialized states that could leak through and become a nightmare at RTL. These problems have gone away.
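The uninitialized-state bug class Kelf mentions is easy to illustrate in plain C++ (this is not a real SystemC module; the class and names are invented for the example). State that happens to start at zero in a C simulation becomes an unknown register value in RTL, which is why linting and formal checks at the SystemC level look for it.

```cpp
#include <cstdint>

// Persistent state modeled as a class member. If the constructor below
// were omitted, 'total_' would start undefined: it might simulate
// "fine" by luck at the C level but maps to an unknown register value
// in RTL. Giving every state element a defined reset value removes the
// bug class discussed above.
class RunningSum {
public:
    RunningSum() : total_(0) {}                 // defined initial state
    void reset() { total_ = 0; }                // explicit reset behavior
    int64_t add(int32_t sample) { total_ += sample; return total_; }
private:
    int64_t total_;  // persistent state; must have a defined reset value
};
```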

SE: There was a lot of confusion about virtual prototyping (VP) and the models for HLS. They were not the same models and that caused problems?

Bowyer: I don’t think it confused the hardware design teams. It confused the virtual prototype teams. But the hardware design teams were not using virtual prototypes, and I don’t know if they ever plan on using them. So there is market confusion. Hardware teams use hardware models, which are abstract models, and virtual prototyping teams sometimes don’t even use the real algorithms. They are just traffic generators. The confusion was trying to mix them, and we now know that they should be kept apart.

Kelf: SystemC was supposed to be able to be used for everything, but the change has been that we are no longer talking about SystemC. We are talking about an HLS flow, an algorithm implementation flow, and software virtual prototyping, and they are separate things. If we could have a model that had the algorithm and the register configuration, and you could plug that into a virtual prototype with a fast model of the processor, it would be awesome. But the VP needs performance, and as soon as you stick that level of detail into the model you lose that performance. I don’t think we will ever see a full-blown HLS model being used in a prototype.

Bowyer: Maybe in the next language.

Kelf: We can fix everything in the next standard.

Pursley: That was the confusion: a common language being used for both. The synthesizable subset made it very clear that they are not the same.

SE: How do you prove that the RTL generated by an HLS tool has the same functionality as the input model? Do we trust the quality of the tool?

Bowyer: It has not gone away, but we have been able to come up with methodologies that show how to prove that the SystemC and the RTL are the same, or more importantly, that the RTL is correct. Even though it is machine-generated, you can have a set of tests that are run on the SystemC and rerun on the RTL to get 100% coverage, you have formal to back it up, etc. It is not a new technology so much as companies maturing the methodologies they have.

Kelf: We know that RTL to gate level is a fairly clear mapping, at least in an ASIC. From SystemC to RTL, you are inserting pipelines and making timing changes, so it is a hard problem to solve. The key was building consistency between the SystemC level and the RTL. While it crosses a language boundary, if we use SystemVerilog assertions at both the RTL and SystemC levels, do formal evaluation at the SystemC level to get good coverage there, and then reuse the same assertions at RTL, you are a long way along the proof that you need. We call this design consistency checking. The problem is not completely solved, but it is certainly much further along. People also trust the synthesis tools to the point that they trust verification to provide the final check.
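One way to picture the consistency checking Kelf describes is a property stated once and checked at both levels. The sketch below checks a saturation property against an abstract C++ model; on the RTL side the same property would be re-expressed as a SystemVerilog assertion. The clamp model, the 8-bit range, and all names are illustrative assumptions, not taken from the article.

```cpp
#include <cstdint>
#include <algorithm>

// Abstract (untimed) model of a saturating 8-bit output stage.
uint8_t clamp_to_u8(int32_t x) {
    return static_cast<uint8_t>(std::min(255, std::max(0, x)));
}

// Property P, checked here against the abstract model. On the RTL the
// same intent would be an SVA along the lines of:
//   assert property (@(posedge clk) out <= 8'd255);
// Checking one property at both levels is the consistency idea above.
bool property_holds(int32_t x) {
    int32_t y = clamp_to_u8(x);  // widen the output for comparison
    bool in_range = (y >= 0 && y <= 255);
    bool exact_when_in_range = (x < 0 || x > 255) || (y == x);
    return in_range && exact_when_in_range;
}
```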

Pursley: Similar to the mindset shift around virtual prototyping, as HLS has been used more widely and people trust the verification flows, there is now an understanding. Would it be possible to go from the algorithm to the RTL and prove that the algorithm has not changed? Yes, that is possible in theory. But you are also adding pipelining and changing timing, and those things mean you have to do cycle-accurate verification at some point to make sure it works with the rest of the system. Once people realized this, they also realized that this equivalence is not the same as RTL-to-gate equivalence and does not remove the need to do all verification. It is a better way to get to RTL, but it does not remove RTL verification. You still need to do some verification there, especially around integration.