Experts at the table, Part 3: High-level synthesis has been around for years, but do engineers trust the results they get from HLS, and how well is it suited to low-power design?
Semiconductor Engineering sat down with Mike Meredith, solutions architect at Cadence/Forte Design Systems; Mark Warren, Solutions Group director at Cadence; Thomas Bollaert, vice president of application engineering at Calypto; and Devadas Varma, senior director at Xilinx. Part 1 of the discussion looked at the changing market for HLS and the types of customers who are adopting HLS today. Divisions among the panelists started when the languages and the target users were discussed. Part 2 delved into the problems associated with defining a synthesizable subset and achieving model interoperability. The subject of verification was also raised. That discussion continues and moves into the areas of sequential equivalence checking, tool quality and power.
SE: In the early days of RTL synthesis, people had problems believing results before static timing and equivalence checking became available. Do people trust the results they get from HLS?
Warren: Everyone would like to have formal technology, because formal equivalence checking is what enabled RTL to become the golden model. Today, if we have a verification plan based on the higher levels of abstraction, it would be ideal if we didn't have to redo RTL simulation and instead had sequential equivalence checking, but at this level of abstraction that is a lot more difficult. Instead, we have to make sure that you can plug your RTL into the same verification environment and rerun the regressions.
Varma: There are two parts to this. The first is how to verify your C/C++ model against the RTL, and the second is trust in the tool itself. There are still a lot of bugs being found in logic synthesis tools, yet surprisingly the quality of high-level synthesis tools has been very good, and very few bugs have been turning up.
Warren: This may be because we have tons of regression examples.
Varma: The problems come from the ambiguity in the language.
Bollaert: Yes, having formal is ideal, and it is indeed a complex problem. In the past we tried to cover everything that HLS had to offer at once, which was made more difficult by the different coding styles in use, the different languages and the different abstractions. When we brought Catapult into Calypto, we could tackle one problem at a time. That has enabled us to solve problems incrementally and deliver flows. If you are forced to verify the design twice, you lose a lot and will be unable to fully capitalize on the benefits of HLS.
Warren: But you only need to debug once.
Bollaert: The ambiguities in the language make life difficult. What happens if you access an array outside of its bounds? The language does not define that behavior. Sometimes the code may happen to produce the correct results, but when the design gets transformed through synthesis it may produce an error. You can only find this in the RTL, but that is not where you want to find it. Formal allows you to see these differences.
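(To make that ambiguity concrete, here is a minimal, hypothetical C++ fragment, not from the panel; the array, values and function names are invented for illustration.)

    // Hypothetical illustration of the out-of-bounds ambiguity described above.
    // In C/C++, reading coeffs[4] is undefined behavior: a software run may
    // happen to return plausible data from adjacent memory, while the RTL that
    // HLS generates from the same source may read a different storage element.
    #include <cstdio>

    static int coeffs[4] = {10, 20, 30, 40};

    int read_coeff(int i) {
        return coeffs[i];          // no bounds check; i == 4 is out of range
    }

    int main() {
        std::printf("%d\n", read_coeff(4));  // may "work" in C simulation,
                                             // yet mismatch the synthesized RTL
        return 0;
    }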
Meredith: I have seen users apply formal to get greater confidence that their behavioral model and RTL match. Do you see users who forgo RTL simulation? I have not seen a user take that step.
Bollaert: No, they will always do some simulation for peace of mind, and the RTL is likely to go into a larger system that will get simulated. However, it reduces the amount they have to do. For many teams it will be difficult to sign off until they have reached a certain coverage level on the RTL, but it is easier to get coverage on the C model.
Warren: When the world went from schematics to RTL synthesis, EDA followed with a lot of RTL analysis tools. Now, as the use of HLS reaches critical mass, there is a lot of opportunity for tools that work on the high-level models.
SE: Don’t we already have a wealth of analysis tools for C and C++?
Varma: All of the standard software tools, such as profilers, work on C/C++.
Meredith: Those tools are also applicable to a SystemC environment.
Bollaert: SystemC makes it more difficult. The simulation kernel can confuse the tools, coverage can be confused by what is in the header files, and these tools do not understand the notions of coverage that we get from RTL simulators. They do not have state/transition or toggle coverage. In pure C you do not have to worry about concurrency, and you don't have to worry about the header files.
Warren: Today most people are using smart testing, as defined by things such as UVM. Functional coverage is important. We all know that 100% line coverage does not mean you have tested everything in your design, but the reverse is informative: anything less than 100% means you know there are things that are untested. Every company has its own coverage requirements.
SE: Should you be working with the Accellera UCIS (Universal Coverage Interoperability Standard) to define the coverage models for HLS before we get into the same problems that we have for RTL coverage?
Meredith: There are several activities going on (the Multi-Language Working Group, UCIS, SystemC verification), and these need to come together, but it is already too late for a standard to head off these issues. Numerous semiconductor and systems companies already have their own methodologies, libraries and flows for addressing them.
SE: Power is an important issue these days. How well does HLS tackle this problem?
Warren: HLS is extremely well suited to low-power design. Everyone knows that the biggest gains come from adjusting the architecture, and HLS allows you to do what-if analysis and see what the power, performance and area are for each alternative.
Varma: The key is changing the sequential behavior of the design. The way to reduce power is to increase parallelism so that the work can run in slower, lower-power threads. Frequency is the enemy of power. What is not there yet is the ability to set a power goal and tell the tool to satisfy it. But HLS is continuing to see adoption because of its ability to reduce power.
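(As a hedged sketch of that trade-off: the fragment below uses a Xilinx Vivado HLS-style unroll pragma purely for illustration, and the function and data are invented. Unrolling creates four parallel multiply-accumulate units, so the same throughput can be delivered at a lower clock frequency, which is where the power saving comes from.)

    // Sketch: trading parallelism for frequency. With the loop unrolled,
    // four multiply-accumulates execute per cycle instead of one, so the
    // block can hit the same throughput at roughly a quarter of the clock.
    void dot4(const int a[4], const int b[4], int *out) {
        int acc = 0;
        for (int i = 0; i < 4; i++) {
    #pragma HLS UNROLL
            acc += a[i] * b[i];
        }
        *out = acc;
    }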
Meredith: HLS can also reduce power in ways that would be very difficult to do at RTL. For example, choosing a state encoding in which only one bit transitions at a time, on state registers that have a lot of hardware hanging off them, can eliminate a lot of muxing, switching, toggling and glitching.
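(Meredith's example amounts to a Gray-style encoding. A minimal sketch, with hypothetical state names:)

    // Gray-style state encoding: consecutive states differ in exactly one bit,
    // so the logic fanned out from the state register sees one toggling input
    // per transition instead of several, cutting switching and glitching.
    // Binary: 00 -> 01 -> 10 -> 11   (the 01 -> 10 step flips two bits)
    // Gray:   00 -> 01 -> 11 -> 10   (every step flips exactly one bit)
    enum State { IDLE = 0, LOAD = 1, RUN = 3, DONE = 2 };

    // Classic binary-to-Gray conversion.
    unsigned to_gray(unsigned n) { return n ^ (n >> 1); }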
Warren: Clock gating is another one. HLS has a much higher-level view of the design, so it can move logic around.
Bollaert: I don’t think we are that far away from having HLS meet a power target. One of the missing pieces was that you can’t optimize what you can’t measure and we have solved this now.
Varma: The key here is not accuracy but fidelity. When the accuracy is in the range of +/- 20%, you can only track trends; you can't optimize against an absolute number. We have had customers ask for power estimation tools that always err on the high side: +20%, minus 0.
Bollaert: I don’t think relative power estimation is good enough. You have to know if you are within the power envelope. Accuracy is necessary and you can get this by doing the analysis on RTL, with the target libraries, possibly with parasitics.
Varma: Yes, we have an advantage there because we control the silicon, so we can predict power to within single-digit percentage accuracy.