Experts at the table, Part 2: Programming models and standards maturity are still holding back adoption, and the real prize – verification – remains somewhat elusive.
In part 1 of this experts series on high-level synthesis (HLS), Semiconductor Engineering sat down with Mike Meredith, vice president of technical marketing at Cadence/Forte Design Systems; Mark Warren, Solutions Group director at Cadence; Thomas Bollaert, vice president of application engineering at Calypto; and Devadas Varma, senior director at Xilinx. The initial part of the discussion looked at the changing market for HLS and the types of customers who are adopting HLS today. Divisions among the panelists started when the languages and the target users were discussed.
SE: Where are we with the standardization of programming models?
Warren: It is a bit different from the challenges faced with the RTL transition. I learned RTL from the purple Synopsys methodology manual (Keating, Bricaud – Reuse Methodology Manual for System-on-a-Chip Designs). It showed you how to code a state machine, and almost all designers adopted that coding style. In HLS we accept the wonderful expressiveness of C++, and there are a thousand ways to code anything. This makes it very difficult to define the synthesizable subset, and that subset is very dynamic. We are constantly expanding it.
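As a rough illustration of that expressiveness problem, the two C++ functions below compute the same result in styles that HLS tools treat very differently. This is a sketch written for this article, not taken from any tool's documentation, and which constructs a particular tool accepts will vary.

```cpp
#include <numeric>
#include <vector>

// Style A: static array, compile-time loop bound. This is the kind of
// restricted C++ that most HLS tools can map directly to hardware.
int sum_static(const int data[16]) {
    int acc = 0;
    for (int i = 0; i < 16; ++i) acc += data[i];
    return acc;
}

// Style B: the same computation written with a dynamic container and the
// standard library. Perfectly good C++, but the heap allocation behind
// std::vector typically falls outside a synthesizable subset.
int sum_dynamic(const std::vector<int>& data) {
    return std::accumulate(data.begin(), data.end(), 0);
}
```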
Bollaert: There is a blue book! (Fingeroff – High-Level Synthesis Blue Book) It is possible to document proper coding styles, just as was done for RTL. You can still write RTL in different ways, and different tools will prefer different ways. We can do the same for C, C++ and SystemC, and people will refer to that. Now, when it comes to abstraction, there is a lot of work to be done. When it comes to things like loosely timed and approximately timed models – what do they really mean? None of that is standardized, and it is not the focus of the synthesis working group (Accellera SystemC Synthesis Working Group). I would like to see more of that, because the lack of it is limiting the adoption of SystemC for synthesis. We would all benefit from a lot more clarity around synthesis and the abstractions.
Meredith: The efforts of the Accellera working group are limited in several ways. The work is to find the common subset and to document it. Do you or don't you support templates, and how? What are the semantics for how reset is interpreted? All of the tools that accept C++ are in fair agreement here. That document is good at capturing what is agreed upon and at identifying the things that are inherently not synthesizable or beyond the scope of the technology we have today.
Warren: This provides confidence that any code I write is not going to lock me into a proprietary tool.
SE: But does this really make you tool-independent?
Meredith: The first iteration of the subset standard will not fully satisfy the interoperability requirement because some of the semantics are not covered. That will be the next job for the group. But documenting some limitations is helpful, and covering a lot of the space is better than having no standard at all.
SE: Where are we with verification and the HLS flow?
Warren: Verification is turning out to be the big win. We have always gotten feedback that verification is the longest pole in the tent. HLS, by allowing you to code at a higher level of abstraction and having that code drive your implementation, means that every simulation you do is functional verification. Bugs found here are bugs that will not appear in the generated code, and because the code is at a higher abstraction it is much easier to read and write, it simulates faster, and the debug turnaround is much faster.
Meredith: I often see customers who have a body of code containing algorithms, and they are simulating that as pure C simulation. You can learn some things about it by doing that. This code can be put into a SystemC structure where things can now operate in parallel. You can continue to simulate that using a transaction-level interface, and there are things you can learn from that. You can do the same with the pin-level handshake and learn some things from that. Finally, you can run some simulation on the RTL and find additional issues. Users have learned to maximize the efficiency of their verification by finding each category of potential bugs at the highest level of abstraction possible. This provides the fastest simulations and minimizes the slower ones.
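The refinement Meredith describes can be pictured with a small SystemC sketch. The module and function names below are invented for illustration; the point is simply that the original C++ algorithm is reused unchanged inside a SystemC process with a transaction-level (sc_fifo) interface, so the same checks can run at this level before moving down to pin-level and RTL simulation.

```cpp
#include <systemc.h>

// Pure C++ "golden" algorithm, unchanged from the untimed model.
int scale_and_offset(int sample) { return sample * 3 + 7; }

// Synthesis-oriented wrapper: the same algorithm, now a parallel process
// communicating over transaction-level sc_fifo channels.
SC_MODULE(ScaleUnit) {
    sc_fifo_in<int>  in;
    sc_fifo_out<int> out;

    void process() {
        while (true) {
            out.write(scale_and_offset(in.read())); // blocking transactions
        }
    }

    SC_CTOR(ScaleUnit) { SC_THREAD(process); }
};

// Testbench: drive stimulus and compare against the golden C++ function.
SC_MODULE(Tb) {
    sc_fifo_out<int> stim;
    sc_fifo_in<int>  resp;

    void run() {
        for (int i = 0; i < 8; ++i) {
            stim.write(i);
            sc_assert(resp.read() == scale_and_offset(i));
        }
        sc_stop();
    }

    SC_CTOR(Tb) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<int> to_dut(4), from_dut(4);
    ScaleUnit dut("dut");
    Tb tb("tb");
    dut.in(to_dut);   dut.out(from_dut);
    tb.stim(to_dut);  tb.resp(from_dut);
    sc_start();
    return 0;
}
```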
Varma: One problem associated with higher levels of abstraction is that they hide a lot of the details, and this is both good and bad. Because of this we see four different areas of verification: validation, verification, profiling and debug. When you design a block using HLS, it will be part of a system, and there are communications and data transfers. Sometimes there is a clear partition between job functions, but in the FPGA arena one person or group has to deal with all aspects of verification. For example, in debug we often have to go to the bit level and abstractions below RTL in order to find glitches or timing issues. But people who write HLS code are dealing with much higher abstractions. This creates issues. We had to invent new verification methodologies that cover all of these abstraction levels. There are API standards related to OpenCL that return profiling information, and all tools use them. I don't think we have reached that level with SystemC.
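For reference, the profiling hooks Varma alludes to are part of the standard OpenCL host API. The sketch below shows the usual pattern of timing a kernel with clGetEventProfilingInfo; the queue, kernel and sizing names are placeholders, error checking is omitted for brevity, and the queue is assumed to have been created with CL_QUEUE_PROFILING_ENABLE.

```cpp
#include <CL/cl.h>
#include <cstdio>

// Enqueue an already-configured kernel and report its execution time using
// the standard OpenCL event-profiling queries.
void report_kernel_time(cl_command_queue queue, cl_kernel kernel,
                        size_t global_size) {
    cl_event evt;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr,
                           &global_size, nullptr, 0, nullptr, &evt);
    clWaitForEvents(1, &evt);

    cl_ulong start = 0, end = 0;   // timestamps in nanoseconds
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, nullptr);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, nullptr);

    std::printf("kernel time: %.3f us\n", (end - start) / 1000.0);
    clReleaseEvent(evt);
}
```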
Bollaert: This discussion started with the statement that verification is the big win. But to qualify that, it is only true if things are done properly in the first place. Verification is the big prize, but today there are a lot of open questions when it comes to verification. If the high-level model is going to become the golden model, you need to be able to truly verify that, and that involves verification completeness and confidence. New questions arise such as how to do coverage and how to do property checking on the high-level model. It is not enough to just get a warm and fuzzy feeling.
SE: Given the current methodologies, is the testbench ready at this stage in the flow?
Bollaert: Yes, the testbench can be ready. The problem is making sure that you can close verification on the high-level model. It has been suggested that you can do more verification as you add more detail until you reach RTL. Is the suggestion that you verify five times? That will not cut it. The reason many people like C++ rather than SystemC is that they can leverage a lot of the software environment to do coverage, profiling, debug and assertions. You can do some of it in SystemC, but it is a lot more difficult. Once you have done synthesis, you also want to make sure that the RTL is true to the source, and that is where formal verification becomes an important component.
Meredith: There is an important piece of methodology that is often not obvious to a new user. That is the importance of having a verification suite in place before you dive into the details of synthesis. Why even try synthesizing from a model if you haven’t actually validated that the model does something of interest? Users who are successful with HLS learn to verify early and build an environment that gives them confidence that the behavioral model is correct.
In part three of this roundtable, the panel looks at formal equivalence checking, tool quality and innovation.