Continuing Challenges For Open-Source Verification

Is there an open-source business model that works for verification and debug?


Experts at the Table: This is the final part of a series of articles derived from the DVCon panel that discussed Verification in the Era of Open Source. It takes the discussion beyond what happened in the panel and draws on some of the questions that were posed but never presented to the panelists due to lack of time. Contributing to the discussion are Ashish Darbari, CEO of Axiomise; Serge Leef, program manager in the Microsystems Technology Office at DARPA; and Philippe Luc, director of verification for Codasip. A few responses from the session’s comment stream are also included. Part two is here, and part three is here.

SE: Can you explain how open source changes our collaboration model?

Leef: This has been demonstrated in the software world. I’m not sure there are reasons to expect things would play out differently on the hardware side.

Darbari: I don’t believe the collaboration model needs to be any different from what it is in a corporate setting. The best way is to get a commitment of time from all parties involved and run the project as if it were run within a single organization.

SE: How will adoption of open-source hardware open opportunities to support new collaborative schemes?

Leef: Open-source hardware IP presents opportunities for potential attackers to inject nearly undetectable Trojans. There are no tools that can deterministically establish the presence or absence of malicious hardware, and examination of thousands of lines of RTL by humans is neither practical nor effective.

Darbari: It brings together different domain experts to foster innovation. For example, you have expert designers and verification engineers from completely different organizations sharing a virtual room, hammering away at new design changes and verification.

SE: Will it trigger even more open-source SW development?

Darbari: It is already happening with RISC-V, for example, with new ports of Android. It won’t be too long before we have a full custom Linux for a high-end RISC-V core and SoC.

SE: Instead of personally customizing source code, wouldn’t it be better to contribute back to the initial project?

Leef: There are situations where the extensions represent functionality of limited interest to the community, or contain proprietary or classified ideas. This problem could be mitigated by consistent public interfaces, but those don’t generally exist in the unstructured world of open source.

Darbari: It may make sense to do this for IP blocks that end up having a free specification. But for custom extensions, purpose-built by a silicon vendor to gain a market edge, the vendor may choose not to donate them back.

Luc: There are some things that we want to keep closed. These are our competitive advantage. There are technologies that we developed and use internally, and discussions are underway about making them open source. That would help the community by giving RISC-V customers a tool to leverage.

SE: One of the most popular verification tools in use today is UVM, which is available via open source. It has been wildly successful and beneficial for both users and tool vendors. SystemC is another successful open-source verification tool. Are these models for future tools?

Leef: In order for them to succeed long-term, there must be an engaged user and developer community that finds the technology sufficiently compelling to continue investing effort in it. Where such consistency and continuity don’t exist, there needs to be an economic motivation to prevent the open-source technologies from decaying and reverting to limited use by academics and hobbyists.

Darbari: UVM is not a tool. It is a methodology, and one that is hardly universal. But in the sense that a UVM standard exists, we also have formal verification technologies that are based on the open standards of SVA and PSL. I believe the technology used isn’t the main point. It’s what is done with that technology. Verification is costly, and solid verification takes time and expertise. I’m not sure how much of this can be in the public space.

Luc: Complexity is a problem. If the testbench is more complex than the RTL, something is wrong. The testbench should be simpler than what we expect to verify.
SystemVerilog/UVM is not my favorite language because it is too verbose. For a CPU, where there is a standard interface with big potential for reuse, it makes sense. But if we go inside the CPU, where we do block-level verification, the potential for reuse is more limited. The benefit of UVM being universal is low there.

SE: There’s always been interplay between tools and methodology. Are we at the point where effectively developing new methodologies involves extending the tools or developing new ones? If so, what is the impact of open-source tools?

Leef: Today’s methodologies should influence the definition of tomorrow’s tools. There are all kinds of verification methodologies that multiple companies have implemented and view as their differentiation. But in many cases, significant portions of those methodologies exist to adapt overly generalized tools to their needs, or to work around their limitations. These methodologies provide insight into what’s wrong with today’s tools and should be considered when requirements for new tools are being devised.

Darbari: I agree. There is room for innovation and for building new technologies and methodologies. I can speak for formal methods. In the past, several proprietary tools were built that continue to be used in some organizations, and they are great tools and methodologies. If they were available for wider consumption, it would accelerate formal verification adoption.

Luc: We have full random, which is too random, and directed tests, which are too directed. Some sequences trigger specific optimizations within a CPU. Consider a memory prefetcher. It detects a sequence of addresses and then prefetches subsequent addresses in advance, without an explicit load request. Triggering it requires sequences of addresses, and if we just do random, it will never get triggered. In our verification, we apply a mix of sequences or patterns that we know will trigger optimizations within the CPU, or that we know will trigger special cases, with a little bit of randomness applied to them.
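
To make that mix concrete, here is a minimal Python sketch, not from the panel, of stimulus that blends directed strided address streams (the kind that train a prefetcher) with a dash of randomness. All function names, parameters, and constants are hypothetical.

```python
import random

def strided_sequence(base, stride, length):
    """A directed pattern: addresses with a fixed stride, the kind of
    stream that trains a hardware prefetcher."""
    return [base + i * stride for i in range(length)]

def fuzzed_sequence(base, stride, length, flip_prob=0.1):
    """The same pattern with a little randomness: occasionally perturb
    an address so corner cases around the prefetcher still get hit."""
    seq = []
    addr = base
    for _ in range(length):
        if random.random() < flip_prob:
            seq.append(addr ^ (1 << random.randrange(4, 12)))  # perturbed access
        else:
            seq.append(addr)
        addr += stride
    return seq

# Test mix: directed streams known to trigger the prefetcher, slightly
# fuzzed variants, plus fully random word-aligned addresses.
stimulus = (strided_sequence(0x1000, 64, 16)
            + fuzzed_sequence(0x8000, 64, 16)
            + [random.randrange(0, 1 << 20) & ~0x3 for _ in range(16)])
```

Pure random stimulus almost never produces such strides, which is why the directed patterns have to be seeded in deliberately.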

SE: Why is it that there isn’t a good, open-source, mixed-language simulator that is capable of running a UVM simulation? Is it just the complexity of the languages? Is it a lack of interest or expertise in the open-source community?

Leef: The three major commercial simulators are pretty good, and an argument for creating a fourth would have to pass an ROI test before someone invests effort and/or money. There are capable open-source standalone simulators, but they have not naturally evolved to support UVM, because the degree of effort required is not supported by the economics, and development focused on aggregating functionality is not as much fun as inventing new kernels.

Viewer 1: The market for a good simulator is minuscule compared to other open-source projects. Silicon verification is a tiny niche in the general software world. In my mind, open source is good for development of verification environments, but it doesn’t address simulation itself, and I doubt it will any time soon.

Viewer 2: GHDL is a FOSS VHDL simulator that can run simulations as well as any commercial tool. If you need free and unconstrained simulators to handle your verification, it’s an option used by many ASIC/FPGA professionals today. The value in commercial simulators comes from handling mixed languages, GUIs for debugging, etc.

Darbari: For any tool, as design size scales, performance becomes a major challenge. But that can be addressed. Designing great debuggers is not easy, and debug consumes 70% of the verification time, so unless an open-source debugger beats the commercial ones, verification using only open-source tools will remain a pain.

Luc: Simulators just calculate what your logic gates’ outputs are in terms of their inputs. The job is fairly easy. Debugging with waveforms, with trace analysis, and with schematic viewers, these are examples of capabilities that commercial tools provide very well. They are very good for debugging RTL. Debugging UVM is more complex, even with commercial tools.
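
As a rough illustration of Luc’s point, and not anything shown in the panel, the core of a logic simulator can be sketched in a few lines of Python: evaluate every gate’s output from its inputs until the netlist settles. The netlist and signal names below are hypothetical, and real simulators add event queues, delays, and 4-state logic on top of this.

```python
# Toy netlist: name -> (evaluation function, input names). All names made up.
NETLIST = {
    "n1":  (lambda x, y: x & y, ("a", "b")),
    "n2":  (lambda x, y: x | y, ("b", "c")),
    "out": (lambda x, y: x ^ y, ("n1", "n2")),
}

def simulate(inputs, netlist=NETLIST, max_iters=10):
    """Repeatedly evaluate every gate until no value changes (settles)."""
    values = dict(inputs, **{n: 0 for n in netlist})
    for _ in range(max_iters):
        changed = False
        for name, (fn, ins) in netlist.items():
            new = fn(*(values[i] for i in ins))
            if values[name] != new:
                values[name] = new
                changed = True
        if not changed:
            break
    return values

print(simulate({"a": 1, "b": 1, "c": 0})["out"])  # -> 0 (n1=1, n2=1, 1^1=0)
```

The hard, valuable part is everything around this loop: performance at scale, and the debug tooling Luc describes.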

Viewer 3: It is not very hard for me, as a tool user, to write non-standard code that compiles fine. But that can crash expensive tools. Why is that? Sometimes I think the problem is not so much in the tool vendors’ quality control, but in the languages themselves.

SE: One of the panelists pointed out that open-source hardware is very different from open-source software because hardware cannot be changed. That’s true for ASICs but not for FPGAs. How does that important difference change your answers to open-source verification?

Darbari: Well, that’s not entirely true. While you can refresh the bitstream on your FPGA at much lower cost than an ASIC re-spin, you still need to be sure of the quality, and that no bugs were missed. You don’t want an FPGA controller in a nuclear power station to trigger an alarm spuriously and shut down the plant. We should use the best tools to hunt down bugs, and I don’t care whether they are open source so long as they are the best. Open-source verification should not just focus on the economics of cheaper tools, but should also weigh the cost of the bugs that would have been missed had another tool been used. The best tool for the job should be the mantra.

SE: Many of the panelists keep saying the cost of verification tools doesn’t matter, but that’s not true in many cases. An individual cannot afford the licensing costs of EDA vendors, but there are very few alternatives. Startups are limited in their productivity based on the number of tool licenses they can afford. If these products were free and open-source, it would have a significant impact.

Leef: This is a perspective I rarely saw when I was an executive in commercial EDA, but now I hear this point over and over from startups, research labs, and a multitude of engineering teams around the defense ecosystem. Licensing costs, contracting complexity, and general account neglect by the major EDA companies contribute to the growth of a significant underserved community that is ripe for someone to monetize with innovative solutions and novel business models.

Darbari: Affordability is an important point, but the developers of tools need to eat and pay the bills. Why don’t we build an ecosystem where everyone in the chain benefits? We currently do not have this. Small and medium-sized companies thrive on innovation and can build next-gen technology, often much better than established players. But if these companies are to survive, they need to be paid market rate for their tools. This is not happening at the moment. The open-source silicon ecosystem happily wants to consume open-source verification technology without making a commitment to the SMEs developing those tools. If you’re an individual engaged in hobbyist projects, then you can use whatever tools you can get. But if you’re a commercial entity, you should pay for the tools. If you’re a research lab, Europractice-type engagements can give you free access to the commercial tools.

Viewer 4: There’s a related question. When you buy IP or VIP, do you get source or just encrypted blobs? My heart always drops when I get encrypted blobs, because when something goes wrong, it’s going to take days just to get hold of someone who understands the code. If I get the source code, I can dig into it, debug it, add printfs, and figure out whether it’s my problem or theirs. So “open source” in the sense of “you buy it, you get to look at it” is a big issue, in addition to the free stuff.

SE: Open source (e.g., Python and the wealth of available packages) brings innovation to design verification work by complementing the existing required tasks and not needing to reinvent the wheel over and over again.

Luc: I was a big fan of Python even before I started in verification. Python is the language in which I am most efficient at turning an idea into a program that implements it. It is not the fastest language to execute. A big part of our verification is written in Python. Cocotb, a Python-based framework, is interesting, and we keep a close eye on its evolution.
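
For readers unfamiliar with it, cocotb drives a design running in a conventional simulator from a testbench written entirely in Python. Below is a minimal sketch, assuming a hypothetical adder DUT with made-up signal names (clk, a, b, sum):

```python
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def adder_smoke_test(dut):
    """Drive a few operand pairs into a hypothetical registered adder."""
    # Start a free-running 10ns clock on the DUT's clock pin.
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    for a, b in [(1, 2), (10, 20), (100, 27)]:
        dut.a.value = a
        dut.b.value = b
        await RisingEdge(dut.clk)
        await RisingEdge(dut.clk)  # allow one cycle of latency
        assert dut.sum.value == a + b, f"{a}+{b} != {dut.sum.value}"
```

The test is an ordinary async coroutine, which is a large part of why such testbenches stay short compared to their SystemVerilog/UVM equivalents.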

SE: According to SEMI, the total 2019 market for formal, including equivalence and property checking, was about $200 million. In Q1 to Q3 of 2020, it was about $156 million.

Leef: Formal verification is a small market if you compare it to verification, logic synthesis, P&R, DRC/LVS, each of which is in the billions.

Darbari: The formal verification market is currently 40% of the size of the simulation market, which is not exactly a small chunk. Just because the adoption of formal is currently not at the same level as simulation, it doesn’t mean it is less significant. It only means we need to understand how to get it adopted more widely. If, with such widespread simulation usage, 68% of IC/ASIC projects are getting re-spins, and the same number are running late, then something is wrong. Only 17% of FPGAs achieve first-time success, according to Harry Foster’s latest Wilson Research report. If formal verification, via property checking, were used earlier and consistently, the results of future Wilson Research reports would look very different.

SE: 45% of the participants in the Wilson/Mentor/Siemens study are developing with FPGAs, and their risks and costs are very different from those of very large ASICs.

Leef: It’s not just FPGAs. There are also a number of small and mid-size ASICs, needed in modest volumes, in defense applications.

Darbari: There is already significant use of MATLAB for code generation. In the future this is likely to increase. The question is, do we trust that this code is free from bugs?

SE: Can adequate tools be an on-ramp to get more people to best-in-class tools?

Leef: Best-in-class tools can be quite demanding in terms of training and getting up to speed. A basic simulator takes a lot less effort compared to a full-blown verification environment with UVM, VIP, test generation and many other aggregated capabilities.

SE: How do you guarantee that open-source tools have the same or better quality than commercial ones?

Leef: They are of lesser quality today, and only an infusion of effort can close this gap. However, since QA and support are viewed as tedious, the community has limited interest in spending time on them. There needs to be a business perspective in place, a recognition that reasonable quality and support are required to drive adoption. And these things are not interesting work, so you have to pay someone to do them. There needs to be an economic argument to support these expenditures.

Darbari: Fund the companies (SMEs) so they can bring you the best innovation. It goes back to my earlier point of not having adequate funding to support open-source tools.

SE: Is an open-source coverage database format something the panelists think would be useful? It could improve vendor tool interoperability and encourage the integration side of the open-source world.

Viewer 5: There is an open-source coverage database standard called UCIS.

Darbari: UCIS was a step in the right direction, but commercial interests prevented its growth. I do see that an open format would be ideal. For formal verification, Axiomise tools generate coverage data that can be used with any tool.
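
To illustrate what a vendor-neutral export could look like, here is a hypothetical sketch, not UCIS and not Axiomise’s actual format, of coverage data serialized as plain JSON that any tool could parse. Every field name below is made up for illustration.

```python
import json

# Hypothetical, vendor-neutral coverage record: design name plus a flat
# list of coverage points with hit counts and formal proof status.
coverage = {
    "design": "fifo_ctrl",
    "coverage_points": [
        {"name": "fifo_full",  "kind": "cover",  "hits": 12, "proven": True},
        {"name": "fifo_empty", "kind": "cover",  "hits": 40, "proven": True},
        {"name": "overflow",   "kind": "assert", "hits": 0,  "proven": False},
    ],
}

# Any tool that reads JSON can consume this; no proprietary database needed.
with open("coverage.json", "w") as f:
    json.dump(coverage, f, indent=2)
```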

SE: How can one of the panelists say that service model tools never build something useful? Have they ever heard of Linux?

Leef: Red Hat demonstrated that it’s possible to build a successful, scalable business in support of open-source products by coming up with an innovative business model. I believe there are lessons here that the open-source EDA community can leverage.

SE: There is capital in China.

Leef: There is indeed capital there. Actually, it may be the only capital available these days for semiconductor and EDA startups. But Chinese investors face challenges when investing in U.S. companies, and frequently pull back for fear of CFIUS regulations.



1 comment

Jim Lewis says:

With regard to bug fixes and open source vs. commercial simulators: when I submit a bug to GHDL with a detailed minimal working example, it takes one to four days before I have an updated version, assuming it is a feature that is intended to be working. When I submit a bug to a commercial simulator, I do not get a fixed version until the next release, which on average is three months away, but I have had to wait a year.

To be fair, we are still waiting on external names to be implemented in GHDL – however, that could be helped if we had someone making the investment.
