Could an open source approach make standards creation a more unbiased process?
Open source is often thought of as an alternative to commercial software licensed under fairly typical business models. For example, companies such as Red Hat supply variants of open source Linux and charge a subscription for support and maintenance.
Maybe there is an opportunity to leverage Open Source alongside commercial EDA software to provide use model advantages and open development flows. This might allow best-in-class commercial software from different suppliers to be mixed and matched to optimize development flows for particular requirements.
Today, large EDA companies appear to be attempting to create closed verification flows in which they own the key core technology, including simulation, formal verification and emulation, and focus solely on integrating their own tools. When they need a common input or output mechanism, they drive an industry standard but implement only what suits their purposes, and no more.
This suggests a focus on blocking competitors rather than meeting customer needs.
Verification has several good examples. The Unified Coverage Interoperability Standard (UCIS) would be useful for comparing coverage data from various tools, but it has enjoyed only limited success, due mainly to implementation barriers. Portable Stimulus could easily head in the same direction.
The FSDB signal database was a de facto standard, but is now owned by Synopsys. Both Mentor and Cadence appear concerned that it is becoming closed to them, to the point where they are considering an alternative standard. The Universal Verification Methodology (UVM) is useful but is generally considered unwieldy, partly due to the way the standard came about. It is a combination of OVM and VMM from Cadence, Mentor and Synopsys, and the standard had to combine most of both, with the inevitable disconnects and redundancy.
These standards could benefit from an Open Source approach.
Most successful standards have started as a proprietary format that became successful and was then opened and/or donated to a standards organization such as Accellera. This has the advantage that the warts can be knocked off the format before it is frozen and implemented. The obvious example is Verilog, but there have been others.
In the current competitive environment, this natural process seems to have stalled, with more standards being started in committees without the benefit of practical usage, and subjected to infighting among competing committee players. Companies have trouble accepting donations from competitors. With the EDA startup environment faltering, smaller companies have declined as a source of innovative standards, and the large players have more power to reject the ones that do make it.
Open Source could solve this problem.
Open source is not new to EDA. SystemC is built upon C++ class libraries. While the language definition is an IEEE standard, the class libraries that provide the underlying execution functions are released under an Open Source license, allowing any SystemC code to be simulated without a commercial simulator. This is convenient because the class libraries can be incrementally enhanced relatively easily, without requiring the purchase of a simulator to support them. This, in turn, became the basis for a community of SystemC enthusiasts who leverage the language in a variety of ways.
If Open Source could be used as the basis of new standards to get practical usage experience with multiple contributors, then the dynamic around the creation of key standards could be transformed into a more unbiased process. However, it has to start with one entrepreneur willing to put the effort in without a guaranteed payback.
Let’s take an example. Suppose a SystemVerilog library were developed as an Open Source project to provide an abstract format for testbench sequences or assertions, with a natural translation back into lower-level formats. Contributors could extend it for their own needs and feed their work back into the code base. Perhaps a company could charge a subscription for maintaining it, while the code itself evolved through use, generating immediate feedback. A community of users could develop around it. Ultimately it might generate a standard, but one that would have been implemented, used and refined without the competitive haranguing of the committee-engineered approach.
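To make the idea concrete, here is a minimal sketch, in Python rather than SystemVerilog for brevity, of what such an abstract sequence layer might look like. Everything here is invented for illustration: the `Sequence` class, the operation names and the `emit_sv` translation are hypothetical, not part of any existing library or standard. The point is simply that an abstract description can be captured once and mechanically translated down into a lower-level format.

```python
# Hypothetical sketch of an abstract testbench-sequence layer.
# All names here (Sequence, op, emit_sv, do_* tasks) are invented
# for illustration; no real library or standard is implied.

class Sequence:
    """An abstract sequence: an ordered list of named operations."""

    def __init__(self, name):
        self.name = name
        self.ops = []  # list of (kind, argument values)

    def op(self, kind, *args):
        """Append an abstract operation; returns self to allow chaining."""
        self.ops.append((kind, args))
        return self

    def emit_sv(self):
        """Translate the abstract sequence into a SystemVerilog-style task,
        one do_<kind>() call per abstract operation."""
        lines = [f"task {self.name};"]
        for kind, args in self.ops:
            arg_text = ", ".join(str(a) for a in args)
            lines.append(f"  do_{kind}({arg_text});")
        lines.append("endtask")
        return "\n".join(lines)

# Build an abstract sequence, then translate it to the lower-level form.
seq = Sequence("reset_then_write").op("reset", 4).op("write", 16, 171)
print(seq.emit_sv())
```

A second `emit_*` method could target a different lower-level format from the same abstract description, which is exactly the kind of multi-backend translation an open, community-maintained library could grow through contributions.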
Now, there is still a place for core technology development, which requires investment and concentrated effort by a small team of experts to achieve the performance required. It is hard to develop these engines through Open Source. But pulling these core components together and providing a basic use model might well fit an Open Source model. Maybe this is the way we should be developing our standards, as well as the framework by which core engines are brought together.
I welcome comments and thoughts on this.