The Trouble With Semantics

When languages are defined with incomplete or inappropriate semantics, the price everyone pays is orders of magnitude greater than the cost of taking the time to get it right.


Semantics are important. They tell us what something means. Without semantics, you just have a jumble of syntax. The better defined the semantics, the less likely something is to be misinterpreted, because it can be analyzed more rigorously.

The semantics of the English language are not well defined, which is why it is impossible to write a specification that everyone agrees upon. English also has a problem with completeness, in that it is not obvious what has not been said. You cannot easily find the holes in a specification.

In an industry like EDA, you would expect that semantics would be very important — and they are. Without well-defined semantics we would not be able to have the types of automation that we have. But are the semantics perfect? Not even close. Have we learned from our mistakes? No. Do we have the expertise within EDA to construct proper languages? It’s debatable.

Connectivity at the gate level was defined using schematic diagrams. While I am not a mathematician, I believe that so long as the electrical constraints are upheld, a schematic has well-defined semantics. There is no ambiguity. Schematics do rely on the leaf models, which were not as well defined, especially regarding how they should behave in the presence of unknown values or pulses. That could result in differences between what a simulator said a circuit would do and what it would do in reality.

The first time the EDA industry really defined a new language was for RTL. That language was created by Phil Moorby and went through a few iterations before arriving at Verilog. Moorby was a mathematician and computer scientist. The first iteration of the language was HILO, a simulation language for the PCB industry. Verilog dropped some language constructs and added a couple of new ones, designed to retarget the language toward the burgeoning semiconductor industry.

The key thing is that it was a simulation language, and that is what the semantics were defined for. This was fine until logic synthesis came along. Certain constructs, after synthesis, led to different results than simulation had predicted. For example, a combinational always block with an incomplete sensitivity list simulates as if the omitted signal never triggers an update, while the synthesized gates respond to every input. The way around this was to place constraints on usage of the language to make it synthesis-friendly. In other words, the problematic constructs had to be avoided so that they did not cause mismatches.

VHDL solved those semantic issues and is perhaps the only well-constructed language the industry has ever created. The problem is that in solving them, it became a bloated, cumbersome language that was slow to simulate. Except for a few pockets, the industry rejected VHDL and continued to use Verilog.

As soon as logic synthesis was adopted and the industry saw the productivity gains that came from abstraction, the hunt was on for the next leap. Academics around the world tried to create languages and semantics that would take us up to what was then called the electronic system level (ESL). Given the difficulties the industry had previously faced by defining a language for simulation first and implementation second, they chose to focus on implementation first. The semantics were well defined and attempted to bring together hardware and software.

The problem was that almost all software was written in C and C++. Back when ESL was being defined, processors were still getting bigger and faster, and so essentially all existing software was sequential. The semantics of C and C++ are firmly rooted in the computational model of a simple processor: each instruction is executed in full before the next is fetched. Clever compilers and processors managed to overcome some of the limitations of those semantics, delivering faster execution through techniques such as pipelining, branch prediction, and out-of-order execution.
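As a rough illustration of that sequential contract (a hypothetical C++ sketch, not taken from the article), the first loop below has a loop-carried dependence that the language semantics force into a serial chain, while the second has independent iterations that a compiler is free to pipeline or vectorize, as long as the observable result is as if the statements ran one at a time:

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> x(8, 1.5);
    std::vector<double> scaled(x.size());

    // Loop-carried dependence: iteration i needs the result of
    // iteration i-1, so the "as-if sequential" semantics of C++
    // force a serial chain of additions.
    double running_sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        running_sum += x[i];

    // No dependence between iterations: the semantics still say
    // "one statement at a time", but a compiler may vectorize or
    // pipeline this freely because the result is indistinguishable.
    for (std::size_t i = 0; i < x.size(); ++i)
        scaled[i] = 2.0 * x[i];

    std::printf("sum = %.1f, scaled[3] = %.1f\n", running_sum, scaled[3]);
    return 0;
}
```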

With no sign of a standard design language emerging from academia and a growing need for higher-abstraction simulation, the EDA industry decided that the only way forward was to use C and C++ directly. Virtual prototyping started to appear, and the industry could now co-simulate hardware and software. It also dealt with the legacy issue: no software had been written in any of the new languages or abstractions coming out of academia.

But going from C to hardware created a huge challenge that we continue to struggle with today. C is perhaps the worst language that could possibly be used for designing hardware. Many niche areas of the industry have defined alternative languages and computation models, such as CUDA and OpenCL. These have been driven by semiconductor companies developing products for which C or C++ programs are not well suited. They have resulted in much more efficient utilization of the underlying hardware and provided rich programming environments for end users.
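To make the contrast concrete without reproducing CUDA or OpenCL syntax, here is a hedged C++17 sketch of the same idea: the execution policy declares that the per-element work is independent, which is the promise a CUDA kernel launch makes by construction when it maps the work onto thousands of threads:

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> x(1 << 20, 1.0f);
    std::vector<float> y(x.size());

    // std::execution::par_unseq tells the implementation that the
    // element-wise operations are independent and may be spread
    // across many cores or vector lanes. Sequential C semantics
    // alone cannot express this; it has to be declared explicitly.
    std::transform(std::execution::par_unseq, x.begin(), x.end(),
                   y.begin(), [](float v) { return 2.0f * v + 1.0f; });

    std::cout << "y[0] = " << y[0] << '\n';
    return 0;
}
```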

We also know that C and C++ are not well suited for the generation of custom hardware. To solve that problem (tongue in cheek) the EDA industry created SystemC — a language that adds some notions of hardware, but fixes nothing. For years, the developers of high-level synthesis technology argued about how to restrict the SystemC language so that the semantics were defined enough that the synthesis process would be predictable. But SystemC is not C, and the subset of SystemC that is synthesizable is not C. So we have obtained none of the benefits of being C-based.
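For readers who have not written any, here is a minimal sketch of what "adds some notions of hardware" means in practice: modules, ports, a clock, and a registered process, all expressed through a C++ library. The accumulator itself is an invented example, not from any real design:

```cpp
#include <systemc.h>

// A toy 8-bit accumulator. SC_MODULE, the ports, and the clocked
// process are the "notions of hardware" SystemC layers onto C++.
SC_MODULE(Accumulator) {
    sc_in<bool> clk;
    sc_in<sc_uint<8>> din;
    sc_out<sc_uint<8>> sum;

    sc_uint<8> acc;  // internal register state

    void step() {
        acc = acc + din.read();  // updated on each rising clock edge
        sum.write(acc);
    }

    SC_CTOR(Accumulator) : acc(0) {
        SC_METHOD(step);
        sensitive << clk.pos();  // clocked process
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<sc_uint<8>> din, sum;

    Accumulator acc("acc");
    acc.clk(clk);
    acc.din(din);
    acc.sum(sum);

    din.write(3);
    sc_start(50, SC_NS);  // run for five clock cycles
    std::cout << "sum = " << sum.read() << std::endl;
    return 0;
}
```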

I still think there is a need for a well-defined, abstract hardware language. Perhaps the starting point for the language should be reverse engineered from the SystemC synthesizable subset. Perhaps the industry should take another look at the huge body of work out there for suitable, well-constructed languages that target parallel execution. Perhaps we should accept that, as an industry, we have not had great success being language architects.



2 comments

Ron says:

The patented technology … uses Hierarchical Decision Flowcharts as one software language for: requirements, hardware design, software design, synthesis, and simulation. The patent describes a parallel computational machine called a Flowpro Machine, which synthesizes directly from hierarchical parallel design flowcharts to clock-less, asynchronous transistor structures. These transistor structures … are downloaded to an FPGA or ASIC and each executes at the speed of propagation, i.e. a pipeline of execution … A Flowpro Machine consists of many smaller Flowpro Machines, and each … only consumes power when that Flowpro Machine is activated.

Jim Lewis, SynthWorks says:

Is FPGA a niche / pocket market? I ask this as VHDL has 60+% of the FPGA market worldwide.

Is Europe a niche / pocket market? VHDL is quite popular there too.

You mention bloat in VHDL? VHDL-2008 fixed that, other than component declarations. Now that it is 2020, tool support is finally emerging.

OTOH, component declarations are what give VHDL a capability similar to SystemVerilog factory classes without the complication of needing OO. So it is a great capability for verification that I use frequently.

You regurgitate the speed issue, so I was wondering, is that an RTL speed issue or a gate speed issue? My understanding is that it is a gate issue.

When working toward VHDL-2008, I suggested that we adopt the Verilog netlist as the VHDL netlist language. 🙂 One of the vendors rejected this, as their claim is that the speed issue with VHDL gates is a matter of the vendors investing in optimizing VITAL, and not a fundamental issue with VITAL.

So if we are developing an ESL language that targets the current FPGA industry and designers, then we ought to start from VHDL. There we already have a good user base and good semantics.

For the design team, this is the obvious, low-risk migration, since if the ESL aspect does not prove out, then at least we have a good verification model for the system.
