Design By Architect Or Committee?

Design and verification are built upon standards, but who are the right people to create standards and when is the right time? These questions are more difficult to answer.

Everything we do is based on a language. It doesn’t matter if we are talking about design, verification, specification, software or mask data. They all provide a way to communicate intent, and then there are engines that work on the intent to produce something else that is desirable, also based on a language. Over time, the EDA industry has built up a hierarchy of languages from the most detailed implementation aspects of a design to abstract virtual prototypes and the tools to connect them. But it has been a rocky road for many of these languages. It would seem that we fail to learn from the past and keep repeating our mistakes.

Semiconductor Engineering talked to a number of prominent people in the industry who have been involved in language creation. We asked them about lessons that have been learned and advice they have for those constructing new languages today.

Timing
Timing is important, but opinions about it depend on your point of view. Consider Verilog.

“They went too late on that one,” says David Kelf, vice president of marketing for OneSpin Solutions.

Not so, according to Michael McNamara. “The timing was perfect. People were wasting their time with VHDL, and Verilog was a language that worked and had a following. The language was developed by a single architect, so it maintained a consistency of expression and grammar. With a base of loyal users you could open it up.”

But there are people who say Verilog was too early because the real needs for the language were not understood at the time. What was created was a modeling language when the industry perhaps needed a design language.

“For Verilog, decisions were made for performance and ease of implementation,” according to Arturo Salz, scientist at Synopsys. “Some things in the language do not address design challenges, but it would have been hard to do that because we did not understand these until after the language was in place.”

McNamara agrees. “Verilog stands for VERIfy LOGic. It was not about synthesis.”

Bryan Bowyer, senior product marketing manager for Calypto, says that “if one company can do everything with it, then there is no need to standardize. Eventually you may need something from somebody else, and at that point you need the standard.” This implies that the need comes from the company that developed the language.

But this is not enough according to Salz. “It has to be a win-win, a benefit for the vendors and a benefit for the users. It has to create a bigger ecosystem that allows more people to play and to grow the pie for everyone.”

In reality, the decision will always come down to money. “If you can balance the commercial and technical issues then you have found the right time to standardize,” says Kelf.

But the question remains: does the industry need a language that is focused on design rather than modeling? Bowyer doesn’t believe it makes sense, but McNamara disagrees. “There is this art of coding related to things such as inferring muxes and establishing resets. I should be able to say I want a mux or a latch. There are design paradigms that are good to capture. You want to express the high-level intent and get the HDL to do what you want.”
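
To make that concrete, here is a minimal Verilog sketch of the inference problem McNamara describes (our illustration, not from the article; the module and signal names are hypothetical). The same “if” template yields a mux or a latch depending only on whether the else branch is present:

// Complete if/else versus missing else: the classic inference trap.
module infer_demo (
  input  wire       sel,
  input  wire [7:0] a, b,
  output reg  [7:0] mux_out,
  output reg  [7:0] latch_out
);
  // Both branches covered: synthesis infers a 2:1 multiplexer.
  always @(*) begin
    if (sel) mux_out = a;
    else     mux_out = b;
  end

  // No else branch: latch_out must hold its value when sel is low,
  // so synthesis infers a level-sensitive latch, which is often
  // unintended. There is no direct way to state "I want a mux here."
  always @(*) begin
    if (sel) latch_out = a;
  end
endmodule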

Many of the problems with Verilog have been fixed in the tools. “We have a feature for Verilog and VHDL that behaves more like hardware, but it is not standardized,” says Salz. “Doing that would be very difficult. We have also worked on race elimination techniques that make simulation behave in a way that is congruent with hardware. This is important when you want to move across platforms (simulation – emulation – silicon) because timing behavior needs to be the same or you can never match results from these sources.”
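
The races Salz refers to follow from Verilog’s event scheduling, which leaves the execution order of concurrent blocks undefined. A minimal sketch of the classic case (our illustration, not Synopsys code):

// Two always blocks fire on the same clock edge; the standard does
// not define which runs first, so with blocking assignments the
// value captured by b depends on the simulator.
module race_demo (input wire clk);
  reg a = 1'b0;
  reg b = 1'b0;

  always @(posedge clk) a = ~a;   // blocking: updates a immediately
  always @(posedge clk) b = a;    // may read old or new a: a race

  // The hardware-congruent form uses nonblocking assignments, so
  // both blocks sample pre-edge values, as real flip-flops do:
  //   always @(posedge clk) a <= ~a;
  //   always @(posedge clk) b <= a;
endmodule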

New language?
Perhaps Verilog wasn’t perfect, but it was good enough and the industry learned to deal with its quirks. But after 20 years, a new language and a new abstraction were required, and with them came the opportunity to create a better language, one that didn’t suffer from the same problems. New abstractions have to overlap with what existed in the past, so any new language had to have an RTL abstraction. The question was: what was the right starting point?

A small company, Co-Design Automation, had created a new language called Superlog that was beginning to gain some attention in the late ’90s. “This had a single language architect, and he did look at the semantics first,” recalls Kelf. “He wanted to use Verilog as the basis, but he slightly changed it and went down a semantically correct path. However, everyone got to him and he eventually had to change it back. He tried to make it clean.”

Then the industry made a sudden move and C-based languages became very attractive. “It has the advantage of having lots of people who know how to program it,” says McNamara. “They may not be great programmers, but there are lots of them. When you introduce a new language such as e, you have to teach people and that takes a lot of time. SystemVerilog needs all sorts of training.”

But the biggest difference with C-based languages was that the compilers were free, and when Synopsys launched SystemC, the simulator became free as well. “SystemC was not a great success,” Salz concedes. “It was not a win-win. The compiler was free, so you left the vendors with no way to make money.” Salz says that Synopsys, and for that matter the whole industry, did not think it through. In the end, “it just looked like another clunky RTL language with the additional problem of how do vendors make money.”

But the users loved the price and some managed to put together environments that allowed them to do useful work. There were also pragmatic reasons that caused its proliferation and adoption as the language of choice for high-level synthesis. “C is just a modeling language,” says Bowyer. “It is a reference for your hardware and many algorithms and protocols are written in it – this is why we went with C. It was a pragmatic decision.”

It never takes long in a discussion about SystemC before everyone is talking about its debug problems and the number of hardware structures that cannot be effectively described using it. Salz provides an example of a scan chain. “I have seen people try to express a scan chain in SystemC and the compiler dies. A shift register with 2 million elements is not something the compilers were tuned to do. Verilog and VHDL handle these with ease.”
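
The contrast is easy to see in code. A hedged sketch (ours, with hypothetical names) of Salz’s example in Verilog, where even a 2-million-element chain is just one vector and one assignment:

// A parameterizable shift register. The whole chain is a single
// vector, which HDL compilers treat as one object, so depth barely
// affects compile time. A SystemC model built from millions of
// discrete C++ objects is what makes the compiler choke.
module scan_chain #(parameter DEPTH = 2_000_000) (
  input  wire clk,
  input  wire scan_in,
  output wire scan_out
);
  reg [DEPTH-1:0] chain;

  always @(posedge clk)
    chain <= {chain[DEPTH-2:0], scan_in};

  assign scan_out = chain[DEPTH-1];
endmodule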

McNamara, who was originally a computer architect and hardware designer, talks about the shortage of hardware-oriented tools built around SystemC. “It was a surprise to me when I learned about things such as waveform tools. The software world still has no way to see all of my variables over a defined time and go back and see what was going on. Lint and coverage tools for C aren’t that good. Eclipse seems nice, but there are very few verification tools.”

As reported in the article “Is SystemC Broken?”, the industry is still looking for the right application for SystemC, and it is gaining traction as the basis of virtual prototypes. The ability to add tools around a virtual prototype will provide a way for vendors to get a return on their investment. But a problem remains: a methodology is needed to help stitch together IP blocks to create such a model, and this is an area of weakness for SystemC and the standards that surround it, such as TLM2.

“EDA has succeeded by exploiting economies of scale, finding where everyone is doing something by hand and adding automation,” says McNamara. “IP is the next level. There is also a bunch of software that goes along with an IP block. How do we plug it all together?”

What can go wrong
The only attempt that has been made so far is IP-XACT, and this has seen very little adoption. Salz points out that engineers tend to focus on the language as the solution. “It has to be methodology that is driving these decisions, so that means we probably have to leave [language definition] until later.”

McNamara feels that IP-XACT was an incomplete standard, as well. “IP-XACT is another one where everything is in the vendor extensions. This makes standards non-portable.”

Salz also points to problems with the definition of power formats. “People are really struggling with how to put these things together. What do I do in hardware, what in software? What does my power controller do?” His frustration is that many of the methodologies have not been well defined.

Listening to these industry experts, it feels as if the pace is quickening and the results are becoming less useful, perhaps because we have not learned the lessons of the past. A new standards effort is just getting started within Accellera. We asked the experts what advice they would have for the committee.

“Make sure there is a strong customer need,” says Kelf. “You need to know how it is going to be used and what it is for. One or two people are not enough.”

“The technology that has been donated does not address all of the problems,” points out Salz. “It tackles part of it and I hear people wanting to use it. However, in the proposed timeline we cannot expect to fix all of the problems.”

McNamara reels off a list of standards that were created and not used. “It may be time to wait a bit.” He also offers a reason why there is pressure to do this quickly. “We often want there to be a second supplier, so long as they are weak. When something becomes multi-vendor it makes the market real, but I want to be top dog. When you have multiple companies looking for the same thing, it cannot be achieved. Are standards the right place to wrestle this out?”

The answer is that this is a useful and necessary effort, and potential users need to spend the time to get involved and make sure they understand the methodology behind it, the limitations it may have, and how those limitations will be addressed in a manner that does not leave quirks for the future. Those who don’t spend the time have no reason to complain later.



5 comments

Graham Bell says:

It would be interesting to see McNamara’s list of standards created and not used…

Brian Bailey says:

I think “not used” is extreme, but there are many standards that have not seen a lot of adoption, such as IP-XACT, SystemC-AMS, even e.

Brian Bailey says:

Bluespec is most certainly a language that was designed by an architect, and while it had some nice capabilities, it never caught on and never saw a second vendor interested in adopting it, or even taking concepts from it and implementing them in another language.

Matthieu Wipliez says:

Bluespec is kind of complicated IMO. Interestingly, you say there is no “second vendor”. But the same is true for HLS tools for SystemC. Sure there are half a dozen tools that take SystemC as an input, but each tool uses its own #pragmas and more-or-less synthesizable subset. Each of these subsets is as proprietary as Bluespec, if not more! At least Bluespec properly documents its language, and, unlike big EDA companies, all documentation is accessible without requiring you to sign an NDA in blood bargaining your soul in the process!

Also, I love it how you superbly ignored the first part of my question. Too bad you might have learned something!

Brian Bailey says:

I hear you about the subsets and pragmas. At the moment, I hear of few companies looking to second source HLS, but at some point I am sure they will.

As for Cx, I am aware of it but have not attempted to use it, so I do not have enough knowledge to be able to make informed judgements. Until I have used a language, it is not fair for me to judge it in either direction.
