Users Talk Back On Standards Process

How does a standard get created? The hard work of balancing different opinions can be frustrating, but the communication is vital.


One of the major themes of DVCon this year was the standard that currently goes by the name of Portable Stimulus (see related story, Portable Stimulus – The Name Must Change). It is not ready for prime time yet, but there was plenty to hear and learn about the emerging standard, including what users think about it and the standardization process. The panel gave users the opportunity to voice their opinions and concerns, a voice that often gets lost in the standards process.

“Portable Stimulus (PS) is a new standard for defining verification intent,” explained Adnan Hamid, CEO of Breker, which has been working on this technology since 2003. “The idea is that once you have a PS model, tools can synthesize tests for the various platforms that you want to run on, such as traditional simulation, post-silicon, etc. There is enough information in these models to be able to generate the stimulus, the checks or scoreboarding, and the use-case coverage. This standard is geared toward creating multi-threaded tests that involve many agents, getting us past the single-threaded UVM sequence so that traffic can be coordinated across multiple agents, multiple processors and so on, covering both hardware verification and software verification.”
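To make the “one model, many platforms” idea concrete, here is a deliberately simplified Python sketch. It is not the PS language or any vendor’s tool, and every name in it is hypothetical; it only illustrates the shape of the flow Hamid describes, in which a single abstract scenario is rendered as test code for two different targets:

```python
# Hypothetical illustration of the Portable Stimulus concept (all names are
# invented): one abstract scenario model, multiple generated test targets.
from dataclasses import dataclass


@dataclass
class Action:
    """One abstract step of verification intent."""
    name: str
    resource: str  # e.g. "dma", "cpu0"


# A single abstract scenario: two agents working concurrently.
SCENARIO = [
    Action("dma_copy", "dma"),
    Action("cpu_compute", "cpu0"),
]


def emit_uvm_sequence(scenario: list[Action]) -> str:
    """Render the scenario as a (sketched) UVM-style simulation test."""
    lines = [f"  start_item({a.name}); // runs on {a.resource}" for a in scenario]
    return "// SystemVerilog/UVM target\n" + "\n".join(lines)


def emit_bare_metal_c(scenario: list[Action]) -> str:
    """Render the same scenario as a (sketched) post-silicon C test."""
    lines = [f"  run_{a.name}();  /* scheduled on {a.resource} */" for a in scenario]
    return ("/* bare-metal C target */\nint main(void) {\n"
            + "\n".join(lines) + "\n  return 0;\n}")


if __name__ == "__main__":
    print(emit_uvm_sequence(SCENARIO))
    print()
    print(emit_bare_metal_c(SCENARIO))
```

The point is the division of labor: the scenario carries the intent once, and each back end handles its own platform details.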

The panelists were asked what PS means to them and what value they believe it will bring. First up was Sanjay Gupta, director of engineering for Qualcomm. “We want to be able to use the same set of specifications across multiple disciplines, across multiple projects and across various teams. Everyone can talk the same language, work from the same specification, and both the designers and the verification team can see what to expect—one standard that can communicate test intent. The biggest problem today is reuse. With PS you can create a test and re-use it across instances.”

Next up was Wolfgang Roesner, fellow for hardware verification and verification tools at IBM. “We consider PS an augmentation of our verification flow. Constrained random is working really well in a lot of spaces, but there are areas where this is not the most efficient method. We are looking at PS as a different mechanism that may be more efficient in creating tests. At the higher levels of the verification hierarchy all the way to the SoC level, what we are looking for is really a different technique based on verification intent. That is really the key. To be able to specify at an abstract level the state space that we want to go after, the scenarios that we want to have run and verified, and be able to close verification from the standpoint of the requirements to the system—and not so much at the lower levels based on the stress we can apply to those model points.”

Mark Glasser, principal engineer at NVIDIA, built on top of the previous value statements, saying that “unlike classic software reuse, we are talking about being able to re-use at different abstraction levels, up and down the design hierarchy from unit level to SoC level. PS will enable us to capture the design intent and be able to use that in all of the different domains. It also means porting across different verification media, so simulators, emulators or FPGA testing, or even on silicon — we want to take the same tests and replay them in all of those different places. The ROI is obvious because we don’t have to rebuild tests in each of those domains. We can start from the same thing and run it once, re-use it in all of the different places, and avoid having to do the same thing over and over in different ways.”

Dave Brownell, a design verification engineer for Analog Devices, agreed: “It really comes down to communication and collaboration. When I was getting started with PS, I drew a map of the development process, and there are six key roles: architect, design, design verification, evaluation, software and test. They all have their own programming languages, they all have their own platforms, and they don’t share much. If you want to write an effective test in any one of those, you have to be an expert in that area, in the environment and in the programming language. So it is very difficult for one person to be able to do all six of those, and for those roles to work together. PS abstracts it up a level so we can capture the verification intent. You will still have the experts building those platforms, but they will be able to talk to each other, take each person’s unique perspective, put it in PS format, and then share it amongst each other.”

Finally, Asad Khan, director of IC engineering at Cavium Networks, had to find something to add to a fairly complete set of value statements. “We need to leverage verification across many platforms to zero in on any defects that we find. If something is found post-silicon, can we reproduce it at the pre-silicon level? We want to unleash all of the different platforms to take a design into production. There is a lot of consensus and agreement that we need to move in that direction. I don’t think it is a difficult issue to get consensus on. The main thing is to agree on a methodology that you will deploy.”

Perhaps the heart of the panel was the feedback the users gave about the standards process itself. “UVM is hard for non-DV experts to pick up, and we need to make sure this standard is not the same deal,” said Glasser. “Make it simple and understandable and get it out there so that we can start getting some feedback on it, and then it will take care of itself.”

An audience member asked why PS was being defined with two languages. Glasser responded that there should only be one. “The standard should identify the semantics of what we are trying to get done, and specify how that should be rolled out within the tools without actually specifying a tool or anything like it.”

Gupta noted that “the main thing to consider is whether we should go with a custom language or use something that already exists. I am not sure where we will end up, but it makes sense to stay with one so we don’t confuse the users.”

Glasser added that “an important thing to note is the way standards committees work. Whatever comes out is what everyone voted on. Anything that gets produced is there because the majority of people voted for it. So if you want to see something different, then you have to show up and vote. It is as simple as that.”

But not all panel members are happy with the way that works. “This has been my first active committee participation and it has been very eye-opening,” says Brownell. “I went in expecting it to be like an engineering team in my company, where everyone is of the same mind and trying to create the best possible standard. We are, but that is not how it works. It frustrates me when we spend two weeks arguing about a keyword. It is a privilege to work with the people on the committee, who are incredibly smart and have passion and are doing really good work, but at the same time it is very frustrating.”

Part of the problem is that EDA vendors and users have different objectives when it comes to standards participation. “There is a balance that we are keeping every week,” says Roesner. “There is a language that is being defined and designed. It takes a lot of effort, time and knowledge to make something that is self-consistent and that works. We want the language astronauts to not go too far into outer space. They have the advantage of being out there seeing Earth from a distance so they can think in abstractions. We are users down here on Earth and we don’t want to deal with untethered astronauts. At times that is a challenge. It is a challenge for my team to make sure the user pain points and the use cases are really being addressed by this work. This is a healthy struggle.”

Added Brownell: “The vendors have their customer bases and they understand their use cases. But they don’t understand the stuff we are doing. No vendor knows everything we are doing. We are not looking to solve the exact same thing that you have solved, or are thinking about solving. It is the communication and getting into it with each other and understanding the big picture. It goes both ways. Everyone is comfortable in their world and what they understand, but we are trying to make this bigger and for everyone. So we need to be a little more open and sharing with each other.”

Roesner sees a similar issue. “More openness and more inclusiveness are going to help the case of portability. If the standard is shaped by some preexisting product and nothing else, that will be limiting and will constrain portability.”

For users, the big issue is how this new standard will help them close verification faster or cheaper. Brownell provided one quick example of how it has already saved his team effort. “Traditionally, we try to re-use tests from the previous project because most of the peripherals were reused. It was manual conversion, and we hired five contractors for six months and that was all they did: get the peripheral tests running with the new interrupt maps and the new triggers. That work is boring and simple. The first thing we are doing with PS is automating the generation of those interrupt tests and all of the connections at the SoC level. PS can do a better job than the people could.”
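As a rough illustration of the kind of automation Brownell describes, here is a minimal Python sketch. The interrupt map, function names, and emitted test calls are all invented for the example; only the pattern of generating one test per interrupt line from a machine-readable map reflects the quote:

```python
# Hypothetical sketch: generate a directed connectivity test for each entry
# in an SoC interrupt map, instead of porting each test by hand.
# All names and formats here are invented for illustration.
IRQ_MAP = {
    "uart0": 17,
    "spi1": 23,
    "timer2": 31,
}


def generate_irq_test(peripheral: str, irq: int) -> str:
    """Emit a minimal test that fires and checks one interrupt line."""
    return (
        f"// auto-generated interrupt test for {peripheral}\n"
        f"trigger_interrupt({peripheral});\n"
        f"expect_irq_line({irq});\n"
        f"clear_interrupt({peripheral});\n"
    )


if __name__ == "__main__":
    for peripheral, irq in sorted(IRQ_MAP.items()):
        print(generate_irq_test(peripheral, irq))
```

When the interrupt map changes on the next project, the tests regenerate from the new map rather than being rewritten by contractors.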

But some things need to happen to make that possible. “There will be pain and cost to invest in this to make it work,” says Roesner. “There is a different way of thinking once we switch from constrained random to scenario-based verification. The industry has to address the education part of this. It must be made clear how to build a methodology that uses these mechanisms.”

The early adopters are already getting value from PS, and they believe they have only begun to scratch the surface of its capabilities. While it may be frustrating to work through the standards process, they are committed and want to make sure that what is produced is the best thing for the whole industry.

Glasser reminded the industry that it needs to get involved before it loses that opportunity. “It is really important that the user community is involved in addition to the EDA vendors. At the end of the day we—the users—are the people who have to live with this all day long. We are the ones writing the tests, debugging the tests, using the technology that comes out of this. It is very important that the users are involved, and that their input is heard and contributes to the standard.”



