Is Art Acceptable In Verification?

Fuzzy definitions, complexity and a vast range of designs make it difficult to draw conclusions.

The industry appears to have accepted that verification involves art as well as science. That acceptance usually rests on one of three reasons: the problem is large and complex; the understanding and tools needed to automate it do not yet exist; and if it could be made a science, all of the jobs would have migrated offshore.

Today, designs are built from pre-verified blocks, and new devices targeting markets such as the IoT are simpler and often variants of pre-verified platforms. Design turnaround times for these devices are a couple of months or less. In addition, there is growing recognition that a product is more than just silicon. The scope of verification is growing to include the entire system, including software. Thus, the industry may have to rethink verification and will certainly need more automation than exists today. So how can it be made more science than art?

A panel at the recent Design and Verification Conference (DVCon) discussed this issue. Panelists included Harry Foster, chief scientist at Mentor Graphics; Janick Bergeron, fellow at Synopsys; Ken Knowlson, principal engineer at Intel; Bernard Murphy, chief technology officer at Atrenta; and JL Gray, senior architect at Cadence. What follows is a condensation of the ideas expressed during that panel.

Unless designs stop adding new features, there will be a need for creativity in both the design and the verification of it. “There has been quite a lot of progress over the past 10 to 15 years,” points out Gray. “Constrained random, functional coverage and verification plans are a few examples. This is much better than it was during the days of directed test.” Gray poses an interesting question: “Have we gotten too good at what we are doing and thus allowed the designers to get away with producing lower-quality code?”
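For readers less familiar with the techniques Gray lists, the sketch below shows what constrained-random stimulus paired with functional coverage can look like in SystemVerilog. The transaction class, field names, constraint values and coverage bins are hypothetical, chosen only to illustrate the idea, not taken from any panelist's flow.

```systemverilog
// Hypothetical bus transaction: the solver generates legal stimulus automatically
// (constrained random) and the coverage model records what was actually exercised.
class bus_txn;
  typedef enum {READ, WRITE} kind_e;
  rand kind_e     kind;
  rand bit [7:0]  addr;
  rand bit [31:0] data;

  // Keep addresses inside an assumed 0..127 device window and bias toward writes.
  constraint c_legal { addr < 128; kind dist {WRITE := 3, READ := 1}; }

  // Did we hit both operation types across both halves of the address window?
  covergroup cg;
    coverpoint kind;
    coverpoint addr { bins low = {[0:63]}; bins high = {[64:127]}; }
    cross kind, addr;
  endgroup

  function new();
    cg = new();
  endfunction

  // Generate one random transaction and record it in the coverage model.
  function void next();
    void'(randomize());
    cg.sample();
  endfunction

  function real coverage();
    return cg.get_coverage();
  endfunction
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (100) t.next();
    $display("functional coverage = %0.2f%%", t.coverage());
  end
endmodule
```

The point of the example is the division of labor Gray describes: the engineer's judgment goes into deciding which constraints and coverpoints matter, while the solver and coverage engine handle the mechanical exploration.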

The problem space is growing and putting additional pressure on aspects of the flow. “We want to see production firmware and drivers used more for validation purposes,” says Knowlson. “We cannot afford to keep writing verification firmware for these complex systems that rivals the cost of the production systems.” This puts different demands on software developers, who are not used to being part of the validation flow. “It takes time and effort to meld those two communities.”

The industry also is seeing the fruits of many years of research and investment into static verification methodologies. “There is definitely an increased awareness of the importance of static and formal verification,” points out Murphy. “It is no longer experimental, and it is seen as a necessary component for SoC-level verification.” In an interesting twist, simulation is proving to be unscalable at the system level. “There are too many combinations to check in simulation, and formal represents a better solution.”

Murphy also sees problems tying hardware verification into software. “There are bugs in the hardware and the drivers. We should acknowledge that and figure out a way to communicate to those downstream in the process about what they can do as an actionable next step when problems are found.”

Growing designs have also caused the industry to take a fresh look at what were accepted practices. “We have fairly well-established methodologies for the IP and sub-system levels,” says Foster. “Constrained random and formal are part of that. What is not working well is full-system verification. The metrics we have today do not work well in this domain, and this presents a lot of opportunity and work.”

But when does sub-system end and SoC begin? “It used to be that a system was defined as what people above me were doing,” points out Bergeron. “Today we see it being based on what fits into a simulator. When it gets too large to be placed in a simulator and has to go into an emulator or FPGA platform, the problem becomes somewhat different. Verification of the sub-system is working quite well, and while there is some degree of art, it is well understood. It is at the system level where we have fewer solutions.”

The system-level integration problem is very different from problems of the past. “I am disappointed that I turned out to be wrong about high-level synthesis (HLS),” says Bergeron. “Twenty-five years ago I was on a team that introduced logic synthesis at Nortel, and this revolutionized how we did design. I thought we would see the same with HLS, but this hasn’t happened. This is because we rely on derivatives and IP composition.”

Given that each of the IP blocks is pre-verified, it would appear on the surface that the verification of the integrated system should be easier. “It is not that simple,” points out Foster. “It is not like LEGO blocks. Complexity happens when IP blocks interact with each other. You have to verify that at a higher level. Yes, it is critical that IP is high quality, but you are still obligated to verify the interactions between them.”

“I thought I had had a brilliant idea six months ago related to the verification of control systems and things such as clock and reset management,” declares Murphy. “What we could do is abstract away all of the IP and look at the manager that is controlling the clocks and resets, and just verify that. That way we wouldn’t have to look inside the IPs at all. But customers wanted to look inside the IPs a little bit because there were things such as control logic for power management, so the idea started to crumble.”

To make that kind of verification possible, design practices would have to change. “When these systems are being designed, the designers do not consider that they may need to separate them,” explains Bergeron. “When thinking about applying formal, you need to think about how you are going to separate out the parts so that this type of technique can be used. If you don’t do this and have to do it later, it will be more difficult.”
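To make Murphy's and Bergeron's point concrete, here is a rough sketch of what such a carve-out might look like: assertions written against a hypothetical clock/reset manager, with the IP blocks it controls black-boxed so a formal tool only has to reason about the control logic. The module, signal names and timing bounds are assumptions for illustration, not drawn from any real design.

```systemverilog
// Properties for an assumed reset/clock manager; the IPs it drives are black-boxed.
module rst_clk_mgr_props (
  input logic clk,
  input logic rst_n,      // reset released by the manager
  input logic req_sleep,  // hypothetical power-management request
  input logic clk_en      // gated-clock enable driven to the IP blocks
);
  // Once sleep is requested, the clock enable must drop within 1 to 4 cycles.
  property p_gate_after_sleep;
    @(posedge clk) disable iff (!rst_n)
      req_sleep |-> ##[1:4] !clk_en;
  endproperty
  a_gate: assert property (p_gate_after_sleep);

  // Reset must never be released while the clock is still gated off.
  property p_release_needs_clock;
    @(posedge clk) $rose(rst_n) |-> clk_en;
  endproperty
  a_release: assert property (p_release_needs_clock);
endmodule
```

In practice a module like this would be attached to the manager with a bind statement, which is exactly the kind of separation Bergeron argues must be planned for up front rather than retrofitted later.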

Multiple cores connected to a cache and memory is an example of this type of distributed complexity. What might be possible if the interfaces were better defined? “People build what they do because they believe it will deliver the best power, performance and area,” says Murphy. “If they can get an edge by breaking down the walls, then they will. Smartphones do precisely that. They are tinkering with power management at every level to save picowatts. They will not stop doing that to simplify verification.”

System integration also involves software. “You can’t just take the drivers and plug them together and expect them to work,” says Knowlson. “There are interactions that have to be ironed out. The system integration teams spend a lot of time trying to pull all of this together.”

One of the proverbial problems associated with verification is knowing when you are done. It involves a judgment, and that is part of the art. “There are issues associated with what completion means,” observes Murphy. “But the way you get to that point should be scientific. You cannot say something is 100% compliant, but you should be able to get to a known point scientifically. Coverage is a necessary but not-sufficient item.”

“When we talk about art, people often get confused between ad-hoc and systematic,” points out Foster. “At the system level we lack some of the systematic methods today, but for IPs they are well known.”

Gray agrees, likening verification to the scientific process. “If you are doing something that you already know how to do, then you can do it the same way again and it will be fairly predictable. The challenge comes when you add something that you don’t know how to do, and this adds in an artistic element.”

Adds Murphy: “Creativity is a part of science. If the creative process is used to define how far you have got, then you have a problem. You need a quantifiable process and within the bounds of that process you can be innovative and creative.”

Knowlson suggests that the industry faces two types of problems: what is complicated and what is complex. “Complicated can be addressed by best-known methods. Complex means that we don’t really know how to do it, and best-known methods will not work. This is where the art comes in. How do we address these complex systems that have not been tried before? It is not repeatable.”

That’s a key distinction in verification. “As engineers, we have to know which type of problem we are working on,” says Gray. “If we think it is complex and yet it is only complicated, then it means that someone has already figured out how to do it. The other way around, we may use the best-known methods and yet find out later that they will not work or converge. Figuring out which is which is neither art nor science – it is engineering.”

Bergeron adds a further distinction. “The majority of verification does not involve much creativity. There is art and skill involved, but creativity only comes into play when you need to improve the process. For most designs there is a spec and a schedule, and there is little room for creativity. In each company there are a few people whose job it is to be creative, but the majority are not creative – it just requires lots of skill and experience. It is often called art because it is not easy to codify.”

Tools add structure to a process and turn the art into science. “As a tool provider, it is our job to take the art out so that you can focus your creativity on something more interesting,” says Gray.

Foster provides another way to look at it. “Using something like SystemVerilog is science. When I start to think about what I want to verify, that is art. You cannot automate thinking.”

Semiconductor Engineering would like to hear from verification engineers in the field. How much art and creativity do you see as being required to do your job? Are the EDA companies providing you with the tools necessary to turn your job into a science?


