Adding Order And Structure To Verification

How industry experts reacted to their first encounter with a formal capability maturity model.


You can’t improve what you can’t measure, and methodologies are notoriously difficult to measure. Add in the skills, capabilities and experience levels of the individuals within an organization, which affect its ability to adopt certain technologies, and the problem demands considerable attention.

This is where concepts such as capability maturity models (CMMs) come into play. A CMM allows companies to assess where they stand in terms of skills and capabilities compared with other companies in their industry, and to identify the areas in which they could optimize their processes to gain additional value from their engineering teams. It also can define the risk levels associated with bringing new technology into a methodology.

For several years, Wilson Research has conducted a survey, on behalf of Mentor, a Siemens Business, to assess the maturity of the industry for various aspects of functional verification. As with any survey, trends can be seen only when the questions remain constant. Adding new questions, or modifying existing ones, resets the baseline, so it takes time for new trends to emerge. One area that has changed significantly since the survey started is the state of formal verification (see figure 1).

Fig 1. Growth of formal verification techniques. Source: Mentor, a Siemens Business.

Formal verification has been transformed from a technology that required a Ph.D. to understand and operate into a technology accessible to a much wider audience. While additional survey questions have been added, they lack the depth and structure seen in other parts of the survey. For example, survey results include applications like SoC integration connectivity and X semantic safety checks, but do not include clock-domain crossing. The survey also says nothing about how effectively a certain technology, such as assertion-based verification (ABV), is being used.

The first step in understanding where the industry is today, so that progress can be tracked, is the creation of a maturity model. That is what Oski Technology has done, and it presented its first version to a group of industry experts earlier this year. Semiconductor Engineering was invited to that unveiling. To promote free discussion, all comments have been anonymized, distilling the primary thoughts of the participants. Present in that discussion were Wayne Yun, AMD; Ram Narayan, Arm; Sanjay Deshpande, Arteris IP; Bryan Morris, Ciena; Anatoli Sokhatski, Cisco; Michael Theobald, DE Shaw Research; Adriana Maggiore, Google; Erik Seligman, Intel; and Aniket Ponkshe, Xilinx. From Oski: Craig Shirley, Vigyan Singhal, Ninad Vartak and Roger Sabbagh; plus Gloria Nichols, LaunchM.

Formal impact
There are several ways in which formal technology can have an impact, but the justification for each is different. It is important to initially relate these to a traditional simulation-based verification flow so that the benefits can be assessed.

Fig 2. Formal impact to a simulation-based flow. Source: Oski Technology.

Many companies look toward formal to address the weaknesses of simulation. “People draw these curves for simulation,” says Craig Shirley, president and CEO for Oski. “Maybe they draw these curves for emulation, but they do not draw these curves for formal. Formal tends to show a little benefit here or a little bump in the curve there, but it’s relegated to a nice-to-have feature in some areas. Tonight, we will draw a curve that shows the chip-level impact of various formal verification methodologies.”

The red areas in figure 2 indicate where simulation is weak. One such area is where testbench-based strategies plateau, making it difficult for teams to improve coverage further. Sometimes this is due to the extremely long runtimes necessary to expose the remaining bugs. Emulation helps reduce those runtimes, but it presents additional testbench difficulties.

“The good news is that 90% or more of these bugs are block-level bugs,” adds Shirley. “They are not system-level bugs. We continually ask people how many blocks were modified to fix these bugs? It is almost always one. So we are still doing block-level verification even though we call it system-level. They tend to be highly parallel corner cases.”

That means there is room to improve block-level verification, and one trend with advanced formal methodologies is to push toward formal sign-off for high-risk blocks.

The other red area in figure 2 is associated with the time to first bug. It takes a long time to develop simulation-based testbenches, which means a lot of RTL coding gets done long before the first test is ready. “When you start using formal early, engineers often find bugs 2 or 3 days after the bug was created,” continues Shirley. “The RTL is still fresh in their head. They do not have to context switch from some other project. Another benefit is that the designer may be a little more aggressive if they know that bugs will be found faster, and their fix verified exhaustively. Maybe they have more bandwidth right now to fix bugs because they are not inundated with bugs. Many of the bugs that you find here may be found in simulation—eventually—but you are finding them earlier.”

One participant had an interesting perspective on this. “What shift left means to many people is how to get to mass cycles the fastest. How quickly can we be running on an emulator or FPGA? It is not just about the first bug. For me, time to last bug is more appealing than shift left.”

Formal levels
The Formal CMM defines five levels of formal methodologies that can be applied. All five levels are useful, but the goals, training, and tool requirements for each level are different. In the discussion, it became clear that additional levels may be added in the future, but this represents the state of the industry today.

Fig 3. Formal application levels. Source: Oski Technology.

Roger Sabbagh, vice president of applications engineering for Oski, described the levels shown in figure 3. “We start at level 1, where people are using things like linting. This is very basic. Automatic formal checks that are focused on small, specific problems. You might be looking for an address to memory that is out of range. The user may not even know they are running a formal tool. There are no assertions that need to be written. Execution is painless.

“Level 2 introduces the formal apps. This is where the user specifies something about their design intent, maybe in a spreadsheet. For example, they may capture all the connections in a design, or the different types of registers and behaviors associated with those registers. The tool generates the necessary assertions. They may have to apply some constraints at the interfaces of the design, but it is still relatively easy to adopt because the user is not writing any of the assertions. Examples are connectivity checking, clock gating and sequential equivalency checking.

“Level 3 is where people start to write targeted assertions that are distributed throughout the design. They could be local assertions, written by the RTL designer, such as a bus being a one-hot bus. They may target control logic, such as arbiters and FIFO controllers, or interfaces. Verification engineers can add assertions to the design, plus they can use assertion VIP, which might check a standard interface protocol. You run formal on those assertions as well as running them in simulation.”
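
To make the lower levels concrete, here is a minimal SystemVerilog assertion sketch of each style of check. All signal and parameter names are invented for illustration, and in practice the level 1 and level 2 checks would be generated by the tool rather than written by hand.

    // Level 1 style: an automatic check, such as flagging an out-of-range
    // memory address. The tool infers this without any user input.
    chk_addr_range: assert property (@(posedge clk) disable iff (!rst_n)
      mem_we |-> (mem_addr < MEM_DEPTH));

    // Level 2 style: a point-to-point connectivity assertion generated from
    // a spreadsheet of source/destination pairs; the user writes no SVA.
    chk_conn_irq: assert property (@(posedge clk)
      top.uart_inst.irq_out == top.intc_inst.irq_in[3]);

    // Level 3 style: a targeted, hand-written assertion, such as the
    // one-hot bus example above.
    chk_sel_onehot: assert property (@(posedge clk) disable iff (!rst_n)
      $onehot(bus_sel));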

The majority of tools associated with levels 1 through 3 fall into the shift-left category, finding bugs earlier than would be possible using simulation. In the discussion, it was noted that some of these tools also find bugs that would be very difficult to discover in simulation, such as clock-domain crossing issues, or that would require very long simulation times, such as connectivity checking.

“Level 4 is where we start to replace simulation for block-level verification,” says Sabbagh. “The goal is to sign off the functional verification of blocks with formal. That means that the types of properties being developed change. They become end-to-end checkers instead of local checkers. Formal sign-off can handle some complicated blocks. Examples include the load/store unit of a CPU, the Warp sequencer of a GPU, a multi-port buffer manager in a networking device, a multi-lane aligner in a PCIe receiver or a MAC level block in the receive path of a wireless interface controller.

“Level 5 builds on level 4. We start doing architectural formal verification, which addresses some system-level requirements that can be very difficult to verify otherwise. We could be proving cache coherence across multiple heterogeneous CPUs using different protocols. Same for addressing deadlocks that may require running full software stacks to exhibit the problem.”
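
To illustrate the jump in property style at level 4, the following is a minimal sketch of an end-to-end checker, using a common formal idiom: track a single symbolic data word through a FIFO-like block and assert that it emerges intact. All names and interface details here are hypothetical, and the sketch is simplified (it assumes same-cycle read data and ignores pass-through corner cases), but it shows how a level 4 checker targets the block’s transport function rather than any one piece of local logic.

    // Hypothetical end-to-end data-integrity checker for a FIFO-like block.
    module fifo_e2e_checker #(parameter DW = 32, parameter DEPTH = 16) (
      input logic          clk, rst_n,
      input logic          wr_en,    // push accepted this cycle
      input logic          rd_en,    // pop accepted this cycle
      input logic [DW-1:0] wr_data,
      input logic [DW-1:0] rd_data
    );
      // Symbolic data word: free but stable, so one proof covers all values.
      logic [DW-1:0] watched;
      asm_stable: assume property (@(posedge clk) $stable(watched));

      // Environment constraints: no pop when empty, no push when full.
      int unsigned occupancy;
      asm_no_underflow: assume property (@(posedge clk) (occupancy == 0) |-> !rd_en);
      asm_no_overflow:  assume property (@(posedge clk) (occupancy == DEPTH) |-> !wr_en);

      // Track how many words sit ahead of the watched word once it enters.
      logic        tracking;
      int unsigned ahead;
      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          occupancy <= 0; tracking <= 1'b0; ahead <= 0;
        end else begin
          occupancy <= occupancy + (wr_en ? 1 : 0) - (rd_en ? 1 : 0);
          if (wr_en && !tracking && wr_data == watched) begin
            tracking <= 1'b1;
            ahead    <= occupancy - (rd_en ? 1 : 0);
          end else if (rd_en && tracking) begin
            if (ahead == 0) tracking <= 1'b0;
            else            ahead    <= ahead - 1;
          end
        end
      end

      // End-to-end check: when the watched word reaches the head of the
      // queue, it must read back unmodified and in order.
      chk_data_integrity: assert property (@(posedge clk) disable iff (!rst_n)
        (rd_en && tracking && ahead == 0) |-> (rd_data == watched));
    endmodule

Because the symbolic word covers every possible data value in a single proof, this one assertion subsumes what would otherwise be an enormous family of directed data-integrity tests.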

Levels 1 through 3 are largely addressed by tools and languages, but there is a chasm between level 3 and level 4. Level 4 requires a much greater contribution from methodology and much more user skill. At this point, several discussions ensued around coverage convergence and the skill sets contained within companies. One area of agreement was that any company attempting level 4 or 5 probably employs full-time formal verification engineers.

Going to level 4 requires a leap in skills, but it also carries different risk/reward levels. One participant commented, “If you are trying to get your managers to invest in formal, it may be easier to come in at level 4 because formal replaces simulation. At the lower levels, when a manager asks what simulations are avoided or how money will be saved, your answer is that you still have to run all of them. So level 4 is actually more appealing to managers. However, I do not recommend that people start at level 4, because it is a very risky thing to do and you can get into a lot of trouble.”

Another area of agreement was the stickiness of the levels. “Levels 1 and 2 tend to be very sticky. Level 3 changes per design and Level 4 is completely different for each project. The lower levels contain a grab bag of stuff and may or may not be useful for every design. But when you do something that works, then managers tend to be happy and want you to repeat it.”

Formal capabilities
Certain skills are required to support the various organizational levels. These break down into the four Cs (Checkers, Constraints, Coverage and Complexity). It would be too much to cover all of the details contained in figure 4, or the detailed analysis of each skill type, so only those that generated discussion are broken out here.

Fig 4. Formal verification engineering skills matrix. Source: Oski Technology.

The suitability of ProofCore coverage was discussed. ProofCore is an observability coverage metric and can be used to make sure the checker set is complete and that all behaviors of the design are covered. It determines if a piece of logic actually feeds into the proof associated with a checker. If logic is modified and no checker fails, then it is not really used in the proof. This is a similar notion to mutation analysis, which takes this a step further.
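
A hypothetical fragment shows the kind of gap mutation analysis is designed to catch (all names here are invented):

    // Original RTL line:
    assign grant = req & ~busy;

    // A mutation tool might inject this variant and re-run the proofs:
    //   assign grant = req;      // the ~busy term is dropped

    // This weak checker still passes on the mutated design, so the
    // mutation survives and exposes a hole in the checker set:
    chk_weak:   assert property (@(posedge clk) grant |-> req);

    // This stronger checker fails on the mutation and kills it:
    chk_strong: assert property (@(posedge clk) busy |-> !grant);

This echoes the comment that follows: whether such a weakness shows up in a proof-based metric depends on how the checkers were written, while mutation analysis probes the checkers directly.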

One comment noted that “ProofCore is dependent upon how the checkers were written, such that if there is a fundamental weakness in your assertion, you may only discover that weakness with mutation analysis.”

Another said that “mutation covers a different area, which is one of the most important roles for a verification engineer, and that is design awareness. There is an element of planning that is not shown here, and part of that is understanding the design, the sources of complexity, understanding how to break down the functionality into individual checkers effectively and to plan the verification strategy.”

The problem with mutation analysis is that it is computationally heavy. However, attendees still thought it contained value. “If you look at simulation, we have coverage for stimulus, but we do not have coverage for checkers. Mutation analysis fills in this blank. This makes it very important. In terms of computational power—yes, it is expensive.”

“Sometimes there is an assertion that people forgot to enable. Sometimes a checker, sometimes they just disabled some test. There could be a problem with a script, and something got left out.”

Coverage becomes a hot topic when progressing to levels 4 and 5. One participant commented, “As a manager, no formal metric is enough for sign-off. I am comfortable skipping simulation, but I want to see that the system runs in FPGA or emulation to show that it actually passes the tests you are applying, and the number of cycles—that is what makes me comfortable. I am concerned that we may be putting coverage metrics into the hands of designers and verification people who are very eager to please management, who will be happy that the charts show we hit 100% coverage, but they do not actually know if they have missed something critical. I am consciously moving people away from formal coverage as much as I can.”

This expresses not just the skill levels, but the maturity of the team. Processes can be put in place, but if blindly followed, or if people look for shortcuts in them without understanding why they exist, then they can lead to failure. “If the process does not make sense, then you need a way to modify it. So there is a process that controls the process.”

One technique that attendees felt was missing in the Formal CMM was design reviews. These also are tied to skill and maturity levels. “It is really a function of how much trust the verification engineer has earned with the designer. If you are viewed as a person who is going to come in with very simple questions, things that you could have figured out for yourself, it is less likely you will get a lot of participation.”

“When I did it myself, I did not have any of these coverage measures, so it was a demonstration of adequacy through a review process. I had to make it as easy as possible for them to review. The trust came in after we saw those units work. I do not see a way out of having reviews.”

Another missing element identified by this group was helper properties. These are properties that are used as assumptions and can help to reduce the state space or get to proof convergence sooner.
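
As a rough sketch of the idea (names invented): an internal invariant is first proven as an assertion and then, as formal tools typically allow, promoted to an assumption that prunes unreachable states from the harder proofs.

    // Helper property: prove a simple internal invariant first.
    helper_occ_bound: assert property (@(posedge clk) disable iff (!rst_n)
      occupancy <= DEPTH);

    // Once proven, promote it to an assumption for the remaining end-to-end
    // properties; the tool then never explores states that violate it.
    //   assume property (@(posedge clk) disable iff (!rst_n)
    //     occupancy <= DEPTH);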

Management and individuals
Each organization has to work out which levels provide them with the most value. It is not a race to get to level 5. “I don’t think level 5 is more bang for the buck. I would say that level 5 is more bang and more bucks. It takes more effort and more money. Level 1 could also be very high bang for the buck because the effort is so low.”

Attendees questioned how this relates to individual skill levels and organization maturity level. “This is really an engineering skills matrix, and the organizational maturity is tied to the checkers. What is your ambition at the checker level? That is your organizational maturity.”

It all starts with the individual skills. “We envision coupling this skills matrix with a survey so an engineer could grade themselves,” says Sabbagh. “This lets people realize where they are on the scale—what is possible to do with formal, how many different skills are there. What should I learn next? You can use this survey method to grade yourself on each axis and compute an average score.” An example is shown in figure 5.

Fig 5. Example Engineering Skills Grading Chart. Source: Oski Technology.

Conceptually, you have to be able to go from people skills to organizational maturity. “Something that should be on here is the verification plan. It is a key skill. Deciding what to do with formal is very different than knowing how to do it.”

Participants agreed. “This talks about individual engineers, and maybe we need to think about managers, as well. How to decide which block to do which way, how to assign them to people, how to measure when you are done.”

“An organization needs to look at their biggest pain point. The skillset relates to the individuals and talks about your capabilities within the organization.”

One participant suggested this approach: “What held you back on a previous project? What remedies will you apply going forward and how does formal play a role in that. Then pick the level where you will apply your resources and question what skills are required, so you actually go up and down through the charts.”

Mixing formal and simulation
How do simulation and formal work together to solve the verification challenge? Many teams face this dilemma all the time. “This varies a lot from organization to organization,” says Vigyan Singhal. “We deliberately tried to stay away from that because it can become religious. There are some organizations who believe there is a strict wall between formal and simulation and other places where they want collaboration from the beginning, and they want to exchange bugs with each other.”

Participants also felt that emulation can play a significantly different role than simulation. “The strength of emulation is to find hardware/software bugs and not block-level bugs.”

Initial reactions
It can take time for a new concept to become accepted, and initial reactions understandably were mixed. “It is a great start. We have seen fantastic results using this kind of thing. Formal engineers can look at the new skills they need to pick up.”

“I am a little skeptical. It seems as if you are trying to impose a strict linear order on things that aren’t linear. Depending upon your problem, you may not always need to go up from level 3 to level 4. Perhaps the levels should be A, B, C and D.”

No matter how skilled or mature an organization, problems will remain. “If all of your engineers are at level 5, you are still limited to the things that you can think of. What scares me is what we find that we did not think of. You could triple or quadruple resources on the project and still not have found them. I don’t see a good answer to that problem.”


