New Uses For Assertions

Proponents point to applications in system reliability, AI, and data analytics.


Assertions have been a staple in formal verification for years. Now they are being examined to see what else they can be used for, and the list is growing.

Traditionally, design and verification engineers have used assertions in specific ways. First, there are assertions for formal verification, which designers use to show when something is wrong and to help pinpoint where issues occur. Assertions also are sometimes used in functional verification, but there they can be complicated and difficult to debug.

“Most designers that do know about assertions are including them in their code,” said Olivera Stojanovic, project manager/business development manager at Vtool. “But even when engineers have used assertions extensively, if they don’t use them for some time, they forget how. You need to play a bit with the timing, and writing complex assertions is very hard. It requires knowledge and time.”
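For readers who have not written one in a while, a minimal sketch of the kind of timing-sensitive assertion Stojanovic describes might look like the following. The req/ack handshake and the four-cycle window are invented for illustration, not taken from any particular design.

```systemverilog
module handshake_checker (input logic clk, rst_n, req, ack);

  // Illustrative only: every request must be acknowledged within 1 to 4 cycles.
  property req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  // If the window is missed, the failure is reported at the exact cycle,
  // which is what makes assertions useful for pinpointing issues.
  assert property (req_gets_ack)
    else $error("req not acknowledged within 4 cycles");

endmodule
```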

The assertion landscape is changing, too. “If we turn back five years or so, engineers were using formal verification with assertions where there was some critical part of the logic,” said Pratik Mahajan, group director for R&D, formal verification, at Synopsys. “There was a part of the code that was control logic, and there was arbitration, some deadlock, livelock, or something else happening. Then, maybe there’s a post-silicon bug that occurred. So a formal verification engineer would be called to find the bug and to try to ensure that there’s no bug in a particular part of the design.”

Since then, engineering teams have begun using assertions in formal verification to do a complete formal sign-off of the design.

“We’ve been using simulation or emulation to try to say that a block or a design is completely verified,” Mahajan said. “Now there are tools that provide a complete methodology and infrastructure to assess whether the set of assertions is sufficient and complete, and whether parts of the code are not being functionally verified by the tools. That’s a big shift.”

Shaun Giebel, director of product management at OneSpin Solutions, agreed. “The ‘app’ trend continues to be the key characteristic of formal adoption. Formal super-experts are rare. Most designers want automation, high ROI, and a quick ramp-up. One of the key differentiators of today’s formal verification solutions and their customer support is the technical and organizational flexibility to deliver high-quality custom solutions quickly. Whether it is about increasing the capacity of an existing app, like connectivity checking or verifying RISC-V cores with custom instructions, having pre-packaged, highly-automated solutions is what most companies want.”

For companies verifying processors, assertions are a key consideration.

“Assertions are an essential part of our verification environment and are used, for example, to check accesses to shared resources such as registers or to detect unknown instructions in the decoder,” said Zdenek Prikryl, CTO of Codasip. “Additionally, we use assertions to check bus protocols. In addition to automatically generating assertions in our tool, we are adding the ability for users to add their own assertions in their HDL description.”
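As a rough illustration of the first two checks Prikryl mentions, the sketch below uses hypothetical signal names for a decoder and a shared register; it is not Codasip’s actual generated code.

```systemverilog
module decoder_checks (
  input logic clk, rst_n,
  input logic instr_valid,
  input logic opcode_known,    // decoder recognized the opcode
  input logic wr_en_a, wr_en_b // two sources writing a shared register
);

  // No valid instruction should reach the decoder with an unrecognized opcode.
  assert property (@(posedge clk) disable iff (!rst_n)
    instr_valid |-> opcode_known)
    else $error("unknown instruction reached the decoder");

  // Shared-resource check: the two write enables must never fire together.
  assert property (@(posedge clk) disable iff (!rst_n)
    !(wr_en_a && wr_en_b))
    else $error("conflicting write to shared register");

endmodule
```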

Codasip describes a processor in a high-level language, after which the instruction set simulator (ISS) is generated, along with the RTL, testbenches, and UVM environment. Verifying that the RTL matches the ISS golden reference is critical.
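One common way to wire an RTL-versus-ISS check into a testbench is a DPI call into the golden model at every retired instruction. The sketch below assumes a hypothetical iss_step() function and retire-side signal names; Codasip’s generated environment may do this differently.

```systemverilog
// Hypothetical DPI hook into a generated ISS; names are illustrative only.
import "DPI-C" function longint iss_step(input longint pc);

module rtl_vs_iss_check (
  input logic        clk,
  input logic        retire_valid,
  input logic [63:0] retire_pc,
  input logic [63:0] rtl_result
);

  always @(posedge clk) begin
    if (retire_valid) begin
      longint expected;
      expected = iss_step(retire_pc);
      // Immediate assertion: the RTL's retired result must match the ISS.
      assert (rtl_result == expected)
        else $error("RTL/ISS mismatch at pc=%h: rtl=%h iss=%h",
                    retire_pc, rtl_result, expected);
    end
  end

endmodule
```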

Still, while assertions are an important element in all verification, they are best written by designers and best utilized by verification engineers.

“This has always been a challenge because the assertions ask whether you are seeing something you expect or don’t expect,” said Neil Hand, director of marketing, Design Verification Technology (DVT) Division at Mentor, a Siemens Business. “It’s at a lower level. It’s not a verification activity per se; it’s more a statement of design intent.”

Exploring new ground
Assertions also are maturing, and there is a growing recognition that assertions can be used for new purposes.

“It’s how we can use the data,” said Hand. “And actually, it’s not unique to assertions. It’s also a concept that applies across verification. Can you start to apply analytics more deeply? Can you start to apply machine learning in a deeper way now that there is an extra level of data?”

The data in this case isn’t necessarily failure data. It’s an activity measure, and it can be used to guide the design and verification process. “In the areas of design and verification management, we want to guide people on where to spend their efforts. Assertions are pretty good in that they identify areas of the design that are of interest. In traditional verification they add benefit when things go wrong or when you do coverage, but they also tell you when things of interest are happening, so you could start to look at that and say, ‘In the grand scheme of things, if this is interesting, how can I use that in my verification methodology? Can I use that in data analytics?’”
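One concrete way to read that: the same property language used for checks can be written as a cover directive, so the simulator reports how often an “interesting” scenario actually occurred rather than whether it failed. A minimal sketch, with made-up signal names:

```systemverilog
module activity_probe (input logic clk, rst_n, fifo_full, retry);

  // Not a pass/fail check. This cover directive simply counts how often a
  // full FIFO is followed by a retry within three cycles, producing the kind
  // of "things of interest are happening" data that can feed analytics.
  cover property (@(posedge clk) disable iff (!rst_n)
    fifo_full ##[1:3] retry);

endmodule
```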

Today, this type of learning is used in data analytics to determine when to look at a particular area of the design. Where there’s more activity, or unusual activity, there’s a greater likelihood of an error. That data then can be used to create algorithms that predict failures over time, and ultimately to correct errors by rerouting signals to other parts of a chip or system.

“If we can see a trend, we’re going to trigger an alert that this might fail now,” Hand said. “The same can be done with assertions. It’s not a change in what we’re doing. It’s how we utilize that data.”

OneSpin provides an app that allows for precise measurement of verification progress and coverage. In particular, it helps to determine whether enough assertions are being used, and whether they are the right assertions. “It also helps to determine whether the assertion quality is up to snuff by combining the results of simulation and formal into a single view to get a clear understanding of verification coverage and progress, so you know where and when to make adjustments,” OneSpin’s Giebel pointed out.

New uses for assertions
An emerging area of research examines how assertions can be made less deterministic, which would make them more useful in AI and machine learning applications, and also help to provide some visibility into systems that are now essentially opaque.

“AI and machine learning designs are non-deterministic. You can give them the same input, and they will give different answers. The answers are still correct, but it’s a learning algorithm, there’s statistics, there’s a probability that the answers will not be the same,” said Hand. “Given that assertions are deterministic, can we make them adaptable to this new domain of AI and ML? On the design side up until now, we’ve been talking about how to use data from assertions in AI and ML for verification. The design itself is still deterministic, but how do you then start to use this in a non-deterministic design, where there is no correct answer? What is the next step? It’s not a ‘yes’ or ‘no.’ It’s kind of like a ‘maybe.’ For the next generation of designs, there are more and more of these non-deterministic designs. At a low level, at a MAC level, at an interconnect level, at a functional level, they are deterministic. But once you go to an application level, they’re non-deterministic.”
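There is no standard probabilistic assertion construct today, so any “maybe” has to be approximated. One hedged sketch of the idea is to check a statistic over a window of outputs rather than each individual answer; the window size, threshold, and signal names below are all invented.

```systemverilog
module soft_checker #(parameter int WINDOW = 1000, MAX_MISS = 50)
                     (input logic clk, rst_n, sample_valid, sample_ok);

  int unsigned seen, missed;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      seen   <= 0;
      missed <= 0;
    end else if (sample_valid) begin
      if (seen == WINDOW - 1) begin
        // End of window: individual mismatches are tolerated; only the
        // overall miss rate is asserted.
        assert (missed + !sample_ok <= MAX_MISS)
          else $error("miss rate exceeded %0d per %0d samples", MAX_MISS, WINDOW);
        seen   <= 0;
        missed <= 0;
      end else begin
        seen   <= seen + 1;
        missed <= missed + !sample_ok;
      end
    end
  end

endmodule
```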

This also begins to bridge different disciplines. “There are probably some things that we could learn from the analog world,” Hand said. “In the analog world, they’re looking at the eye diagram. Is it within the eye diagram? If it’s outside, that’s not good. If it’s inside, it’s okay. And are there parallels we can draw? The concept of determinism versus non-determinism is new for digital. It’s not new for analog.”
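The analogy suggests one simple digital counterpart: define pass/fail as staying inside a tolerance band rather than matching a golden value exactly, loosely the way an eye diagram bounds a waveform. The bounds and signal names in this sketch are hypothetical.

```systemverilog
module tolerance_checker #(parameter int LO = 100, HI = 140)
                          (input logic clk, rst_n,
                           input logic result_valid,
                           input int   measured);

  // "Inside the eye" is good enough; no exact expected value is required.
  assert property (@(posedge clk) disable iff (!rst_n)
    result_valid |-> (measured >= LO && measured <= HI))
    else $error("measured value outside [%0d:%0d]", LO, HI);

endmodule
```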

And it begins to challenge some long-held perceptions about data, including how it is organized and stored, what data needs to be kept, basic read-and-write functions, and what kinds of analytics are needed, said Paul Cunningham, corporate vice president and general manager of the System & Verification Group at Cadence. “The whole world is grappling with a data explosion, but we really do not need to re-invent the wheel. The information flow, from the end application up to the cloud, is all on the same bare metal, and we’re going to buy the same kind of disks or solid-state storage. At some point in that stack you really need to get that horizontal sharing of insights.”

Another consideration is whether data is persistent or transient. “We use the phrase ‘streaming’ quite a bit in EDA,” Cunningham said. “Sometimes you can have a huge amount of data that’s coming out, and you immediately want to process it and produce certain kinds of metadata or certain kinds of analytics, but the raw stream is not persistent. There are parallels with why we want to put AI at the edge, because they’re saying, ‘Look, if you streamed all the data to the cloud, it’s too much.’ So we need to do a certain amount of processing on the edge, and then we’ll send metadata to the cloud. There are analogies there. In different simulators, for example, you could take the raw waveform stream, do some analytics on it on the fly, and then store some meta-information. Somewhere in there it is possible that we may end up doing the whole stack ourselves, especially if you look at waveform storage, where the most efficient way to organize the data structures and get that kind of read, write, and streaming is to go quite low level. But the jury’s out. There’s a fair chance that even the most specialized of use cases can still go through standard big data stacks.”
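In simulator terms, the streaming pattern Cunningham describes can be as simple as reducing a value stream on the fly and keeping only summary metadata instead of the raw waveform. The sketch below assumes a single monitored bus and is only meant to show the shape of that reduction.

```systemverilog
module stream_stats (input logic clk, rst_n,
                     input logic valid,
                     input int   data);

  // The raw stream is never stored; only running metadata survives.
  int unsigned count;
  int          min_seen, max_seen;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      count <= 0;
    end else if (valid) begin
      count    <= count + 1;
      min_seen <= (count == 0 || data < min_seen) ? data : min_seen;
      max_seen <= (count == 0 || data > max_seen) ? data : max_seen;
    end
  end

  // Emit the summary ("metadata") once, at the end of simulation.
  final $display("samples=%0d min=%0d max=%0d", count, min_seen, max_seen);

endmodule
```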

The future
That’s only part of the picture. Data also can be used to determine assertion coverage.

“You may be able to prove that an assertion covers a certain situation only two times instead of the thousand times you were expecting it to,” said Vtool’s Stojanovic. “If it’s not triggered, maybe you’re missing some tests that will trigger this assertion. This kind of information might help you. Additionally, some engineering teams are adding things like property coverage. This is not like the functional coverage you develop according to the transaction, but something you can re-use not just for the checking, but also for the coverage.”
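A small illustration of that re-use: write the scenario once as a sequence, assert the check against it, and cover the same sequence so the hit count is visible. If the cover count comes back at two when thousands were expected, the stimulus is the suspect. The signal names and retry window are hypothetical.

```systemverilog
module retry_checks (input logic clk, rst_n, err, retry);

  // The scenario of interest, written once.
  sequence err_then_retry;
    err ##[1:8] retry;
  endsequence

  // Check: every error must be followed by a retry within the window.
  assert property (@(posedge clk) disable iff (!rst_n)
    err |-> ##[1:8] retry)
    else $error("error not followed by retry within 8 cycles");

  // Coverage: how often did the scenario actually happen in simulation?
  cover property (@(posedge clk) disable iff (!rst_n) err_then_retry);

endmodule
```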

There are two parts to this concept, Mahajan suggested. “The first part is what we have within the tools, where people are working with assertions. It’s very iterative in nature. You write a design, you write a constraint, you write assertions, you run the tool, and then you figure out either that you missed a constraint or that there is a bug. An ML-based regression mode accelerator (RMA) can help here. In the first run, the RMA does the learning of what the experience is and creates a database in the system. The next time they do a run, the tool is able to re-use those learnings to improve the quality of results, as well as the turnaround time.”

That data is increasingly valuable. “There is definitely information available in terms of the nature of the design, based on all this learning that we have done from assertions,” Mahajan said. “As of today we haven’t exposed it, but if any customer does come back to us, we definitely can do that. The part that becomes very important is the formal code. I have an assertion, and it’s spanning all the way to primary input when I look at it. But it’s very possible that formal engines use a very small part of the design for verification. There’s more information that can be diverted back to the user to understand what’s happening and how the assertion is interacting with the design.”

While it may take time for these new approaches to gain traction, there is much that can be done today to make the most of assertions. For one thing, designers and verification architects need to sit together to discuss applying assertions throughout the project.

“Even before they start writing the RTL itself, at the plan level it’s very clearly available,” Mahajan said. “What’s the control part of my logic? What’s the data path? What are the security regions? All this information is available to the architect at the time the architectural plan is being designed. Do it before you start doing the RTL, and then you can identify the critical parts to debug. These are the parts of the interfaces. This is the IP that I’m going to get from someone else, so I’m not bothered about the verification part of it. I’m only bothered about the interfaces. That’s the aspect users can identify. A lot of companies are doing it with designers themselves writing block-level assertions. Those are the most effective, and they are the best for the formal tool, because from a designer’s perspective they’ve written the assertion in the scope of the module or the block, so the scope is very limited. Formal verification can churn through them very fast. Once the segregation is clear, everyone knows this is the part for which the designers are going to write all the white-box assertions.”
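One common way to apply that interface-only discipline to third-party IP is to keep the checks in a separate module and bind it onto the IP, so none of the purchased RTL is modified. The module, instance, and port names below are invented for illustration.

```systemverilog
// Protocol checks for a simple valid/ready interface, kept outside the IP.
module iface_checks (input logic clk, rst_n, valid, ready);

  // Once asserted, valid must hold until the transfer is accepted.
  assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid)
    else $error("valid dropped before ready");

endmodule

// Attach the checker to the purchased IP without touching its source.
// "third_party_ip" and its port names are placeholders.
bind third_party_ip iface_checks u_iface_checks (
  .clk(clk), .rst_n(rst_n), .valid(out_valid), .ready(out_ready)
);
```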

Putting the chip architect, the designer, the verification architect, the security architect, and the functional safety architect in the same room to do planning up front is not that far-fetched as chips and systems become more complicated. Everything shifts left to the same starting point, and from there all the pieces go together more neatly.


