The Future of UVM

Discussion is long overdue. At a minimum, UVM has to become easier to use.


It’s time for a frank discussion on the future of UVM. Given how UVM usage has grown and the number of teams that rely on it, I think this conversation has been a long time coming.

Is continuing to use UVM the right thing to do? Do we have hard evidence that supports our continued usage of UVM? Do we actually benefit from it or do we just think we benefit?

When it was first introduced, did we accept UVM for good reason or accept it out of pure suggestion? EDA vendors suggested it was a natural step to take. We’ve seen the success stories. The headline <Company X> Tapes Out SoC Using UVM implies success. But how often do people publish failures? How often do we see articles titled UVM Adoption Blew Our Development Schedule? When have we seen conference papers dive into the details of how an over-engineered UVM testbench led to a 2x schedule overrun?

Or maybe it wasn’t suggestion. Despite having no clear indication UVM is better than what we had, maybe we accepted UVM because we wanted to; because it was cool. Engineers love to optimize and UVM gives us all kinds of options for doing just that. We love the idea of portability (so much so that the next new thing has the word ‘portable’ right in its name!) and UVM offers lots of portability. UVM makes it easy to generalize job postings and rank candidates, too. And let’s not forget that UVM was so, so shiny1. There was so much to learn, which was a big draw for engineers. And even though it’s software-y, the language and BCL packaging shielded us from the scarier bits of software theory.

But thinking back through the last 15 years and the evolution of functional verification that culminated in UVM, have we ever considered that UVM is where functional verification possibly went wrong? Should we be considering a future without UVM?

Or… hmmm… uhhh…

Meh.

Never mind.

Let’s scratch the whole time-for-a-frank-discussion-on-the-future-of-UVM thing. The evidence for and against is sketchy at best so there’s probably no point in discussing it. We are where we are so let’s keep thinking of UVM as the Universal foundation of functional verification. Let’s keep adding the features to UVM that produce the anecdotal evidence of its own success. Let’s take it beyond simulation. Let’s keep using it to fill conference programs, filter out qualified job candidates, hone our pseudo-software skills2, sharpen the divide between design and verification, fuel the need for training and complementary tools, etc. Let’s keep doing what we’re doing! Except for one tiny difference:

Let’s make failing miserably with UVM less likely.

Just because we don’t see the failures published and celebrated doesn’t mean they don’t happen. You know they happen. They’re out there: the weekly schedule slips, the ridiculously complicated testbenches. You’ve seen them. I know you’ve seen them because you told me you’ve seen them. Many of them! And they’ll continue to happen unless we rein in future-UVM to make them less likely.

To get us started, I’d like to propose a set of rules that applies to all future-UVM development:

Rule 1: Features actually have to work. This seems like a no-brainer but it’s a rule that’s currently being broken. Features that don’t work get fixed or removed. Phase jumping… I’m looking at you… unless, of course, someone has recently fixed phase jumping.
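For anyone who hasn’t tried it, here’s roughly what a phase jump looks like. This is a minimal sketch, assuming a made-up component and error event; only phase.jump() and uvm_reset_phase::get() are actual UVM API.

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// Minimal phase-jumping sketch. The component and event names are
// hypothetical; phase.jump() is the UVM call Rule 1 is aimed at.
class error_watcher extends uvm_component;
  `uvm_component_utils(error_watcher)

  event unrecoverable_error; // hypothetical error notification

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task main_phase(uvm_phase phase);
    @(unrecoverable_error);
    // Restart the run-time schedule from the reset phase. In broken
    // releases the jump can kill phase processes (e.g. sequencer and
    // driver threads) in the wrong order and end in a fatal instead
    // of a clean restart.
    phase.jump(uvm_reset_phase::get());
  endtask
endclass
```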

Rule 2: All features are recommended. If a feature is not recommended, chances are its primary purpose is to mislead unsuspecting verification engineers. Instead of recommending no one use a feature, let’s just save people the trouble and remove it. All the phases that run in parallel with run_phase… now I’m looking at you.
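For context, those are the twelve run-time phases, pre_reset through post_shutdown, each of which executes in parallel with run_phase. Here’s a minimal sketch of the overlap; the component is made up but the phase methods are the real UVM hooks.

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// The run-time phases execute concurrently with run_phase, so one
// component can have several time-consuming threads live at once.
// Keeping the two schedules synchronized is left entirely to the user.
class overlap_demo extends uvm_component;
  `uvm_component_utils(overlap_demo)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    // Active from start_of_simulation through extract, alongside
    // every run-time phase below.
    `uvm_info("RUN", "run_phase active", UVM_LOW)
  endtask

  task reset_phase(uvm_phase phase);
    `uvm_info("RESET", "reset_phase active (run_phase still running)", UVM_LOW)
  endtask

  task main_phase(uvm_phase phase);
    `uvm_info("MAIN", "main_phase active (run_phase still running)", UVM_LOW)
  endtask
endclass
```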

Rule 3: Cap the size of the code base. Face it: at several thousand lines of code and growing, size and complexity are what will eventually take this house of cards down. If continuing to prop it up is a long-term objective, we’ll need to cap complexity. The easiest objective way to do that is capping the size of the code base. If you want to add a line of code that takes UVM beyond the cap, you need to remove some other line of code first… which means you need to know what features people are actually using… which is another discussion… for another time.

Rule 4: New features come with a price. The price of new features is set in bug fixes. You need to pay for the feature – i.e. fix some existing bug(s) – before your new UVM feature is released: one bug fix for a function or task, five for a class, 15 for a package, plus 13 for every change that breaks backward compatibility. (So a new class that breaks backward compatibility, for example, costs 5 + 13 = 18 bug fixes.)

Rule 5: All new features are regression tested. Aside from rules 1-4, this is my personal favorite. Your new feature or bug fix has to be delivered with tests that verify it works. The tests go in a regression suite that’s run with every update to the code base.
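To make that concrete, here’s one possible shape for a delivered test, sketched in the style of an SVUnit unit test against the real uvm_tlm_fifo. The module, test names and checks are mine, purely illustrative; they’re not from any existing UVM regression suite.

```systemverilog
// Sketch of a Rule 5 deliverable: an SVUnit-style unit test that pins
// down uvm_tlm_fifo behavior. Test names and checks are illustrative.
`include "svunit_defines.svh"
import svunit_pkg::svunit_testcase;
import uvm_pkg::*;

module tlm_fifo_unit_test;
  string name = "tlm_fifo_ut";
  svunit_testcase svunit_ut;

  uvm_tlm_fifo #(int) fifo;
  int got;

  function void build();
    svunit_ut = new(name);
    fifo = new("fifo", null, 2); // bounded fifo, depth 2
  endfunction

  task setup();
    svunit_ut.setup();
    fifo.flush(); // start every test from an empty fifo
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

  `SVTEST(put_then_get_returns_same_value)
    fifo.put(7);
    fifo.get(got);
    `FAIL_UNLESS(got == 7)
  `SVTEST_END

  `SVTEST(bounded_fifo_reports_full)
    fifo.put(1);
    fifo.put(2);
    `FAIL_UNLESS(fifo.is_full())
  `SVTEST_END

  `SVUNIT_TESTS_END
endmodule
```

Run on every commit, a suite like this is also what would hold Rule 1 to account.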

That’s it. A concise set of rules that improves future-UVM for all of us. Who knows where those rules will take future-UVM over the coming decade, but I do know they’ll make life easier for the teams still catching up on the last decade. And kudos to the people who have already started down this path with the frameworks and papers that are meant to make UVM easier. Just imagine what these people could do if it were easy to use on its own!

Side note… I started writing this article to go in a completely different direction. Funny how fast and far things can go off the rails once you really get moving.

 

1 Shiny and new was the promise of functional verification in the early 2000s and it’s pulled through on that promise big-time. Admittedly, the shine of RVM and VMM is what pulled me into verification in the first place and what got me through to UVM.

2 I’m no software developer but I appreciate it when UVM makes me think I am.



7 comments

Tudor Timi says:

I agree with all of the points regarding future UVM development, except with rule 3. There’s no reliable way to choose this number. Even if it were possible, do you really want to see Perl-style competitions on who can produce the shortest code that does a certain thing (most probably sacrificing readability)? Also, if everything were documented, one could use UVM like a “black box” and not really care about how things are implemented. I also suffer from this tinkering mentality, but I’m trying to hold myself back from jumping to the BCL code to figure out how to implement something and try to rely as much as possible on the docs.

nosnhojn says:

Taken literally, yes, rule 3 encourages people to act completely irresponsibly. but I stick by the idea of limiting the size of the code base. uvm is bloated. had it started with a cap, it would have forced value based discussion/decisions to keep it from becoming a dumping ground for good intentions.

Lars Asplund says:

Rule 5 is not just about testing. It will help create a design of functionally cohesive and loosely coupled modules. That will reduce complexity such that a larger code base can be maintained. But you’re right, adding features of no value will act in the other direction.

Lars Asplund says:

As advocates for unit testing, where is our evidence?

nosnhojn says:

awesome question. I’ve published an article that includes data for design v. testbench bugs. also articles for bugs found unit testing legacy code (in uvm-utest and a real project). far from decisive though. for a convincing argument we’d need more people trying and measuring quality.

Brian Hunter says:

Neil, your condemnation of phases and phase jumping makes me think there is something off with your setup. What exactly has been the problem? We have been using it for years. Are there any papers that state that it is broken?

nosnhojn says:

brian, I can’t find the exact reference but here’s a thread from verification academy that talks about it not being recommended: https://verificationacademy.com/forums/uvm/uvm-phase-jumping. I think I remember it being the protocol between the sequencer and driver that’s broken: if the threads are killed in the wrong order, there’s a fatal from one or the other.
