The Value Of A Model

How much would you pay for a model? Recently, the answer to that has been $$$.

Increased talk about the Digital Twin has brought models to the forefront of the discussion. What are the right models for particular applications? What is the correct level of abstraction? Where do the models come from and how are they maintained? How does one value a model?

The semiconductor industry has been reluctant to create any model that is not directly used in the development path. That is to be expected. Within most industries, no additional work will be done unless it is demanded, such as by a regulatory body or customer, or has economic value.

When a model is created for use outside of the primary development chain, there are problems keeping that model in sync and ensuring it is available when it can provide the most value.

The creation of abstract models for well-contained blocks, such as processors, has been very successful. These models have a long lifetime, provide adequate fidelity for software development and have good performance characteristics when coupled with RTL models for system verification.

There have been attempts in the past to automatically derive higher-abstraction models from RTL. Those attempts were not very successful and in many cases the performance advantages they achieved were not enough to make a significant difference in the amount of analysis that could be performed.

The industry has had more success deriving the RTL model from a more abstract source using high-level synthesis technology. With the RTL derived from an abstract SystemC model, a lot more faith can be placed in the source model, which also provides significant performance advantages over the derived RTL.

But by far the biggest success has been the use of emulation to speed up the execution of the RTL model. It is, by definition, up to date and accurate. The cost to the user is huge, often in the millions of dollars for the ability to execute a single model at a time. But this is the cost that the industry finds acceptable for such a model, when it becomes part of a digital twin.

When coupled to models for other aspects of the system, such as mechanical or thermal models, or receiving streams of data from LIDAR arrays, it enables analysis that was not possible in the past. It is not just a faster model; it is an enabler for new kinds of analysis.

The digital twin brings a new dynamic into the equation. Up until now, simulation has been predictable and repeatable. The stimulus generators, even though they rely on randomization techniques, always provide the same input given the same seed. The same is true whether we are talking about SystemVerilog and UVM stimulus or the emerging Portable Stimulus. Both rely on models that are predictable and repeatable.
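
To make the point concrete, here is a minimal Python sketch, not UVM or SystemVerilog, with a purely illustrative generator and field names, of why seeded constrained-random stimulus is repeatable:

```python
import random

def generate_stimulus(seed, count=5):
    """Hypothetical constrained-random stimulus generator.

    "Constrained" here only means values are drawn from fixed ranges;
    real UVM sequences are far richer, but the seeding behavior is
    the same in spirit.
    """
    rng = random.Random(seed)  # every random value comes from this seeded source
    return [
        {"addr": rng.randrange(0, 256), "data": rng.randrange(0, 2**32)}
        for _ in range(count)
    ]

# The same seed always produces the identical stimulus stream,
# which is what makes a regression run repeatable.
assert generate_stimulus(42) == generate_stimulus(42)
# A different seed explores a different, but equally repeatable, path.
assert generate_stimulus(42) != generate_stimulus(43)
```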

What this means is that two input events that supposedly happen simultaneously will always be processed sequentially, with one of them occurring an immeasurably small time ahead of the other. This is how the technology works. But that is not true in the real world, and the digital twin is looking at bringing real-world stimulus into the analysis. Sometimes those two inputs will change order, and sometimes they will happen at exactly the same time. We may not have the necessary notions of coverage to handle such differences between the real world and our perfect simulation world.
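
A toy Python event scheduler shows why. This is purely illustrative and not how any particular simulator kernel is implemented, but two events scheduled for the same timestamp are still serialized in a fixed, repeatable order:

```python
import heapq
import itertools

class EventQueue:
    """Toy event scheduler; a sketch, not a real simulator kernel."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # deterministic tie-breaker: insertion order

    def schedule(self, time, name):
        # Events with the same timestamp are still totally ordered by the
        # tie-breaker, so "simultaneous" inputs are always processed one
        # after the other, in the same order on every run.
        heapq.heappush(self._heap, (time, next(self._order), name))

    def run(self):
        while self._heap:
            time, _, name = heapq.heappop(self._heap)
            print(f"t={time}: {name}")

q = EventQueue()
q.schedule(10, "sensor A edge")
q.schedule(10, "sensor B edge")  # nominally simultaneous with A
q.run()  # A is always handled before B, never the other way around
```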

The real world can be messy. It may be necessary to rethink some analysis tools when this happens.

Today, there is a new tool in the toolbox. AI is the greatest pattern matching technology the industry has ever had access to. Some of this technology has already been used in debuggers – trying to find outliers or patterns that have not been seen before. It was also used by Jasper to attempt to gain knowledge about a block of IP by observing its behavior. The extraction of information and knowledge from data has a tremendous amount of value.
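
As a trivial illustration of the idea, and not a description of any vendor's actual technology, here is a Python sketch that flags statistical outliers in a made-up list of transaction latencies:

```python
import statistics

def flag_outliers(latencies, threshold=2.5):
    """Flag samples more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for the far more sophisticated pattern
    matching that real debug tools apply to simulation traces; a robust
    measure such as median absolute deviation would be more typical.
    """
    mean = statistics.mean(latencies)
    stdev = statistics.pstdev(latencies)
    if stdev == 0:
        return []
    return [(i, v) for i, v in enumerate(latencies) if abs(v - mean) / stdev > threshold]

# Hypothetical per-transaction latencies pulled from a simulation trace;
# the single large value is the kind of outlier a debugger would surface.
trace = [12, 11, 13, 12, 11, 12, 240, 13, 12, 11]
print(flag_outliers(trace))  # -> [(6, 240)]
```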

Is it possible to observe the real world and create models from it? Could formal technology, coupled to this model, be used to find places where the real world diverges from the intended world, as captured in the RTL model? Can the technology be used to create higher abstraction models?

There are companies attempting to develop tools for the creation of such models, and for good reason – there is a lot of money at stake. For anyone who succeeds, there is a lot of money to be made, and a lot to be lost in emulation sales. This may be a great opportunity for a startup, but at least for the foreseeable future, emulation sales growth looks rosy.


