Design And Verification Methodologies Breaking Down

As chips become more complex, existing tools and methodologies are stretched to the breaking point.


Tools, methodologies and flows that have been in place since the dawn of semiconductor design are breaking down, but this time there isn’t a large pool of researchers coming up with potential solutions. The industry is on its own to formulate those ideas, and that will take a lot of cooperation between EDA companies, fabs, and designers, which has not been their strong point in the past.

It is difficult to optimize something when you can’t analyze it, and analysis is becoming a lot more difficult because many of the issues in large semiconductor products are multi-physics in nature, or they span hardware and software and cut across system, board, IC package, interposer, chip, and IP block. In the past, problems were dealt with by divide and conquer. Sometimes this is done hierarchically, such as fully verifying a block before it is integrated, and sometimes by isolating an issue, such as clock-domain crossing.

Increasingly, though, some issues resist these types of approaches, and the industry has yet to find an easy solution. For example, issues like security are system-level issues. The same is true for many performance or power issues. Even issues like power and signal integrity have to deal with a hierarchy that spans from IP to system, through a complex interconnect of many layers, each of which has traditionally been tailored towards a different set of tools.

This creates a new set of modeling problems and requires that some existing tools take on a much bigger role than they have in the past. Alternatively, the industry will have to get serious about imposing constraints on designs so that analysis remains possible. While the industry is beginning to recognize these issues, it is tackling them in a piecemeal fashion today. So far, nobody has proposed a general solution that will extend into the future.

It is a numbers game. “If you take the whole system into account, the number of corners is exploding,” says Shekhar Kapoor, senior director of marketing at Synopsys. “Today, the approaches are still going back to the hierarchical divide-and-conquer way of doing things, and also finding ways to reduce the number of scenarios that you have to deal with. Without those, the computational requirements will be huge. And for you to be able to sign off on the systems, the path will be much, much longer.”
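
To make the arithmetic concrete, here is a minimal sketch (with purely hypothetical corner counts) of why whole-system analysis explodes while hierarchical sign-off stays tractable:

```python
# Minimal sketch (hypothetical numbers) of why corner counts explode when the
# whole system is analyzed at once instead of one die at a time.
from itertools import product

# Per-die analysis corners: process, voltage, temperature (illustrative values).
process = ["ss", "tt", "ff"]
voltage = ["0.9V", "1.0V", "1.1V"]
temperature = ["-40C", "25C", "125C"]

per_die_corners = list(product(process, voltage, temperature))
print(f"corners per die: {len(per_die_corners)}")  # 27

# Naive system-level analysis of a 4-die package: each die can sit in its own
# corner, so the combinations multiply rather than add.
num_dies = 4
system_corners = len(per_die_corners) ** num_dies
print(f"naive system corners for {num_dies} dies: {system_corners}")  # 531,441

# Divide-and-conquer signs off each die separately and merges at the interfaces,
# keeping the work roughly additive instead of multiplicative.
print(f"hierarchical sign-off corners: {len(per_die_corners) * num_dies}")  # 108
```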

Hierarchical approaches are still useful for some things. “The principle of abstraction is utilized in places where the fundamental complexity of analysis is too great,” says Prakash Narain, president and CEO of Real Intent. “In simulation, we use it in the form of bus functional models. In static timing analysis, we use it by creating I/O-level timing models, and in static sign-off techniques for clock-domain crossing and reset-domain crossing. These are all places where we are successfully utilizing hierarchical techniques.”

Corner reduction often involves design decisions. “Why not avoid domain crossings,” says Synopsys’ Kapoor. “Just keep the design asynchronous, where each of the pieces is timed on its own. That way you can manage the number of corners for that particular piece. Then you can use corner reduction techniques on top of that. With hierarchical approaches for timing analysis, we time each part separately, and then both together with the constraints, and do the corner merging.”

Paths are increasing everywhere. “Many people want to do analysis of multi-die systems,” says Mick Posner, senior director of HPC IP at Synopsys. “Signal and power integrity solutions used to focus on die, through package, to PCB. Now, it has become die, to interposer, to package, to PCB. This is especially true for high-performance interfaces, such as 112G, and memory interfaces, where there is a lot of focus on the impact of that interposer, or the routing layer. We have to work out how to package that information with the IP, which is sometimes impossible because we don’t know how that IP is being used. We can supply a reference flow that shows them how to do that analysis.”

The problem is that doing some of the necessary abstractions is difficult. “Abstraction requirements are very specific to the application,” says Real Intent’s Narain. “They depend upon the technology, and they are different from product to product even for that application. They are dependent upon the technology that is being used by each product to implement the functionality. Then you have to consider the level of accuracy that you’re seeking. It will be very specific to an application and the technology, and the standards are really going to follow later because that’s a very difficult process to accomplish.”

Posner provides a specific example. “For HBM3, we packaged up a reference design. It is a reference design of our own test chip. We developed a PHY, but when we do a test chip, we also have to develop an interposer that connects to the HBM stack. We have to do everything in a similar fashion to what a customer would have to do. Then they can leverage that flow. But, of course, that was our test chip. They can re-use the flow, but the actual data is going to be specific to how they lay out that interposer.”

The modeling problem
The reason for these difficulties is the lack of models and the means of generating those models. Models are tradeoffs between fidelity, accuracy, and performance. High-accuracy models tend to have good fidelity but execute slowly, whereas models that execute faster give up something in terms of accuracy, fidelity, or both. Both functional and non-functional models are required.

We have been dealing with the problem in the functional domain for a while, but more work is required. “For functional verification we do a few models,” says Neil Hand, director of strategy for design verification technology at Siemens EDA. “We have cycle-accurate, instruction-set-accurate, and so on. But you want to have a way of easily moving between them. With hybrid modeling, you have the capability of what they call run-fast, then run-accurate. On the fly, you need to be able to switch the model. For example, someone might boot the operating system on a less accurate, run-fast model, then switch the design state into a run-accurate model. Now they are able to go forward from that point with much more granularity and much more fidelity in the model itself. We need to develop even greater capabilities to switch between the fidelities when you need them.”
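
As a rough illustration of that run-fast/run-accurate hand-off, the sketch below boots on a loosely timed model, snapshots its state, and resumes in a cycle-accurate model. The classes and state fields are hypothetical stand-ins, not any vendor’s hybrid-modeling API.

```python
# Hypothetical sketch of the "run fast, then run accurate" hand-off: boot on a
# loosely timed model, capture its architectural state, and resume in a
# cycle-accurate model from that point.
from dataclasses import dataclass, field

@dataclass
class CpuState:
    pc: int = 0
    registers: dict = field(default_factory=dict)
    memory: dict = field(default_factory=dict)

class FastFunctionalModel:
    """Instruction-accurate, loosely timed: good for booting an OS quickly."""
    def __init__(self):
        self.state = CpuState()

    def run_until(self, breakpoint_pc: int):
        # Pretend we executed billions of instructions and stopped at the
        # point of interest (e.g. end of OS boot).
        self.state.pc = breakpoint_pc
        self.state.registers = {"sp": 0x8000_0000, "ra": 0x1000}
        self.state.memory = {0x8000_0000: 0xDEADBEEF}
        return self.state

class CycleAccurateModel:
    """Cycle-accurate: slow, but gives timing fidelity for the region of interest."""
    def __init__(self, state: CpuState):
        self.state = state       # resume from the captured state
        self.cycles = 0

    def step(self, n_cycles: int):
        self.cycles += n_cycles  # detailed simulation would go here
        return self.cycles

# Boot fast, then switch models at the interesting point.
fast = FastFunctionalModel()
snapshot = fast.run_until(breakpoint_pc=0x4000_0000)
accurate = CycleAccurateModel(snapshot)
print(f"resumed at pc={hex(accurate.state.pc)}, ran {accurate.step(1_000)} cycles")
```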

Today, a similar methodology is used for block level and integration verification. “When you buy an Arm core, you don’t verify the functionality of the Arm core,” says Simon Davidmann, founder and CEO of Imperas Software. “You verify the integration of it. That’s where companies like Breker come in. You have these blocks, but how do you check that they’re all talking nice to each other? You don’t do that in the same way that you would verify a block with UVM or Verilog, which is what you use for block-level verification. The hierarchy in verification is to get all your blocks working, test them individually, then bring them together and worry about integration tests. But they require different methodologies.”

The problem always has been that creating these models takes time and effort, and each model has to be verified to ensure consistency. “For architecture you also need non-functional properties, such as timing detail,” says Tim Kogel, principal engineer for virtual prototyping at Synopsys. “This entails considerably more effort to build the models. While the industry has established the higher levels of abstraction, it has not been as successful creating tools for building these non-functional performance models. For example, software sees the processing elements as more abstract resource units, and then you may have more detailed models of the interconnect and memory subsystem, or the network between the different chips. Arteris and Arm do provide these for coherent networks, for various types of interconnect IP, and also for the memory controllers, which are the key pieces of the integration.”
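
A toy example of such a non-functional performance model, with processing elements reduced to abstract resource units and a simple latency/bandwidth interconnect (all names and numbers are illustrative, not drawn from any real design):

```python
# Hypothetical sketch of a non-functional performance model: processing elements
# as abstract resource units, plus a simple interconnect model with latency and
# bandwidth. Numbers are illustrative only.

class Interconnect:
    def __init__(self, latency_ns, bytes_per_ns):
        self.latency_ns = latency_ns
        self.bytes_per_ns = bytes_per_ns

    def transfer_time(self, num_bytes):
        return self.latency_ns + num_bytes / self.bytes_per_ns

class ProcessingElement:
    """Abstract resource unit: work is just 'operations', not real instructions."""
    def __init__(self, name, ops_per_ns):
        self.name = name
        self.ops_per_ns = ops_per_ns

    def compute_time(self, num_ops):
        return num_ops / self.ops_per_ns

noc = Interconnect(latency_ns=50, bytes_per_ns=32)
cpu = ProcessingElement("cpu_cluster", ops_per_ns=8)
npu = ProcessingElement("npu", ops_per_ns=256)

# One frame of a hypothetical workload: CPU pre-processing, transfer to NPU, inference.
frame_ns = (cpu.compute_time(2_000_000)
            + noc.transfer_time(1_500_000)
            + npu.compute_time(40_000_000))
print(f"estimated frame time: {frame_ns / 1000:.1f} us")
```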

More model generation tools are required. “When you analyze a design using particular patterns, you have the capability to create an abstract model,” says Mallik Vusirikala, director and product specialist for Ansys. “For example, when I analyze the internals of a chip, I also know how it behaves from an interface perspective. I can create a model as if I’m seeing this whole part from the periphery, or at the boundary of the chip to the external world. Then when analyzing another chip connected to it, I don’t need the internal details of the chip. I just plug that behavioral model into this analysis and I’m done.”
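
The sketch below illustrates that idea: a detailed chip model is analyzed once, a lightweight boundary model is extracted, and only the boundary views enter the system-level analysis. The interface and the extracted values are hypothetical.

```python
# Illustrative sketch of boundary-model extraction: analyze a chip in detail,
# keep only what it looks like at its periphery, and plug that into the
# analysis of a neighboring chip.

class DetailedChipModel:
    """Full internal model: expensive, but knows everything about the chip."""
    def __init__(self, name, gate_count):
        self.name = name
        self.gate_count = gate_count

    def extract_boundary_model(self):
        # In a real flow this would be a reduced-order model (e.g. an I/O timing
        # or power model) derived from detailed analysis of the internals.
        return BoundaryModel(self.name, io_delay_ns=0.35, io_cap_pf=1.2)

class BoundaryModel:
    """What the rest of the system sees: only the behavior at the boundary."""
    def __init__(self, name, io_delay_ns, io_cap_pf):
        self.name = name
        self.io_delay_ns = io_delay_ns
        self.io_cap_pf = io_cap_pf

def analyze_link(driver: BoundaryModel, receiver: BoundaryModel, wire_delay_ns: float):
    # System-level analysis only needs the boundary views of both chips.
    return driver.io_delay_ns + wire_delay_ns + receiver.io_delay_ns

chip_a = DetailedChipModel("chip_a", gate_count=50_000_000).extract_boundary_model()
chip_b = DetailedChipModel("chip_b", gate_count=20_000_000).extract_boundary_model()
print(f"end-to-end link delay: {analyze_link(chip_a, chip_b, wire_delay_ns=0.8):.2f} ns")
```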

But there are gaps. “The piece that is missing is a better integration and exchange of data between the physical worlds and the virtual worlds,” says Synopsys’ Kogel. “We need an architectural model based on learned floorplan information, learned geometries, which when migrated to the virtual prototype level help you to validate the performance, power, and thermal based on the real application activity.”

When are you done?
Completion is one of the problems in any analysis task. Have you covered the important cases? Coverage metrics exist for block-level functional verification, but this is yet another model that needs to be migrated to higher levels of abstraction, and into non-functional domains. “If you are running part of your verification in the realm of RTL, and some in the virtual prototype, how do you merge those coverage items together?” asks Siemens’ Hand. “Today that’s done through functional coverage, but there is the opportunity — especially when you look at stimulus generation, when you are using AI on the coverage side of things — to start to infer information from different types of coverage.”
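
A minimal sketch of what such merging could look like once coverage items from the RTL and virtual-prototype domains share a naming scheme (the item names and hit counts are invented for illustration):

```python
# Hypothetical sketch of merging coverage collected in two domains (RTL
# simulation and a virtual prototype) once the items share a naming scheme.

rtl_coverage = {
    "dma.burst_len.max": 12,   # hit counts from RTL functional coverage
    "dma.burst_len.min":  0,
    "cache.evict.dirty":  7,
}

virtual_proto_coverage = {
    "dma.burst_len.max":  3,   # same scenario exercised at the virtual level
    "cache.evict.clean": 21,
    "boot.secure_path":   1,
}

def merge_coverage(*domains):
    merged = {}
    for domain in domains:
        for item, hits in domain.items():
            merged[item] = merged.get(item, 0) + hits
    return merged

merged = merge_coverage(rtl_coverage, virtual_proto_coverage)
holes = sorted(item for item, hits in merged.items() if hits == 0)
print(f"total items: {len(merged)}, uncovered: {holes}")
```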

The software world has been very lax in this regard. “I don’t think there is a standard approach or methodology for coverage,” says Imperas’ Davidmann. “To my knowledge, there isn’t any automation people have done around software that is equivalent to coverage points and cover groups in HDL. Protocol checkers do exist for verification and for analysis. And you can build statistics, where you can watch the functions, or watch the accesses to variables. Given the lack of standardization, we provide the necessary tooling, but the user would have to build it themselves.”
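
As a rough example of the kind of user-built tooling he describes, the sketch below watches function calls to build covergroup-like hit counts for software. The functions and the expected set are hypothetical, and Python’s sys.settrace is used only as a convenient stand-in for real instrumentation.

```python
# Rough stand-in for user-built software coverage: count which functions execute,
# then report hits against an expected set, loosely analogous to HDL covergroups.
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    if event == "call":
        call_counts[frame.f_code.co_name] += 1
    return tracer

def init_dma(): pass
def start_transfer(): pass
def handle_irq(): pass

sys.settrace(tracer)
init_dma()
for _ in range(3):
    start_transfer()
    handle_irq()
sys.settrace(None)

# Report which "coverage points" (functions) were hit and which were missed.
expected = {"init_dma", "start_transfer", "handle_irq", "teardown_dma"}
for fn in sorted(expected):
    print(f"{fn:15s} hits={call_counts.get(fn, 0)}")
```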

Once you have a notion of coverage, then it becomes possible to think about optimizing verification. “Whether it’s portable stimulus in its current form, or something that builds on those notions, we need scenario generation at the system level,” says Hand. “Can we take that and go one level higher and go with the virtual prototypes and the system modeling and do scenario generation across robust systems? It’s going to become more and more important as systems become more and more integrated.”

Others agree. “You want to have this continuity between IP-level, SoC-level, and then later in-silicon verification,” says Kogel. “Portable stimulus is one approach to achieve that. You then also can run what was an abstracted test case, like a program on an embedded core, then in the virtual prototype. In that broad sense, this is the verification of the architectural concept. Later, you run RTL with software on an emulator, on an FPGA prototype, and that can be used for validation of the performance because it’s more like, ‘What you see is what you get.’ It’s not some high-level virtual model.”

Fig. 1: Multiple levels of models and verification goals. Source: Synopsys


Another way to approach integration verification is through functional compliance. “There’s an attempt in Arm called ‘system ready’ to define what it means to be compliant and capable of booting an operating system,” says Nick Heaton, distinguished engineer and SoC verification architect at Cadence. “If your implementation passes, you will not have to modify the OS releases of Red Hat, or whatever. They will just boot on that. This is a contract between the software and hardware. Portable stimulus is trying to do that in a more generalized manner, and we call it VIP because it’s kind of out-of-the-box content we’re delivering at, say, a coherency level. We test all the permutations of coherency, and we can deliver that basically to any platform, whether it’s Arm or RISC-V or whatever.”

The debug problem
It is one thing to be able to run a model, but it is quite another level of complexity to find and fix a problem in a model or in how the model is being used. “If you are debugging software on hardware or an FPGA, you get a gdb that connects to it, and you can single step a processor’s instruction stream,” says Davidmann. “But the problem comes when they have 10 or more processors, and they need to know when ‘this’ is writing to ‘that,’ what does this look like? Analysis and debug have to be done in a holistic manner so you can see everything. This has to involve the software stacks so you can look at the behavior of the platform.”
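
The sketch below shows that kind of holistic visibility reduced to a toy: a shared memory model with a watchpoint that records which core wrote a given address and when, across all cores at once. The platform, addresses, and timestamps are hypothetical.

```python
# Illustrative sketch of cross-core visibility: a shared memory model with a
# watchpoint that records which core wrote "that" address, and when.

class SharedMemory:
    def __init__(self):
        self.mem = {}
        self.watchpoints = {}   # addr -> list of (timestamp, core_id, value)

    def watch(self, addr):
        self.watchpoints[addr] = []

    def write(self, addr, value, core_id, timestamp):
        self.mem[addr] = value
        if addr in self.watchpoints:
            self.watchpoints[addr].append((timestamp, core_id, value))

# Ten cores writing to shared state; the debugger wants to see every writer of
# one mailbox address in timestamp order, across all cores at once.
memory = SharedMemory()
MAILBOX = 0x9000_0000
memory.watch(MAILBOX)

for core_id in range(10):
    memory.write(MAILBOX, value=core_id * 0x10, core_id=core_id, timestamp=100 + core_id)
    memory.write(0x1000 + core_id, value=0, core_id=core_id, timestamp=200 + core_id)

for timestamp, core_id, value in memory.watchpoints[MAILBOX]:
    print(f"t={timestamp}: core {core_id} wrote {value:#x} to mailbox")
```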

This is a different set of demands than just debugging hardware. “As we start to go to hardware/software integration testing, we are starting to see more software debugging capabilities integrated into the virtual prototype debugging environment,” says Hand. “As we get into making it available for system designers, there is an opportunity for us to look at the use models, and what are the design environments those teams want to work on? How can we incorporate that? You want system designers to interact with the virtual prototypes in a way that is meaningful for them. It’s all about identifying the end users and mapping the use models to them. It is an area where there’s a lot we can do, and there’s a lot we should be doing.”

The tools and methodologies have to match the needs at each level. “The guys doing integration verification aren’t the guys who know each of the blocks,” says Cadence’s Heaton. “Time to debug or turnaround time is becoming increasingly important. The number of debug cycles you can run in a day is critically problematic. If the tools can point you to the first-order place, it can save hours of debug. We are at the beginning of this journey. The learning is underway, and the way we use those tools is something that is going to get better.”

AI may help. “Despite the fact that humans have the best neural network, our I/O is still more or less serial,” says Matt Graham, product engineering group director at Cadence. “Maybe we can handle two or three parallel tracks, but certainly no more than that. Machines can consider all these things in parallel. They might use a simple algorithm, or a simple set of AI, to do something across that massively parallel, highly integrated thing. But that is different from what we are able to do ourselves. Maybe it is things like the last time we had a revision or what has changed, or identifying where the behavior differs, or what were the parameters that were changed in an IP.”

Conclusion
System complexity is overwhelming many of the tools and methodologies in place today. Techniques used in the past, while still valuable, are not sufficient. The industry has been seeing many of these problems in the area of functional verification, but that is only the tip of the iceberg. Given how little progress has been made in the most well understood area, progress is not likely to be fast in many of the other areas — particularly those being driven by advanced packaging.


