Handoff Points Getting Blurry

The way information is shared throughout the design and test flow is changing.

Whether driven by popularity or just sheer complexity, the way information is passed through the design and test flow is changing.

For the past couple of process generations, there has been a concerted push by tool vendors and their customers to run more steps earlier in the flow, sometimes concurrently. While this so-called “shift left” helps speed up software development and keeps verification and debug from delaying tape-out, it also has created some confusion about where handoffs occur in some of these steps.

There is no denying the benefits of this approach. “Results of this approach provide better time to market and better QoR due to a lot of shared optimization, timing and analysis technologies between the products,” explained Mary Ann White, director of product marketing, Galaxy Design Platform at Synopsys. “Examples of this approach include accounting for more physical aspects such as floor-planning, metal layer awareness and congestion during synthesis, and then forwarding physical guidance from it to the place and route product.”
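As a rough illustration of what such forwarded physical guidance might contain, the sketch below models it as a small data record handed from synthesis to place and route. The field names and values are hypothetical, not any vendor's actual database or file format.

```python
# Illustrative only: a toy "physical guidance" record such as a physically aware
# synthesis step might hand to place-and-route. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PhysicalGuidance:
    cell_placements: dict = field(default_factory=dict)     # cell name -> (x, y) seed location
    layer_constraints: dict = field(default_factory=dict)   # net name -> allowed metal layers
    congestion_hotspots: list = field(default_factory=list) # (x, y, severity) estimates

# Synthesis would populate this from its floorplan-aware optimization...
guidance = PhysicalGuidance(
    cell_placements={"u_alu/add_0": (120.0, 340.5)},
    layer_constraints={"clk_core": ["M6", "M7"]},
    congestion_hotspots=[(200.0, 150.0, 0.8)],
)
# ...and place-and-route would read it as a starting point instead of placing from scratch.
print(guidance.cell_placements["u_alu/add_0"])
```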

At the same time, however, changing flows in a large design operation can be disruptive.

“The netlist cannot be thrown over the wall for test insertion,” White said. “DFT insertion is built into synthesis, allowing it to optimize for the design and test goals concurrently, eliminating congestion and timing issues that otherwise will pop up later in the flow.”
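The core of scan-based DFT insertion is swapping ordinary flip-flops for scan-equivalent flops and stitching them into chains; doing this inside synthesis lets the tool weigh test structure against timing and congestion at the same time. The toy sketch below shows only the chain-balancing idea, with invented instance names, and is not how any production tool implements it.

```python
# A minimal sketch of scan stitching: distribute flops across balanced scan chains.
# Real tools do this inside synthesis so timing and congestion are optimized together.
def stitch_scan_chains(flops, num_chains):
    """Distribute flip-flop instance names across num_chains balanced scan chains."""
    chains = [[] for _ in range(num_chains)]
    for i, flop in enumerate(flops):
        chains[i % num_chains].append(flop)  # round-robin keeps chain lengths within 1
    return chains

flops = [f"u_core/reg_{i}" for i in range(10)]
for n, chain in enumerate(stitch_scan_chains(flops, 3)):
    # scan_in -> first flop -> ... -> last flop -> scan_out
    print(f"chain {n}: " + " -> ".join(chain))
```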

Another example of handoff points changing, which is especially important for FinFET-based designs, White pointed out, is performing physical verification and/or rail analysis completely in-design within the place-and-route environment, rather than having to perform those steps sequentially with point tools. “With more designs requiring more custom applications, it is integral to have a seamless integration between digital and custom design [tools], as well, such as having the ability to route complex high-speed digital and mixed-signal nets.”

Making sure it is possible to perform timing and power analysis within the signoff tool, and to have it provide ECO changes back to place and route to ensure design closure, is yet another adjustment to handoff points in the design flow, she added.
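Conceptually, that signoff-to-implementation loop looks like the sketch below: signoff analysis produces a list of ECO fixes, place and route applies them, and the loop repeats until the design is clean. The function names and the fix format are assumptions made for illustration.

```python
# A schematic sketch (not a real tool flow) of the signoff-to-ECO loop: signoff
# timing/power analysis produces fixes, place-and-route applies them, repeat until clean.
def run_signoff_analysis(design):
    """Return a list of ECO fixes, e.g. ('upsize', cell) or ('buffer', net)."""
    return design.pop("violations", [])          # toy stand-in for timing/power signoff

def apply_eco(design, fixes):
    design.setdefault("eco_history", []).extend(fixes)

design = {"violations": [("upsize", "u_alu/add_0"), ("buffer", "net_1432")]}
iteration = 0
while True:
    fixes = run_signoff_analysis(design)
    if not fixes:                                # design closure: signoff is clean
        break
    apply_eco(design, fixes)                     # hand ECO changes back to place-and-route
    iteration += 1
print(f"closed after {iteration} ECO iteration(s):", design["eco_history"])
```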

Behind the changes
Bassilios Petrakis, product management director at Cadence, pointed to a number of scenarios driving the changes in where and how these handoffs occur.

One scenario involves a system company that is going to define an architecture that might have SystemC or transaction level models, eventually creating hardware either through manual or through high-level synthesis. “At some point, they have an RTL design and they are shopping around to figure out who is going to fab it for them,” Petrakis said. “If they do the implementation themselves (which puts them under a COT model), or if they’re going to have somebody else do the implementation for them, there’s a possibility that some folks might take the RTL as a starting point. Others, like IBM or GlobalFoundries, start at the netlist level. Here, when we’re thinking about test, it’s an interesting question as to who is a test architect and what are the considerations? A lot has to do with what sort of tester is going to be used, and what kinds of constraints you have from a tester point of view. You can think now of a guy who’s going to hand off at the system level — the system here referring to a design being handed off to someone that’s going to build it — and they might not have specific requirements. The implementation folks are going to take the design and use whatever methods they have developed that apply best to the tester they have and the diagnostic and yield learning techniques that they use. Just as an example, let’s say it is IBM. They will take the design and put in the test circuitry, and give you back the working part when they are done. They will do the testing on their own testers. They will take care of things.”

For certain market segments, understanding exactly how the design is positioned is becoming increasingly important.

“For example, the automotive segment is looking for more stringent testing requirements, especially if that chip is going to go into a mission-critical or safety-related type of application,” Petrakis said. “If it is going into the power train or is going into avoidance systems like ADAS and others, they have to meet certain quality criteria for test and for test coverage. If I have a design, how do I guarantee that I’m going to achieve a certain test coverage? As a person who creates the hardware, there are RTL handoff tools that can tell you up front what kind of coverage you can expect to get out of this particular design if you plug it into a test solution. These RTL handoff tools check to see if your code is good, if it is synthesizable, if it is ready to do test insertion. In many cases, they might even tell you what sort of coverage you might be getting. However, these static checking tools don’t tell you the whole story. Think of it as the design is good enough to be implemented or ready for implementation, but there must be additional steps that you have to go through during implementation to arrive at either the prediction or the target. But it is possible to be overly optimistic or pessimistic at the beginning with linting tools because they make certain assumptions.”
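A deliberately simple model of that kind of up-front estimate is sketched below: it guesses stuck-at coverage from the fraction of flops expected to be scannable and a fudge factor for hard-to-control logic. The formula and numbers are illustrative assumptions only, which is exactly why such static predictions can end up optimistic or pessimistic.

```python
# A crude stand-in for the coverage prediction an RTL handoff/lint tool might report.
# The formula is an illustrative assumption, not any tool's actual algorithm.
def estimate_stuck_at_coverage(total_flops, scannable_flops, uncontrollable_logic_fraction):
    """Rough fraction of stuck-at faults expected to be testable after scan insertion."""
    scan_ratio = scannable_flops / total_flops
    return max(0.0, min(1.0, scan_ratio * (1.0 - uncontrollable_logic_fraction)))

estimate = estimate_stuck_at_coverage(total_flops=52_000,
                                      scannable_flops=49_500,
                                      uncontrollable_logic_fraction=0.03)
print(f"predicted stuck-at coverage: {estimate:.1%}")  # ~92%; implementation decides the real number
```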

Those constraints boil down to the test requirements. “For instance, if you are doing a lot of mixed-signal, smaller type of components that have a lot of analog and digital, typically pins are at a premium, so access to the internal circuitry sometimes goes down to having just one or two or three pins, and that’s it. Then you say, ‘How do I thoroughly test this particular device in preparation for it going into an automotive application?’ It has to be very stringent. I have a limited number of pins. Now the architecture that you’re going to pick becomes very important independent of the circuit itself. The architecture of the test, how it gets applied, what sort of coverage you’re going to get, whether you’re able to diagnose it, becomes very important. You can see that you’ve done your handoff, but the icing is in the implementation of the particular test strategy.”
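A back-of-the-envelope calculation shows why the test architecture matters so much once only two or three pins are available: scan shift time scales with the pattern count and the chain length each pin has to feed, so on-chip compression that fans one pin out to many internal chains can cut test time dramatically. The formula and numbers below are illustrative assumptions, not a real test-time model.

```python
# Toy model of scan test time under a tight pin budget. Numbers are illustrative only.
def scan_test_time_s(total_flops, scan_pins, patterns, shift_clock_hz, compression_ratio=1):
    """Approximate scan test time: shift length per pattern divided by shift frequency."""
    effective_chains = scan_pins * compression_ratio       # compression fans one pin into many internal chains
    shift_cycles_per_pattern = -(-total_flops // effective_chains)  # ceiling division
    return patterns * shift_cycles_per_pattern / shift_clock_hz

# Same device, same pattern count: 2 scan pins, with and without on-chip compression.
for ratio in (1, 50):
    t = scan_test_time_s(total_flops=200_000, scan_pins=2, patterns=5_000,
                         shift_clock_hz=25e6, compression_ratio=ratio)
    print(f"compression x{ratio}: ~{t:.1f} s of scan shifting")
```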

Changing the rules
There are no hard and fast rules for handoff points. They can be as individual as the design or the company developing it.

“The handoff points are multiple, and it’s often dependent on the design flow itself how people are doing things,” said Steve Pateras, product marketing director for test at Mentor Graphics. “Depending on the tools, you may or may not be compatible with certain parts of the flow, and that may drive where and how you do things. From a high level, that’s important to understand.”

He noted that the most basic handoff in the past has been where there was a full synthesized design netlist and, just before layout, where various forms of DFT are inserted. “Pre-tapeout you will generate test patterns for ATPG, and those get handed off to the test folks using standard formats. Over the past several years, we have definitely seen a migration to, and proliferation of, different interactions between design flows and test. When I say test I primarily mean DFT—adding stuff to the design and creating more efficient test solutions. One trend seems to be moving things up the design flow, so instead of doing things post-synthesis or pre-layout or post-layout, many engineering teams want to move things up the abstraction to the RTL/SystemVerilog level. They want to do more and more at the RTL level, so any kind of DFT needs to be added there as opposed to doing it post-synthesis.”

Pateras said there also are a lot more modifications being done, and for that the concept of introspection comes into play. “For the RTL designers, we have the ability to introspect both the design as well as our IP and be able to more intelligently understand where to put things. For example, if you wanted to do scan insertion more intelligently, or if you wanted to do things like gate off X states or put in clock-gaters or what have you, we have the ability to scan through the design early on and, based on that, we can decide where to put things more intelligently. This translates to a lot more interactions between the pure functional design and DFT now than there ever were. We’re also making more use of post-layout, physical as well as power information because we want to do things more intelligently.”
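The sketch below illustrates the introspection idea in miniature: walk a design representation, flag likely X sources such as uninitialized memories or non-resettable flops, and report them as candidate locations for X-blocking or control logic. The data structure and rules are invented for illustration and do not reflect any particular tool.

```python
# Toy "introspection" pass: flag places where an X could propagate so DFT logic
# (X-blocking gates, control points) can be inserted there. Names and rules are invented.
netlist = {
    "u_fifo/mem":     {"type": "memory", "initialized": False},
    "u_ctrl/state_q": {"type": "flop",   "has_reset": True},
    "u_dsp/acc_q":    {"type": "flop",   "has_reset": False},
    "u_io/pad_in":    {"type": "input",  "has_reset": True},
}

def find_x_sources(netlist):
    """Return instances likely to produce unknown (X) values in simulation or test."""
    sources = []
    for name, attrs in netlist.items():
        if attrs["type"] == "memory" and not attrs["initialized"]:
            sources.append((name, "uninitialized memory"))
        if attrs["type"] == "flop" and not attrs.get("has_reset", False):
            sources.append((name, "non-resettable flop"))
    return sources

for inst, reason in find_x_sources(netlist):
    print(f"candidate for X-blocking logic: {inst} ({reason})")
```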

Libraries add yet another level of confusion. “Cell-aware technology, for example, characterizes cell libraries to be able to better understand how to create test patterns,” he said. “Instead of just looking at a netlist, we’re now doing detailed analog simulation of the libraries and taking that information to drive test pattern generation. This is a key handoff point, which is now a library handoff.”
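The idea behind cell-aware test can be sketched as a lookup: analog characterization of each library cell yields a table mapping internal defects to the input combinations that expose them, and pattern generation then targets those combinations. The defect names and detecting patterns below are invented for illustration, not real characterization data.

```python
# Simplified sketch of the cell-aware concept. The defect table is invented for illustration.
cell_aware_model = {
    "NAND2_X1": {
        "M1_drain_open": [(0, 1)],            # input (A, B) pairs that detect this defect
        "M3_gate_short": [(1, 1), (1, 0)],
    },
}

def detecting_patterns(cell, defects=cell_aware_model):
    """Collect a small set of input patterns covering a cell's internal defects."""
    needed = set()
    for patterns in defects[cell].values():
        needed.add(patterns[0])               # toy choice: take the first detecting pattern
    return sorted(needed)

print("NAND2_X1 patterns to target:", detecting_patterns("NAND2_X1"))
```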

As a result of all of this evolution in design and test, handoff points today are more distributed and comprehensive than in the past. “There are a lot of different points where information is handed off, and it is bi-directional. It’s not just taking the design information and using it, it’s going both ways,” Pateras said, noting the information formats are evolving, as well. “Originally it was just very simple pattern data, and now it is physical information, power information, LEF and DEF, along with library information. There are also different formats being used now for how things are described, such as the iJTAG standard for IP integration.”
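To make the breadth of that exchange concrete, the sketch below groups a hypothetical set of handoff artifacts flowing in each direction. The file names and groupings are made up; only the format names (STIL pattern data, LEF/DEF physical data, UPF power intent, ICL descriptions for IEEE 1687 iJTAG) reflect commonly used standards.

```python
# A hypothetical "handoff manifest" illustrating how many artifacts now travel between
# design and test, in both directions. File names and grouping are invented.
handoff_to_test = {
    "patterns":       ["core_stuckat.stil"],         # ATPG pattern data
    "physical":       ["floorplan.def", "tech.lef"], # layout info for layout-aware diagnosis
    "power":          ["core.upf"],                  # power intent consulted during pattern generation
    "ip_integration": ["embedded_ip.icl"],           # iJTAG/IEEE 1687 instrument descriptions
}
handoff_back_to_design = {
    "diagnosis":   ["yield_paretos.csv"],            # test results fed back for yield learning
    "dft_netlist": ["core_scan_inserted.v"],
}
for direction, artifacts in (("design -> test", handoff_to_test),
                             ("test -> design", handoff_back_to_design)):
    count = sum(len(files) for files in artifacts.values())
    print(f"{direction}: {count} artifacts")
```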

Handoff points cross new lines
Sharing design and test information in new ways impacts relationships between information providers, as well.

“There are so many companies now that exist in the ecosystem who we previously would have just called design service companies, but who now have really specialized in specific areas where it’s not your father’s design service company anymore,” said Drew Wingard, CTO of Sonics. “There are a couple of situations we are in right now where, in one case, it’s a systems company that to do what they need to do, needs a specialized chip. In the other case, it is a company building a large subsystem, but that subsystem requires chips. In both cases they’re working in the space of signal processing, so their core competence is in different aspects of signal processing either for communications or recognition of certain things. In both cases, they are focused on using a specific collection of processors for attacking the challenge. And in both cases, they really don’t want to have to master the silicon side of it all, so they’ve linked themselves up with an IP vendor who can provide the underlying processing technology, and with a company we might consider a design services company who has a demonstrated strength in helping model that specific style of IP company processor—especially in these signal processing applications. But they also have another part of their business which can provide turnkey or quasi-turnkey silicon services to realize these things.”

Doing turnkey silicon designs for a systems company isn’t new. “But typically the people offering such turnkey designs tend to be pretty large companies that we would have called ASIC companies before, like e-Silicon, Open-Silicon, VeriSilicon, but this is a much smaller outfit that really does have a specialization around this specific signal processing IP in how you model it, how you deliver confidence to the system company that they’re going to be able to optimize their algorithms around it, and model it, and understand its performance,” he explained. “In both cases we’re being brought in as someone who can help that ecosystem company, that design services company, be more efficient in how they get this idea to silicon more quickly. These are the kinds of relationships that often start off in an FPGA context for doing early proving of the algorithms but never with the intention that the FPGA is the end, but rather the FPGA is an intermediate stepping stone to really wring out the algorithms.”

Conclusion
Changing the way information is shared among the various parties in the design and test flow is ultimately for the better. “This entire shift left methodology enables monotonic convergence, which has resulted in reduced iterations overall along with providing the designer with more downstream tool awareness and the ability to make changes earlier,” said Synopsys’ White.

But as with all changes occurring with complex technology under extreme time-to-market pressure, none of this will happen as smoothly or simply as the benefits might imply.


