Are Test Engineers More Highly Evolved?

Being able to test and analyze earlier in the process saves time on the back end and cuts time to market.


In a December 2010 blog post, my colleague Ron Craig wrote that 94% of respondents to a survey said that timing constraints were a problem. Well, no surprise there. But more than 70% said they planned to simply “try harder” during their next project to avoid these problems. Did they really think that was a viable solution?

That post featured an apt illustration of the problem, and it gave me a good laugh.

[Illustration: ostrich]

But I want to set the record straight: not everyone thinks “trying harder” is the only solution. Test engineers are proving to be more evolved and forward-thinking.

It wasn’t long ago that the design community seemed not to care about test. At the front end, RTL designers focused on clocks, reset connectivity and scannability. Yes, they needed to pay attention to design-for-test guidelines and SoC DFT connectivity, but they were also perfectly happy to throw their design over the wall to the DFT expert. It was the DFT expert’s job to worry about test quality and coverage, at-speed test and ATPG/MBIST. When problems surfaced, the DFT expert would throw the design back over the wall to the designer for another iteration. The two sides seemingly lived in different worlds.

That situation is changing. Test is getting more visibility throughout the organization, and more is being done up front. What is driving this?

Of course, a big factor is the increasing complexity of designs. Given the size of today’s SoCs, designers can no longer create every IP block from scratch, so third-party IP is being adopted much more widely. With that comes the need to qualify and verify the IP: you need to know whether the chip will function according to your design intent, and that requires careful test planning. Also, as we move down the technology curve past 45nm, “at-speed” defects have become a problem. We are no longer looking merely at stuck-at faults; at-speed (delay) defects only manifest when the chip is clocked at its functional speed, so slow-speed stuck-at patterns miss them. The cost of the chip is increasing because at-speed testing must be based on an actual use model rather than a limited test pattern set. And that is making the test problem quite a bit more difficult.
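To make the stuck-at model concrete, here is a minimal sketch of serial fault simulation in Python. The two-gate circuit, pattern set and all names are invented for illustration; real ATPG engines use fault collapsing and concurrent simulation on millions of faults, but the coverage metric they report is the same ratio computed here.

```python
# Toy circuit: y = (a AND b) OR c. A stuck-at fault forces one net
# to a constant 0 or 1; a pattern detects it if the output diverges
# from the fault-free response.
NETS = ["a", "b", "c", "n1", "y"]

def evaluate(pattern, fault=None):
    """Return output y for one input pattern.
    fault is (net, stuck_value), or None for the fault-free circuit."""
    v = {}
    def drive(net, value):
        # A stuck-at fault overrides whatever value the net would carry.
        v[net] = fault[1] if fault and fault[0] == net else value
    for net in ("a", "b", "c"):
        drive(net, pattern[net])
    drive("n1", v["a"] & v["b"])   # AND gate
    drive("y", v["n1"] | v["c"])   # OR gate
    return v["y"]

# Full fault list: every net stuck-at-0 and stuck-at-1 (10 faults).
faults = [(net, sv) for net in NETS for sv in (0, 1)]

# A deliberately incomplete pattern set, like a partial ATPG run.
patterns = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 0}]

detected = set()
for f in faults:
    for p in patterns:
        if evaluate(p, fault=f) != evaluate(p):  # faulty != good machine
            detected.add(f)
            break

coverage = 100.0 * len(detected) / len(faults)
print(f"stuck-at coverage: {coverage:.1f}% ({len(detected)}/{len(faults)})")
# -> 80.0% here; two more patterns, (1,0,0) and (0,0,1), reach 100%.
```

An at-speed (transition delay) fault cannot be modeled this way at all: it requires a two-pattern launch-and-capture sequence applied at the functional clock rate, which is exactly what drives up pattern counts and tester cost.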

Then there’s the challenge created by additional functionality. Today’s mobile devices function as GPS navigators and MP3 players, along with being phones, cameras, video recorders and much more. The SoCs behind these mobile devices need to move between these modes seamlessly, so the chip must be designed to enter a particular mode of operation whenever required. Each of those modes has to be tested, and that too requires more sophisticated test strategies.

All of this is against the ever-present backdrop of the time-to-market challenge. In a fast-moving, consumer-driven environment, missing a market window can be a death knell for a product or even a company; a matter of weeks can make or break the success of a product line. Test can have a considerable impact on design schedules: functional changes made during post-synthesis place-and-route can profoundly degrade test quality, and throwing the design back to the designer to fix it could add weeks to the schedule.

A better way is emerging. We are seeing growing use of RTL test analysis tools: tools that verify IP quality early, measure the completeness of at-speed tests, and predict whether a design will achieve 98% or 99% stuck-at coverage. Test engineers, engineering managers and RTL designers alike are realizing that addressing DFT early, at RTL, makes the back-end design flow easier. RTL engineers and DFT experts are working together more effectively. And that is highly evolved behavior.
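What does RTL-stage analysis actually catch? Here is a deliberately tiny, hypothetical sketch of one classic DFT rule check, written as a regex over Verilog text. A production tool works on the elaborated design, not source text, but the rule itself is a common guideline: a flop clocked by a derived or gated signal is hard to control during scan shift.

```python
import re

# Toy DFT rule: flops should be clocked by a primary clock, not by a
# gated/derived signal, or they become uncontrollable in scan mode.
PRIMARY_CLOCKS = {"clk"}  # assumed top-level clock names

RTL = """
module demo(input clk, input en, input d, output reg q1, output reg q2);
  wire gclk = clk & en;              // gated clock
  always @(posedge clk)  q1 <= d;    // OK: primary clock
  always @(posedge gclk) q2 <= d;    // violation: gated clock
endmodule
"""

# Find every clock named in a posedge/negedge event control.
for edge, clock in re.findall(r"(posedge|negedge)\s+(\w+)", RTL):
    if clock not in PRIMARY_CLOCKS:
        print(f"DFT warning: flop clocked by derived signal '{clock}' "
              f"({edge}); consider a clock-gating cell with test enable")
```

Catching a violation like this at RTL costs a one-line fix, such as an integrated clock-gating cell with a test-enable override. Catching it after place-and-route costs an iteration through the entire back end.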
