
Redefining ‘Good Enough’

One of the biggest differences between hardware and software engineers is how they view debugging.


The increasing amount of software content in devices and the ability to add fixes after tapeout are changing the definition of what's considered a market-ready product.

This is business as usual in the software world, where patches upon patches are considered routine. Service packs are a way of fixing problems when millions of lines of code interact with millions more lines of code in unanticipated ways. It also changes verification from a one-time, massive debug session into an ongoing debugging and recoding operation that never ends. Even chips that shipped years earlier may have to be updated with new firmware, embedded software fixes, or changes to the operating system and middleware.

This is also one of the areas where hardware and software engineers generally do not see eye to eye. Fixing things in hardware after the fact is extremely costly, often requiring new masks, new trial runs and a whole new batch of wafers. In software, patches can be shipped as executable files that remove some code, add new code and address issues that were never planned for.

In some ways, this is very good. It can reduce the time it takes to get functional chips out the door, reduce up-front debugging costs and allow companies to debug when there is a revenue stream to pay for it instead of footing the bill in advance. In other ways it is not good, because it makes it much easier for companies to roll out less expensive designs simply because they are "good enough" rather than "good to go." And as competitive pressures come into play, even companies that make great chips today may be forced to compete on schedules rather than up-front quality, putting less emphasis on well-designed architectures and a deep understanding of verification tools and processes at the back end.

This is particularly worrisome at the advanced nodes, where complexity may lead companies to cut corners and solve problems later, either with patches or by turning on functions that are not operational at tapeout. In software there are plenty of examples of this kind of thinking, but in hardware it is a foreign concept. Until now, hardware has always provided the stable foundation against which software can be modified. If that foundation is less stable, then all sorts of unexpected problems can erupt, including serious flaws that cause fatal errors.

The move to increased software content is inevitable, but to avoid problems it may require a rethinking of how software is created, tested and modified—and just how much control software has over the hardware.

—Ed Sperling


