
Cutting Clock Costs On The Bleeding Edge Of Process Nodes

The clock distribution network consumes a sizable portion of the physical design and verification budget.


A recent study by McKinsey and IDC shows that physical design and verification costs are rising exponentially as transistor sizes shrink. As figure 1 shows, physical design (PD) and pre-silicon verification costs roughly double with each process leap. As companies move from node to leading node, a natural question arises: why is it becoming harder and more expensive to tape out a chip on an advanced process node? We believe two significant contributors to the rising costs are engineering resources and process variation.

Another question, albeit a less grand one, also arises: why are we talking about rising design costs in Clock Talk? The short and simple answer is size and function. As one of the largest networks on the chip, the clock distribution network consumes a sizable portion of the PD and verification budget. In addition to its vast reach, the clock network is responsible for the synchronization and movement of data, which warrants additional effort during these two design steps. If the clock network is dead, the chip is dead on arrival.

As customers select leading nodes, variation effects become increasingly difficult to predict and handle. Variation comes from many system-level sources, including power and patterning variation, and those sources add up in the device. Power variation on a leading process node is especially problematic because of heat dissipation: unpredicted thermal hotspots arise from power distribution variation, the heat spreads across the chip as thermal noise, and that noise can increase timing jitter.
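To see where jitter lands in the numbers, consider a simple setup-timing budget for one register-to-register path. The sketch below is purely illustrative; the delay, skew, and jitter values are assumptions chosen to make the point, not data from any particular node.

```python
# Illustrative setup-timing budget for one register-to-register path.
# All values are assumptions chosen for illustration, in picoseconds.
t_clk_to_q = 50      # launch flop clock-to-Q delay
t_logic    = 450     # combinational logic delay
t_setup    = 40      # capture flop setup time
t_skew     = 30      # clock skew between launch and capture flops
t_jitter   = 25      # cycle-to-cycle jitter, e.g. from thermal noise

# The clock period has to cover the entire budget.
t_period_min = t_clk_to_q + t_logic + t_setup + t_skew + t_jitter
f_max_ghz = 1e3 / t_period_min  # period in ps -> frequency in GHz

print(f"Minimum period: {t_period_min} ps -> Fmax ~ {f_max_ghz:.2f} GHz")
# Every extra picosecond of jitter subtracts directly from the logic budget.
```

Every picosecond of added jitter comes straight out of the cycle, which is why unpredicted thermal noise is so costly on a tight clock.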

On leading nodes (especially for shuttle runs), predicting all of the sources of variation and their downstream effects becomes a titanic task. Many companies don't have the resources or expertise to map these effects properly, which leaves clock architects only one option: hedging their bets by building guard bands into the clock cycle. Those guard bands eat up useful clock period, leaving architects with a smaller usable cycle and making timing closure that much harder. They begin trading off maximum frequency, area, power, and time to market. The extra design time drives up costs in the physical design bucket, and any loss in performance may add costs in software and firmware development to hide silicon deficits.
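As a back-of-the-envelope illustration of that tradeoff, the sketch below shows how a variation guard band shrinks the usable clock period and caps the achievable frequency. The 2 GHz target and the 15% guard band are hypothetical numbers chosen only to make the arithmetic concrete.

```python
# Hypothetical example: how a variation guard band caps achievable frequency.
target_period_ps = 500.0       # nominal clock period for a 2 GHz target
guard_band_fraction = 0.15     # assume 15% of the cycle is reserved for variation

# Only the remaining fraction of the cycle is available for logic, skew, and setup.
usable_period_ps = target_period_ps * (1.0 - guard_band_fraction)

# If a critical path actually needs the full 500 ps, the period must stretch instead,
# which directly lowers the maximum clock frequency.
required_period_ps = target_period_ps / (1.0 - guard_band_fraction)
f_target_ghz = 1e3 / target_period_ps
f_actual_ghz = 1e3 / required_period_ps

print(f"Usable period at the 2 GHz target: {usable_period_ps:.0f} ps")
print(f"Fmax with the guard band: {f_actual_ghz:.2f} GHz (vs {f_target_ghz:.1f} GHz target)")
```

In this hypothetical case the guard band alone costs roughly 300 MHz, before any of the other closure pressures discussed above come into play.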

During the design closure stage, teams review the final database and clean up any lingering design rule check (DRC) violations. In a smaller company (like a startup), the entire physical design team, clock architects included, will be pulled onto this labor-intensive layout task. Like the other issues in this post, DRC violations become increasingly difficult to resolve on leading nodes.

While the design team is closing DRC violations, it may still be dealing with global timing violations. A smaller company might hire consultants to solve the timing violations or push out its tapeout date to resolve both issues properly. In either situation, the added complexity of a leading node drives up the PD and verification budget.

While costs are rising for physical design and verification teams, new tools can stem the proverbial bleeding. Intelligent clock networks dynamically compensate for static and dynamic silicon variation, helping clock architects combat process- and power-related effects. With that compensation in place, architects can shrink their variation guard bands and close timing faster, maximizing performance on any process node, even on the bleeding edge.


