The Process Design Kit: Protecting Design Know-How

The key innovations that made pure-play foundries and the fabless revolution possible.

Once upon a time, integrated circuits (ICs) were built by the same companies that designed them. The design of an IC was tightly integrated with the manufacturing processes available within each company. In those days, when designs contained hundreds of transistors, companies modeled each feature in an IC at a first-principles level, meaning each transistor or fundamental device was analyzed and characterized at the lowest, most basic level of electrical operation. To facilitate design and ensure accuracy, they used custom technology computer-aided design (TCAD) tools that provided device models augmented with schematic symbols and layout parameterized cells (Pcells), but that was about as far as automated verification went. The company controlled every aspect of an IC, from design to production. They did it this way because, well, there simply weren’t any alternatives at the time. However, the computing power and time needed to design ICs this way acted as an intrinsic brake on the number of components a design could contain.

Occasionally, however, those manufacturing lines sat idle. In Economics 101, we all learned that’s called “excess capacity,” and it’s generally not a good thing. If your machines and staff aren’t doing anything, they’re costing you money. So the semiconductor companies did what any smart business would do—they invented a new industry. They offered up that excess capacity to other companies. At some point, someone realized that by taking advantage of this manufacturing capacity, they could create an IC design company that had no manufacturing facilities at all. This was the start of the “fabless” model of IC design and manufacturing that has evolved into what we have today.

In 1987, someone realized that same model could go the other way as well. The Taiwan Semiconductor Manufacturing Company (TSMC) was founded as the first “pure-play” foundry—a company that exclusively manufactured ICs for other companies. This separation between design and manufacturing reduced the cost and risk of entering the IC market, and led to a surge in the number of fabless design companies.

Looking back at the last 20+ years of the fabless design revolution, you can quickly be overwhelmed by the sheer amount of innovation and improvement in the industry. Gordon Moore’s now-famous observation about the growth in IC component counts quickly became Moore’s Law, as the digital electronics industry began its expansion into nearly every industry and business model. The end of Moore’s Law has been predicted time and again, but it hasn’t happened yet. New materials and new approaches were supposed to replace silicon, but that hasn’t happened yet, either. The primary reason for this constancy has been the ability of the foundries not just to innovate the silicon processes, but to do so in a way that introduced minimal risk for both the IC design company and the foundry. The magic making it all possible? The process design kit (PDK).

The advent of the pure-play foundries required a new approach to design layout and verification. With manufacturing moving from the design company to the pure-play foundries, the foundries had to find a way to guarantee the designers that their fully-designed system would actually yield at an acceptable (and profitable) rate. At the same time, to support the increasing complexity of designs, the verification focus had to shift from individual components to a more abstract and standardized level.

Of course, it wasn’t that simple. Just because individual devices can be successfully manufactured doesn’t mean they can be connected any old way and still be manufacturable as a whole. How the completed design reacts to manufacturing issues like lithography effects, chemical-mechanical planarization (CMP), particle contamination, and process deficiencies also affects the final yield and performance.

The first step was the introduction of design rules and design rule checking (DRC). Design rules defined the minimal manufacturing requirements of a particular process at a given foundry. Design rule checks confirmed that the drawn geometries satisfied these manufacturing constraints. They also helped the designers determine which drawn layers represented which masks and manufacturing steps (via layer mapping). Electronic design automation (EDA) companies provided tools that automated the constraint checking, enabling designers to quickly scan the layout and find any configurations that violated the design rules.
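
To make this concrete, here is a minimal sketch of what a width and spacing check does conceptually, written in Python rather than in any commercial rule-deck language. The layer, rectangles, and rule values are all hypothetical; production rule decks contain thousands of such checks evaluated by far more sophisticated geometry engines.

```python
# Minimal conceptual DRC sketch. Rectangles are (x1, y1, x2, y2) tuples
# on a single hypothetical layer; the rule values are illustrative only.

MIN_WIDTH = 0.10    # hypothetical minimum feature width (um)
MIN_SPACING = 0.12  # hypothetical minimum same-layer spacing (um)

def width_violations(shapes):
    """Flag rectangles whose smaller dimension is below MIN_WIDTH."""
    return [r for r in shapes
            if min(r[2] - r[0], r[3] - r[1]) < MIN_WIDTH]

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def spacing_violations(shapes):
    """Flag pairs of rectangles closer than MIN_SPACING (brute force)."""
    bad = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            if 0 < spacing(a, b) < MIN_SPACING:
                bad.append((a, b))
    return bad

metal1 = [(0.0, 0.0, 0.5, 0.08),   # 0.08 wide: violates MIN_WIDTH
          (0.0, 0.2, 0.5, 0.35),
          (0.0, 0.4, 0.5, 0.55)]   # 0.05 gap to the shape below it
print(width_violations(metal1))
print(spacing_violations(metal1))
```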

But designers also needed to know that the IC circuit met the intended timing targets and that the corresponding layout actually created the intended circuitry. The foundries took the device models used to characterize single devices and expanded them to pre-characterize commonly used groups of devices. In particular, the advent of device models and Simulation Program with Integrated Circuit Emphasis (SPICE) simulation made it possible to use pre-characterized device components that could be simulated together as a system in a much more reasonable time. SPICE models representing a few pre-characterized devices (e.g., MOSFETs) enabled abstracted circuit simulations that ensured the circuit met the intended timing target. Layout vs. schematic (LVS) comparison verified that a physical layout did, in fact, implement the circuitry as designed.
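
To illustrate how a PDK’s device models plug into this flow, here is a small Python snippet that emits a SPICE netlist for a CMOS inverter. The included model file, the model names (nch/pch), and the device sizes are placeholders rather than any real foundry’s PDK; the point is that the designer references the foundry’s pre-characterized models instead of re-deriving device physics from scratch.

```python
# Emit a minimal SPICE netlist for a CMOS inverter whose transistor
# behavior comes from a (hypothetical) foundry PDK model file.
netlist = """\
* CMOS inverter built on hypothetical PDK device models
* placeholder model file that would ship with the PDK:
.include "example_pdk_models.sp"

VDD vdd 0 1.2
VIN in  0 PULSE(0 1.2 0 50p 50p 1n 2n)

* model names (pch/nch) and W/L values are illustrative only
MP out in vdd vdd pch W=0.4u L=0.06u
MN out in 0   0   nch W=0.2u L=0.06u
CL  out 0 5f

.tran 10p 4n
.end
"""

with open("inverter.sp", "w") as f:
    f.write(netlist)
# The file can then be run in any SPICE-class simulator,
# e.g., ngspice in batch mode: ngspice -b inverter.sp
```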

The combination of pre-characterized device models and physical manufacturing constraints constituted the minimal requirements for fabless design success, and was the first incarnation of what we now refer to as PDKs. This seemingly trivial shift made it possible to separate device manufacturing and characterization from the design process, which in turn enabled designers to create complex designs based on more and more components. A designer could now take advantage of well-characterized devices already known to be manufacturable by a pure-play foundry to quickly and easily add new or additional functionality to a design and, using the PDK in combination with EDA automation, verify the design’s performance and manufacturability as a whole.

While these elements represented the bare minimum of design verification needed to ensure yield and performance, minimal did not equal sufficient for commercial success. As processes targeted continually smaller geometries, each new manufacturing process began to be affected by previously unimportant issues. As more and more devices were connected, increased amounts of metallization were needed to connect them, but this metallization could no longer be assumed to provide ideal connections. Unintended (parasitic) resistances and capacitances had to be identified, and their effects on circuit performance accounted for. Parasitic extraction was added to the design verification process, along with new tools and flows to enable designers to understand the parasitic impacts on design behavior. This expansion in verification requirements resulted in yet another hit to time-to-market schedules and computation limits. Yet more abstraction was required.
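
As a rough sketch of what parasitic extraction feeds into, the snippet below estimates a wire’s resistance and capacitance from hypothetical per-square and per-area process values, then computes its Elmore delay. Real extraction tools use field solvers or pattern-based models calibrated in the PDK, but the principle is the same: the metal is no longer ideal, and its R and C must enter the timing picture.

```python
# Back-of-the-envelope parasitics for a single wire. The process values
# are invented; real extractors use PDK-calibrated models instead.

R_SHEET = 0.08   # hypothetical metal sheet resistance (ohm/square)
C_AREA = 2e-17   # hypothetical capacitance per um^2 of metal (F)

def wire_rc(length_um, width_um):
    """Total resistance and capacitance of a straight wire segment."""
    r = R_SHEET * (length_um / width_um)  # number of squares * ohm/square
    c = C_AREA * (length_um * width_um)   # area * F/um^2
    return r, c

def elmore_delay(length_um, width_um, n_segments=100):
    """Elmore delay of the wire modeled as a distributed RC ladder."""
    r_tot, c_tot = wire_rc(length_um, width_um)
    r_seg, c_seg = r_tot / n_segments, c_tot / n_segments
    delay, upstream_r = 0.0, 0.0
    for _ in range(n_segments):
        upstream_r += r_seg
        delay += upstream_r * c_seg  # each cap sees all upstream resistance
    return delay  # approaches 0.5 * R_total * C_total for a uniform line

r, c = wire_rc(1000.0, 0.1)  # a 1 mm long, 0.1 um wide wire
print(f"R = {r:.0f} ohm, C = {c * 1e15:.2f} fF, "
      f"delay = {elmore_delay(1000.0, 0.1) * 1e12:.2f} ps")
```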

Now the digital design flow really begins to blossom. Foundries realized they could provide pre-characterized intellectual property (IP), such as standard cells, library compilers, etc., and design companies could combine these IP blocks with pre-characterized timing libraries. Designers now had a way to combine large sets of proven digital pieces into a bigger system, and take advantage of a higher level of abstraction using static timing analysis tools. PDKs were expanded to include timing libraries for pre-characterized standard cells, as well as validation for certified third-party IP.
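
Standard-cell timing libraries (commonly in the Liberty format) characterize each timing arc as a lookup table indexed by input transition and output load. The sketch below shows, with invented numbers, the bilinear interpolation a static timing analysis tool performs over such a table.

```python
from bisect import bisect_right

# A toy Liberty-style delay table for one timing arc of a standard cell.
# Rows: input transition (ns); columns: output load (pF). Values invented.
slews = [0.01, 0.05, 0.20]
loads = [0.001, 0.010, 0.050]
delay = [[0.020, 0.045, 0.150],
         [0.030, 0.060, 0.170],
         [0.060, 0.095, 0.210]]

def _bracket(axis, x):
    """Indices of the two axis points bracketing x (clamped at the ends)."""
    i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
    return i, i + 1

def arc_delay(slew, load):
    """Bilinear interpolation, as an STA tool performs over such tables."""
    i0, i1 = _bracket(slews, slew)
    j0, j1 = _bracket(loads, load)
    tx = (slew - slews[i0]) / (slews[i1] - slews[i0])
    ty = (load - loads[j0]) / (loads[j1] - loads[j0])
    top = delay[i0][j0] * (1 - ty) + delay[i0][j1] * ty
    bot = delay[i1][j0] * (1 - ty) + delay[i1][j1] * ty
    return top * (1 - tx) + bot * tx

print(f"delay at slew=0.03 ns, load=0.02 pF: {arc_delay(0.03, 0.02):.4f} ns")
```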

Of course, trying out all possible combinations of these blocks and checking the different timing combinations to reach timing closure is not trivial. The answer? Automating that trial-and-error approach in the form of place-and-route (P&R) tools that could rapidly evaluate many possible routing combinations to find “legal” layout options. Working in concert with the PDK, P&R was continuously extended and enhanced to account for new complexities in layout design requirements, as well as impacts on timing and power.
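
At the heart of the routing side sits a search algorithm. Below is a minimal sketch of the classic starting point, Lee’s maze routing algorithm: a breadth-first search over a routing grid that finds a shortest unblocked path between two pins. Production routers layer timing, congestion, and design-rule costs on top of this core idea; the grid and blockages here are invented.

```python
from collections import deque

def route(grid, src, dst):
    """Return a shortest unblocked path from src to dst, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:  # walk predecessors back to the source
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no legal route exists

# 0 = free routing track, 1 = blocked (e.g., an already-routed net)
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(route(grid, (0, 0), (2, 0)))  # detours around the blockage
```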

The next challenge the industry faced was the limit of lithography technology. As lithography reached the point where the shapes of components were significantly smaller than the wavelength of light used to image them onto the wafer, it became impossible to faithfully reproduce the intended physical layout. This time, the solution lay on the manufacturing side. Dedicated EDA software known as optical proximity correction (OPC) automatically modified the original layout prior to lithography to ensure the final silicon matched the original layout intent.
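
As a toy illustration of the problem OPC solves, the one-dimensional sketch below models “printing” as a blur-and-threshold of the drawn pattern, which shrinks small features, and then applies a simple rule-based edge bias so the printed result lands back on target. The blur radius, threshold, and bias are invented numbers, not a real lithography model.

```python
# Toy 1-D lithography model: printing = boxcar blur + threshold.
# All constants are invented for illustration.

def print_image(drawn, radius=2, threshold=0.7):
    """Crude 'printed' pattern: blur the drawn pattern, then threshold."""
    n = len(drawn)
    printed = []
    for i in range(n):
        window = drawn[max(0, i - radius):i + radius + 1]
        printed.append(1 if sum(window) / len(window) >= threshold else 0)
    return printed

def bias_edges(drawn, bias=1):
    """Rule-based OPC: grow each feature outward by `bias` pixels."""
    n = len(drawn)
    out = drawn[:]
    for i in range(n):
        at_edge = drawn[i] and ((i > 0 and not drawn[i - 1]) or
                                (i < n - 1 and not drawn[i + 1]))
        if at_edge:
            for j in range(max(0, i - bias), min(n, i + bias + 1)):
                out[j] = 1
    return out

target = [0] * 4 + [1] * 5 + [0] * 4
print("target :", target)
print("printed:", print_image(target))              # feature shrinks
print("w/ OPC :", print_image(bias_edges(target)))  # matches the target
```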

However, while OPC is a manufacturing task, it can and does have an impact on the design side. Even with the best OPC techniques, it is not possible to ensure that every layout scenario can be properly manufactured. If a layout must be changed, even slightly, what is the lithographic impact? More importantly, how can designers avoid creating a layout in which some catastrophic failure might occur? Answer: add design for manufacturing (DFM) recommended rules and lithographic process checking (LPC) to the PDK.

LPC tools enable designers to search for those conditions under which a design can satisfy all the DRC requirements, and all timing and related electrical behavioral expectations, and still fail in manufacturing due to faulty lithographic reproduction. Unfortunately for the bottom line, lithographic modeling is very, very compute-intensive and time-consuming. Incorporating DFM recommended rule checking and pattern matching into the design verification flow allowed designers to adjust designs to avoid well-known lithographic “hotspot” configurations, as well as implement proven optimizations to improve yield, before the design reached the foundry. At 20nm, the introduction of multi-patterning technology required new checks that could determine if a design was properly divided between multiple masks. All of these techniques helped designers optimize their designs to avoid manufacturing failures and/or improve yield ramp and design performance.
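
The multi-patterning check mentioned above can be framed as a graph coloring problem: features closer together than the same-mask spacing limit must land on different masks, and for double patterning a legal split exists exactly when the resulting “conflict graph” is two-colorable (an odd cycle means no legal split). Here is a sketch with the geometry abstracted into a ready-made conflict list; the feature names are invented.

```python
# Conceptual double-patterning decomposition check: 2-color the
# conflict graph, or report that no legal two-mask split exists.

def two_color(features, conflicts):
    """Assign mask 0/1 to each feature, or return None on an odd cycle."""
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    mask = {}
    for start in features:
        if start in mask:
            continue
        mask[start] = 0
        stack = [start]
        while stack:  # iterative DFS over one connected component
            f = stack.pop()
            for g in adj[f]:
                if g not in mask:
                    mask[g] = 1 - mask[f]  # neighbors get the other mask
                    stack.append(g)
                elif mask[g] == mask[f]:
                    return None  # odd cycle: not two-mask decomposable
    return mask

feats = ["A", "B", "C", "D"]
print(two_color(feats, [("A", "B"), ("B", "C"), ("C", "D")]))  # legal split
print(two_color(feats, [("A", "B"), ("B", "C"), ("C", "A")]))  # odd cycle
```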

During this time, the use of electronics in markets such as medical devices, transportation, aerospace, and communications increased the demand for highly reliable products. Meeting yield and performance targets was no longer the only crucial factor in IC production. With today’s many mission-critical uses of ICs, from automotive safety to medical device operation to the Internet of Things and cloud computing, it is not just critical that a device works correctly in test, but that it continues to work with full performance and accuracy, sometimes for decades, in the customer’s application. Timing analysis was expanded, and power and stress analysis were added to the verification arsenal to ensure ICs would meet market demands for low-power, high-performance devices.

Ensuring electrical performance reliability is one of the newest tasks in IC design and verification. To accommodate the need for high reliability and long product lifetimes, design rules were expanded beyond pure catastrophic printing issues to identify reliability and performance issues, such as electrostatic discharge protection, latch-up detection, time-dependent dielectric breakdown susceptibility, and electromigration protection. These sophisticated, context-aware reliability checks, which consider layout and circuit characteristics together, helped ensure designs met product lifetime and performance reliability expectations.
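
As one concrete example of a context-aware check, consider electromigration: a wire’s current density (the current divided by the wire’s cross-sectional area) must stay below a process limit, which requires circuit information (the current) and layout information (the geometry) together. The limits and dimensions below are invented.

```python
# Sketch of an electromigration screen. J = I / (W * t) per net,
# checked against a process limit. All numbers are invented.

J_MAX = 1.0e6          # hypothetical EM current-density limit (A/cm^2)
THICKNESS_CM = 2.0e-5  # hypothetical metal thickness (0.2 um, in cm)

def em_check(nets):
    """Flag nets whose average current density exceeds J_MAX."""
    failures = []
    for name, current_a, width_um in nets:
        width_cm = width_um * 1e-4
        j = current_a / (width_cm * THICKNESS_CM)  # A/cm^2
        if j > J_MAX:
            failures.append((name, j))
    return failures

nets = [("vdd_rail", 5e-3, 4.0),  # 5 mA through a 4 um power rail: OK
        ("sig_out", 2e-3, 0.2)]   # 2 mA through a 0.2 um wire: too hot
for name, j in em_check(nets):
    print(f"{name}: J = {j:.2e} A/cm^2 exceeds {J_MAX:.1e}")
```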

Much like the evolution from the Model T to today’s luxury automobiles, the PDK has progressed significantly over time, in conjunction with the EDA infrastructure, to make it as safe and easy as possible to design a system-on-chip (SoC) targeted to an advanced process node. But still, the death knell for Moore’s Law continues to sound, giving designers cause to worry. Can we really get faster designs and/or lower power? How much will it all cost, and does the level of benefit warrant the extra price? Yet here we are, continuing calmly down the path of Moore’s Law, with a clear path at least down to 3nm. Why? The simple answer is that those new SoC processes will all come with a qualified PDK, while the alternatives still lack tested, proven practices, implying significantly greater risk.

Consider the commonly referenced alternatives: micro-electro-mechanical systems (MEMS), silicon photonics, and high-density advanced packaging approaches that combine multiple die or components. In terms of PDK equivalence, we’re back in the Model T era.

In the MEMS world, design differentiation is still dominated by the unique MEMS structures designers invent, putting it at odds with the core PDK concept. Obviously, the foundries can’t guarantee they can manufacture any arbitrary MEMS shape. But what can be manufactured? To answer that question, MEMS design teams and manufacturing teams must collaborate, which reduces the time foundry teams can spend on pre-characterization. Also, because MEMS designers consider their components proprietary, the foundry may not be able to use the knowledge it has just acquired to help other customers. Without the ability to rely on well-known, characterized, and trusted components, a true PDK for MEMS remains a task to be accomplished.

Silicon photonics suffers to some extent from the same issues of proprietary control over device design. Still, there is at least progress that can be made here. Certain common components, such as rings, Y-junctions, grating couplers, etc., can be agreed upon, and it may even be possible to pre-characterize many of these design elements. We’re already seeing SPICE-like post-layout optical simulation approaches, design rules, and even simple LVS, while design companies are implementing custom layout solutions to help speed design. Even so, the majority of photonics design is still done at the base physics level, using complex TCAD-like tools or proprietary code. It seems apparent that there will be some fairly lengthy period of time before designers can make full use of pre-characterized photonics components, allowing design scaling to tens of thousands of device components or more.

And then there are the multi-die, heterogeneous packaging solutions, such as fan-out wafer-level packaging and silicon interposer-based designs. This technology also has its own set of challenges. With different pieces coming from multiple suppliers, who owns and is responsible for the design kit? How can any one entity create a design kit that works with all the components, when no one group has all the knowledge?

And yet, this is the space that has recently seen the most progress. With the foundries entering the domain, we’ve seen a rapid movement toward the development of a package assembly design kit (ADK) that provides the same benefits as the PDK does for ICs. With support from the EDA companies, designers can now plan and manage their package design intent across all their components, similar to schematic capture in the custom layout world. They can combine tool capabilities to verify the connectivity of the post-package design against the design intent. Going even further, for package types where components are placed very close together vertically, creating the potential for timing or signal impacts, parasitic extraction can be performed. In addition to the foundries, we’ve seen outsourced assembly and test (OSAT) providers adopting a similar approach, indicating that there is now a reasonable and safe path forward where a design team can begin making trade-off decisions between full SoC designs and heterogeneous multi-die package designs with confidence. With the development of ADKs, the total exposure and costs can be offset through the use of known good die, or at least trusted, high-yielding die processes.
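
To give a feel for the connectivity-verification piece of an ADK flow, here is a sketch that compares the package-level design intent (which die pins should share a net) against the connectivity recovered from the assembled layout. All net and pin names are invented.

```python
# Compare intended package-level connectivity against what was
# recovered from the assembled design (e.g., by tracing bumps/RDL).

def connectivity_diff(intent, assembled):
    """Return (missing, unexpected) pin-to-pin connections."""
    def pairs(netlist):
        conns = set()
        for pins in netlist.values():
            ordered = sorted(pins)
            for i, a in enumerate(ordered):
                for b in ordered[i + 1:]:
                    conns.add((a, b))  # every pin pair sharing a net
        return conns
    want, have = pairs(intent), pairs(assembled)
    return want - have, have - want

intent = {"DDR_CLK": ["soc.clk_out", "dram.clk_in"],
          "VDD": ["soc.vdd", "dram.vdd", "pmic.vout"]}
assembled = {"net12": ["soc.clk_out", "dram.clk_in"],
             "net7": ["soc.vdd", "dram.vdd"]}  # pmic.vout was left open

missing, extra = connectivity_diff(intent, assembled)
print("missing connections:", sorted(missing))
print("unexpected connections:", sorted(extra))
```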

So, what does all of this mean? With a little foresight, it is obvious that even having a well-established design and verification process and a collection of EDA tools is no longer sufficient. To enable design scaling such as we’ve experienced with Moore’s Law and the fabless model, we need a design ecosystem. Processes, tools, and requirements must all come together in a well-established and trusted design flow, dependent on the abstraction embodied and defined by the PDK. Future innovations will require new and enhanced PDK elements across the ecosystem that support and enable their adoption and implementation, whether that means ever-smaller process nodes for SoC designs, multi-process packaging schemes, unique technologies such as silicon photonics or MEMS, some combination of any or all of these, or something we haven’t even dreamed of yet. In the end, the PDK is the element that bridges the gap between diverse design teams and manufacturers to drive the future and the progress of technology.


