The Importance Of Metal Stack Compatibility For Semi IP

Reduce cost and avoid re-routing by paying attention to metal stack requirements.


Architects and front-end designers usually leave the back end to the physical designers: they know there can be different numbers of metal layers, but they may not realize that the characteristics of each metal layer can vary layer by layer as well, and that different chips use different metal stack-ups to optimize for their requirements.

This slide from IDF14 gives a simple summary of the breadth of variation; today's 16nm processes have even more layers and options (20-30 different metal stacks).

Since each extra metal layer adds to cost (roughly 5-10% higher die cost per additional metal layer), cost-sensitive die will be implemented in the fewest possible layers.
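
To make the cumulative effect concrete, here is a minimal Python sketch; the 5-10% per-layer premium is from the figures above, while the baseline cost and layer counts are illustrative assumptions, not foundry data. Four extra layers compound to roughly 1.2x-1.5x the baseline die cost:

    # Minimal sketch of the cumulative layer-cost effect described above.
    # The 5-10% per-layer premium is from this article; the baseline cost
    # and layer counts are illustrative assumptions, not foundry data.

    def die_cost(base_cost: float, extra_layers: int, per_layer_premium: float) -> float:
        """Compound the per-layer premium over each additional metal layer."""
        return base_cost * (1 + per_layer_premium) ** extra_layers

    base = 1.00  # normalized die cost at the minimum layer count
    for extra in range(5):
        low = die_cost(base, extra, 0.05)   # 5% premium per extra layer
        high = die_cost(base, extra, 0.10)  # 10% premium per extra layer
        print(f"+{extra} layers: {low:.2f}x to {high:.2f}x baseline die cost")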

The highest-performance chips typically have the most metal layers because they need the shortest routes and have the largest die sizes with the most routing congestion (FPGA chips almost always fall into this category).

Chips constructed with standard cells will have their lower metal layers standardized by the metal stack-up of the standard cells.

The choice of other semiconductor IP will drive the metal stack-up for several more layers: RAMs, SERDES, DDR PHYs, etc., may need a few more metal layers than standard cells.

Above that, the metal characteristics of each layer will be determined by the nature of the signal routing, with the top two layers typically being wide, thick metal for Vdd and ground planes running over the entire die.

When chip designers are considering a semiconductor hard IP block for their design, they will insist that the semi IP supplier give them a design that is compatible with the customer's metal stack.

The fewer layers the hard IP requires, the more metal stacks it will be compatible with.
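
A rough way to picture this is as a prefix match, shown in the hypothetical Python model below (layer names and stacks are made up for illustration; a real compatibility check also involves layer thickness, pitch, and via rules). An IP that only uses the lower layers fits any stack that starts with those layers, so requiring fewer layers means matching more stacks:

    # Simplified, hypothetical model of metal stack compatibility (not a real
    # design-rule check: actual compatibility also depends on layer thickness,
    # pitch, and via rules). A stack is an ordered bottom-up list of layer
    # types; a hard IP fits if the customer's lower layers match the ones it uses.

    def ip_fits_stack(ip_layers, customer_stack):
        """An IP routed in N lower layers fits any stack whose first N layers match."""
        return customer_stack[:len(ip_layers)] == ip_layers

    # Illustrative example: an IP routed in m1 plus four 1x layers...
    ip_stack = ["m1", "1x", "1x", "1x", "1x"]

    # ...fits a 9-layer stack that begins with those same layers,
    customer_a = ["m1", "1x", "1x", "1x", "1x", "2x", "2x", "4x", "4x"]
    # ...but not a 6-layer stack that diverges at the fourth layer.
    customer_b = ["m1", "1x", "1x", "2x", "2x", "4x"]

    print(ip_fits_stack(ip_stack, customer_a))  # True  - drop-in compatible
    print(ip_fits_stack(ip_stack, customer_b))  # False - would force a re-route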

Flex Logix’s EFLX eFPGA IP requires fewer metal layers than any other eFPGA supplier’s:

EFLX100    40nm    5 metal layers (m1+4x)
EFLX4K     28nm    6 metal layers (m1+5x)
EFLX150    16nm    6 metal layers (m1+2xa_1xd_h_2xe_vh)
EFLX4K     16nm    7 metal layers (m1+2xa_1xd_h_3xe_vhv)

We can do this while maintaining high density (LUTs/mm²) because of our revolutionary programmable interconnect, which won the 2015 ISSCC Outstanding Paper Award and is covered by three patents issued in the USA this year.

The traditional mesh interconnect takes more area and more interconnect layers to achieve the same density.

eFPGA companies that provide their IP based on their FPGA chip designs will require customers to adopt their metal stack, which uses close to the maximum number of metal layers. If a customer wants one of the other 20-30 metal stacks, the supplier needs to re-route: since different metal stacks have different thicknesses and pitches, this is not easy and involves either giving up performance or resizing transistors to recover it.

And if a customer uses a metal stack with fewer layers, it is quite likely a solution can’t be delivered at all (if the FPGA company could have used fewer layers, they would have: remember, each layer costs 5-10% more, cumulatively).

A last problem with a re-route is that it won’t be silicon-proven when you receive it.

So in evaluating eFPGA, consider the implications for the back-end designers, not just software tools, area, and performance (which are, of course, very important as well).


