
Chip Dis-Integration

Continued integration is no longer the natural way forward for semiconductors. What needs to happen to make it easier?


Just because something can be done does not always mean that it should be done. One segment of the semiconductor industry is learning the hard way that continued chip integration has a significant downside. At the same time, another group has just started to see the benefits of consolidating functionality onto a single substrate.

Companies that have been following Moore’s Law and have ridden the technology curve down to 7nm are having to rethink many of their options, especially if the content includes any high-speed analog. But problems exist even for chips that are completely digital.

Meanwhile, companies looking at cost-sensitive, battery-powered IoT edge devices are quickly migrating from designs made from standard parts integrated on a board to SoCs that combine MEMS, analog, RF and digital. They are following the technology curve at a very controlled pace. And while they are looking at chip integration, they are very concerned about additional, unwanted functionality in IP.

End of the line for Moore’s Law
Moore’s Law has powered the semiconductor industry for five decades, and while there is no end in sight technically, it most certainly is slowing down economically.

“While we still have the density benefits of Moore’s Law, we are now concerned about tradeoffs between performance, power and cost,” says Tom Wong, director of business development for the IP Group at Cadence Design Systems. “At sub-28nm, the cost of design skyrocketed due to process technology complexity. We now deal with lithography effects, multi-patterning and finFET design, amongst many technical challenges. Just look at the mask costs for 28nm versus 16nm versus 10nm. Dare we ask how much a 7nm set of masks costs?”

Costs are rising in all areas. “The advantage of moving to the next node is performance and lower power,” says Hemant Dhulla, VP of product marketing for the memory and interfaces division of Rambus. “The massive disadvantage is the cost of tapeout and masks. As you go from one generation to another, costs increase substantially. It is not a linear increase. Not too many companies can afford a 7nm tapeout.”


Fig. 1: The challenges of continued scaling. Source: Imec

And there is another component to cost. “More functionality increases value, but also leads to increased area, which in turn leads to decreased yield and increased cost,” adds Rob Aitken, Arm fellow and director of technology for R&D.
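Aitken's area/yield/cost relationship can be made concrete with a standard die-yield model. The sketch below uses a negative-binomial yield model; the defect density, clustering parameter, and wafer numbers are illustrative assumptions, not figures from the article.

```python
def die_yield(area_cm2, defect_density, alpha=3.0):
    """Negative-binomial die yield model.

    area_cm2       -- die area in cm^2
    defect_density -- D0, defects per cm^2 (assumed value)
    alpha          -- defect clustering parameter (assumed value)
    """
    return (1 + area_cm2 * defect_density / alpha) ** (-alpha)

def cost_per_good_die(wafer_cost, die_area_cm2, wafer_area_cm2, defect_density):
    """Cost of each functional die, ignoring edge loss and test cost."""
    gross_dies = wafer_area_cm2 / die_area_cm2
    good_dies = gross_dies * die_yield(die_area_cm2, defect_density)
    return wafer_cost / good_dies
```

With these assumptions, doubling die area more than doubles the cost per good die: you get half as many candidates per wafer, and each one is more likely to contain a killer defect. That superlinear penalty is the economic force behind splitting a large design into chiplets.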

While some markets are cost-insensitive and are willing to allow chip area to grow, they are reaching a limit. “There will always be some companies pushing the leading edge of new foundry technologies because they can take advantage of more transistors and the power savings they obtain from one generation to another,” says Dhulla. “They are really trying to push the highest possible system performance, and they are able to charge a premium price for their product. So to a large extent, cost is a secondary issue. Even then, they may not be able to fit the entire design within the chip. You can run into two kinds of limitations. One is the reticle size limit, and the other involves designs that are I/O-limited.”

The reticle size limits the amount of chip surface area that can be exposed using a single mask. This is set by the litho equipment, which defines the largest area that can be exposed without errors caused by distortion or imperfections in the mask. Making a chip any larger would require multiple adjacent exposures using different masks, all of which have to be precisely aligned.
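As a rough illustration of that constraint, the commonly cited full-field exposure limit for a 4x-reduction scanner is about 26mm x 33mm; actual limits vary by tool and illumination, so treat these numbers as an assumption.

```python
# Commonly cited maximum single-exposure field for a 4x-reduction
# scanner (illustrative; actual limits vary by tool).
reticle_w_mm, reticle_h_mm = 26, 33
reticle_area_mm2 = reticle_w_mm * reticle_h_mm  # 858 mm^2

def min_chiplets(design_area_mm2, field_area_mm2=reticle_area_mm2):
    """Minimum number of dies a design must be split into
    once its area exceeds a single exposure field."""
    return -(-design_area_mm2 // field_area_mm2)  # ceiling division

print(reticle_area_mm2)    # 858
print(min_chiplets(1200))  # 2
```

A hypothetical 1200mm^2 design therefore cannot be built as a single exposure; it must either be stitched across fields or partitioned into at least two dies, which is where the packaging options discussed below come in.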

“New packaging and assembly options expand the solution space, allowing complex designs that are too large for a reticle (or that would have unacceptably low single-chip yields) to be split across several chips,” points out Aitken.

Until recently, cost prevented this from being a viable solution. “When you get to 7nm and 5nm chips, it will just make sense to partition as much stuff onto older technologies as you can,” says Ty Garibay, CTO for ArterisIP. “7nm and 5nm are so expensive that there is plenty of room in the cost envelope to optimize. It allows you to optimize critical sections of the product into processes to which they are best suited.”

In addition, the new nodes are not favorable toward analog. “The industry has known that certain things do not scale well,” adds Stephen Fairbanks, president of SRF Technologies and Certus Semiconductor. “Digital scales, but analog does not. More than ever, analog content such as sensors, high-voltage circuits, pulse-width-modulated power supplies, and DC-to-DC converters cannot integrate well when you get into finFET technologies.”

But that does not mean that analog is impossible. “There is still a debate about the speed of finFET devices to meet the needs of very high-speed analog content,” explains Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “The RF guys see more capacitance with finFET structures, and that limits the transition frequency of the device. But people are still innovating with finFETs and figuring out how tall to make the fins, how to depopulate the number of fins on a transistor, and other things that can change the performance of the device. But the general school of thought is that if you want high-performance RF, you are better off taking that part of the radio off-chip.”

And as soon as that becomes a possibility, it opens up a lot more options. “How do I optimize for super high-performance analog or low-power analog in a process that is designed for digital logic,” questions Garibay. “Developers will become more amenable to asking how to solve the problem a different way, rather than beating on it harder and harder because time to market is a cost function itself.”

Those kinds of issues are popping up much more frequently in the chip planning process. “New features on SoCs are not conducive to integration on the same chip due to their specific requirements, such as RF, wireless or MRAM,” adds Cadence’s Wong. “Some functions may need GaAs, GaN or other esoteric processes, while mainstream features will continue to rely on bulk CMOS. We have seen the transition from PolySiON to HKMG to finFETs, and are now beginning to see the first implementation in EUV. We are not that far from 3nm, where there will be another major technology shift to carbon nanotubes or gate-all-around FET technology.”

Dhulla provides one example of dis-integration that has been used successfully. “When you require a lot of SerDes, you may choose to have the ASIC with the logic and put the SerDes on off-die chiplets. SerDes do consume a fair amount of power, so you can create a more power-manageable solution through dis-integration.”

This is why advanced packaging has taken off recently. “New packaging capabilities enable heterogeneous structures, allowing better isolation and targeted processes for radio frequency/analog, memory, and high-performance digital components, which can also introduce new approaches to power and energy management,” adds Aitken. “There is still a cost and complexity hurdle in adopting such approaches, but we expect that will become easier over time.”

Moore’s Law ramps up for IoT
While problems may be building for the most advanced nodes, other markets have just started down the path to SoCs. “At advanced nodes, there is dis-integration, but at the slightly larger nodes of 40nm and 65nm, there is more integration of features that had previously been integrated at 180nm,” says Certus’ Fairbanks. “Everyone is trying to find the balance between features, cost, power and performance.”

Foundries are responding. “The foundries are revamping the 55nm and 40nm process nodes and providing thick oxide devices for logic libraries to provide much lower leakage,” says Nandra. “They are adding embedded flash. A new 40nm process might have very low leakage libraries with integrated embedded flash, both of which are technologies needed for IoT devices. They are also looking to package in the MEMS devices as well. Many of these are low-speed applications that need extended battery life.”

“TSMC has just released a 65nm process with BCD technology,” adds Fairbanks. “GlobalFoundries is doing the same. They are integrating more of the high-voltage capabilities with the older digital. 180nm is a sweet spot today because you can integrate a lot of high-voltage and bipolar technologies with 180nm digital. I anticipate that companies will want to integrate with slightly better digital than offered by 180nm, so we are seeing a push into 65nm.”

And just as in other segments, content will grow. “We expect to see increasing functionality and complexity in edge and leaf devices,” says Aitken. “This will allow for more localized processing in order to reduce latency and demands on bandwidth versus fully cloud-resident approaches.”

But that does not mean they stop caring about area. “One factor that we see, particularly at the more mature nodes, is leaner chips by design for use in IoT components,” says John Ferguson, director of marketing for Calibre DRC applications at Mentor, A Siemens Business. “Ultimately, they do not require huge dies with a great deal of complexity, and instead can be focused on very small dies to meet the specific goal.”

Nandra provides an example of IoT looking for leaner IP. “We had to redesign our USB 2 IP to consume less area for a 40ULP IoT device. To get to smaller area and lower power, there is a tradeoff in some of the features. Some features were removed and others, such as battery charging, have been added. Not only have the foundries revamped their More-than-Moore technologies, but the IP vendors have to revisit some of the architectures to get the area and power numbers into the useful ranges for those markets. They still want USB 2, but they do not need 480Mbps. They care about optimum power and area for the data speeds that they need.”

They also are scrutinizing IP more closely. “There will always be a need for good, trusted IP,” says Ferguson. “The main difference is that where previously a piece of IP might be targeted for use in all sorts of SoCs, now it may be more targeted for functionality.”

Tools also can help to remove wasted logic. “Fewer transistors and switching nodes directly translate into lower average and dynamic power and a reduction in peak current,” asserts Andy Ladd, CEO for Baum. “When this approach is taken, a methodology to understand and analyze power is critically important. Otherwise, designers have no way to understand if their tradeoffs between functionality and power meet the goals of the project. The EDA community needs to provide techniques to accurately analyze power under realistic scenarios early in the design cycle. In addition, IP providers must provide power models of IP blocks used as the foundation of SoC-based designs so that designers can plug-and-play with different IP configurations to optimize power versus functionality.”

The creation of representative scenarios is one of the goals of the soon-to-be-ratified Portable Stimulus standard. “In the past, system-level tests had to be created by hand and involved writing code that would run on the processors within the design,” says Adnan Hamid, chief executive officer of Breker Verification Systems. “This was difficult, time-consuming, and provided very low coverage of the complex use cases supported by today’s devices. With Portable Stimulus, representative scenarios can be created quickly and easily, enabling IP selection and power optimization strategies to be assessed.”

Some are asking if dis-integration may be a valid option for IoT as well. “With next generation NVM technologies such as XPoint, Optane, MRAM or ReRAM, you cannot build logic in that technology,” says Garibay. “So I will do 2.5D or 3D stacking to get the logic out there quickly and efficiently and leverage these new technologies.”

Integration issues
With dis-integration, a new integration challenge is created. “In an environment where you cannot fit everything into one chip, you have to architect and segment the total functionality across multiple chips, and how these chips are interconnected becomes very important strategically,” points out Rambus’ Dhulla. “In concept, chiplets seem to be logical and appealing. The challenge is the interfaces between the chiplet and the ASIC. A big challenge to the broad adoption of chiplets is cost-competitive packaging. Multiple fabs need to solve this and provide better packaging solutions.”

This is more of a business model problem than a technical one, Garibay says. “Intel has an advantage because they produce all parts of the chip themselves. When you create a 2.5D or 3D system out of chips from multiple companies, the thing that has stopped innovation is figuring out liability for dead multi-chip systems. There has yet to be a product brought to market that combines two different companies’ products. That is the fundamental problem. Nobody can agree, when a combined chip is dead, who pays for it.”

This new level of integration creates opportunities, as well. “While there is some dis-integration, the I/O interfaces between the chips are becoming highly specialized,” says Fairbanks. “If you use standard I/O provided off-the-shelf, you will make sacrifices. It could be optimization for power, for area, or for support of multiple standards and feature capabilities. The more features you try to add into a chip, the more features you need in the I/O. The more dis-integration we see, the more we want to optimize the I/O for things such as footprint and power. It doesn’t matter if there is more integration or dis-integration, I/O specialization is becoming more important.”

And that creates its own set of problems and advantages. “The necessary space for the I/O pins can be reduced by newer package types,” says Andy Heinig, group manager for systems integration in Fraunhofer’s Engineering of Adaptive Systems Division. “Chips with 100µm copper pillars on laminates allow a huge amount of I/Os in a small area. Also, fan-out technologies increase the area for the I/Os with only small additional costs. But for sure, such integration approaches need early chip and package planning, and also design support from EDA tools. Our experience with customers shows the greatest possible optimization potential for the I/Os happens in the product definition phase, or shortly after. If it is done when the chip is already designed, nothing can be optimized.”

The packaging infrastructure is becoming more important. “Historically, there has been very little rigor around design kits and EDA validation,” says Ferguson. “We’re now starting to see significant changes in that area, with even the OSATs getting on board with the concept of ensuring design integrity across the entire eco-system.”

Another problem that needs to be solved is the lack of communications protocols suitable for inter-chip communications. “HBM2 is the default today,” says Garibay. “Intel/Altera Stratix 10 used HBM2 as a customer acceptable port, but also defined two proprietary protocols that were optimized for data movements. I do think there is an IP gap that would allow for interoperability of chips in a 2.5D and 3D space. Aligning companies on a protocol would be useful for high-pin-count 3D.”

Conclusion
We have a long way to go before chiplets can be purchased and integrated into a product, but the writing on the wall is becoming quite clear. Cadence’s Wong lays out a strategy for companies to think about.

“Don’t migrate the entire complex SoC from one node to the next,” Wong says. “Divide and conquer. Only migrate the portion of your design that needs the highest performance offered by the next process node. Keep the complex functionality IP that you have spent so much time verifying, and continue to use it in the form of chiplets. And utilize packaging such as 2.5D interposers. Maximize your investment before moving to the next node.”

The economics of chip design is becoming more important than the technical possibilities. As newer nodes become increasingly costly, packaging technologies start to look a lot more cost-effective, and their prices are likely to fall considerably. Any company not looking at this today is likely to fall behind tomorrow.

Related Stories
Challenges At The Edge
Real products are starting to hit the market, but this is just the beginning of whole new wave of technology issues.
Quantum Effects At 7/5nm And Beyond
At future nodes there are some unexpected behaviors. What to do about them isn’t always clear.
Processing Moves To The Edge
Definitions vary by market and by vendor, but an explosion of data requires more processing to be done locally.


