As demand for higher bandwidth within a similar power envelope grows, so does interest in SoC/ASIC disaggregation.
The demand for ever faster high-speed interfaces has never been so pronounced. In our increasingly connected world, petabytes of data are continuously generated by a wide range of devices, systems and IoT endpoints such as vehicles, wearables, smartphones and even appliances. The resulting digital tsunami has prompted industry heavyweights like Google, Microsoft, Facebook and Amazon to implement new data-center architectures that remove bottlenecks.
Indeed, after years of steady SoC/ASIC aggregation, a disaggregated approach is now seriously being considered in the form of SerDes chiplets and specialized low-power, application-specific die-to-die interfaces. The concept of SoC/ASIC disaggregation is certainly timely, as demands for higher bandwidth within a similar power envelope will only grow louder.
Designing silicon on more advanced process nodes is clearly one option, although the costs at 7nm and beyond are quite high. Moreover, developing mixed-signal silicon on successive nodes is both challenging and expensive. To further complicate matters, SoC designs are fast approaching the outer limits of both manufacturable yield and reticle size.
Viable silicon disaggregation can be achieved by moving high-speed interfaces like SerDes to separate die in the form of SerDes chiplets, shifting analog sensor IP to separate analog chips and implementing very low-power, low-latency die-to-die interfaces through an MCM or through an interposer using 2.5D technology. In addition to leveraging known good die for SerDes in more mature nodes (N-1) or vice versa, disaggregation will facilitate the creation of multiple SKUs, while optimizing cost and reducing risk. More precisely, disaggregation will see SoCs broken out into higher-yielding, smaller dies and allow companies to create specific designs with multiple variants.
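The yield argument behind "smaller dies yield better" can be sketched with the classic Poisson die-yield model, Y = exp(-D0 · A). The defect density below is an assumed, illustrative figure, not a number from any foundry; the takeaway is that known-good-die testing lets the bad chiplets be discarded before assembly, rather than scrapping an entire large SoC:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.2  # assumed defects/cm^2 on an advanced node (illustrative only)

# One large 8 cm^2 monolithic SoC vs. the same logic split into four 2 cm^2 chiplets
monolithic = die_yield(D0, 8.0)
chiplet = die_yield(D0, 2.0)

print(f"monolithic SoC yield:   {monolithic:.1%}")  # → 20.2%
print(f"single chiplet yield:   {chiplet:.1%}")     # → 67.0%

# Note: the probability that four untested chiplets are ALL good equals the
# monolithic yield -- the economic win comes from testing each small die and
# packaging only known good die.
print(f"four untested chiplets: {chiplet ** 4:.1%}")  # → 20.2%
```

Real defect densities, die areas and test costs vary widely, so this is a sketch of the shape of the argument, not a business case.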
Meanwhile, die-to-die interfaces can more easily accommodate multiple applications across memory, logic and analog technology. In addition, die-to-die interfaces do not require a matching line/baud rate or number of lanes. Moreover, a forwarded-clock architecture provides a low-power solution, while FEC may or may not be required, depending on latency requirements.
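The lane/baud-rate independence can be made concrete with a toy bandwidth calculation. The numbers below are illustrative assumptions, not figures from any standard: the off-package SerDes side and the die-to-die side of a disaggregated design only need to match in aggregate throughput, not in lane count or per-lane rate:

```python
def link_bandwidth_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Raw aggregate bandwidth of a parallel link (encoding overhead ignored)."""
    return lanes * gbps_per_lane

# Hypothetical 400 Gb/s flow carried two different ways:
serdes_side = link_bandwidth_gbps(lanes=8, gbps_per_lane=50.0)   # few fast lanes
d2d_side = link_bandwidth_gbps(lanes=25, gbps_per_lane=16.0)     # many slower, lower-power lanes

print(serdes_side, d2d_side)  # → 400.0 400.0
```

Running many slower lanes in parallel is what lets the die-to-die link stay low power and low latency while still keeping pace with the external SerDes.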
It should be noted that a number of companies are actively pursuing SoC/ASIC disaggregation for switches and other systems. Similarly, the industry is developing ASICs with die-to-die interfaces on leading FinFET nodes, while at least one next-generation server chip is being designed with disaggregated IOs on a separate die.
In conclusion, understanding the advantages of SoC/ASIC disaggregation can help the semiconductor industry evolve on both a micro (silicon) and macro (data center) level. With SoC designs fast approaching the outer limits of both manufacturable yield and reticle size, the concept of disaggregation has never been timelier.