TSMC Uncorks A16 With Super Power Rail

Reporter’s Notebook: Aggressive roadmap ramps up competition at the leading edge.


TSMC showed off its forthcoming A16 process technology node, targeted for the second half of 2026, at its 30th North American Technology Symposium this week. As the foundry moves from nanometer to angstrom process numbering, new nodes will carry an “A” prefix (instead of “N”), and A16 is TSMC’s first.

TSMC said that N2 is still tracking to its 2025 production schedule. N3 launched in 2023, followed by N3E, which entered volume production in late 2023, and N3P/N3X, which will roll out in 2025. While most new nodes have been led by premium mobile applications, A16, especially with its backside Super Power Rail, is expected to target HPC applications first.

Figure 1, below, shows the latest TSMC Advanced Technology Roadmap presented at the Tech Symposium.

Fig. 1: TSMC’s process technology roadmap. Source: TSMC

It was reported last year that N2 would include a novel backside power delivery architecture to help with power delivery and routing for HPC applications, which typically have dense power delivery networks. That was scheduled for 2H 2025. The decision was made, though, to incorporate the new backside power architecture into A16, as it lined up better with the development of that process, and to make it available in 2026. Kevin Zhang, senior vice president and deputy co-COO, confirmed that A16 is not simply a renamed N2 with backside power delivery; it will include transistor improvements over N2, in addition to the Super Power Rail advantages over the N2 family.

Fig. 2: TSMC A16 Super Power Rail. Source: TSMC

Figure 2 shows TSMC’s backside power delivery architecture, which uses a backside contact to the source. A comparison of Buried Power Rail, Power Via, and Backside Contact to S/D shows that Backside Contact to S/D has the best area scaling of the three methods, giving TSMC an advantage. It has also been previously reported that TSMC considers its N3P process most comparable to Intel’s 18A. It will be interesting to see direct comparisons between the two technologies once chips start rolling out of the foundries. TSMC’s continued march forward with new technology innovations is aggressive. Figure 3, below, shows the expected gains of A16 over N2P.

Fig. 3: TSMC A16 Comparison to N2P. Source: TSMC

The new N2 process, still scheduled for 2025, will have a capability similar to N3’s FinFlex technology, called NanoFlex. NanoFlex allows mixing short cells and tall cells in the same block, achieving better PPA by using the best ratio of cells for any given design. Figure 4 shows the advantages of a mixed-height design over a short-cell-only implementation.

Fig. 4: TSMC NanoFlex. Source: TSMC

TSMC also held its first quarterly conference call of 2024 this month. Net revenue in U.S. dollars was down 3.8% quarter over quarter, but up 12.9% versus the first quarter of 2023. Figure 5, below, shows the revenue breakout and trends by technology node over the past 10 years. Interestingly, N3 shows a dip in its third quarter of revenue contribution similar to the one N5 had in its third quarter (the first quarter of 2021). If N3 follows a trajectory like N5’s, that would put it on target for the mid-teens percentage of TSMC’s total wafer revenue that TSMC previously forecast.

Fig. 5: Percentage of total wafer revenue by technology. Source: TSMC

A few other items from the Q1 conference call are worth noting. On April 3, Taiwan experienced a magnitude 7.2 earthquake (for comparison, California’s 1989 Loma Prieta earthquake was magnitude 6.9 and struck ~56 miles south of San Francisco). Wendell Huang, CFO and senior vice president of finance, had the following to say about the impact of the quake: “Based on TSMC’s deep experience and capabilities in earthquake response and damage prevention as well as regular disasters drills, the overall tool recovery in our fabs reached more than 70% within the first 10 hours, and were fully recovered by the end of the third day. There were no power outages, no structural damage to our fabs, and there’s no damage to our critical tools, including all of our EUV lithography tools.” This is an amazing testament to the planning and engineering undertaken by TSMC to handle such an event.
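To put those magnitudes in context, the magnitude scale is logarithmic, so a 0.3-point difference is larger than it looks. A quick back-of-envelope calculation using the standard Gutenberg-Richter energy relation (radiated seismic energy scales as 10^(1.5·M)) suggests the Taiwan quake released roughly 2.8 times the energy of Loma Prieta:

```python
# Relative radiated energy of two earthquakes using the
# Gutenberg-Richter relation: log10(E) = 1.5*M + const.
# The constant cancels when taking a ratio.
def energy_ratio(m1: float, m2: float) -> float:
    """Radiated-energy ratio of a magnitude-m1 quake vs. a magnitude-m2 quake."""
    return 10 ** (1.5 * (m1 - m2))

# Taiwan (April 3, 2024) vs. Loma Prieta (1989)
print(f"{energy_ratio(7.2, 6.9):.2f}x")  # prints "2.82x"
```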

TSMC’s CEO, C. C. Wei, also had the following to say about status for N2: “Our N2 technology leads our industry in addressing the insatiable need for energy-efficient computing, and almost all AI innovators are working with TSMC. We are observing a high level of customer interest and engagement at N2 and expect the number of the new tape-outs from 2-nanometer technology in its first 2 years to be higher than both 3-nanometer and 5-nanometer in their first 2 years.”

He continued by saying, “Our 2-nanometer technology will adopt the nanosheet transistors structure and be the most advanced semiconductor industry technology in both density and energy efficiency. N2 technology development is progressing well with device performance and yield on track or ahead of plan. N2 is on track for volume production in 2025, with a ramp profile similar to N3. With our strategy of continuous enhancement, N2 and its derivative will further extend our technology leadership position and enable TSMC to capture the AI-related growth opportunities well into future.”

The Tech Symposium featured two guest speakers who attested to the insatiable demand for faster and more energy-efficient process technology: Andrew Ng, managing general partner at AI Fund, and James Hamilton, senior vice president and distinguished engineer at Amazon. Ng’s talk focused on where AI is heading; he highlighted agentic AI workflows, edge AI (on-device, rather than LLMs in the cloud), and large vision models. He also said that a visual processing revolution is coming, and that it will include analysis, not just generative AI.

Hamilton’s talk focused on the growth and scaling happening in the industry, and how AI is driving it to even higher levels. He discussed the advantages of AWS building its own hardware, which has advanced from the 5B-transistor Graviton1 in 2018 to the 73B-transistor Graviton4 in 2023, along with special-purpose chips like Nitro for security, to drive efficiency in its data centers. Hamilton also gave a recent example, from the last 12 to 18 months, of a customer training a 200B-parameter dense model on 4 trillion tokens. Training took 48 days running at 10.3MW, for a total of 11.9GWh of energy and a total cost of ~$65M. He said the largest next-generation training runs are expected to exceed $1B, and that there are at least two more generations of growth ahead.
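The figures Hamilton quoted are internally consistent, and they imply a striking growth rate for the Graviton line. A quick sanity check of the arithmetic:

```python
# Sanity-check the training-run energy figure quoted above:
# 48 days of continuous training at 10.3 MW.
days = 48
power_mw = 10.3
energy_gwh = days * 24 * power_mw / 1000  # MW x hours -> MWh, then /1000 -> GWh
print(f"{energy_gwh:.1f} GWh")  # prints "11.9 GWh", matching the quoted figure

# Implied annual growth in Graviton transistor count, 2018 -> 2023
g1, g4, years = 5e9, 73e9, 5
cagr = (g4 / g1) ** (1 / years) - 1
print(f"{cagr:.0%} per year")  # roughly 70% per year, compounded
```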
