Inside Intel’s Ambitious Roadmap

Five process nodes in four years, high-NA EUV, 3D-ICs, chiplets, hybrid bonding, and more.

Ann Kelleher, senior vice president and general manager of Technology Development at Intel, sat down with Semiconductor Engineering to talk about the company’s new logic roadmap, as well as lithography, packaging, and process technology. What follows are excerpts of that discussion.

SE: Intel recently disclosed its new logic roadmap. Beyond Intel 3, the company is working on Intel 20A. With Intel 20A, you plan to introduce a RibbonFET in 2024. What is a RibbonFET and how does that propel Intel forward?

Kelleher: RibbonFET is our name for what other people in the industry call gate-all-around. Some people also call it a nanosheet or nanoribbon. It’s the next transistor architecture that takes us beyond the finFET. We’re utilizing the finFET through Intel 3 and will continue to improve it for that process. When we go to Intel 20A, we will be utilizing RibbonFET at approximately the same equivalent node as the rest of the industry.

SE: Intel was way ahead when it came to the finFET. The RibbonFET moves the industry forward again at the most advanced nodes. Can you pattern this technology using the current version of extreme ultraviolet (EUV) lithography?

Kelleher: We are using existing EUV tools at the 0.33 numerical aperture in our development of Intel 20A, which is planned for 2024. For our processes in 2025 and beyond, we are already partnering with ASML on high-NA EUV, which is the next numerical aperture for EUV. This next version of EUV tools allows us to get down to much smaller geometries. Beyond that point, we will use a mix of EUV, high-NA EUV, and other immersion and dry lithography layers as well.
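
To give a rough sense of why a higher numerical aperture matters, the sketch below applies the standard Rayleigh resolution criterion, CD = k1 × λ / NA, to the current 0.33 NA tools and to high-NA (roughly 0.55 NA) tools. The k1 value is an illustrative assumption, not an Intel figure.

```python
# Illustrative only: Rayleigh criterion for minimum printable half-pitch,
# CD = k1 * wavelength / NA. The k1 value is an assumed process factor,
# not an Intel figure; high-NA EUV tools are specified at roughly 0.55 NA.

EUV_WAVELENGTH_NM = 13.5
K1 = 0.30  # assumed single-exposure process factor

def min_half_pitch_nm(na: float, k1: float = K1) -> float:
    """Approximate smallest half-pitch printable in a single exposure."""
    return k1 * EUV_WAVELENGTH_NM / na

for na in (0.33, 0.55):
    print(f"NA {na:.2f}: ~{min_half_pitch_nm(na):.1f} nm half-pitch per exposure")
```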

SE: Why this sudden interest in high-NA EUV, and where do you plan to use it?

Kelleher: It allows us to move to much smaller geometries and much smaller pitches, and it also enables us to prolong double-patterning EUV. Our interest in high-NA began a couple of years ago. There are really three companies that work with ASML, and all of us have worked over the years on EUV. Three years ago, we had a conversation with ASML about what’s next. There’s a recognition that the industry as a whole will need to go there. So we decided to put a stake in the ground, saying we will drive it for 2025. That’s going to be challenging. We’ve signed up to take the first equipment, which means we will be the first ones on the learning curve. We didn’t have EUV on 10nm, which is now Intel 7, and we’re getting it on what we’re now calling Intel 4. We want to make sure that, as we go forward, we can maintain the leading edge of EUV’s capabilities. It will bring a significant amount of learning, but it also will enable us to continue the progression down to the smallest geometries.

SE: These are going to be pricey chips to develop. On a die, do you foresee that everything is going to be on a RibbonFET, or do you foresee this as a mix-and-match strategy with lots of different things? Intel seems to be going in two directions here. One is pushing down the lithography curve. The other side is that you have a number of different technologies, which are going into faster interconnects and advanced packaging.

Kelleher: From a product perspective, we have heterogeneous packaging. Basically, it’s a mix-and-match strategy using tiles, a bit like LEGO blocks. Product designers can pick and combine the various technologies with which they want to build our products. Not everything needs to be on the latest nodes. Instead, you can pick the technology that is best suited for the aspect of the product that you want to deliver. Once we get to Intel 20A, our transistors will be built with RibbonFET from there on. But equally, we will continue to use and drive forward our advanced packaging technology. Then we can deliver those different sets of building blocks and enable them in our products. Designers can mix and match to deliver leadership products to customers.
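
As a loose illustration of that “LEGO block” idea, here is a hypothetical sketch that models a package as a set of tiles, each chosen from whatever node or source suits its function. The tile names and attributes are invented for illustration and are not Intel product definitions.

```python
# Hypothetical sketch of the mix-and-match tile idea; the tiles,
# nodes, and sources below are invented examples, not Intel parts.
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    function: str
    node: str
    source: str

package = [
    Tile("compute",  "CPU cores",          "Intel 20A",             "internal"),
    Tile("graphics", "GPU",                "Intel 3",               "internal"),
    Tile("io",       "I/O and memory PHY", "external foundry node", "external"),
]

for tile in package:
    print(f"{tile.name:>8}: {tile.function} on {tile.node} ({tile.source})")
```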

SE: Intel has expanded its advanced packaging portfolio, right?

Kelleher: Our advanced packaging technology starts with our 2.5D packaging, which is EMIB (Embedded Multi-die Interconnect Bridge). Then we have 3D packaging, which is Foveros. This involves a base die, on which you can stack chiplets. We also have Foveros Omni, which brings more benefits like cost savings, since the base die doesn’t have to be the same size as the top die. It also gives you power benefits. With Foveros Omni, we’re going to a smaller bump pitch, as well. Additionally, we introduced Foveros Direct, which is copper-to-copper bonding. This basically takes us almost to the monolithic level. When you do face-to-face bonding, you eliminate the solder, and you can get a significantly larger number of interconnects per square millimeter.

SE: Intel will push the finFET as far as possible to Intel 3, and then it will introduce its gate-all-around technology. In contrast, Samsung will introduce gate-all-around at 3nm. Why didn’t Intel do the same thing and bring up the RibbonFET at Intel 3?

Kelleher: We knew we had additional improvements we could make on our finFET roadmap, based on what we can still get from intrinsic optimization. So why not take those gains before making the transition to what is a very different architecture? The bottom line is, when is the right time to do it? Our transition to gate-all-around, or RibbonFET, is basically driven by our belief that we can deliver more from our existing finFET. Then we make our transition. Time will tell how the rest of the industry lands in terms of introducing gate-all-around.

SE: Several companies have been working on gate-all-around transistors for a long time. What are the challenges with the technology? Do the challenges involve EUV or other process steps?

Kelleher: Over recent years, EUV has matured significantly. It has moved toward full-scale adoption within the process flow, which makes it much easier to print the geometries you need with it. In the earlier days of EUV, the question was whether it was going to be capable of handling all the layers we ultimately needed it for. EUV capability, I will say, has truly progressed. It is a key enabler for doing gate-all-around. Beyond those issues, you must also think about the stack height in building the ribbons themselves and how high you want to go. You must also think about how you deal with the substrate and the isolation from the substrate. These are all challenges to be addressed, and we have a pathway to resolve all of them while getting the defects down and delivering in the time frame.

SE: One of the problems with increasing density is getting power to the various components on a chip. What’s the solution?

Kelleher: If we’re talking about power, I would like to talk about our PowerVia. Our PowerVia is a key innovation. When you look at the process flow today, the metallization is at the front of the wafer. Basically, power is delivered to the front of the wafer, to the transistors and the interconnect metallization. Our PowerVia innovation changes that. With PowerVia, we’re able to deliver the power from the backside of the wafer. It allows for more room on the frontside of the wafer and gives us the ability to relax our dimensions a little bit as we scale down. At the same time, we’re able to get the power directly to the transistors without the voltage drop along the way. It takes us to the next place in terms of dealing with the overall power delivery challenges.
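
For a rough sense of why that matters, the toy calculation below applies Ohm’s law (V = I × R) to a power-delivery path. The current and resistance values are made-up placeholders meant only to show how a lower-resistance backside path shrinks the voltage drop; they are not Intel measurements.

```python
# Toy Ohm's-law illustration of power-delivery (IR) drop.
# All numbers are made-up placeholders, not Intel data.

def ir_drop_mv(current_a: float, resistance_mohm: float) -> float:
    """Voltage lost along the delivery path, in millivolts (V = I * R)."""
    return current_a * resistance_mohm

CURRENT_A = 10.0             # assumed current drawn by a block of logic
FRONTSIDE_PATH_MOHM = 10.0   # assumed: power threaded down through thin frontside metals
BACKSIDE_PATH_MOHM = 2.0     # assumed: shorter, thicker backside delivery path

print(f"frontside drop: {ir_drop_mv(CURRENT_A, FRONTSIDE_PATH_MOHM):.0f} mV")
print(f"backside drop:  {ir_drop_mv(CURRENT_A, BACKSIDE_PATH_MOHM):.0f} mV")
```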

SE: So as a result, can you actually lower the voltage? You have the conduit all set up in terms of driving the power through the chip, right?

Kelleher: The bottom line is, you have the power connected where you need it at the back of the chip. In terms of power, the voltage optimization really comes down to what the designers want from the final product. On some processes, we want to run at a lower voltage. If you’re pushing the performance, you want to run at a higher voltage. We tend to do both within our products. Overall, we will be able to provide and support what’s needed from the designers.

SE: Intel’s PowerVia looks similar to Imec’s Buried Power Rail (BPR). Is the PowerVia the same or different than BPR? And even with PowerVia, you still need the copper interconnects for the chips, right?

Kelleher: Buried Power Rail, at the highest level, is the same general theme. However, it differs in how it’s achieved. We’re delivering the power from the back of the wafer to the transistor. Buried Power Rail basically gets it from the front side, so you have a different architecture for achieving that. That is the key difference. We believe our way is actually the better way. You still need to have contacts to the transistors, which means dealing with contact resistance, and the transistor signals still need to get through. We need to continue working on lowering the contact resistance of all of the various metals. The metallization schemes need to continue to reduce the overall resistance.

SE: Why did Intel change its node naming strategy?

Kelleher: The industry as a whole had become misaligned in node naming. If you do a search on Google, you will find explanations of why Intel’s 10nm is the equivalent of 7nm at the foundries. We had to think about making it easier to understand for our customers. Now when they look at our process nodes and the names, they are able to make better decisions. Why now? We introduced our IDM 2.0 vision in March and spent a lot of time over the last six months working on a very detailed roadmap. The roadmap lays out how we will get back to performance-per-watt parity and then performance-per-watt leadership. Given that we were moving, we decided now was as good a time as any to rename them. We are now spending our time focusing on what we’re doing rather than explaining a node name.

SE: Today, Intel is shipping 10nm products based on its SuperFin technology. (SuperFin is a finFET technology.) Then, Intel’s next-generation 10nm products are based on an Enhanced SuperFin technology. Now, Intel has renamed this as Intel 7. What is Enhanced SuperFin?

Kelleher: We have 10nm SuperFin running in the factory today, and that is delivering our products like Tiger Lake. The Enhanced SuperFin, which is now Intel 7, is the next generation of SuperFin performance optimization.

SE: Recently, Intel experienced delays with its 7nm technology. (Intel’s original 7nm technology is now called Intel 4.) What’s the status of this technology?

Kelleher: We made a very public announcement about what was then called 7nm and is now Intel 4. At that point in time, we reset our milestones in terms of overall process development and defect density. Since then, we also began working on what was basically a parallel process to streamline the process flow and significantly increase our use of EUV within that process. With that, we were able to switch over from the original version of the process flow to the new version going into this year. It’s going very well. We’ve hit our milestones over the last nine months, which gives me confidence that the work we’re doing is going to deliver. There are other changes we’ve made, too. I’ve spoken about how we put together our roadmap to get to leadership in performance per watt. First of all, we’ve identified a significant number of projects, and we are spending the R&D and capital to enable that. Second, we have world-class engineers within Intel’s Technology Development group. That was true before, and it is still true now. But how we’re working is changing. Where possible, and where it makes sense, we’re adopting industry standards. Design enablement is a key area for that. With the progression in EDA, we had to catch up so we could set our designers up for success.

SE: Intel is planning five nodes in four years to move to parity with your competitors and then a leadership position. This breaks all the rules from your past about a node every 18 to 24 months, right?

Kelleher: We will be releasing an Intel 7 product later this year. After that, we’re going to Intel 4. Intel 4 will be in production in the second half of 2022, with product releases in 2023. Intel 3 comes in during the second half of 2023. Intel 20A will follow in 2024, and then Intel 18A will come after that. Taken together, the performance-per-watt gains from one node to the next add up to more than any single node delivers on its own. That allows us to make up time against our external competitive benchmarks. But if you want to catch up and move ahead, you need to move faster. The methodologies we talked about will enable us to do that. I believe we have a very solid roadmap to deliver on this.
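
As a simple illustration of how several node transitions in quick succession can stack up, the sketch below compounds hypothetical per-node performance-per-watt gains. The percentages are placeholders chosen for illustration, not Intel projections.

```python
# Hypothetical compounding of per-node performance-per-watt gains.
# The per-node percentages are placeholders, not Intel figures.

node_gains = [
    ("Intel 7",   0.10),   # assumed gain over the prior node
    ("Intel 4",   0.20),
    ("Intel 3",   0.18),
    ("Intel 20A", 0.15),
    ("Intel 18A", 0.10),
]

cumulative = 1.0
for node, gain in node_gains:
    cumulative *= 1.0 + gain
    print(f"{node:>9}: +{gain:.0%} this node, {cumulative:.2f}x cumulative")
```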

SE: How about Intel’s interactions with the rest of the industry?

Kelleher: We’ve also changed the way we’re working with our equipment vendors, our materials vendors, and our EDA suppliers. We don’t need to invent everything. There’s a lot of learning in the industry that has already been proven by the equipment suppliers. Where possible, we’re pulling from the best in the ecosystem. This allows us to focus our resources on the innovations that will get us ahead. Also, we’ve done quite a lot in terms of risk assessment, identifying areas in the process where there could be higher risk. Out of that risk assessment, we can decide what types of contingency plans we need to build and determine how long we should develop those plans, especially for the areas that are higher risk. Obviously, you can’t create a contingency plan for everything, or else you’d be double-developing everything. Across Intel 4 and the nodes beyond it, we’ve been working on streamlining the process so we can have less complexity in hardware manufacturing.

SE: Intel has done a lot of work on chiplets and interconnects in advanced packages. As you move into more standardization and heterogeneous integration, do all of these components have to be characterized to Intel standards? Or do all of the components have to be Intel tiles?

Kelleher: If we go back and look at this over time, we’ve had tiles from within Intel and tiles from outside of Intel. It was relatively simple when you had two tiles. Today we’re up to 47 tiles in a package that brings silicon from different foundries and manufacturers together. At the product and design level, one of the things we have demonstrated is heterogeneous hardware from different hardware providers, as well as our FPGAs. This is a bit like in the past where we had many chips on a board. Now these chips are moving into the package, and we’re able to package them together. We provide the framework for the building blocks to come together so that the product designer can say, ‘For this product I need this unique set of attributes and here are our specs.’ This could involve many different factories, and the design teams collaborate very closely with the process teams and the packaging teams to integrate it all into one package. For all the products coming out in 2023, our packaging team has been working with the various places where all the silicon is coming from – internal and some external – and working on how everything will be compatible. Ultimately, the product is tested internally to ensure that all those standards work together. As an industry, standardization is an area where we can do more work together in the future.

SE: Where does hybrid bonding fit into Intel’s roadmap? Is it going to be bump pitch scaling for the foreseeable future?

Kelleher: There will be packages with hybrid bonding and there will be various techniques in the same package. We have 2.5D and 3D together in a package today because that enables flexibility for the given products. We will have hybrid bonding, too. It will be a mix-and-match. As for overall scaling of the bumps, we expect our first generation of HBI (hybrid-bonding interconnect) to be direct copper-to-copper, which will be a significant increase in terms of the density of bumps per mm². We believe we can get more than 10,000 per mm² with what we’re doing in our first generation of HBI.
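
The 10,000-per-mm² figure follows from simple geometry: on a square grid, interconnect density is roughly (1,000 µm / pitch)², so a pitch of 10 µm or below gives at least 10,000 connections per mm². The sketch below runs that calculation for a few illustrative pitches; the specific pitch values are examples, not published Intel specs.

```python
# Interconnect density from bump/bond pitch on a square grid:
# density per mm^2 ~= (1000 / pitch_um) ** 2.
# The pitch values below are illustrative, not published Intel specs.

def bumps_per_mm2(pitch_um: float) -> float:
    return (1000.0 / pitch_um) ** 2

for pitch_um in (36.0, 25.0, 10.0, 9.0):
    print(f"{pitch_um:>5.1f} um pitch -> ~{bumps_per_mm2(pitch_um):,.0f} per mm^2")
```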

SE: A lot of the guideposts like the ITRS roadmap have fallen by the wayside, while others like Moore’s Law seem to be less relevant. At the same time, the number of choices in a design are increasing. How does this impact what you build, particularly for foundry customers?

Kelleher: You’re trying to get to the best possible product for the customer at a given time. That’s the highest order part. But you have many more options on the menu, and it’s more of an à la carte menu than a fixed menu. In the past, everything was based on the node that you were working with. I go back to the design enablement team, as well as the design efforts between the process and packaging. These teams have a lot of active discussion and debate in terms of how we achieve the best possible answer for given products going forward. There are certain technical reasons for why one version of a tile will or won’t be used. There are many ways to get there, and the supply chain itself has become much more complicated. Depending on the particular product and its particular features, it becomes a discussion of how we get there with the most manufacturable version of tiles as well as the supply chain.

SE: Are any new materials being used here? We’ve seen adoption of cobalt and interest in ruthenium. How about others?

Kelleher: We have a very active set of ongoing programs between our components research and the materials suppliers, as well as our technology development with the suppliers. At this point, I’m not going to give you more new names and materials, but we’re not going to be fully done with Moore’s Law until every element on the periodic table is exhausted.

2 comments

wondering says:

EUV double patterning prolonged by high NA?

Answering says:

@Wondering
It would be nice if High NA EUV was available by the time EUV single patterning stopped being viable, so they could then move straight to High NA EUV and stick with single patterning.

However, the High NA machines will not be available by the time EUV single patterning runs out of steam, so at some point they will have to start double patterning with EUV.

But High NA machines are expected to be available by the time EUV double patterning runs out of viability. So the switch to High NA EUV will delay the need to start triple patterning.
