First of two parts: New materials, smaller geometries and novel semiconductor structures have managed to keep Moore’s Law powering forward for decades, but all of those solutions have a physical limitation that cannot be overcome. A fundamental change is required.
Electrons are slow, clumsy and quite easily distracted. They're slow because, at today's clock rates, a signal can take longer to cross a chip than a single clock period. They're clumsy because they scatter off atoms rather than traveling in straight lines. And they're easily distracted because electromagnetic interference between adjacent signals can corrupt the information they carry.
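A rough sanity check on that first claim can be sketched in a few lines. The clock rate and the effective wire velocity below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check of the claim that a signal can take longer to
# cross a chip than one clock period. All numbers are assumptions.
chip_edge_mm = 20.0          # die size of the same order mentioned later
clock_ghz = 4.0              # a plausible high-end clock rate (assumption)
wire_speed_mm_per_ns = 50.0  # RC-limited buffered wire, well below the
                             # ~300 mm/ns speed of light (rough assumption)

crossing_ns = chip_edge_mm / wire_speed_mm_per_ns   # 0.4 ns
period_ns = 1.0 / clock_ghz                         # 0.25 ns

print(f"crossing: {crossing_ns:.2f} ns, clock period: {period_ns:.2f} ns")
```

With these assumed numbers the edge-to-edge crossing takes more than one clock period, which is the heart of the claim.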
On the other hand, light has none of these problems. It travels, well, at the speed of light, can cross the path of another beam and is not easily affected by outside interference. The only problem is that, in contrast, electrons are cheap, easily directed to do what you want, and we have decades of experience with them.
Up until the 1980s the telephone system used copper wires for its long-distance circuits. While the principle of optical fibers has been known for more than 100 years, it was not until Corning managed to bring the attenuation levels down that fiber became usable as a replacement for metal wires. The advantages were so profound that it didn't take long before all copper for long-haul communications had been replaced. Ever since then, the semiconductor industry has wanted to use optical interconnect between chips, but there have been significant technical and economic barriers to its adoption. We may be nearing the time when this is no longer the case.
Increasing attention is being paid to the connections between racks of equipment in data centers. Today, CAT-5 or coaxial copper is used, but increases in bandwidth are both necessary and ever more difficult to attain. Oracle's chief technologist A.V. Krishnamoorthy recently said that within five years he expects all server connectivity to be based on photonics running at 25G or higher. Beyond that, Oracle expects wafer-scale servers that use photonics as the interconnect.
“Silicon photonics is going to be critical because right now there is only so much you can fit into a given area that’s 20mm by 20mm,” said Javier DeLaCruz, senior director of engineering at eSilicon. “But with silicon photonics you take out a lot of the embedded memory and crank up the processing power and you can access memory faster. The backplane becomes just a power delivery system. You don’t have to worry about hundreds of SerDes.”
Still, the path through the photonics maze is littered with difficult choices, many of which depend on one another.
The industry currently is divided about the best way to integrate silicon-based electronics and photonics. Until recently, complete integration on silicon was not considered possible, and even though it has been described as the Holy Grail, not everyone agrees it is the right approach. There are now two potentially competing methods for building everything on silicon, making the decision even more complicated. The second basic approach is to use heterogeneous dies and bond them together, either as flip chips or using 3D stacking. One thing is clear: many factors have to be traded off in this decision, and this article explores just a few of the larger concerns.
Difficulties with silicon
A laser is a device that generates or amplifies light to produce a coherent beam of photons. To do this, the lasing medium has to be stimulated so that electrons are excited into a higher energy state, achieving a population inversion. This operation is called pumping. As the excited electrons decay, they emit photons. To produce a strong enough beam of light, optical feedback is used to let the photons make multiple passes through the medium. Turning silicon into a lasing material has been a challenge.
“Silicon is not a direct bandgap material,” said Juan Rey, director of engineering at Mentor Graphics, “and because of that it is not possible to get it to lase with enough power.” In silicon, the conduction band minimum and the valence band maximum do not occur at the same point in k-space (a way of describing crystal states as wave-like states in momentum space). As a result, an electron near the conduction band edge and a hole near the valence band edge have different momenta and cannot recombine directly to emit a photon.
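The scale of that momentum mismatch explains why the photon cannot bridge it on its own. The following order-of-magnitude sketch uses standard textbook constants (Planck's constant, silicon's lattice constant, and a photon at roughly silicon's 1.1 eV gap):

```python
from math import pi

# Why an indirect gap blocks efficient light emission: a photon's momentum
# is far too small to make up the electron-hole momentum difference.
h = 6.626e-34            # Planck constant, J*s
hbar = h / (2 * pi)
a_si = 5.431e-10         # silicon lattice constant, m
wavelength = 1.1e-6      # photon near silicon's ~1.1 eV bandgap, m

p_photon = h / wavelength          # photon momentum
p_zone_edge = hbar * pi / a_si     # crystal-momentum scale at the zone edge

print(f"zone-edge to photon momentum ratio: {p_zone_edge / p_photon:.0f}")
```

The ratio comes out around a thousand, so a lattice vibration (a phonon) must supply the missing momentum, making radiative recombination in silicon slow and weak.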
Because of this problem, lasers, detectors and modulators have relied on group III-V semiconductors (compounds made from a group III element, such as indium or gallium, and a group V element, such as arsenic; examples are indium phosphide (InP) and indium gallium arsenide (InGaAs)), which have optical properties highly suited to these kinds of applications. However, these materials are challenging and expensive to integrate using CMOS processing.
Recently, Imec successfully demonstrated the first III-V compound semiconductor finFET devices integrated epitaxially on 300mm silicon wafers. Imec's process selectively replaces silicon fins with InGaAs or InP, accommodating close to 8% atomic lattice mismatch. The new technique is based on aspect-ratio trapping of crystal defects, along with trench structure and epitaxial process innovations. The hope is that this will lead not only to smaller transistors than are possible with silicon, but, because both of these materials are also important for optoelectronics, to optical devices integrated directly into electronic devices.
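The "close to 8%" figure can be checked from published lattice constants. The values below are standard room-temperature numbers, not taken from the Imec announcement:

```python
# Sanity check on the ~8% lattice mismatch cited above.
# Lattice constants in angstroms (standard reference values).
a_si = 5.431
a_inp = 5.869   # InP; InGaAs lattice-matched to InP has the same constant

mismatch = (a_inp - a_si) / a_si
print(f"InP/Si lattice mismatch: {mismatch:.1%}")
```

This yields roughly 8.1%, consistent with the figure Imec quotes, and far beyond the fraction of a percent that defect-free epitaxy normally tolerates, which is why the defect-trapping trick matters.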
“Now you have a common material in your manufacturing process you can marry the two together,” said Kevin Nesmith, chief architect at Si2. “Before this announcement you had to pull out the glue gun.”
However, optical amplification and lasing in silicon have recently been achieved with something called the Raman laser, a device that Intel has demonstrated. Intel states that because silicon Raman amplifiers are so compact, they could be integrated directly alongside other silicon photonic components, with a pump laser attached to the silicon through passive alignment. Because any optical device (such as a modulator) introduces losses, an integrated amplifier could be used to negate those losses. The result could be lossless silicon photonic devices.
The Raman effect can be used to generate lasers of different wavelengths from a single pump beam. As the pump beam enters the material, the light splits off into different laser cavities with mirrors made from integrated silicon filters. These can then be multiplexed together, sending multiple data streams out on a single glass fiber. Intel's recently announced 50Gbps solution uses four hybrid silicon lasers, created by fusing a layer of indium phosphide onto the silicon waveguides during manufacturing.
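The arithmetic of that wavelength-division scheme is straightforward. Assuming the 50Gbps total is split evenly across the four lasers (an assumption; the article does not state per-channel rates):

```python
# Aggregate bandwidth of a simple WDM link: several lasers at different
# wavelengths, each carrying its own stream, multiplexed onto one fiber.
channels = 4              # hybrid silicon lasers, per the article
per_channel_gbps = 12.5   # assumption: even split of the 50 Gbps total

aggregate_gbps = channels * per_channel_gbps
print(f"aggregate link rate: {aggregate_gbps:.0f} Gbps")
```

Scaling is then a matter of adding wavelengths or raising the per-channel rate, without laying more fiber.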
Source: Continuous Silicon Laser. Intel Corporation
John Ferguson, product marketing manager for Calibre DRC applications at Mentor Graphics, is not so sure. “Lasers require a lot of power and are hot, so putting that on a stack has risk associated with it. However, a laser can act like a power supply, where a single laser supplies the light beam for many connections. It can be split and directed to where it needs to be modulated, transmitted and detected.”
This leads us to consider the other possibility – heterogeneous integration.
One of the key drivers behind 3D integration is the reduced design and manufacturing complexity compared to a system-on-chip (SoC) approach. Individual components of a device can be manufactured on different wafers, in different fabs or by different companies. This enables a modular approach, where heterogeneous components can be integrated together, and allows for physical IP blocks to be utilized.
Consider gallium nitride (GaN), for example, a material often used for making LEDs, lasers and other photonic elements. It is usually grown on a sapphire or silicon carbide substrate, primarily because their crystalline structures are similar, and this has a number of advantages. However, such a substrate is not only a poor fit for logic, but the wafers are typically 2-inch or 4-inch rather than the 8-inch or 12-inch wafers common for silicon. So no single substrate is ideal for both. The answer is to make each component out of the ideal material and then combine them.
“For low cost, high volume manufacturing, heterogeneous integration is needed and in particular the integration of Indium Phosphide onto SOI wafers using a flip-chip approach,” said Thorsten Matthias, director of business development at EV Group.
Process flow for heterogeneous integration on preprocessed SOI wafers. Courtesy EV Group.
For heterogeneous integration, a SiO2 layer is required on the compound semiconductor wafer as well as on the target wafer to create a proper bond interface. This technique, which is already well established for the production of SOI wafers, is called oxide-oxide bonding.
“Bonding will be the superior process compared to epitaxy, as it allows high quality integration at very low costs,” Matthias said. “Furthermore, plasma activation before bonding allows the heterogeneous integration on a preprocessed SOI wafer at low temperatures.”
Ferguson has a bird's-eye view as he responds to customer needs. “Interest in putting all of the photonics on a single die appears to be waning. We are beginning to see companies use 2.5D and 3D integration, where the photonics components are placed on a silicon interposer.”
Coming in part two: A look at additional photonic components and design factors related to building combined electronic/photonic components that may tip the scales toward one of these solutions.