Is There Light At The End Of Moore’s Tunnel?

Second of two parts: The individual pieces of silicon photonics are all there, but getting them to work together isn’t something most engineering teams want to try yet.


Last month’s article, “Is There Light At The End Of Moore’s Tunnel?,” examined the state of the industry in terms of integrating photonics components onto silicon. It concentrated on the piece that has been the hardest to achieve – the laser. However, as that integration goal has come closer to reality, fewer people believe it is the ultimate solution. Instead, attention has turned toward 2.5D or 3D integration for this optical component. The primary reasons for this are power/heat and size issues.

The laser has been compared to the power supply of a photonics system, and a single laser can feed many optical paths within a chip and possibly across multiple chips. This article examines the other components of an optical system, and further concerns arise about integrating photonics into the latest generations of silicon fabrication.

In addition to a laser, a photonic circuit requires a number of other component types, namely modulators and demodulators, multiplexers and demultiplexers, and waveguides.

Modulators / Receivers
Electronics has long had to convert between the analog and digital domains, using A-to-D and D-to-A converters. The conversion between electronics and photonics happens using a modulator, with a demodulator for the reverse direction. The most common approach is similar to the amplitude modulation used in radio communications, where the signal modifies the amplitude of a carrier; in this case, the carrier is the light beam from the laser. In some of the earliest modulators the digital signal would simply turn the light beam on or off, but this has significant bandwidth limitations. Today, more sophisticated modulators are deployed, such as the Mach-Zehnder Interferometer (MZI).


Source: Dama et al., Lightwire presentation: High Speed NRZ and PAM optical modulation using CMOS Photonics.

The MZI works by splitting the laser beam into two paths. A phase difference is introduced between the two arms, and when they are recombined the interference between them is either constructive or destructive. The phase change is caused by a change in the refractive index of the material.
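The interference math behind this is compact enough to sketch numerically. The snippet below is illustrative only (the function name and the ideal 50/50 split are assumptions, not from the article): an ideal MZI transmits cos²(Δφ/2) of the input power, so a phase shift of 0 produces constructive interference (a “1”) and a shift of π produces destructive interference (a “0”).

```python
import math

def mzi_transmission(delta_phi):
    """Relative output power of an idealized Mach-Zehnder Interferometer.

    The input beam is split equally between two arms, a phase shift
    delta_phi is applied to one arm, and the arms are recombined.
    Interference gives a transmission of cos^2(delta_phi / 2).
    """
    return math.cos(delta_phi / 2) ** 2

# delta_phi = 0  -> constructive interference, full transmission ("1" bit)
# delta_phi = pi -> destructive interference, no transmission ("0" bit)
print(mzi_transmission(0))        # 1.0
print(mzi_transmission(math.pi))  # effectively 0.0
```

Modulating the refractive index of one arm with the electrical data signal therefore swings the output between full and zero transmission, which is exactly the on/off keying described above, without ever switching the laser itself.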

Another type of modulator, the electro-absorption (EA) modulator, changes the amount of light absorbed using an applied electric field.

While photons move at the speed of light, electrons do not, and this creates a performance gap between the two systems. If the optics could only send information at the rate of a single modulator/demodulator, optical would still hold an advantage, given the power consumption and drive requirements of SerDes circuitry, but the costs would be prohibitive. The photonics components are capable of much higher bandwidth, so many electrical channels need to be multiplexed onto a single optical channel.

This is achieved using wavelength division multiplexing. Combining the multiple beams is simple, given that beams at different frequencies do not interfere with each other. This diagram from Intel in the previous article is reproduced here for clarity.


Source: Continuous Silicon Laser. Intel Corporation

It shows how different laser frequencies are created using different-length laser cavities, each modulated and then combined before being taken off-chip to the corresponding demultiplexer and demodulators.
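As a rough illustration of the scheme (all wavelengths, bit patterns, and rates here are hypothetical, not values from the article or from Intel), wavelength-division multiplexing can be modeled as carriers that share one waveguide without interfering, with the demultiplexer recovering each lane by its wavelength:

```python
# Toy model of wavelength-division multiplexing. All values are
# illustrative assumptions, not figures from the article.

def multiplex(lanes):
    """Combine lanes keyed by carrier wavelength (nm). Because carriers
    at different wavelengths do not interfere, the shared channel is
    simply the collection of all lanes."""
    return dict(lanes)

def demultiplex(channel, wavelength_nm):
    """Recover one lane by filtering on its carrier wavelength."""
    return channel[wavelength_nm]

# Three electrical lanes, each modulating its own laser wavelength.
lanes = {1310: [1, 0, 1], 1330: [0, 1, 1], 1350: [1, 1, 0]}
shared = multiplex(lanes)

# Each lane is recovered intact at the far end.
assert demultiplex(shared, 1330) == [0, 1, 1]

# Aggregate bandwidth scales with the number of wavelengths, e.g. at an
# assumed 25Gbps per lane, three lanes share one waveguide:
print(len(lanes) * 25)  # 75 (Gbps)
```

The point of the model is that the aggregate rate of the optical channel is the per-lane modulator rate multiplied by the number of wavelengths, which is how the slower electrical lanes are matched to the much higher bandwidth of the optical path.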

Finally, the optical channel has to be routed around the chip to connect the optical components together and get the optical signals on and off chip.

“Waveguides are embedded in the silicon,” says John Ferguson, product marketing manager for Calibre DRC applications at Mentor Graphics. “At a certain wavelength of light—and luckily this coincides with the frequency best suited to optical processing—silicon is transparent and silicon oxide is an insulator, meaning that you can make an almost ideal optical waveguide. So you can etch the silicon to carve out a waveguide.”

Juan Rey, senior director of engineering for Calibre DRC applications at Mentor, adds, “This explains why most applications we see today are silicon-on-insulator technology, so that the bottom oxide is one of those opaque layers.” This means that they can be created using a standard CMOS process.

A chip can contain any number of waveguides, and unlike electrical signals they can cross with almost no signal integrity issues between them. While few people exploit this today, it could be a big advantage in the future. Electrical signals have to be routed through metallization layers, requiring vias up and down, which adds to the power requirements; avoiding that overhead could ultimately lead to photonics being used for on-chip interconnect.

“There are three ways to integrate silicon photonics with CMOS circuits,” says Martin Eibelhuber, business development manager at EV Group. “The photonics can be buried beneath the CMOS layer, placed on top of it, or processed on a different wafer. Integrating the photonics by stacking it onto the CMOS wafer gives the advantage that it allows a flexible choice of electronics and photonics without adding thermal budget.”


Source: Roel Baets, Photonics Research Group, UGent – IMEC

Given that it is so simple to put the waveguide into the silicon, why would anyone consider putting it elsewhere? “The challenge is that all of the photonics devices are huge compared to most advanced technology nodes,” explains Rey. “Many of these devices cannot be shrunk because there are only a few wavelengths at which silicon is transparent, and these are on the order of 1µm. This restricts the dimensions of these devices.”

Kotura recently developed an electro-absorption (EA) modulator that it claims is 25 times smaller than a traditional Mach-Zehnder Interferometer (MZI) modulator. The EA modulator is 55µm long, while the company says a competing MZI version would be measured in millimeters.

This is one reason IBM is manufacturing its opto-electronic chips at 90nm today: the company believes this provides a reasonable cost balance between the electronics and photonics components.

This mismatch in component sizes is one reason why some companies believe that the correct way to go is to integrate components such as the laser using flip chip technology, or perhaps ultimately 2.5D or 3D die stacks. “Components such as modulators are of the order of 10s of microns,” Rey says. “The width of waveguides tends to be from about 100nm to 1µm.”

But there are challenges with simply moving back to older nodes, apart from the density, power, and cost of the electronic components. Many of the same issues that matter for the smallest electronic components apply equally to photonics. For example, even though the photonics devices are larger, they are more sensitive to variations in thickness and in some of the critical gaps between components. Device behavior depends heavily on the actual shapes and dimensions defined by the designer.

“Technologies such as this could be disruptive and require a lot of work by the EDA industry to support,” says Ferguson. “With Calibre we ask what needs to be done differently to verify a photonics structure compared to a traditional IC structure. The biggest difference is in the shapes. We are used to things being on a Manhattan grid with straight lines and maybe a 45˚ turn, and most verification engines are tuned to that. In the photonics world, abrupt 90˚ turns would be a problem, so we need carefully engineered curves for the waveguides. Verifying that becomes a unique difference, but because of the investment we put in years ago with equation-based verification, we are able to handle it. We can measure the width of a bent curve, look at its various segments, and mathematically compute whatever we need to get the required result.”

Another tool required for a fully integrated EDA flow is layout-versus-schematic (LVS) checking. Here the situation is similar, but the challenges are a little trickier because the tool has to know what the curve was meant to be in the first place. In a nutshell, this requires that the industry decide how to convey that information, along with the many other new pieces required for the optical components, such as rule files.

“There are some pieces missing today, such as simulation,” says Ferguson. “We are used to doing SPICE simulations, which are very electronic- and device-specific. With photonics it is not only the devices; you have to make sure your interconnects are behaving exactly as specified. This requires simulation of the optical behavior, and there hasn’t been a lot of work done on large-scale photonics simulation. We need a high-capacity, high-performance optical simulator tied into the electrical simulation. This either has to be post-layout, or you need a way to estimate things.”

The industry is trying to catch up with the technology. For example, Steve Schultz, president of Si2, notes that “there are no photonics-aware PDK standards, and no standard APIs to link commercial EDA and photonics tools together, which are both essential for broad commercialization of this capability.” “There is a European project named Plat4M that is working on the development of these,” adds Kevin Nesmith, chief architect at Si2, “and how to store the generic PCell format.”

While all the components of a silicon photonics system are possible today, those who venture into this area will be taking on a custom-chip development with very few tools ready to help them. This is what the bleeding edge looks like today.


Paul Brunemeier says:

Not clear whether Si-integrated lasers is the Holy Grail or just another tiltable windmill.

A real challenge in large CPUs is synchrony: getting all the critical circuits to agree on what time it is so that they work together without phase errors. The larger the device and the higher the frequency, the worse this problem becomes, as clock signals begin to suffer phase errors due to signal propagation delays.

An optical strobe pulse is, as you might gather, much faster, and propagation phase error is zero for practical purposes. If an optical signal can be flashed through the entire chip (say, at an energy below the substrate band gap) then sensors anywhere on the chip can detect it with zero phase error and keep all circuits synchronized.

This is a practical need and a potentially practical solution.

A less clear need and a pretty impractical solution has to do with data transfer.

As you are well aware, electrons and holes are fermions, which means that they are conserved—they hang around and don’t decay, as far as anyone is able to tell. (This doesn’t apply to an electron-hole pair, which is really sort of a boson.) Electrons will reside happily in a capacitor for a nearly indefinite period of time, at least compared to GHz time scales.

Photons are bosons, which means that they have essentially zero shelf life. In the absence of free space they disappear quickly and turn into some other form of energy—like heat. Heat is a poor signal propagator.

Given the short lifetime of photons, a basic problem with lasers on silicon is that they need always to be on, especially in the laser pump case, in order to transfer data. In some architectures this makes sense, but in most the benefits are not worth the costs in heat dissipation, power consumption, and added complexity of design, fabrication process, device integration, and packaging.

So—interesting, but probably not going into an Xbox anytime soon.

Paul Brunemeier, Ph.D.
Director of Research and Development
Ted Pella, Inc.

Brian Bailey says:

Thanks Paul. I agree that we have now proved that it is possible. The next step is to work out if it is practical and economic. For some applications, especially data centers, that may be so, but for others, I tend to agree with you, that the economics will just not be there.
