
Can Coherent Optics Reduce Data-Center Power?

The good and bad of replacing electrical signals with optical ones.


As optical bandwidth requirements increase, system designers are turning to “coherent” modulation schemes that can place more data on the same laser light, and lower power over long connections.

A newer question is whether those savings could be achieved for short connections within data centers, as well.

“Coherent is the direction everything’s moving, because for a given system and power budget you’re trying to pack as much data as you possibly can,” said James Pond, principal product manager for photonics at Ansys. “And coherent data communications is a way to pack more data.”

In addition to its use in longer-range applications, coherent optics is poised to help reduce the power required by intra–data-center communications. That can be done by lowering individual laser power, as well as by reducing the number of lasers used, but support circuitry may make those savings harder to achieve.

Photonics in the data center
Much of today’s installed photonics is dedicated to long-haul transmission — signals that go thousands of kilometers. Power is presumably a consideration in that application, but the bigger focus on power is in the data center.

As data-center workloads grow, more data needs to move to and from more places. At present, most of that movement happens over copper wire. Optics have had their biggest impact on connections across campuses or from one data center to another, at distances shorter than a long-haul connection.

Beyond that, the move to disaggregate the components on today’s servers comes with a promise and a challenge. The promise is that more efficient use can be made of data-center resources. The challenge is that resources that might previously have been collocated in the same server may now need to work together from a greater distance.

This configuration will challenge copper’s ability to provide low-enough latency, with photonics providing the obvious solution. If that happens, photonics would proliferate throughout the data center, where power and power efficiency are paramount.

Where the lasers are
The typical photonics configuration for a link is relatively straightforward. A laser creates light that can be modulated for transmission down the line. At the end, a receiver detects that light and decodes the signal.

Photonics has the benefit that the transmission medium and the passive components along the way consume no energy in doing their work. Instead, any necessary energy is built into the original laser output, making the laser the sole consumer of energy. That, in turn, makes the laser a prime candidate for power reduction.

For applications that can sustain higher-cost implementations, III-V materials can both provide the laser and modulate the signals. But proliferation in the data center calls for lower-cost photonics, and silicon photonics can provide much of what’s needed. The one thing that it can’t provide, however, is the lasing function itself. As an indirect-bandgap material, silicon is not capable of creating the laser light — only manipulating it.

For this reason, any connection within the data center will need a non-silicon laser coupled with silicon photonics. Currently, that tends to mean pluggable optics modules whose signals must be converted to electrical form for delivery to the final destination. That electrical link typically relies on SerDes to handle the high bandwidth, so it is also a significant consumer of energy.

If the technology can be worked out, lasers can be co-packaged with silicon inside advanced packages. This eliminates the need for the electrical link, lowering power. But the fact remains that, using this model, each connection will need a laser.

So a move to photonics in the data center using today’s standard approaches means a huge number of lasers, each of which has to generate enough intensity to deliver its payload reliably.

Lasers generate heat, and that heat has to be managed — possibly requiring additional cooling, which also consumes energy. That provides three opportunities for lower power: requiring less power from each laser, reducing the number of lasers, and, as a knock-on effect, reducing the amount of cooling needed.

The move to coherent
Traditional optics is amplitude-based. That is, consistent with common intuition, optical signals are modulated by varying the amplitude or intensity of the light to carry data to the destination. At its simplest, this is analogous to how most electrical data signals are carried across wires.

The challenge with the traditional approach is that the original laser beam being modulated must be strong enough to ensure that the data arrives intact at the receiver. The signal can’t be merely detectable; it must stand sufficiently above any noise so that it can be cleanly and reliably separated from it.
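As a rough illustration of why amplitude modulation forces higher laser power, the sketch below counts detection errors for a hypothetical on-off-keying link with Gaussian receiver noise (a toy model constructed for this article’s point, not anything from the sources quoted). With ample amplitude margin over the noise, errors vanish; with the amplitude near the noise floor, errors pile up.

```python
import random

def ook_errors(bits, amplitude, noise_sigma, seed=0):
    """Count detection errors for on-off-keyed bits.

    Each received sample is amplitude*bit plus Gaussian receiver noise;
    a fixed mid-point threshold decides between 0 and 1. (Illustrative
    model only -- real receivers are far more elaborate.)
    """
    rng = random.Random(seed)
    threshold = amplitude / 2
    errors = 0
    for b in bits:
        sample = amplitude * b + rng.gauss(0, noise_sigma)
        if (sample > threshold) != bool(b):
            errors += 1
    return errors

rng = random.Random(1)
bits = [rng.randrange(2) for _ in range(10_000)]

strong = ook_errors(bits, amplitude=1.0, noise_sigma=0.1)  # ample margin
weak = ook_errors(bits, amplitude=0.2, noise_sigma=0.1)    # margin ~ noise
print(strong, weak)  # strong is near zero; weak is in the hundreds
```

The only way to fix the weak case in an amplitude-based scheme is to raise the laser power, which is exactly the cost coherent modulation avoids.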

“Amplitude noise is made up of dispersion and non-linearities,” said Jigesh Patel, technical marketing manager at Synopsys. “Dispersion can be compensated by DSP at the receiver, but nonlinearities are hard to overcome (and may not be possible to compensate fully). The higher the laser power, the stronger the fiber nonlinearities.”

Coherent modulation places the carried information not in the amplitude of the laser light, but in the phase. Phase noise is less of an issue. “Phase noise is bad for performance, but modern lasers have narrower line widths than before,” continued Patel. “Larger line widths mean more phase noise.”

As a result, the strength of the original laser light can be reduced to the point where the modulated signal is just detectable at the endpoint. Noise in the amplitude will not affect the phase relationships, allowing this approach to bypass that noise entirely.

Fig. 1: A simplified view of basic coherent modulation. The left image shows standard modulation, where amplitude is measured. That amplitude must be sufficiently above the noise (gray) at the receiver to be reliably received. The right image shows coherent modulation, where the phase, not the amplitude, is measured. The amplitude can therefore be lower. (This doesn’t illustrate the use of polarization and quadrature.) Source: Bryon Moyer/Semiconductor Engineering

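The phase-versus-amplitude distinction can be sketched in a few lines. The toy QPSK modem below (a hypothetical construction for illustration, not a design from the article; noise is ignored) decodes a 100x-weaker carrier identically, because the decisions are made on phase alone.

```python
import cmath
import math

# Gray-coded QPSK: two bits select one of four phases of the carrier.
PHASES = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
          (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

def modulate(bit_pairs, amplitude):
    return [amplitude * cmath.exp(1j * PHASES[p]) for p in bit_pairs]

def demodulate(symbols):
    # The decision depends only on each symbol's angle, not its magnitude.
    def nearest(sym):
        def dist(target):
            d = abs(cmath.phase(sym) - target) % (2 * math.pi)
            return min(d, 2 * math.pi - d)  # wrap-around angular distance
        return min(PHASES, key=lambda bits: dist(PHASES[bits]))
    return [nearest(s) for s in symbols]

data = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 1)]
print(demodulate(modulate(data, 1.0)) == data)   # True
print(demodulate(modulate(data, 0.01)) == data)  # True: 100x weaker, same bits
```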

A prime motivator for this modulation approach is higher bandwidth. Unlike standard modulation, coherent systems can leverage light polarization as well as quadrature.

“Coherent systems allow sending information in in-phase and in-quadrature components of each of the two (x- and y-) polarizations of light because DSPs can recover data from polarizations and from the in-phase and quadrature components,” said Patel. This quadruples the bandwidth at a minimum.

“You’re sending not zeros and ones, but many symbols,” explained Pond. “By optimally utilizing different levels of amplitude and phases of the light, you can send multiple symbols. So this is why people start talking about baud rates instead of bit rates. You can get 32 or 64 symbols per chunk of time.”

And that is for a given wavelength of light. “You can multiply that across all the different wavelengths that you can put into a fiber,” Pond added.

Having made the change to coherent, other techniques can boost bandwidth further. “There is a whole range of different advanced modulation formats that people can use,” said Pond.

Patel concurred. “There are formats that can let you pack even more than 4 times (say, 16-times and 64-times) the data, but then the reach gets a bit shorter.”
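The arithmetic behind those multipliers is straightforward. In the sketch below, only the bits-per-symbol figures follow from the discussion above; the 64-Gbaud symbol rate and the 8 wavelengths per fiber are illustrative assumptions, not numbers from the article.

```python
import math

def coherent_bits_per_symbol(constellation_points, polarizations=2):
    # log2(points) bits per symbol per polarization;
    # using both polarizations doubles it.
    return polarizations * math.log2(constellation_points)

# DP-QPSK (4 constellation points, 2 polarizations) carries 4 bits per
# symbol, versus 1 bit for simple on-off keying -- the "quadruples the
# bandwidth at a minimum" figure.
print(coherent_bits_per_symbol(4))    # 4.0
print(coherent_bits_per_symbol(16))   # 8.0  (DP-16QAM)
print(coherent_bits_per_symbol(64))   # 12.0 (DP-64QAM)

# Aggregate rate for an assumed 64-Gbaud symbol rate across an assumed
# 8 wavelengths in one fiber:
rate_tbps = coherent_bits_per_symbol(16) * 64e9 * 8 / 1e12
print(rate_tbps, "Tb/s")
```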

This brings some additional benefits. “If you’re avoiding the big swings in amplitude of the laser, that solves a number of issues as well,” noted Pond.

For instance, in-line amplification may not be needed. “Coherent systems eliminate the need for having optical amplifiers in the link,” said Patel. “Optical amplifiers, such as erbium-doped fiber amplifiers [EDFAs], are expensive, need additional lasers for higher gain, and require extra maintenance.”

Coherent modulation and power
Power becomes an additional benefit, at least for the laser subsystem itself. With less laser power being generated, heat becomes easier to manage, further reducing power by lowering, though not necessarily eliminating, the cooling requirements.

“Since the information is no longer in the intensity, you don’t need to drive the laser too hard,” said Patel. “You just have some minimum power that the detector can detect.”

Such connections already are being planned for the shorter inter-data-center connections, of which there are perhaps dozens for a given data center. The big question is whether this could also work within the data center.

Receiving and decoding coherent signals is more complex. Using the traditional approach, you just need to detect the strength of the signal. “Either it’s a PIN photodiode or an avalanche photodiode,” explained Patel. “Its sole function is to take the optical power and convert it into a current.”

But this can’t be used for phase detection. “As soon as it gets converted into current, you lose the phase information,” he said. “You need to come up with some tricks on the receiving side so you can recover phase. The side effect of information in the phase is that your receiver becomes a little bit more complex.”
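A toy model can show why a photodiode alone loses phase, and how beating the signal against a local oscillator, the standard trick in coherent receivers, exposes it. This is an idealized sketch, not a real 90-degree-hybrid receiver design.

```python
import cmath
import math

def photodiode(field):
    # A photodiode responds to optical power |E|^2, so phase is lost.
    return abs(field) ** 2

def iq_mix(field, lo=1.0 + 0j):
    """Idealized coherent front end: mixing the signal field with a
    local-oscillator field yields in-phase (I) and quadrature (Q)
    components, from which phase can be computed."""
    mixed = field * lo.conjugate()
    return mixed.real, mixed.imag

# Two equal-power symbols that differ only in phase:
s0 = 0.5 * cmath.exp(1j * math.pi / 4)
s1 = 0.5 * cmath.exp(1j * 3 * math.pi / 4)

# Direct detection sees the same power for both -- indistinguishable.
print(photodiode(s0), photodiode(s1))

# Coherent detection recovers the two distinct phases.
i0, q0 = iq_mix(s0)
i1, q1 = iq_mix(s1)
print(math.atan2(q0, i0), math.atan2(q1, i1))
```

The extra step from (I, Q) samples to clean symbols is what the receiver-side DSP does, and it is that circuitry whose power draw is at issue here.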

At present, the additional DSP circuitry required may raise the power of the receiver. This still achieves net lower power on longer links that use higher laser power, but it’s less clear on the shorter links that would be used within the data center if fiber replaced copper in many connections.

Whether any such increase could be more than compensated by the lower overall laser power isn’t obvious, because these are lower-power lasers to begin with.

“Coherent transmission is a good option for inter-data-center communication, but is expensive (and potentially more power consuming) in intra-data-center connections because of the DSP at every transceiver,” said Patel. “Economies of scale and improvements in technology may make coherency inside the data center a possibility in the future.”

Relocating the lasers?
Power could be reduced further by eliminating the need for a laser to be associated with each individual connection. The links clearly need light that can be modulated, but the use of a laser per link is part of the long-haul legacy. Rethinking that approach could provide the opportunity for yet more reduction in power.

Pilot Photonics noted this possibility, based on a presentation at a 2018 Optical Fiber Communication Conference workshop by a team from AIST (Japan’s National Institute of Advanced Industrial Science and Technology), the University of Tokyo, Karlsruhe Institute of Technology (KIT), and Finisar.

“The idea is that instead of switching every packet electronically, the majority of the traffic is switched optically, with only a portion switched electronically,” explained Frank Smyth, co-founder and CTO of Pilot Photonics. “This saves an enormous amount of power and makes the continued data growth sustainable from a power consumption perspective.”

Increasing the number of data-center optical links from dozens to thousands means that silicon photonics must be leveraged as much as possible in order to take advantage of the lower costs of silicon processing. Those cost savings are hampered by the need for III-V materials at each link, encased in a hermetically sealed package. Instead, a single higher-power laser could be used to serve, say, a row of racks.

“You’d use indium phosphide to generate the light,” said Smyth. “Then you’d use silicon nitride to create micro-resonators on a very broad comb. And then you deliver that comb to each of the servers in a rack. And each one of those would have a silicon-on-insulator PIC [photonics IC], which has the modulators and the high-speed photonics. You might have a few hundred light sources, one to feed each rack or each row of racks.”

Because there are far fewer laser instances, this allows, for the most part, the best materials to be used for light generation. “Silicon nitride offers very low-loss waveguides that also have non-linear properties (when pumped with an InP laser), making them ideal for generating optical combs,” said Smyth.

The generated light then could be made available for each link to modulate independently in silicon, which is the exception to the “best materials” principle. While silicon is not the best modulating material, it’s good enough, and the economics require it.

Given that a single laser would be providing enough energy for many links, it would be a higher-intensity beam, although, as delivered to the rack, it would carry no information.

“One benefit of this being centralized is that you can then use an EDFA to amplify all channels simultaneously, giving you more power and allowing you to distribute to more racks,” noted Smyth. “EDFAs are quite efficient, so you get the benefits of lots of centralized and relatively efficient gain.”

While EDFAs are relatively expensive, as noted above, only one would be required per laser rather than one per link.
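A back-of-envelope budget suggests why one amplified source can still feed many racks: an ideal 1-to-N splitter costs 10·log10(N) dB. Every number in the sketch below is a hypothetical placeholder, not a vendor spec or a figure from the article.

```python
import math

def per_link_power_dbm(source_dbm, edfa_gain_db, n_links, excess_loss_db=3.0):
    """Optical power reaching each link after amplifying one central
    source and splitting it n_links ways. An ideal 1-to-N splitter
    costs 10*log10(N) dB; excess_loss_db lumps connector and waveguide
    losses into one illustrative figure.
    """
    split_loss_db = 10 * math.log10(n_links)
    return source_dbm + edfa_gain_db - split_loss_db - excess_loss_db

# Hypothetical numbers: a +10 dBm comb source, 20 dB of centralized
# EDFA gain, fanned out to 64 racks:
print(round(per_link_power_dbm(10, 20, 64), 1), "dBm per link")
```

Doubling the rack count costs only 3 dB per link, which is why centralized gain scales well compared with adding one discrete laser per link.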

Cooling for the laser would be done centrally, eliminating the need for server-level laser cooling. That central cooling would be more efficient than the distributed cooling required for the current architecture. Chips handling the photonics would no longer have to be hermetically sealed, reducing cost.

This is not to suggest that no cooling is needed in the racks. High-power computation is likely still to need cooling. But it removes the need to cool lasers in each rack.

Within the rack, then, this would provide the equivalent of a light rail alongside the power and ground rails. The central laser generation system would need to be very reliable, since laser failure would affect a huge number of links rather than just one. Redundancy at the laser source would likely be needed.

Fig. 2: Server racks can benefit from a centralized laser source. In this simplified depiction, yellow indicates a laser; peach indicates light distributed through fiber; blue indicates cooling. The top image shows individual lasers for each link (two shown per shelf for simplicity). Cooling would be needed for all of those lasers. The lower image shows a centralized laser, where the cooling can be focused on the laser itself rather than being distributed through the racks. Not shown is cooling needed for the other chips. Source: Bryon Moyer/Semiconductor Engineering


The data center then gets the benefits of several reductions in power:

  • Coherent signals require lower-power lasers;
  • A single laser is more efficient than multiple lasers; and
  • Cooling power can be reduced.

In addition, for links using pluggable optics, the power-hungry electrical SerDes links could be eliminated. The remaining question then is how much the DSPs, which are required in coherent receivers, negate those savings.

Today’s newer optical standards focus on coherent modulation. “The very first coherent transceiver based on a standard like Ethernet has just been released,” said Smyth. “It’s called 400 ZR. It’s an attempt to standardize coherent optics for the long-haul network.”

And more is coming beyond that. “With 800 ZR, everything is coherent,” said Patel. “400 ZR is also mostly coherent.”

But short-range optical connections would likely need new standards for interoperability within the data center. Such standards can take a few years to complete.

Conclusion
Moving to coherent optics between data centers is just starting. “In 2022, Microsoft is going to roll out this kind of interconnect in their data centers,” said Patel.

The changes that could motivate the move from copper to fiber within the data center are still a relatively long time away. Serious architecture changes could result, and such changes aren’t made lightly. The motivating OFC presentation postulated 2028 as a time when this could be in place.

“This is a pretty disruptive architecture and very different from how data centers are architected today,” cautioned Smyth. “I don’t believe it is in use anywhere, but most of the technology blocks are available now, so an architecture like this could potentially be deployed within five years if some of the hyperscale data center operators were to pick up on it. The additional benefits from the majority of the traffic being optically switched, and only a small portion electrically switched, is probably a bit further out because optical switching at that scale is still a bigger challenge.”



