Quantum coupling; programmable network routers; parallel computing.
Probing photon-electron interactions
According to Rice University researchers, where light and matter intersect, the world illuminates; where they interact so strongly that they become one, they illuminate a world of new physics. The team is closing in on a way to create a new condensed-matter state in which all the electrons in a material act as one, manipulating them with light and a magnetic field. The approach is made possible by a custom-built, finely tuned cavity for terahertz radiation and exhibits one of the strongest light-matter coupling phenomena ever observed.
The work by Rice physicist Junichiro Kono and his colleagues could help advance technologies like quantum computers and communications by revealing new phenomena to those who study cavity quantum electrodynamics and condensed matter physics.
The team explained that this is a nonlinear optical study of a 2D electronic material: when light is used to probe a material’s electronic structure, it is the light’s absorption, reflection, or scattering that is measured to see what is happening inside the material.
The work falls under the general subject of cavity quantum electrodynamics (QED), in which a cavity enhances the light so that matter inside it interacts resonantly with the vacuum field. What is unique here is that the light interacts with a huge number of electrons, which behave collectively like a single gigantic atom. Solid-state cavity QED is therefore also key for applications in quantum information processing, such as quantum computers, where the light-matter interface is important because that is where light-matter entanglement occurs. In that way, the quantum information of matter can be transferred to light, and the light can be sent elsewhere.
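This collective behavior has a standard quantitative signature in cavity QED that is textbook physics rather than anything specific to the Rice experiment: when N identical dipoles couple to a single cavity mode, the bright collective mode couples √N times more strongly than one dipole does, which is why a macroscopic ensemble of electrons can reach very strong coupling. As a sketch:

```latex
% Single-dipole vacuum coupling strength g; in the Dicke/Tavis-Cummings
% picture, N dipoles coupled to one cavity mode yield a collective coupling
g_N = g\sqrt{N},
% and the "ultrastrong" coupling regime is conventionally taken to mean
\frac{g_N}{\omega_{\text{cav}}} \gtrsim 0.1,
% where \omega_{\text{cav}} is the cavity resonance frequency.
```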
Further, the team said that to improve the utility of cavity QED for quantum information, the stronger the light-matter coupling, the better; the system also has to be a scalable, solid-state one rather than an atomic or molecular one, and that is what they have achieved.
Programmable routers for flexible traffic management
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), along with the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems hope to solve the current challenges of network traffic with routers that are programmable but can still keep up with the blazing speeds of modern data networks.
The researchers noted that, like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion: when network traffic is heavy, packets of data can get backed up at network routers or dropped altogether. Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if a better algorithm is created, network operators have to wait for a new generation of hardware before they can take advantage of it.
However, Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT, said the new work shows that many flexible traffic-management goals can be achieved while retaining the high performance of traditional routers.
Previously, programmability was achievable, but it was not used in production because programmable routers were a factor of 10 or even 100 slower.
With the platform the team developed, developers are constrained not by hardware or technological limitations but by their creativity, so innovation can happen more rapidly.
Traffic management can get tricky because of the different types of data traveling over a network and the different performance guarantees offered by different services. Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them, all under a variety of circumstances. While many of these schemes promise improved network performance in simulations, few have ever been deployed, because of hardware constraints in routers.
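The two knobs described above — which packets to admit or drop, and in what order to release them — can be sketched in software. The following toy queue is purely illustrative (the class and parameter names are invented here, not taken from the MIT papers): an admission rule drops packets when the buffer is full, and a pluggable priority key decides departure order.

```python
import heapq

class PolicyQueue:
    """Toy router queue with a pluggable traffic-management policy:
    a drop-tail admission rule plus a programmable priority key.
    Illustrative only; not the circuit-level design from the papers."""

    def __init__(self, capacity, priority_key):
        self.capacity = capacity          # max packets buffered
        self.priority_key = priority_key  # smaller key departs first
        self._heap = []
        self._seq = 0                     # tie-breaker preserves FIFO order

    def enqueue(self, packet):
        """Admit the packet, or drop it if the buffer is full."""
        if len(self._heap) >= self.capacity:
            return False                  # packet dropped
        heapq.heappush(self._heap,
                       (self.priority_key(packet), self._seq, packet))
        self._seq += 1
        return True

    def dequeue(self):
        """Forward the buffered packet with the smallest priority key."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```

Swapping in a different `priority_key` (say, favoring short flows or marked traffic classes) changes the scheme without touching the queue machinery — the kind of separation the researchers want in hardware.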
The researchers set themselves the goal of finding a set of simple computing elements that could be arranged to implement diverse traffic management schemes, without compromising the operating speeds of today’s best routers and without taking up too much space on-chip.
To test their designs, the researchers built a compiler and used it to compile seven experimental traffic-management algorithms onto their proposed circuit elements. If an algorithm would not compile, or if it required an impractically large number of circuits, they added new, more sophisticated circuit elements to their palette.
In one of the two new papers, the researchers provide specifications for seven circuit types, each of which is slightly more complex than the last. Some simple traffic management algorithms require only the simplest circuit type, while others require more complex types. But even a bank of the most complex circuits would take up only 4 percent of the area of a router chip; a bank of the least complex types would take up only 0.16 percent.
Beyond the seven algorithms they used to design their circuit elements, the researchers ran several other algorithms through their compiler and found that they compiled to some combination of their simple circuit elements.
The second paper describes the scheduler, the circuit element that orders packets in the router’s queue and extracts them for forwarding. In addition to queuing packets according to priority, the scheduler can also stamp them with particular transmission times and forward them accordingly.
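The time-stamping behavior can be illustrated in a few lines. In this sketch (names and the rate-based stamping rule are assumptions for illustration, not the paper's hardware design), each arriving packet is stamped with the earliest time the outgoing link can carry it, and the scheduler releases packets only once their stamped time has arrived.

```python
import heapq

class TimeStampedScheduler:
    """Sketch of a scheduler that stamps each packet with a transmission
    time on arrival and forwards packets in stamp order. Illustrative
    only; the real design is a hardware circuit element."""

    def __init__(self, rate_bytes_per_ms):
        self.rate = rate_bytes_per_ms
        self._heap = []
        self._next_free = 0.0   # earliest time the link is free again

    def enqueue(self, packet, now_ms):
        # Stamp: the packet may leave once the link has drained
        # everything scheduled ahead of it.
        stamp = max(now_ms, self._next_free)
        self._next_free = stamp + packet["size"] / self.rate
        heapq.heappush(self._heap, (stamp, packet["id"], packet))

    def dequeue(self, now_ms):
        """Forward the earliest-stamped packet whose time has arrived."""
        if self._heap and self._heap[0][0] <= now_ms:
            return heapq.heappop(self._heap)[2]
        return None
```

Changing the stamping rule here would enforce a different policy (rate limiting, pacing, deadlines) while the release machinery stays fixed.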
Finally, the researchers drew up specifications for their circuits in Verilog, the language electrical engineers typically use to design commercial chips. Verilog’s built-in analytic tools verified that a router using the researchers’ circuits would be fast enough to support the packet rates common in today’s high-speed networks, forwarding a packet of data every nanosecond.
Solving large-scale network problems
Purdue University computer science professor Alex Pothen and his research team have been selected by Intel Corp. for two years of funding as a Parallel Computing Center to design new algorithms and software for massive networks. The team is working on problems associated with networks with billions of nodes and links, the implications of which could be felt in areas ranging from medical research to consumer issues.
Current state-of-the-art algorithms can take several days to process such networks, if the processing is feasible at all. Pothen and his group will work on solutions that decrease that time to a few hours or less.
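To give a flavor of the kind of computation involved — without claiming this is Purdue's actual code — networks with billions of nodes are typically processed with simple per-vertex kernels that parallelize well. Greedy graph coloring, shown below on a small example, is one such kernel (used, for instance, to find sets of vertices that can be processed independently).

```python
from collections import defaultdict

def greedy_coloring(edges):
    """Greedy graph coloring: assign each vertex the smallest color not
    used by its already-colored neighbors. An illustrative kernel for
    large-network analysis, not the team's production software."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in adj:                      # visit vertices in insertion order
        used = {color[n] for n in adj[v] if n in color}
        c = 0
        while c in used:               # smallest color absent among neighbors
            c += 1
        color[v] = c
    return color
```

On billion-edge graphs, the research challenge is doing this kind of sweep in parallel across many cores without the per-vertex conflicts that a serial loop like this one avoids by construction.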
This November, Pothen and his fellow researchers will present their results at the ACM/IEEE Supercomputing conference (SC16) in Salt Lake City.