System Bits: June 12

Big data analytics coding; keeping wireless network data fresh; spooky quantum particle pairs.


Writing complex ML/DL analytics algorithms
Rice University researchers in the DARPA-funded Pliny Project believe they have the answer for every stressed-out systems programmer who has struggled to implement complex objects and workflows on ‘big data’ platforms like Spark and thought: “Isn’t there a better way?” Their answer: Yes with PlinyCompute, which the team describes as “a system purely for developing high-performance, big data codes.”

The team said that like Spark, PlinyCompute aims for ease of use and broad versatility. However, Chris Jermaine, the Rice computer science professor leading the platform’s development said, unlike Spark, PlinyCompute is designed to support the intense kinds of computation that have only previously been possible with supercomputers, or high-performance computers (HPC). “With machine learning, and especially deep learning, people have seen what complex analytics algorithms can do when they’re applied to big data. Everyone, from Fortune 500 executives to neuroscience researchers, is clamoring for more and more complex algorithms, but systems programmers have mostly bad options for providing that today. HPC can provide the performance, but it takes years to learn to write code for HPC, and perhaps worse, a tool or library that might take days to create with Spark can take months to program on HPC. Spark was built for big data, and it supports things that HPC doesn’t, like easy load balancing, fault tolerance and resource allocation, which are an absolute must for data-intensive tasks. Because of that, and because development times are far shorter than with HPC, people are building new tools that run on top of Spark for complex tasks like machine learning, graph analytics and more.”

Further, because Spark wasn’t designed with complex computation in mind, its computational performance can only be pushed so far, noted Jia Zou, a Rice research scientist and first author of an ACM SIGMOD paper describing PlinyCompute.

Rice University research scientist Jia Zou is first author of a new peer-reviewed study about PlinyCompute. Source: Rice University

“Spark is built on top of the Java Virtual Machine, or JVM, which manages runtimes and abstracts away most of the details regarding memory management,” said Zou, who spent six years researching large-scale analytics and data management systems at IBM Research-China before joining Rice in 2015. “Spark’s performance suffers from its reliance on the JVM, especially as computational demands increase for tasks like training deep neural networks for deep learning.

“PlinyCompute is different because it was designed for high performance from the ground up,” Zou said. “In our benchmarking, we found PlinyCompute was at least twice as fast and in some cases 50 times faster at implementing complex object manipulation and library-style computations as compared to Spark.”

Tests have shown that PlinyCompute outperforms comparable tools for building high-performance tools and libraries, but not all programmers will find it easy to write code for it: unlike the Java-based coding required for Spark, PlinyCompute libraries and models must be written in C++.

Algorithm gives networks most current info, avoids congestion
For wireless networks that share time-sensitive information on the fly, it’s not enough to transmit data quickly: that data also needs to be fresh.

Consider the many sensors in a car. While it may take less than a second for most sensors to transmit a data packet to a central processor, the age of that data may vary, depending on how frequently a sensor is relaying readings. In an ideal network, these sensors should be able to transmit updates constantly, providing the freshest, most current status for every measurable feature, from tire pressure to the proximity of obstacles. But there’s only so much data that a wireless channel can transmit without completely overwhelming the network.

How, then, can a constantly updating network — of sensors, drones, or data-sharing vehicles — minimize the age of the information that it receives at any moment, while at the same time avoiding data congestion?

To address this, MIT researchers in the Laboratory for Information and Decision Systems have come up with a way to provide the freshest possible data for a simple wireless network.

A new algorithm developed by MIT researchers helps keep data fresh within a simple communication system, such as multiple drones reporting to a single control tower. Source: MIT

Interestingly, this method may be applied to simple networks, such as multiple drones that transmit position coordinates to a single control station, or sensors in an industrial plant that relay status updates to a central monitor, the team said. Eventually, they hope to tackle even more complex systems, such as networks of vehicles that wirelessly share traffic data.

Eytan Modiano, professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems said, “If you are exchanging congestion information, you would want that information to be as fresh as possible. If it’s dated, you might make the wrong decision. That’s why the age of information is important.”

Traditional networks are designed to maximize the amount of data that they can transmit across channels, and minimize the time it takes for that data to reach its destination. Only recently have researchers considered the age of the information — how fresh or stale information is from the perspective of its recipient.
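As a rough illustration (an assumption on my part, not taken from the study), the age of information for a single source can be tracked as the time elapsed since the generation of the most recently received update. The class and method names below are hypothetical.

```python
# Minimal sketch (assumed, not from the MIT paper): tracking the age of
# information (AoI) of one sensor's data as seen by the receiver.
class AoITracker:
    def __init__(self):
        self.latest_generation_time = None  # timestamp of the newest received update

    def on_update_received(self, generation_time):
        # Keep only the freshest update; an out-of-order older packet does not reduce AoI.
        if self.latest_generation_time is None or generation_time > self.latest_generation_time:
            self.latest_generation_time = generation_time

    def age(self, now):
        # AoI at time t = t minus the generation time of the newest received update.
        if self.latest_generation_time is None:
            return float("inf")  # nothing received yet, so the data is infinitely stale
        return now - self.latest_generation_time
```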

The team’s solution lies in a simple algorithm that essentially calculates an “index” for each node at any given moment. A node’s index is based on several factors: the age, or freshness of the data that it’s transmitting; the reliability of the channel over which it is communicating; and the overall priority of that node.

For example, a more expensive or faster drone, one the operator wants better or more accurate information about, can be assigned a high priority.

Nodes with a higher priority, a more reliable channel, and older data are assigned a higher index, while nodes with lower priority, spottier channels, and fresher data are labeled with a lower index.

A node’s index can change from moment to moment. At any given moment, the algorithm directs the node with the highest index to transmit its data to the receiver. By prioritizing in this way, the team found, the network is guaranteed to receive the freshest possible data on average from all nodes, without overloading its wireless channels.
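The article does not give the exact formula, but one plausible reading of the description is an index that grows with a node's priority weight, its channel reliability, and the current age of its data, with the highest-index node transmitting in each slot. The sketch below is an illustration under those assumptions, not the researchers' actual algorithm, and the field names are hypothetical.

```python
import random

# Hedged sketch of an index-style scheduler: in each time slot, the node with
# the highest index (here assumed to be priority x reliability x age) transmits.
def run_scheduler(nodes, slots):
    # nodes: list of dicts with hypothetical fields:
    #   'priority'    - relative importance weight of the node
    #   'reliability' - probability a transmission on this channel succeeds
    #   'age'         - time slots since the receiver last got a fresh update
    for _ in range(slots):
        chosen = max(nodes, key=lambda n: n["priority"] * n["reliability"] * n["age"])
        delivered = random.random() < chosen["reliability"]
        for n in nodes:
            if n is chosen and delivered:
                n["age"] = 1      # fresh update received this slot
            else:
                n["age"] += 1     # everyone else's data grows one slot staler
    return nodes

# Example: three drones reporting to one control station.
drones = [
    {"priority": 2.0, "reliability": 0.9, "age": 1},  # high-value drone
    {"priority": 1.0, "reliability": 0.6, "age": 1},
    {"priority": 1.0, "reliability": 0.8, "age": 1},
]
run_scheduler(drones, slots=1000)
```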

Flying like weird curveballs
According to Georgia Tech researchers, curvy baseball pitches have surprising things in common with quantum particles described in a new physics study, though the latter fly much more weirdly. In fact, ultracold paired particles called fermions must behave even more weirdly than physicists previously thought. The team mathematically studied the flight patterns of quantum particles renowned for their weirdness.

In the new study, the researchers even predicted that the particles can act like a different kind of quantum particle, called bosons, mimicking the manner in which photons, or particles of light, fly. A simplified explanation of these ultracold paired particles and their odd flights is below.

Those influences all combine to give fermions a trajectory repertoire much odder than that of any master baseball pitcher, and the new study maps it out and opens new ways to observe it experimentally. The Georgia Tech team took the offbeat approach of adding quantum optical, or light-like, ideas to their predictive calculations of these specks of matter and arrived at eyebrow-raising, insightful results.

“A particle’s motion is usually frantic, but the cooling slows it down almost to a standstill,” said Uzi Landman, a Georgia Tech physics professor and director of the Georgia Tech Center for Computational Materials Science. “And these particles also have wave properties, and at that temperature, the wavelength grows enormously long.”

“The waves become microns in size. That would be like a pebble growing to be a third of the size of this country. When that happens, the atom actually becomes visible under an optical microscope.”

The inflated size makes it easier for researchers to know the two particles’ starting locations. When they turn the laser tweezers off, the fermions fly away. The particles’ wave properties also have a lot to do with their weird flights.

“A particle in motion will act as a projectile under certain circumstances. But in others, it will behave like a wave,” Landman said. “We call it the quantum world duality.”

“As crazy as all this looks, there appears to be strong reliability in these behaviors that could even be predictably and practically manipulated,” Landman said.

As with a pitcher who finesses a screwball’s path, physicists could determine a fermion’s weird flight using quantum mechanical formulation, advanced computational simulation, and experimentation, the study said.

The top row illustrates a ground state boson-like (paired, i.e. bunched) configuration of two ultracold atoms trapped in a double-well potential on the left, and on the right, a fermion-like (unpaired, or anti-bunched) configuration of the two atoms. The atoms are represented by balls with the atoms’ spins indicated by the up and down arrows. The wave functions corresponding to the atoms are superimposed on the atoms.
In the bottom row are the theoretically predicted two-atom momentum (denoted as k1 and k2) correlation maps corresponding to the configurations in the top row. These momentum correlation maps, which contrast in the two cases shown here, could be measured in laboratory experiments when the double-well trapping potentials are turned off, and the results can be used to fully characterize the flights taken by the two atoms for each of the starting configurations. Such information, uncovered by the Georgia Tech physicists’ theoretical modeling, is aimed at aiding the design and analysis of ultracold atom experiments that seek to reveal the quantum nature of ultracold matter. Source: Georgia Tech / Uzi Landman

Why does all of this weirdness matter? Landman explained, “It looks like you may even be able to engineer what this quantum weirdness does. If you know particle states reliably, you may be able to use them as a resource for quantum computations and information storage and retrieval.”


