System Bits: Nov. 14

Cyber attack tracking; entangled atoms; next-gen search.


Tracking cyber attacks
According to Georgia Tech, assessing the extent and impact of network or computer system attacks has largely been a time-consuming manual process. A new software system being developed by cybersecurity researchers there will largely automate that process, allowing investigators to quickly and accurately pinpoint how intruders entered the network, what data they took, and which computer systems were compromised.

Known as Refinable Attack INvestigation (RAIN), the system will provide forensic investigators a detailed record of an intrusion, even if the attackers attempted to cover their tracks. The team said RAIN will provide multiple levels of detail, facilitating automated searches through information at a high level to identify the specific events for which more detailed data should be reproduced and analyzed.

Schematic of how the Refinable Attack INvestigation (RAIN) system, developed by researchers at the Georgia Institute of Technology, prunes information about system operation. Source: Georgia Tech

Wenke Lee, co-director of Georgia Tech’s Institute for Information Security & Privacy, said, “You can go back and find out what has gone wrong in your system, not just at the point where you realized that something is wrong, but far enough back to figure out how the attacker got into the system and what has been done.”

The research is supported largely by the Defense Advanced Research Projects Agency (DARPA) and also by the National Science Foundation and Office of Naval Research.

Getting entangled atoms
Potentially providing a tool for highly precise sensing and quantum computer applications, National Institute of Standards and Technology (NIST) researchers have devised a method to link a group of atoms’ quantum mechanical properties among themselves far more quickly than is currently possible.

The method has not yet been demonstrated experimentally, but would theoretically speed up the process of quantum entanglement, in which the properties of multiple particles become interconnected with one another, the team said.

While quantum entanglement usually spreads through the atoms in an optical lattice via short-range interactions with the atoms’ immediate neighbors (left), new theoretical research shows that taking advantage of long-range dipolar interactions among the atoms could enable it to spread more quickly (right), a potential advantage for quantum computing and sensing applications.
Source: NIST

Entanglement would propagate through a group of atoms in dramatically less time, allowing scientists to build an entangled system exponentially faster than is common today.

Arrays of entangled atoms suspended in laser light beams, known as optical lattices, are one approach to creating the logic centers of prototype quantum computers, but an entangled state is difficult to maintain more than briefly. Applying the method to these arrays could give scientists precious time to do more with these arrays of atoms before entanglement is lost in a process known as decoherence, the researchers explained.

The method takes advantage of a physical relationship among the atoms called dipolar interaction, which allows atoms to influence each other over greater distances than previously possible.

Applying the technique would center on adjusting the timing of laser light pulses, turning the lasers on and off in particular patterns and rhythms to quickly switch the suspended atoms into a coherent entangled system.

The approach also could find application in sensors, which might exploit entanglement to achieve far greater sensitivity than classical systems can.

The researchers are upbeat about the work’s prospects. “We think this is a practical way to increase the speed of entanglement,” research team member Alexey Gorshkov said. “It was cool enough to patent, so we hope it proves commercially useful to someone.”

Fruit fly brains inform search engines of the future
Similarity searches are what smartphone apps and websites use to crunch huge sets of data, and performing these massive matching games both well and quickly has been an ongoing challenge for computer scientists. Interestingly, researchers at the Salk Institute and UC San Diego have now discovered that the fruit fly brain has an elegant and efficient method of performing similarity searches, which helps flies identify the odors most similar to those they have encountered before, so they know how to respond, such as by approaching or avoiding the odor. The researchers believe the details of the fly’s computational approach to smelly similarity searches could inform the computer algorithms of the future.

Saket Navlakha, assistant professor in Salk’s Integrative Biology Laboratory and lead author of a paper on this topic, said, “This is a problem that pretty much every technology company with any kind of information retrieval system has to solve, so it’s been something that computer scientists have studied for years. Now, we have this new approach to similarity searches thanks to the fly.”

The team explained that most computerized data systems optimize similarity searches by reducing the amount of information associated with each item they categorize, from songs to images. These systems assign short “hashes” to each item so that similar items are more likely to be assigned the same or a similar hash than two very different items are. (Hashes are a kind of digital shorthand, the way a bit.ly link is a shorter version of a URL.) Assigning hashes in this way is known to computer scientists as “locality-sensitive hashing.” When searching for similar items, a program looks through the hashes, rather than the original items, to find similarities quickly.
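One common form of locality-sensitive hashing uses random hyperplanes: each bit of an item’s hash records which side of a random hyperplane its feature vector falls on, so nearby vectors tend to share hashes. The sketch below is illustrative only (the item names and dimensions are made up, not from the study):

```python
import numpy as np

def lsh_signature(vec, planes):
    """Map a vector to a short binary hash: bit i records which side of
    random hyperplane i the vector falls on. Nearby vectors land on the
    same side of most hyperplanes, so they tend to get similar hashes."""
    return tuple(bool(b) for b in (planes @ vec) > 0)

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(h1, h2))

rng = np.random.default_rng(0)
planes = rng.normal(size=(16, 50))  # 16 random hyperplanes -> 16-bit hash

item_a = rng.normal(size=50)                  # hypothetical feature vector
item_b = item_a + 0.01 * rng.normal(size=50)  # a near-duplicate of item_a
item_c = rng.normal(size=50)                  # an unrelated item

d_similar = hamming(lsh_signature(item_a, planes), lsh_signature(item_b, planes))
d_different = hamming(lsh_signature(item_a, planes), lsh_signature(item_c, planes))
print(d_similar, d_different)  # the similar pair should differ in far fewer bits
```

A search program can then bucket items by hash and compare a query only against items in nearby buckets, instead of scanning the whole collection.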

Navlakha was chatting with colleague Charles Stevens, a professor in Salk’s Molecular Neurobiology Laboratory and a coauthor of the new work, who had studied fly olfaction, when the former realized that flies—and all animals—are constantly faced with similarity searches as well. So he started combing the literature on the brain circuitry behind fly olfaction to work out just how flies identify similar smells.

“In the natural world, you’re not going to encounter exactly the same odor every time; there’s going to be some noise and fluctuation,” Navlakha explains. “But if you smell something that you’ve previously associated with a behavior, you need to be able to identify that similarity and recall that behavior.” So if a fruit fly knows that the smell of a rotting banana means mealtime, it needs to respond the same way when it encounters a very similar smell, even if it never experienced that exact smell before.

This illustration represents a fruit fly executing a similarity search algorithm based on odor.
Source: Salk Institute

Navlakha and his collaborators’ review of the literature revealed that when fruit flies first sense an odor, 50 neurons fire in a combination that’s unique to that smell. But rather than hashing that information by reducing its dimensionality, as computer programs would, flies do the opposite: they expand the dimension. The 50 initial neurons lead to 2,000 neurons, spreading out the input so that each smell has an even more distinct fingerprint among those 2,000 neurons. The brain then stores only the 5 percent of those 2,000 neurons with the top activity as the “hash” for that odor. The whole paradigm helps the brain notice similarities better than it would by reducing the dimension, Navlakha says.
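The expand-then-sparsify strategy described above can be sketched in a few lines: project the 50-dimensional input into 2,000 dimensions with a sparse random projection, then keep only the indices of the top 5 percent most active “neurons” as the hash. This is a loose illustration under assumed parameters, not the paper’s actual model (the odor vectors and connection density here are invented):

```python
import numpy as np

def fly_hash(odor, projection, top_frac=0.05):
    """Expand a 50-dim input to 2,000 dims, then keep only the indices
    of the most active 5% of output 'neurons' as a sparse hash."""
    activity = projection @ odor              # 50 -> 2,000 expansion
    k = int(top_frac * activity.size)         # top 5% = 100 neurons
    winners = np.argsort(activity)[-k:]       # winner-take-all step
    return frozenset(winners.tolist())

rng = np.random.default_rng(1)
# Sparse binary projection: each of the 2,000 output neurons samples a
# handful of the 50 input neurons (a simplification of the fly circuit).
projection = (rng.random((2000, 50)) < 0.1).astype(float)

odor_a = rng.random(50)
odor_b = odor_a + 0.02 * rng.random(50)  # noisy re-encounter of the same odor
odor_c = rng.random(50)                  # a different odor

# Similar odors share most of their winning neurons; different odors share fewer.
overlap_similar = len(fly_hash(odor_a, projection) & fly_hash(odor_b, projection))
overlap_different = len(fly_hash(odor_a, projection) & fly_hash(odor_c, projection))
print(overlap_similar, overlap_different)
```

Because two noisy presentations of the same odor activate nearly the same winners, the overlap between their hashes stays high, which is exactly the property a similarity search needs.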

“Say you have a bunch of people clustered by their relationships, and they’re bunched into a crowded room,” he explains. “Then take the same people and relationships, but have them spread out on a football field. It will be much easier to see the structure of relationships and draw boundaries between groups in the expanded space relative to the crowded space.”

While Navlakha and his collaborators did not reveal the actual mechanism by which flies are storing odor information—that was already available in the literature—they are the first to analyze how this process maximizes speed and efficiency for similarity searches. When they applied the process to three standard datasets computer scientists use to test search algorithms, they found that the fly approach improved performance. This approach, they think, may inform computer programs someday.

“Pieces of this approach had been used in the past by computer scientists, but evolution put it together in a very unique way,” says Navlakha.

Navlakha’s collaborators say that the study is among the first to make such concrete parallels between neural circuits in the brain and information processing algorithms used in computer science.

“For the past 20 years I’ve been interested in random projections [a core component of locality-sensitive hashing for similarity search] as they apply to algorithms running on computers,” says Sanjoy Dasgupta, a professor of computer science and engineering at UCSD and first author of the new paper. “It never occurred to me that similar operations may be at work in nature.”

“A dream shared by neurobiologists and computer scientists is to understand how the brain computes well enough that we can adapt its methods to improve machine computation,” adds Stevens. “Our paper provides a proof of principle that this dream may become reality.”
