System Bits: Feb. 16

Smartphone seismic app; IoT nanoscale materials; contingency planning algorithm.


WW seismic network app
UC Berkeley researchers have released a free Android app that uses a smartphone’s ability to record ground shaking from an earthquake, with the goal of creating a worldwide seismic detection network that could eventually warn users of impending jolts from nearby quakes.

The app, called MyShake, is available from the Google Play Store and runs in the background with little power, so that a phone’s onboard accelerometers can record local shaking any time of the day or night, the researchers said. For now, the app only collects information from the accelerometers, analyzes it and, if it fits the vibrational profile of a quake, relays it and the phone’s GPS coordinates to the Berkeley Seismological Laboratory for analysis.

Once enough people are using it and the bugs are worked out, however, the researchers plan to use the data to warn people miles from ground zero that shaking is rumbling their way. An iPhone app is also planned.

While they don’t aim to replace traditional seismic networks like those run by the U.S. Geological Survey, UC Berkeley, the University of Washington and Caltech, they think MyShake can make earthquake early warning faster and more accurate in areas that have a traditional seismic network, and can provide life-saving early warning in countries that have no seismic network.

Further, a crowdsourced seismic network may be the only option today for many earthquake-prone developing countries, such as Nepal or Peru, that have a sparse ground-based seismic network or early warning system, or none at all, but do have millions of smartphone users.

Three accelerometers aboard every smartphone detect shaking, and the MyShake app analyzes it to make sure it fits the pattern of an earthquake before alerting the Berkeley Seismological Laboratory. (Source: UC Berkeley)

Smartphones can easily measure movement caused by a quake because they have three built-in accelerometers designed to sense the orientation of the phone for display or gaming. Although these sensors are far less sensitive than in-ground seismometers, they are sensitive enough to record earthquakes above magnitude 5 (the ones that do damage) within 10 kilometers. And what the accelerometers lack in sensitivity, they make up for in ubiquity: with an estimated 16 million smartphones in California and 1 billion worldwide, even a small fraction of participating phones would represent a many-orders-of-magnitude increase in the amount of data that can be gathered.
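The kind of check the article describes, distinguishing sustained quake-like shaking from everyday bumps before reporting anything, can be sketched as below. This is an illustrative rule of thumb, not MyShake's actual classifier; the threshold and window values are assumptions chosen for the example.

```python
import math

def is_quake_like(samples, threshold_g=0.02, sustained=25):
    """Flag a window of 3-axis accelerometer samples as quake-like.

    samples: list of (x, y, z) accelerations in g, gravity removed.
    Hypothetical rule: the shaking magnitude must stay above the
    threshold for a sustained run of samples, since human motion
    (steps, taps) tends to produce short, spiky bursts instead.
    """
    run = 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold_g:
            run += 1
            if run >= sustained:
                return True
        else:
            run = 0
    return False

# A brief spike (one loud sample) is rejected; sustained shaking passes.
spike = [(0.0, 0.0, 0.0)] * 10 + [(0.5, 0.0, 0.0)] + [(0.0, 0.0, 0.0)] * 10
shaking = [(0.03, 0.02, 0.01)] * 30
```

In the real app, a window that passes a check like this would be relayed with GPS coordinates to the Berkeley Seismological Laboratory for confirmation against other phones' reports.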

IoT nanoscale materials
As vehicles communicate with embedded monitors alongside roadways to better route traffic, and home appliances connect to the smart grid to improve efficiency and reliability, the Internet of Things (IoT) might generate more than $14 trillion in economic activity by 2025. But without sufficient frequency spectrum to connect the devices, the concept cannot fully take off, according to Harvard University researchers. Many in industry believe that significant policy changes will be required to enable the needed connections while avoiding interference.

To this end, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Draper are developing a new approach to assembling nanoscale hardware that could overcome this challenge by enabling devices to generate and receive purer signals. Purer signals reduce interference with other nearby transmissions, freeing up spectrum by reducing the spacing between frequencies that the Federal Communications Commission now assigns to different users.

DARPA and the U.S. Air Force Research Laboratory are funding what is called the NanoLitz project to find new ways to assemble nanoscale materials that cannot be accomplished with current techniques such as those used in the semiconductor and MEMS industries or through chemical synthesis. The results could be applied to tools that enable humans to scale sheer walls; stealth technology; and ultra-small position, navigation and timing devices.
The NanoLitz approach braids microscopic wires to reduce heat loss, improve efficiency, and sharpen filter response. To operate at frequencies used in devices like smartphones, Harvard researchers are developing techniques for making wires up to 1,000 times smaller than those used today that will then be braided with techniques borrowed from MEMS and microfluidics. The team is also developing a DNA self-assembly method as a tool for manufacturing braids.

Harvard and Draper's NanoLitz approach braids microscopic wires to reduce heat loss, improve efficiency, and sharpen filter response. (Source: Draper)

In parallel, Draper is developing a microfluidics-inspired approach for mechanically braiding the tiny wires that would be scalable to large numbers of wires at high throughput. Draper is also leading the efforts to model and design the NanoLitz wire to optimize electrical performance.

The improved signal performance could also enable devices to transmit up to five times more data per channel, receive much fainter signal levels, and overcome interference that disrupts GPS signals.

Contingency planning algorithm
Planning algorithms — which are widely used in logistics and control applications — can help schedule flights and bus routes, guide autonomous robots, and determine control policies for the power grid, among other things. In recent years, planning algorithms have begun to factor in uncertainty — variations in travel time, erratic communication between autonomous robots, imperfect sensor data, etc. That causes the scale of the planning problem to grow exponentially, but researchers at MIT and the Australian National University (ANU) have found new ways to solve it efficiently.

The team has developed a planning algorithm that also generates contingency plans, should the initial plan prove too risky. It also identifies the conditions, such as sensor readings or delays incurred, that should trigger a switch to a particular contingency plan. And despite the extra labor imposed by generating contingency plans, the algorithm still provides mathematical guarantees that its plans’ risk of failure falls below some threshold, which the user sets.
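The trigger-and-switch behavior described above can be sketched as a lookup over runtime observations. The plan names, observation keys, and trigger conditions here are all illustrative; the published algorithm constructs these structures automatically rather than by hand.

```python
def choose_plan(primary, contingencies, observations):
    """Pick the active plan given runtime observations.

    primary: the nominal plan to follow when no trigger fires.
    contingencies: ordered list of (trigger, plan) pairs, where each
    trigger is a predicate over the observations dict.
    """
    for trigger, plan in contingencies:
        if trigger(observations):
            return plan
    return primary

# Hypothetical triggers: a delay threshold and a sensor fault flag.
contingencies = [
    (lambda obs: obs["delay_min"] > 10, "take the express subway"),
    (lambda obs: obs["sensor_fault"], "return to the safe waypoint"),
]

plan = choose_plan("follow the nominal route", contingencies,
                   {"delay_min": 12, "sensor_fault": False})
```

Here the 12-minute delay exceeds the first trigger's threshold, so the first contingency plan is selected.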

The range of possible decisions that a planner faces can be represented as a data structure called a graph, which consists of nodes, usually drawn as circles, and edges, drawn as line segments connecting the nodes. Network diagrams and flow charts are familiar examples of graphs.

In a planning system, each node of the graph represents a decision point, such as, “Should I take the bus or the subway?” A path through the graph can be evaluated according to the rewards it offers — you reach your destination safely — and the penalties it imposes — you’ll be five minutes late. The optimal plan is the one that maximizes reward.
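The reward-maximizing search over such a graph can be sketched with a toy exhaustive enumeration. The commute graph and its edge rewards are invented for illustration; real planners prune the search far more aggressively than this.

```python
def best_plan(graph, start, goal):
    """Enumerate all acyclic paths from start to goal and return the
    (reward, path) pair with the highest total edge reward."""
    best = (float("-inf"), None)

    def walk(node, path, reward):
        nonlocal best
        if node == goal:
            if reward > best[0]:
                best = (reward, path)
            return
        for nxt, edge_reward in graph.get(node, []):
            if nxt not in path:  # avoid revisiting a decision point
                walk(nxt, path + [nxt], reward + edge_reward)

    walk(start, [start], 0)
    return best

# Hypothetical commute: edge weights are net reward, e.g. arriving
# safely (positive) minus lateness penalties (negative).
graph = {
    "home":   [("bus", -5), ("subway", -3)],
    "bus":    [("work", 10)],
    "subway": [("work", 9)],
}
```

With these numbers, the subway path scores -3 + 9 = 6 against the bus path's -5 + 10 = 5, so the planner picks the subway.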

Factoring in probabilities makes that type of reward calculation much more complex: The average bus trip might be 15 minutes, but there’s some chance that it will be 35; the average subway trip might be 18 minutes, but it’s almost never more than 24. In that context, for even a relatively simple planning task, canvassing contingency plans can be prohibitively time consuming.
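The bus-versus-subway comparison above can be made concrete with a small expected-value calculation. The durations and probabilities are simplified illustrations of the article's numbers, collapsed into two-outcome distributions.

```python
def expected_penalty(outcomes, deadline):
    """Expected lateness (in minutes) for a trip whose duration is
    distributed over (minutes, probability) outcomes."""
    return sum(p * max(0, minutes - deadline)
               for minutes, p in outcomes)

bus    = [(15, 0.8), (35, 0.2)]   # usually 15 min, occasionally 35
subway = [(18, 0.9), (24, 0.1)]   # usually 18, almost never over 24

# With a 20-minute deadline, the bus is faster on average but has a
# fatter tail, so its expected lateness is much higher.
bus_late    = expected_penalty(bus, 20)     # 0.2 * 15 = 3.0 minutes
subway_late = expected_penalty(subway, 20)  # 0.1 * 4  = 0.4 minutes
```

This is why averages alone are not enough: the planner must weigh the whole distribution, and doing so for every contingency is what makes the naive problem blow up.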

To make the problem tractable, the MIT and ANU researchers borrowed a technique from some earlier work. Before the planner begins constructing the graph, it asks the user to set risk thresholds. A researcher trying to develop a data-gathering plan for a multimillion-dollar underwater robot, for example, might be satisfied with a 90 percent probability that the robot will take all the sensor readings it’s supposed to — but they might want a 99.9 percent probability that the robot won’t collide with a rock face at high speed.
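The user-set thresholds in the underwater-robot example amount to chance constraints: a plan is acceptable only if its estimated violation probability for each constraint stays under the user's limit. The constraint names and risk numbers below are assumptions for illustration.

```python
def meets_thresholds(plan_risks, thresholds):
    """Check a candidate plan against user-set risk thresholds.

    plan_risks: dict mapping a constraint name to the plan's
    estimated probability of violating that constraint.
    thresholds: dict mapping the same names to the maximum
    acceptable violation probability.
    """
    return all(plan_risks.get(name, 0.0) <= limit
               for name, limit in thresholds.items())

# 90% confidence on data collection means a 10% violation budget;
# 99.9% confidence on avoiding collision means a 0.1% budget.
thresholds = {"missed_readings": 0.10, "collision": 0.001}

plan_a = {"missed_readings": 0.08, "collision": 0.0005}  # acceptable
plan_b = {"missed_readings": 0.05, "collision": 0.002}   # too risky
```

Setting the thresholds up front lets the planner discard whole branches of the graph that cannot possibly satisfy them, which is what keeps the search tractable.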