Dynamic programming; deep learning for disaster analysis; electronic lockbox.
Optimizing multiprocessor programs for non-experts
While ‘dynamic programming’ is a technique that yields efficient solutions to computational problems in economics, genomic analysis, and other fields, adapting it to multicore chips requires a level of programming expertise that few economists and biologists have. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University aim to change that with a new system that lets users describe what they want their programs to do in very general terms.
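The core idea of dynamic programming, solving a problem by tabulating solutions to overlapping subproblems, can be illustrated with the classic edit-distance computation used in genomic sequence comparison (an illustrative sketch, not the researchers' code):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b, computed by filling a table of
    subproblem solutions (a textbook dynamic program)."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i              # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # prints 4
```

Each table cell depends on three previously computed cells, which is exactly the kind of data-dependence pattern that makes naive parallelization of dynamic programs tricky.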
The system then automatically produces versions of those programs optimized to run on multicore chips, and it guarantees that the new versions will yield exactly the same results as the single-core versions, only much faster, the researchers explained.
In experiments, the team used the system to parallelize several dynamic-programming algorithms, splitting them up so that they would run on multicore chips. The resulting programs were between three and 11 times as fast as those produced by earlier techniques for automatic parallelization, and they were generally as efficient as versions hand-parallelized by computer scientists.
Parallelized dynamic-programming code traditionally needs more memory, but the team avoided this problem by reordering computations so that those requiring a particular stored value are executed in sequence, minimizing the number of times the value has to be recalled from memory.
That reordering is relatively easy to do on a single-core computer, but on multicore computers, where multiple cores share data stored at multiple locations, memory management becomes much more complex, the researchers noted. Moreover, a hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and its individual lines of code are more complex as well.
The new system, dubbed Bellmania after Richard Bellman, the applied mathematician who pioneered dynamic programming, uses a parallelization strategy called recursive divide-and-conquer. The user simply describes the first step of the process: how to divide the matrix and which procedures to apply to the resulting segments. Bellmania then determines how to continue subdividing the problem so as to use memory efficiently.
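Bellmania's actual interface is not reproduced here, but the general shape of recursive divide-and-conquer parallelization (split the problem, solve the segments on separate workers, recombine) can be sketched generically. This is a hypothetical illustration on a simple reduction, not Bellmania's output, and threads stand in for cores:

```python
from concurrent.futures import ThreadPoolExecutor

def solve(data, lo, hi):
    """Recursive divide-and-conquer over data[lo:hi]: split the range,
    solve each half, combine. Each call touches a contiguous block of
    memory, which is what makes the pattern cache-friendly."""
    if hi - lo <= 1024:                       # base case: solve directly
        return sum(x * x for x in data[lo:hi])
    mid = (lo + hi) // 2
    return solve(data, lo, mid) + solve(data, mid, hi)

def solve_parallel(data, workers=4):
    """Top-level division step: hand each segment to a separate worker.
    A real system like Bellmania would also decide how to schedule the
    deeper levels of the recursion across cores."""
    n = len(data)
    bounds = [(i * n // workers, (i + 1) * n // workers)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda b: solve(data, b[0], b[1]), bounds)
    return sum(parts)

data = list(range(10000))
assert solve_parallel(data) == sum(x * x for x in data)
```

The point of the sketch is that only the division rule and the base-case procedure need to be spelled out; the recursion itself determines how the work decomposes.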
Automated rapid disaster damage analysis through deep learning
Purdue University researchers are harnessing deep learning algorithms and powerful computer vision technology to dramatically reduce the time it takes for engineers to assess damage to buildings after disasters.
They noted that in the aftermath of a disaster, engineers descend on the scene and must quickly document damage to structures such as buildings, bridges and pipelines before crucial data are destroyed.
The teams of engineers take a lot of photos, perhaps 10,000 images per day, and these data are critical for learning how the disaster affected structures. Every image has to be analyzed by people, and it takes a tremendous amount of time for them to go through each image and annotate it so that others can use it.
Engineering teams routinely spend several hours after a day of collecting data reviewing their images and deciding how to proceed the next day. Unfortunately, there is no way to quickly organize these thousands of images, which are essential to understanding the damage from an event, and the potential for human error is a key drawback.
The researchers are developing a computerized system that uses advanced computer vision algorithms to dramatically speed the process, turning several hours of work into several minutes. (A YouTube video is available at https://youtu.be/WO3XmXKu4uI)
They believe this is the first implementation of deep learning for these types of images: real-world photographs of buildings badly damaged by tornados, hurricanes, floods, and earthquakes. Design codes for buildings are often based on lessons derived from such data; if the data could be organized more quickly, the images could be used to inform design codes sooner.
Preventing tampering with data integrity
To address the challenge of how two parties can guarantee that their communications are not tampered with by an outside party, UCLA computer scientists have developed a new method called a ‘non-malleable commitment’, which they describe as an electronic equivalent of a locked box.
Using new mathematical proofs that they developed, the researchers have demonstrated that this is a secure way to protect data sent between two parties, and it is believed to be the first solution supported by such proofs that requires only two rounds of one-way communication from the sender to the recipient.
A sender can ‘lock’ a message in a virtual box, and only later, provide a key to open it. An interceptor, who does not know the key, not only would have no idea what message is inside the box but also would not be able to come up with a new virtual lockbox that hides a related message.
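The lockbox analogy maps onto a basic cryptographic commitment. The sketch below is an ordinary hash-based commitment (hiding and binding), not the researchers' non-malleable construction, which requires additional machinery; it only illustrates the commit-then-reveal flow:

```python
import hashlib
import secrets

def commit(message: bytes):
    """'Lock' the message in a box: publish the commitment digest,
    keep the random opening key secret until reveal time."""
    key = secrets.token_bytes(32)                     # the 'key' to the box
    commitment = hashlib.sha256(key + message).digest()
    return commitment, key

def verify(commitment: bytes, key: bytes, message: bytes) -> bool:
    """'Open' the box: anyone holding the key can check that the
    revealed message matches the earlier commitment."""
    return hashlib.sha256(key + message).digest() == commitment

c, k = commit(b"transfer $100 to Alice")
assert verify(c, k, b"transfer $100 to Alice")        # honest opening succeeds
assert not verify(c, k, b"transfer $100 to Mallory")  # tampered message fails
```

Non-malleability is a stronger property than this sketch provides: it also rules out an interceptor deriving a new commitment to a *related* message from the one it saw.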