System Bits: Jan. 31

Parallel programming; AI algorithm detects skin cancer; Parkinson’s brain mapping.


Optimizing code
Code explicitly written to take advantage of parallel computing usually loses the benefit of compilers’ optimization strategies. To address this issue, MIT Computer Science and Artificial Intelligence Laboratory researchers have devised a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution.

Charles E. Leiserson, the Edwin Sibley Webster Professor in Electrical Engineering and Computer Science at MIT and a coauthor of a new paper on the subject, said the compiler “now optimizes parallel code better than any commercial or open-source compiler, and it also compiles where some of these other compilers don’t.”

“Everybody said it was going to be too hard, that you’d have to change the whole compiler,” MIT professor Charles E. Leiserson says. “And these guys basically showed that conventional wisdom to be flat-out wrong.” (Source: MIT)

Other compilers don’t do this because managing parallel execution requires a lot of extra code, which existing compilers add before the optimizations occur. The optimizers don’t know how to interpret the new code, so they don’t try to improve its performance, the researchers explained.
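The effect is easy to picture. The sketch below is hypothetical, not the MIT compiler’s actual intermediate form: rt_parallel_for is an invented stand-in for a real parallel runtime entry point, defined here as a trivial serial call so the file compiles and runs. Once a simple loop has been rewritten into a call through a function pointer into an opaque runtime, a serial optimizer can no longer see the loop it would otherwise vectorize or simplify.

    #include <stdio.h>

    struct range { double *a; int lo, hi; double c; };

    static void run_body(struct range *r) {
        for (int i = r->lo; i < r->hi; i++)
            r->a[i] *= r->c;
    }

    /* Invented stand-in for a parallel runtime entry point; a real one
       would distribute the body across worker threads. */
    static void rt_parallel_for(void (*body)(struct range *), struct range *r) {
        body(r);
    }

    /* Serial form: optimizers see a simple loop they can vectorize. */
    void scale(double *a, int n, double c) {
        for (int i = 0; i < n; i++)
            a[i] *= c;
    }

    /* Early-lowered form: the optimizer now sees only an opaque call
       through a function pointer, so the loop's structure is hidden. */
    void scale_lowered(double *a, int n, double c) {
        struct range r = { a, 0, n, c };
        rt_parallel_for(run_body, &r);
    }

    int main(void) {
        double a[4] = { 1, 2, 3, 4 };
        scale(a, 4, 2.0);
        scale_lowered(a, 4, 0.5);
        printf("%g %g %g %g\n", a[0], a[1], a[2], a[3]);
        return 0;
    }

Both functions compute the same result, but only the first presents the optimizer with a loop it can reason about.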

In the new method, the improvement comes purely from optimization strategies that were already part of the compiler the researchers modified, which was designed to compile conventional, serial programs. They expect this approach to make it much more straightforward to add optimizations specifically tailored to parallel programs, which is especially crucial as the number of cores, or parallel processing units, continues to rise.

The idea of optimizing before adding the extra code required by parallel processing has been around for decades. But compiler developers were skeptical that this could be done.

“Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys,” he says, referring to Tao B. Schardl, a postdoc in Leiserson’s group, and William S. Moses, an undergraduate double major in electrical engineering and computer science and physics, “basically showed that conventional wisdom to be flat-out wrong. The big surprise was that this didn’t require rewriting the 80-plus compiler passes that do either analysis or optimization. T.B. and Billy did it by modifying 6,000 lines of a 4-million-line code base.”

Specifically, the front end of the compiler is tailored to a fork-join language called Cilk — pronounced “silk” but spelled with a C because it extends the C programming language — which is now owned and maintained by Intel. But the researchers said they might just as well have built a front end tailored to the popular OpenMP or any other fork-join language.

Cilk adds just two commands to C: “spawn,” which initiates a fork, and “sync,” which initiates a join. That makes things easy for programmers writing in Cilk but a lot harder for Cilk’s developers.
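To make that concrete, here is the textbook Fibonacci example, a minimal sketch written in the Cilk Plus spelling of those two constructs (cilk_spawn and cilk_sync, from the <cilk/cilk.h> header). It assumes a Cilk-aware compiler, such as an older GCC built with -fcilkplus support or Intel’s compiler.

    #include <cilk/cilk.h>
    #include <stdio.h>

    long fib(long n) {
        if (n < 2)
            return n;
        long x = cilk_spawn fib(n - 1); /* fork: may run in parallel with
                                           the rest of this function */
        long y = fib(n - 2);            /* continuation runs meanwhile */
        cilk_sync;                      /* join: wait for the spawned call */
        return x + y;
    }

    int main(void) {
        printf("fib(30) = %ld\n", fib(30));
        return 0;
    }

Remove the two keywords and this is ordinary serial C, which is exactly why the two-keyword design is easy on programmers and hard on the compiler writers who must manage the forking and joining behind the scenes.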


Deep learning algorithm identifies skin cancer
In hopes of creating better access to medical care, Stanford University researchers have trained an algorithm to diagnose skin cancer.

Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They created a database of nearly 130,000 skin disease images, trained their algorithm to visually diagnose potential cancer, and from the very first test, it performed with inspiring accuracy, they said.

Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory, said, “We realized it was feasible, not just to do something well, but as well as a human dermatologist. That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.’”

The final product was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.

Every year there are about 5.4 million new cases of skin cancer in the United States alone. The five-year survival rate for melanoma detected in its earliest stages is around 97%, but that drops to approximately 14% if it is detected in its latest stages. Early detection could therefore have an enormous impact on skin cancer outcomes.

Diagnosing skin cancer begins with a visual examination. A dermatologist usually looks at the suspicious lesion with the naked eye and with the aid of a dermatoscope, which is a handheld microscope that provides low-level magnification of the skin. If these methods are inconclusive or lead the dermatologist to believe the lesion is cancerous, a biopsy is the next step.

Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that had already been trained on 1.28 million images from 1,000 object categories. While it was primed to differentiate cats from dogs, the researchers needed it to distinguish a malignant carcinoma from a benign seborrheic keratosis.

They did, however, have to write their own algorithm specifically for skin cancer images, explained Brett Kuprel, co-lead author of the paper and a graduate student in the Thrun lab. They gathered images from the internet and worked with the medical school to create a taxonomy out of messy data. They then collaborated with dermatologists at Stanford Medicine, as well as Helen M. Blau, professor of microbiology and immunology at Stanford and co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions representing over 2,000 different diseases.

The resulting algorithm’s performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In all three tasks, the algorithm matched the performance of the dermatologists with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph.
An added advantage of the algorithm is that, unlike a person, the algorithm can be made more or less sensitive, allowing the researchers to tune its response depending on what they want it to assess. This ability to alter the sensitivity hints at the depth and complexity of this algorithm, the researchers added.
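To see what such tuning means in practice, consider the sketch below, which sweeps a decision threshold over a classifier’s malignancy scores and reports sensitivity and specificity at each setting. The scores and labels here are invented for illustration and are not data from the Stanford study; the point is only that lowering the threshold catches more malignant lesions (higher sensitivity) at the cost of flagging more benign ones (lower specificity).

    #include <stdio.h>

    int main(void) {
        /* Invented example data: model scores and true labels
           (1 = malignant, 0 = benign). */
        double score[]     = { 0.95, 0.80, 0.70, 0.55, 0.40, 0.30, 0.20, 0.10 };
        int    malignant[] = { 1,    1,    0,    1,    0,    0,    1,    0    };
        int n = sizeof(score) / sizeof(score[0]);

        /* Sweep thresholds 0.1, 0.3, 0.5, 0.7, 0.9. */
        for (int k = 1; k <= 9; k += 2) {
            double t = k / 10.0;
            int tp = 0, fn = 0, tn = 0, fp = 0;
            for (int i = 0; i < n; i++) {
                int call = score[i] >= t;   /* predict malignant */
                if (malignant[i]) { if (call) tp++; else fn++; }
                else              { if (call) fp++; else tn++; }
            }
            printf("threshold %.1f: sensitivity %.2f, specificity %.2f\n",
                   t, (double)tp / (tp + fn), (double)tn / (tn + fp));
        }
        return 0;
    }

Plotting these pairs across all thresholds traces out exactly the sensitivity-specificity curve whose area the researchers used to compare the algorithm against the dermatologists.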

Although this algorithm currently exists on a computer, the team would like to make it smartphone compatible in the near future, bringing reliable skin cancer diagnoses to our fingertips.

Brain mapping reveals Parkinson’s disease tremor circuitry
In other work at Stanford, another research team is developing a circuit-mapping approach to probe the brain that should help improve treatments for Parkinson’s disease. The approach also provides a methodology to identify, map and ultimately repair neural circuits associated with other brain diseases.

A new circuit-mapping approach to probe the brain should help improve treatments for Parkinson’s disease. It also provides a methodology to identify, map and ultimately repair neural circuits associated with other brain diseases. (Source: Stanford University / iStock / D3Damon)

When a piece of electronics isn’t working, troubleshooting the problem often involves probing the flow of electricity through the various components of the circuit to locate any faulty parts. Stanford bioengineer and neuroscientist Jin Hyung Lee, who studies Parkinson’s disease, has adapted that idea to diseases of the brain, creating a new way to turn on specific types of neurons and observe how doing so affects the whole brain.

Jin Hyung Lee, Assistant Professor of Neurology, of Neurosurgery, of Bioengineering, and of Electrical Engineering at Stanford Medicine (Source: Stanford Medicine)

Interestingly, Lee trained as an electrical engineer before becoming a brain researcher, and wanted to give neuroscientists a way to probe brain ailments similar to how engineers troubleshoot faulty electronics.

“Electrical engineers try to figure out how individual components affect the overall circuit to guide repairs,” she said.

In the short term, she expects her technique to help improve treatments for Parkinson’s disease. In the long run, it provides a methodology to identify, map and ultimately repair neural circuits associated with other brain diseases.
