Security flaw debugger; AI finds cancer; brain injury analysis.
Debugging web apps
MIT researchers reported that they’ve developed a system that can quickly comb through tens of thousands of lines of application code to find security flaws by exploiting some peculiarities of the Ruby on Rails web programming framework.
The team said that in tests on 50 popular web applications written using Ruby on Rails, the system found 23 previously undiagnosed security flaws, and it took no more than 64 seconds to analyze any given program.
Daniel Jackson, professor in the Department of Electrical Engineering and Computer Science, said the system uses static analysis, which seeks to describe, in a very general way, how data flows through a program. “The classic example of this is if you wanted to do an abstract analysis of a program that manipulates integers, you might divide the integers into the positive integers, the negative integers, and zero.” The static analysis would then evaluate every operation in the program according to its effect on integers’ signs. Adding two positives yields a positive; adding two negatives yields a negative; multiplying two negatives yields a positive; and so on.
The problem with this is that it can’t be completely accurate, because information is lost. If a positive and a negative integer are added, it can’t be known whether the answer will be positive, negative, or zero. Most work on static analysis is focused on trying to make the analysis more scalable and accurate to overcome those sorts of problems, he noted.
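The sign analysis Jackson describes can be sketched in a few lines of code. This is a minimal toy abstract interpreter over the domain {positive, negative, zero, unknown}, invented here for illustration, not MIT's actual system; note how addition of a positive and a negative value loses information, exactly as described above.

```python
# Toy abstract interpretation over integer signs. The domain and the
# operator tables below are illustrative only.
POS, NEG, ZERO, UNKNOWN = "+", "-", "0", "?"

def sign(n):
    """Abstract a concrete integer to its sign."""
    if n > 0:
        return POS
    if n < 0:
        return NEG
    return ZERO

def abs_add(a, b):
    """Abstract addition: precise only when one side is zero or signs agree."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b and a in (POS, NEG):
        return a
    return UNKNOWN  # e.g. positive + negative: information is lost

def abs_mul(a, b):
    """Abstract multiplication: fully precise on this domain."""
    if ZERO in (a, b):
        return ZERO          # zero times anything is zero
    if UNKNOWN in (a, b):
        return UNKNOWN
    return POS if a == b else NEG
```

For example, `abs_mul(sign(-3), sign(-2))` yields `"+"`, while `abs_add(sign(3), sign(-2))` can only answer `"?"`.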
With web applications, however, the cost of that accuracy is prohibitively high. Even a small program sits atop a vast edifice of libraries, plug-ins, and frameworks, so when a conventional static analysis is run on a web app built with a framework like Ruby on Rails, it gets mired in that huge bog, making the analysis infeasible in practice.
But that vast edifice of libraries also gave Jackson and his former student Joseph Near, who graduated from MIT last spring and is now a postdoc at the University of California at Berkeley, a way to make static analysis of programs written in Ruby on Rails practical. A library is a compendium of code that programmers use over and over again; rather than rewriting the same functions for each new program, a programmer can simply import them from a library.
Near rewrote the Ruby on Rails libraries so that the operations defined in them describe their own behavior in a logical language. That turns the Rails interpreter, which converts high-level Rails programs into machine-readable code, into a static-analysis tool: with Near's libraries, running a Rails program through the interpreter produces a formal, line-by-line description of how the program handles data.
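The idea of self-describing library operations can be illustrated with a toy sketch. Here, each "library" function returns a symbolic value and appends a logical description of its data flow to a trace, so "running" a program yields a description of how it handles data rather than a result. The function names, the `Sym` class, and the description format are all invented for illustration; the real system rewrites the Rails libraries themselves and works through the Rails interpreter.

```python
# Toy sketch: library operations that describe their own behavior,
# so executing a program produces a line-by-line data-flow trace.
trace = []

class Sym:
    """A symbolic stand-in for a runtime value."""
    def __init__(self, name):
        self.name = name

def db_fetch(table):
    out = Sym(f"rows({table})")
    trace.append(f"data flows from table '{table}' into {out.name}")
    return out

def render(template, value):
    trace.append(f"{value.name} flows into template '{template}' (output)")

# "Running" a two-line web-app fragment yields a description, not a page.
rows = db_fetch("users")
render("profile.html", rows)
```

A downstream checker could then inspect such a trace for flaws, for example unsanitized data flowing from a database query into rendered output.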
Using AI to efficiently detect cancer
California NanoSystems Institute researchers at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.
They explained that in one common approach to testing for cancer, doctors add biochemicals to blood samples; the biochemicals attach biological "labels" to the cancer cells, and those labels enable instruments to detect and identify them. But the biochemicals can damage the cells and render the samples unusable for future analyses. Other current techniques forgo labeling but can be inaccurate because they identify cancer cells based on only one physical characteristic.
The team explained that the new technique images cells without destroying them and can identify 16 physical characteristics, including size, granularity, and biomass, instead of just one. It combines two components invented at UCLA: a photonic time stretch microscope, which can quickly image cells in blood samples, and a deep learning program that identifies cancer cells with more than 95 percent accuracy.
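The classification step can be pictured as assigning each cell a label from its vector of 16 measured features. The sketch below stands in for UCLA's deep network with a much simpler nearest-centroid rule over synthetic feature vectors; the data values and the rule itself are invented for illustration only.

```python
# Illustrative stand-in for label-free cell classification: a
# nearest-centroid rule over 16 physical features per cell
# (size, granularity, biomass, ...). Synthetic data only.
import math

N_FEATURES = 16

def centroid(cells):
    """Per-feature mean of a list of feature vectors."""
    return [sum(c[i] for c in cells) / len(cells) for i in range(N_FEATURES)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(cell, healthy_centroid, cancer_centroid):
    """Assign the label of the nearer class centroid."""
    return ("cancer" if distance(cell, cancer_centroid)
            < distance(cell, healthy_centroid) else "healthy")

# Synthetic training data: cancer cells here are given larger values
# (e.g. greater biomass) across the measured features.
healthy = [[0.2] * N_FEATURES, [0.3] * N_FEATURES]
cancer = [[0.8] * N_FEATURES, [0.9] * N_FEATURES]
h_c, c_c = centroid(healthy), centroid(cancer)
```

The real system replaces the hand-built rule with a trained deep network, which is what lifts accuracy above 95 percent on many overlapping characteristics at once.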
The study was led by Bahram Jalali, a UCLA professor and Northrop-Grumman Optoelectronics Chair in electrical engineering; Claire Lifan Chen, a UCLA doctoral student; and Ata Mahjoubfar, a UCLA postdoctoral fellow.
Photonic time stretch was invented by Jalali, and he holds a patent for the technology. The microscope is just one of many possible applications; it works by taking pictures of flowing blood cells using laser bursts in the way that a camera uses a flash. This process happens so quickly — in nanoseconds, or billionths of a second — that the images would be too weak to be detected and too fast to be digitized by normal instrumentation.
The microscope overcomes those challenges using specially designed optics that boost the clarity of the images and simultaneously slow them enough to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells.
Each frame is slowed down in time and optically amplified so that it can be digitized; this enables cell imaging fast enough to keep up with the flow, and the artificial intelligence component then classifies the resulting images.
Normally, taking pictures in such minuscule periods of time would require intense illumination, which could destroy live cells. The UCLA approach eliminates that problem as well: the photonic time stretch technique allowed the team to identify rogue cells quickly using only low-level illumination.
The researchers also said the system could lead to data-driven diagnoses based on cells' physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and a better understanding of tumor-specific gene expression in cells, which could facilitate new treatments for disease.
Improving brain injury analysis
Imperial College London researchers have developed a computer program that mimics how doctors assess patient scans to determine signs of traumatic brain injury (TBI).
The team sees the program, called Deep Medic, as a tool to help clinical researchers understand more about TBI. This year, they will use Deep Medic in a large-scale European study called CENTER-TBI, in which data from 5,000 patients will be collected from 30 hospitals across Europe and Deep Medic will be used to identify TBI in scans. The study has the potential to reveal new insights into traumatic brain injuries, and could also lead to the development of more effective and efficient therapies for patients at lower cost.
Currently, the gold standard for detecting brain lesions is a manual process performed by a doctor who visually examines and marks out lesions on the scans taken by an MRI device. Depending on the particular abnormality, this process can take hours. This expertise is costly and not always an option for teams who need to assess hundreds of MRI scans as part of their research.
Deep Medic is an artificial neural network: a predictive algorithm that the team trained to look for abnormalities in brain tissue. It builds up a 3D image of the brain to pinpoint where lesions are located, and it can complete an assessment in minutes rather than hours, which would improve research productivity and reduce costs.
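The voxel-wise structure of such an analysis can be sketched simply: slide a small 3D window across the volume and flag voxels whose neighborhood looks abnormal. In the sketch below, Deep Medic's trained convolutional network is replaced by a hand-set intensity-threshold rule, purely to show the scanning pattern; it is not the actual algorithm.

```python
# Illustrative stand-in for a voxel-wise lesion scan over a 3D volume.
# The learned classifier is replaced by a simple local-mean threshold.
def local_mean(vol, x, y, z, r=1):
    """Mean intensity of the (2r+1)^3 neighbourhood, clipped at edges."""
    vals = [vol[i][j][k]
            for i in range(max(0, x - r), min(len(vol), x + r + 1))
            for j in range(max(0, y - r), min(len(vol[0]), y + r + 1))
            for k in range(max(0, z - r), min(len(vol[0][0]), z + r + 1))]
    return sum(vals) / len(vals)

def find_lesions(vol, threshold=0.5):
    """Return coordinates of voxels whose neighbourhood looks abnormal."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    return [(x, y, z)
            for x in range(nx) for y in range(ny) for z in range(nz)
            if local_mean(vol, x, y, z) > threshold]
```

A real segmentation network learns what "abnormal" looks like from labeled scans instead of relying on a fixed threshold, but the sweep over every voxel of the 3D volume is the same shape of computation.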
The system is expected to also have use in detecting brain tumors in patients with cancer and lesions caused by strokes.