MIT researchers make a system that automatically fills in the gaps to make programmers’ code more powerful; Stanford engineers create a software tool to reduce the cost of cloud computing.
Self-completing programs
Since he was a graduate student, Armando Solar-Lezama, an associate professor in MIT's Department of Electrical Engineering and Computer Science, has been working on a programming language called Sketch, which lets programmers simply omit some of the computational details of their code and then automatically fills in the gaps.
If it’s fleshed out and made more user-friendly, Sketch could ultimately make life easier for software developers. But in the meantime, it’s proving its worth as the basis for other tools that exploit the mechanics of “program synthesis,” or automatic program generation. Recent projects at MIT’s Computer Science and Artificial Intelligence Laboratory that have built on Sketch include a system for automatically grading programming assignments for computer science classes, a system that converts hand-drawn diagrams into code, and a system that produces SQL database queries from code written in Java.
Solar-Lezama and a team of researchers have described a new elaboration on Sketch that, in many cases, enables it to handle complex synthesis tasks much more efficiently. They tested the new version of Sketch on several existing applications, including the automated grading system. In cases where the previous version would “time out” or take so long to reach a solution that it simply gave up, the new version was able to correct students’ code in milliseconds.
They explained that Sketch treats program synthesis as a search problem. The idea is to evaluate a huge range of possible variations on the same basic program and find one that meets criteria specified by the programmer. If the program being evaluated is too complex, the search space balloons to a prohibitively large size. Now, the researchers have found a way to shrink that search space.
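The search idea can be illustrated with a toy sketch (the names, template, and hole ranges below are hypothetical, not Sketch's actual syntax): the programmer leaves "holes" in a program template and supplies a specification, and the synthesizer enumerates candidate completions until one satisfies it.

```python
from itertools import product

# Toy illustration of synthesis-as-search: enumerate values for the
# "holes" in a program template and keep the first completion that
# satisfies the programmer's specification (here, input/output examples).

def synthesize(examples, hole_values=range(-10, 11)):
    """Template: f(x) = a * x + b, with holes a and b to fill in."""
    for a, b in product(hole_values, repeat=2):
        candidate = lambda x, a=a, b=b: a * x + b
        if all(candidate(x) == y for x, y in examples):
            return a, b
    return None  # search space exhausted without a match

# The programmer supplies only the spec; the synthesizer searches.
print(synthesize([(0, 3), (1, 5), (2, 7)]))  # fills the holes: (2, 3)
```

With just two holes the search is trivial, but each additional hole multiplies the space of candidates, which is why complex programs make the search balloon as described above.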
When synthesizing a larger piece of code, the researchers explained, that code typically relies on other functions, or subparts. Often the larger program depends on only a few properties of those subparts, and those properties can be expressed in a high-level language. Once the programmer specifies that only certain properties are required, the larger code can be synthesized without reasoning about the subparts' full implementations.
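A minimal sketch of that idea, with hypothetical names of my own choosing: the `median` function below is correct for *any* subroutine that satisfies the declared property "returns a sorted permutation of its input," so a synthesizer checking `median` need only consult the property, not the sort's implementation.

```python
# Reason about a subpart through a declared property rather than its
# full implementation (illustrative sketch, not Sketch's actual API).

def satisfies_sort_property(sort_impl, xs):
    # The only property the caller relies on: output is xs, sorted.
    return sort_impl(xs) == sorted(xs)

def median(xs, sort_impl):
    s = sort_impl(xs)       # correctness depends only on the sort property
    return s[len(s) // 2]   # middle element of the sorted list

# Any implementation with the property is interchangeable here.
assert satisfies_sort_property(sorted, [3, 1, 2])
print(median([5, 1, 9, 3, 7], sorted))  # -> 5
```

Shrinking the reasoning to the stated property is what shrinks the search space: candidate programs are checked against a short specification instead of a large concrete subroutine.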
While it will take a good deal of work before Sketch is useful to commercial software developers, it can already serve as tool-building infrastructure: researchers can build higher-level systems on top of it.
Reducing the cost of cloud computing…with software
Just as Netflix uses an algorithm to recommend movies we ought to see, two engineers at Stanford University have created a system that suggests how to use computing resources at data centers more efficiently.
We hear a lot about the future of computing in the cloud but not much about the efficiency of data centers, those facilities where clusters of server computers work together to host applications ranging from social networks to big data analytics.
Data centers cost millions of dollars to build and operate, and buying servers is the single largest expense. Yet at any given moment, most of the servers in a typical data center use only 20 percent of their capacity. Why? Because the workload can vary greatly depending on factors such as how many users log on, and data centers must always be ready to meet peak demand. Today, carrying excess capacity is the standard way to ensure that readiness.
But as cloud computing grows, so will the cost of keeping such large cushions of capacity. That is why the researchers created a cluster management tool that, they say, can triple server efficiency while delivering reliable service at all times, allowing data center operators to serve more customers for each dollar they invest.
A key ingredient of the Quasar tool is a sophisticated algorithm that is modeled on the way companies such as Netflix and Amazon recommend movies, books and other products to their customers.
To understand how it works, the researchers said it’s helpful to think about how data centers are managed today. Essentially, data centers are managed by a reservation system — application developers estimate what resources they will need, and they reserve that server capacity.
It’s easy to understand how a reservation system lends itself to excess idle capacity. Developers are likely to err on the side of caution. Because a typical data center runs many applications, the total of all those overestimates results in a lot of excess capacity.
The Stanford engineers are therefore working to change this dynamic by moving away from the reservation system. Instead of asking developers to estimate how much capacity they are likely to need, the Stanford system starts by asking what sort of performance their applications require. For instance, if an application involves queries from users, how quickly must the application respond, and to how many users?
Under this approach, the cluster manager would have to ensure there was enough server capacity in the data center to meet all these requirements.
The goal is to switch from a reservation-based cluster management to a performance-based allocation of data center resources.
Quasar was created to help cluster managers meet these performance goals while also using data center resources more efficiently. To create this tool the Stanford team borrowed a concept from the Netflix movie recommendation system.
In a nutshell, by using a process called collaborative filtering, Quasar automatically decides what type of servers to use for each application and how to multitask servers without compromising any specific task. It recommends the minimum number of servers for each application and which applications can run best together.
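The collaborative-filtering idea can be sketched as follows (a simplified illustration under my own assumptions, not Quasar's actual algorithm): rows are applications, columns are server configurations, entries are measured performance scores, and missing entries are estimated from applications with similar profiles, just as a movie recommender estimates your rating of an unseen film from similar viewers.

```python
# Hypothetical sketch of collaborative filtering for cluster management:
# estimate how an application would perform on server types it has never
# been profiled on, then recommend the best-scoring type.

def similarity(a, b):
    """Cosine similarity over the entries both profiles have in common."""
    common = [i for i in range(len(a)) if a[i] is not None and b[i] is not None]
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = sum(a[i] ** 2 for i in common) ** 0.5
    nb = sum(b[i] ** 2 for i in common) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def recommend(perf, app):
    """Fill the app's missing scores from similar apps; pick the best server type."""
    target = perf[app]
    scores = list(target)
    for j, v in enumerate(target):
        if v is None:
            num = den = 0.0
            for other in perf:
                if other is target or other[j] is None:
                    continue
                w = similarity(target, other)
                num += w * other[j]
                den += w
            scores[j] = num / den if den else 0.0
    return max(range(len(scores)), key=lambda j: scores[j])

# Performance matrix: None = app not yet profiled on that server type.
perf = [
    [0.9, 0.4, None],   # app 0: partially profiled
    [0.8, 0.5, 0.3],    # app 1: profile similar to app 0
    [0.2, 0.9, 0.7],    # app 2: quite different profile
]
print(recommend(perf, 0))  # recommends server type 0 for app 0
```

Because app 0's profile resembles app 1's, the filled-in estimate for the unprofiled server type stays low, and the recommender keeps app 0 on the server type where it measurably performs best.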
The researchers have demonstrated utilization rates as high as 70 percent on a 200-server test bed, compared with today's typical 20 percent, while still meeting strict performance goals for each application.