Searching for energy-efficient architectures; graphene memories; faster optimization.
Searching for energy-efficient architectures
A workshop jointly funded by the Semiconductor Research Corporation (SRC) and the National Science Foundation (NSF) set out to identify the key factors limiting progress in computing, particularly energy consumption, and the novel research that could overcome these barriers.
The report focuses on the most promising research directions in the exploration of new devices for computing, the exploration of new circuit and system architectures based on emerging devices, and how to structure a research program to explore those options.
“Fundamental research on hardware performance, complex system architectures, and new memory/storage technologies can help to discover new ways to achieve energy-efficient computing,” said Jim Kurose, assistant director of NSF’s Directorate for Computer and Information Science and Engineering (CISE). “Partnerships with industry, including SRC and its member companies, are an important way to speed the adoption of these research findings.”
“New devices, and new architectures based on those devices, could take computing far beyond the limits of today’s technology. The benefits to society would be enormous,” said Tom Theis, Nanoelectronics Research Initiative executive director at SRC.
To realize these benefits, workshop participants from industry, academia, and government concluded that a new paradigm for computing is necessary.
According to the report, “Any new device is likely to have characteristics very different from established devices. The interplay between device characteristics and optimum circuit architectures therefore means that circuit and higher level architectures must be co-optimized with any new device. Devices combining digital and analog functions or the functions of logic and memory may lend themselves particularly well to unconventional information processing architectures.”
Graphene memories

In three recent experiments, Stanford engineers demonstrated graphene technologies that store more data per square inch and use a fraction of the energy of today’s memory chips.
In one experiment, graphene served as the conductive metal in resistive random-access memory (RRAM). Unlike conventional metals, graphene remains conductive at atomically thin dimensions, allowing engineers to make each RRAM cell much smaller than was previously possible. According to the team, the use of graphene could lead to chips that hold far more data than RRAM based on conventional metal conductors.
Two other teams used graphene to make advances with a different but conceptually similar storage approach: phase-change memory.
In one, researchers used ribbons of graphene as ultra-thin electrodes to intersect phase-change memory cells, like skewers spearing marshmallows. This setup exploited the atomically thin edge of graphene to push current into the material and change its phase, again in an extremely energy-efficient manner.
In the other, researchers used both the electrical and thermal properties of graphene in a phase-change memory chip. However, here they used the surface of the graphene sheet to contact the phase-change memory alloy. In essence, the graphene prevented the heat from leaking out of the phase-change material, creating a more energy-efficient memory cell.
Faster optimization

At the IEEE Symposium on Foundations of Computer Science, a trio of current and former MIT graduate students won a best-student-paper award for a new “cutting-plane” algorithm, a general-purpose method for solving optimization problems. The algorithm improves on the running time of its most efficient predecessor, and the researchers offer some reason to believe they may have reached the theoretical limit.
“What we are trying to do is revive people’s interest in the general problem the algorithm solves,” says Yin-Tat Lee, an MIT graduate student in mathematics and one of the paper’s co-authors. “Previously, people needed to devise different algorithms for each problem, and then they needed to optimize them for a long time. Now we are saying, if for many problems, you have one algorithm, then, in practice, we can try to optimize over one algorithm instead of many algorithms, and we may have a better chance to get faster algorithms for many problems.”
With the best previous general-purpose cutting-plane method, the time required to select each new point to test was proportional to the number of variables raised to the power 3.373. The trio got that exponent down to 3.
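To get a feel for what shaving the exponent from 3.373 to 3 means in practice, the per-step speedup grows as roughly n^0.373 (ignoring constant factors, which the asymptotic analysis does not capture). A quick back-of-the-envelope check:

```python
# Illustrative arithmetic only: the ratio of the two per-step costs,
# n**3.373 / n**3, is n**0.373 (constant factors ignored).
def per_step_speedup(n: int) -> float:
    """Approximate per-iteration speedup factor at problem size n."""
    return n ** 0.373

for n in (10**3, 10**6):
    print(f"n = {n:>9,}: roughly {per_step_speedup(n):,.0f}x fewer operations per step")
```

At a million variables, the exponent gap alone accounts for a factor of more than a hundred per iteration.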
They also describe a new way to adapt cutting-plane methods to particular types of optimization problems, in many cases reporting dramatic improvements in efficiency, from running times that scale with the fifth or sixth power of the number of variables down to the second or third power.
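The core idea behind any cutting-plane method is that each query to the problem "cuts away" a region that cannot contain the optimum, shrinking the search set step by step. The sketch below shows this in the simplest possible setting, one dimension, where each derivative query halves the feasible interval; this is illustrative only and is not the researchers' high-dimensional algorithm:

```python
# A minimal one-dimensional sketch of the cutting-plane idea:
# each gradient query discards the half of the interval that
# cannot contain the minimizer of a convex function.
def cutting_plane_1d(grad, lo, hi, iters=60):
    """Minimize a convex function on [lo, hi] given its derivative.

    The sign of grad(x) acts as a separation oracle: grad(x) > 0
    means the minimizer lies in [lo, x]; otherwise it lies in [x, hi].
    """
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if grad(mid) > 0:
            hi = mid  # cut away the right half
        else:
            lo = mid  # cut away the left half
    return (lo + hi) / 2.0

# Example: f(x) = (x - 3)**2 has derivative 2*(x - 3); minimizer at x = 3.
x_star = cutting_plane_1d(lambda x: 2 * (x - 3), 0.0, 10.0)
```

In higher dimensions the feasible set is a convex body rather than an interval, and choosing the next query point cheaply is exactly the bottleneck the new algorithm attacks.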