System Bits: April 18

RISC-V errors; spin-wave logic gates; deep learning is old.


RISC-V errors
Princeton University researchers have discovered a series of errors in the RISC-V instruction specification that now are leading to changes in the new system, which seeks to facilitate open-source design for computer chips.

In testing a technique they created for analyzing computer memory use, the team found over 100 errors involving incorrect orderings in the storage and retrieval of information from memory in variations of the RISC-V processor architecture. The researchers warned that, if uncorrected, the problems could cause errors in software running on RISC-V chips. According to the researchers, officials at the RISC-V Foundation said the errors would not affect most versions of RISC-V but would have caused problems for higher-performance systems.

Researchers including Professor Margaret Martonosi (center) and graduate students Yatin Manerkar and Caroline Trippel have developed a tool that eliminates bugs by checking computer processor designs for memory issues. The tool is already leading to improvements in a major open-source chip project. (Source: Princeton University)

Margaret Martonosi, the Hugh Trumbull Adams ’35 Professor of Computer Science at Princeton and leader of the Princeton team, which also includes Ph.D. students Caroline Trippel and Yatin Manerkar, said, “Incorrect memory access orderings can result in software performing calculations using the wrong values. These in turn can lead to hard-to-debug software errors that either cause the software to crash or to be vulnerable to security exploits. With RISC-V processors often envisioned as control processors for real-world physical devices (i.e., internet of things devices), these errors can cause unreliability or security vulnerabilities affecting the overall safety of the systems.”
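
To illustrate the class of bug being described, consider a classic “message passing” pattern, sketched below in C++ (a minimal, illustrative example, not taken from the Princeton work or from the RISC-V specification itself). With only relaxed atomic operations, a weakly ordered processor may make the flag visible before the data, so the reader can observe the flag set and still load the stale value; strengthening the commented operations to release/acquire ordering rules that outcome out.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // Nothing here forces the data store to become visible before the flag;
    // on a weakly ordered core the two stores may be observed out of order.
    ready.store(true, std::memory_order_relaxed);   // would need memory_order_release
}

void consumer() {
    while (!ready.load(std::memory_order_relaxed)) {}  // would need memory_order_acquire
    // Under an incorrect (or incorrectly specified) memory ordering,
    // this assertion can fire: the flag is seen, but data still reads 0.
    assert(data.load(std::memory_order_relaxed) == 42);
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}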

Daniel Lustig, a co-author of Martonosi’s recent paper and now a research scientist at NVIDIA, said work was underway to improve the RISC-V memory model.

“RISC-V is in the fortunate position of being able to look back on decades’ worth of industry and academic experience,” he said. “It will be able to learn from all of the insights and mistakes made by previous attempts.”

The RISC-V project essentially offers specifications that guide the hardware and software design of RISC-V processors and software applications. These specifications, known generally as an Instruction Set Architecture, describe the most basic functions of the processor, including its arithmetic and logic operations, as well as the way programs use the computer’s memory. Hardware designers use the instruction set when building new chips, and computer programmers rely on it when writing new software.

Martonosi’s team discovered the problems when testing their new system to check memory operations across any computer architecture. The system, called TriCheck, allows designers and others interested in working with a design to detect memory ordering errors before they become a problem. The name TriCheck derives from three general levels of computing: the high-level programs that create modern applications from web browsers to word processors; the instruction set architecture that functions as a basic language of the machine; and the underlying hardware implementation, a particular microprocessor designed to execute the instruction set.

Putting a spin on logic gates
Computer electronics are shrinking to sizes at which the electrical currents underlying their functions can no longer be used for logic computations as they are in larger-scale devices. So how can a logic gate be built for devices too small for classical physics? That was the question a team of researchers from Technische Universität Kaiserslautern in Germany, the University of Mainz in Germany, Scientific Research Company Carat in Ukraine, and IMEC in Belgium set out to answer.

A traditional semiconductor-based logic gate called a majority gate, for instance, outputs current to match whichever “0” or “1” state appears on at least two of its three input currents (or, equivalently, voltages). In a recent experimental demonstration, the results of which were published in Applied Physics Letters, the team instead used the interference of spin waves, synchronous waves of electron spin alignment observed in magnetic systems. They said the spin-wave majority gate prototype, made of yttrium iron garnet, comes out of a new collaborative research center funded by the German Research Foundation, named Spin+X. The work has also been supported by the European Union within the project InSpin and was conducted in collaboration with the Belgian nanotechnology research institute IMEC.
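
For reference, the majority function itself is easy to state in code. The short sketch below (an illustrative example, not from the paper) simply returns whichever logic state appears on at least two of the three inputs, which is the behavior any majority gate implementation, electronic or spin-wave based, must reproduce.

#include <iostream>

// Majority gate truth function: the output matches the value
// carried by at least two of the three inputs.
bool majority(bool a, bool b, bool c) {
    return (a && b) || (b && c) || (a && c);
}

int main() {
    // Print the full truth table.
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int c = 0; c <= 1; ++c)
                std::cout << a << b << c << " -> " << majority(a, b, c) << "\n";
    return 0;
}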

The trident-shaped majority gate structure consisting of Yttrium Iron Garnet. The transparent material underneath is a Gallium Gadolinium Substrate. (Source: Applied Physics Letters: Fischer/Kewenig/Meyer)

Tobias Fischer, a doctoral student at the University of Kaiserslautern in Germany, and lead author of the paper, said, “The motto of the research center Spin+X is ‘spin in its collective environment,’ so it basically aims at investigating any type of interaction of spins — with light and matter and electrons and so on. More or less the main picture we are aiming at is to employ spin-waves in information processing. Spin waves are the fundamental excitations of magnetic materials.”

As such, instead of using classical electric currents or voltages to send input information to a logic gate, the Kaiserslautern-based international team uses vibrations in a magnetic material’s collective spin — essentially creating nanoscale waves of magnetization that can then interfere to produce Boolean calculations.

“You have atomic magnetic moments in your magnetic material which interact with each other and due to this interaction, there are wave-like excitations that can propagate in magnetic materials. The particular device we were investigating is based on the interference of these waves. If you use wave excitations instead of currents […] then you can make use of wave interference, and that comes with certain advantages,” Fischer said.

Using wave interference to produce the majority gate’s output provides two parameters for controlling information: the wave’s amplitude and its phase. In principle, that also makes the concept more efficient, since a single majority gate can replace up to 10 transistors in modern electronic devices.
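
A rough way to picture the phase-encoded version is sketched below. This is a toy model, not the device physics from the paper: each input bit is encoded as a phase of 0 or π on a unit-amplitude wave, the three waves are superposed, and the output bit is read from the phase of the sum, which again follows the majority of the inputs.

#include <cmath>
#include <complex>
#include <iostream>

constexpr double kPi = 3.14159265358979323846;

// Encode a bit as a unit-amplitude wave: 0 -> phase 0, 1 -> phase pi.
std::complex<double> wave(bool bit) {
    return std::polar(1.0, bit ? kPi : 0.0);
}

// Read the output bit from the phase of the superposed wave:
// a phase near pi means the majority of inputs carried a "1".
bool read_bit(const std::complex<double>& sum) {
    return std::fabs(std::arg(sum)) > kPi / 2;
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int c = 0; c <= 1; ++c) {
                std::complex<double> sum = wave(a) + wave(b) + wave(c);
                std::cout << a << b << c << " -> " << read_bit(sum) << "\n";
            }
    return 0;
}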

This first device prototype, though physically larger than what Fischer and his colleagues see for eventual large-scale use, demonstrates the applicability of spin-wave phenomena for reliable information processing at GHz frequencies.

Full majority gate prototype. The brass block serves as an electric ground plate, ensuring efficient insertion of the RF currents into the antennae; microwave connectors mounted to the block allow the device to be embedded into the researchers’ microwave setup. (Source: Applied Physics Letters: Fischer/Kewenig/Meyer)

The researchers also noted that because the wavelengths of these spin waves are easily reduced to the nanoscale, the gate device itself can be scaled down as well, although perhaps not quite as easily. That may actually improve its functionality by reducing its sensitivity to unwanted field fluctuations. In addition, nanoscaling will increase spin-wave velocities, allowing for an increase in computing speed.

They are aiming for the miniaturization of the device since the smaller it is, the less sensitive it becomes to these influences. Fischer added: “If you look at how many wavelengths fit into this propagation length, the fewer there are, the less influence a change of the wavelength has on the output. So basically downscaling the device would also come with more benefits.”
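
One way to make that scaling argument concrete is the back-of-the-envelope sketch below (not a calculation from the paper; the lengths and wavelengths are made-up values in arbitrary units). The phase a wave accumulates over a propagation length L is 2π·L/λ, so a given wavelength error shifts the output phase in proportion to the number of wavelengths that fit into the device, which is why a shorter device is less sensitive.

#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Phase accumulated over propagation length L at wavelength lambda.
double phase(double L, double lambda) {
    return 2.0 * kPi * L / lambda;
}

int main() {
    const double lambda  = 1.0;    // nominal wavelength (arbitrary units)
    const double dlambda = 0.01;   // small wavelength error

    // Compare a long device (many wavelengths) with a scaled-down one (few wavelengths).
    const double lengths[] = {100.0, 3.0};
    for (double L : lengths) {
        double shift = std::fabs(phase(L, lambda + dlambda) - phase(L, lambda));
        std::printf("L = %6.1f (%5.1f wavelengths): phase shift from the error = %.3f rad\n",
                    L, L / lambda, shift);
    }
    return 0;
}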

Deep learning revives 70-year-old idea
Deep learning, the technology that enables such things as the speech recognizers on smartphones or Google’s latest automatic translator, is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years, MIT researchers point out.

They said neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines, said, “There’s this idea that ideas in science are a bit like epidemics of viruses. There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the GPU, which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
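
For readers who want a concrete picture of what that depth means, the sketch below is an illustrative toy example (the layer sizes and random weights are made up, and it is not any particular production network). Each fully connected layer is one multiply-and-activate stage; a “deep” network simply chains many such stages, so adding depth means passing the data through the same kind of layer again and again.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// One fully connected layer with a ReLU nonlinearity:
// output[j] = max(0, sum_i weights[j][i] * input[i]).
std::vector<double> layer(const std::vector<double>& input,
                          const std::vector<std::vector<double>>& weights) {
    std::vector<double> output(weights.size(), 0.0);
    for (size_t j = 0; j < weights.size(); ++j) {
        for (size_t i = 0; i < input.size(); ++i)
            output[j] += weights[j][i] * input[i];
        output[j] = std::max(0.0, output[j]);
    }
    return output;
}

int main() {
    std::mt19937 rng(0);
    std::uniform_real_distribution<double> dist(-0.5, 0.5);

    const int depth = 10;   // "deep" = many stacked layers
    const int width = 8;    // neurons per layer (made-up size)

    std::vector<double> x(width, 1.0);              // dummy input
    for (int d = 0; d < depth; ++d) {
        std::vector<std::vector<double>> w(width, std::vector<double>(width));
        for (auto& row : w)
            for (auto& v : row)
                v = dist(rng);                      // random, untrained weights
        x = layer(x, w);                            // one more level of depth
    }
    std::cout << "first output after " << depth << " layers: " << x[0] << "\n";
    return 0;
}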

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones.

Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

Related Stories
Alternative To X86, ARM Architectures?
Support grows for RISC-V open-source instruction set architecture.
Will Open-Source Work For Chips?
So far nobody has been successful with open-source semiconductor design or EDA tools. Why not?
Neural Net Computing Explodes
Deep-pocket companies begin customizing this approach for specific applications—and spend huge amounts of money to acquire startups.
Betting On Power And Deep Learning
A very candid conversation with VC Jim Hogan about returns on invested capital.


