Giant Steps—Backward

Why technology invented six decades ago is suddenly so important to SoC design.


With DAC kicking off next week, power is sure to take center stage, given its prominence as a key pain point for design engineers who are always on the lookout for new techniques to ease their power management burdens.

In many low-power designs, asynchronous technology may be just the thing. One of the biggest disadvantages of the clockless CPU is that most design tools assume a clocked CPU (a synchronous circuit) and therefore enforce synchronous design practices. Designing an asynchronous circuit means modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastability problems. Instead of a global clock, asynchronous circuits rely on local handshakes between stages to signal when data is valid.
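To make the handshake idea concrete, here is a minimal behavioral sketch, in Python, of a Muller C-element, the basic state-holding gate commonly used to build the request/acknowledge handshakes that replace a clock. This is purely illustrative; the CElement class and its method names are my own invention, not part of any design tool.

```python
# Behavioral sketch of a Muller C-element, a canonical building block
# of clockless (asynchronous) handshake circuits. The output changes
# only when both inputs agree; otherwise it holds its previous value.
# This "wait for agreement" behavior is what lets pipeline stages
# synchronize locally without a global clock.

class CElement:
    def __init__(self, initial: int = 0):
        self.out = initial      # stored state (the gate's "memory")

    def update(self, a: int, b: int) -> int:
        if a == b:              # both inputs agree -> output follows them
            self.out = a
        return self.out         # inputs disagree -> output holds

# Example: the output rises only once both handshake signals are high,
# and falls only once both are low (a four-phase handshake pattern).
c = CElement()
print(c.update(1, 0))  # 0 -- inputs disagree, output holds its reset value
print(c.update(1, 1))  # 1 -- both high, output rises
print(c.update(0, 1))  # 1 -- output holds until both inputs fall
print(c.update(0, 0))  # 0 -- both low, output falls
```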

At DAC, one company, Tiempo, will be demonstrating an asynchronous synthesis tool that it hopes will dispel designers' fears of trying out a design style that holds a lot of promise for low-power designs.

While researching this article, I was interested to discover that the application of the asynchronous design style goes back nearly 60 years to the ORDVAC (Ordnance Discrete Variable Automatic Computer), which was built by the University of Illinois for the Ballistics Research Laboratory at Aberdeen Proving Ground to perform ballistic trajectory calculations for the U.S. military. It was based on the IAS architecture developed by John von Neumann, which came to be known as the von Neumann architecture.

ORDVAC, the first computer to have a compiler, became operational in the spring of 1951 at Aberdeen Proving Ground in Maryland and could exchange programs with its twin ILLIAC I.

After ORDVAC was moved to Aberdeen, it was used remotely, over telephone lines, by the University of Illinois for up to eight hours per night. It was one of the first computers to be used remotely, and probably the first to be used remotely on a routine basis.

The ORDVAC used 2,178 vacuum tubes. Its addition time was 72 microseconds and its multiplication time was 732 microseconds. Its main memory consisted of 1,024 words of 40 bits each, stored using Williams tubes. It was a rare asynchronous machine: there was no central clock regulating the timing of instructions, and one instruction began executing when the previous one finished.

Following ORDVAC and the ILLIAC I in 1951, the WEIZAC was built in 1955, and then came the ILLIAC II in 1962, the first completely asynchronous, speed-independent processor design ever built and reportedly the most powerful computer of its time. The Victoria University of Manchester built the Atlas, and Honeywell followed with the asynchronous 6180 in 1972 and Series 60 Level 68 in 1981, the CPUs on which Multics ran.

In 1988, Caltech built the world's first asynchronous microprocessor. The University of Manchester followed with the ARM-implementing AMULET processors in 1993 and 2000, and Caltech produced an asynchronous implementation of the MIPS R3000, dubbed MiniMIPS, in 1998. Several versions of the XAP processor then experimented with different asynchronous design styles.

Since the late 1990s the lineage has continued: an ARM-compatible processor designed specifically to explore the benefits of asynchronous design for security-sensitive applications; the "Network-based Asynchronous Architecture" processor in 2005, which executes a subset of the MIPS instruction set; and most recently the SEAforth multi-core processor, designed in 2008 by Charles H. Moore.

In any case, now that a commercial tool is available, it will be interesting to see whether designers embrace an asynchronous design style where appropriate. I would expect some movement in this direction, particularly given that learning asynchronous design as a new discipline is not extraordinarily complex, according to companies like Octasic and Fulcrum Microsystems that already embrace asynchronous techniques in the design of their chips.


~Ann Steffora Mutschler


