Power/Performance Bits: Feb. 25

A research team from IHP in Germany and the Georgia Institute of Technology has demonstrated what it says is the fastest SiGe chip to date; MIT engineers assert that more clever cache management could improve chip performance while reducing energy consumption.


SiGe chip sets speed record
Researchers from IHP-Innovations for High Performance Microelectronics in Germany and the Georgia Institute of Technology have demonstrated what they say is the world’s fastest silicon-based device to date. A silicon-germanium (SiGe) transistor has been operated at 798 gigahertz (GHz) fMAX, exceeding the previous speed record for silicon-germanium chips by about 200 GHz.

High-speed silicon-germanium chips and measurements probes can be seen inside a cryogenic probe station in a laboratory at the Georgia Institute of Technology. (Source: Georgia Tech)

While these operating speeds were achieved at extremely cold temperatures, the researchers said their work suggests that record speeds at room temperature aren’t far off. If that happens, it could enable potentially world-changing progress in high-data-rate wireless and wired communications, as well as in signal processing, imaging, sensing and radar applications.

Further, the researchers believe the results indicate that the goal of breaking the so-called ‘terahertz barrier’ – that is, achieving terahertz speeds in a robust and manufacturable silicon-germanium transistor – is within reach.

In the meantime, the tested transistor could be practical as is for certain cold-temperature applications such as demanding electronics applications in outer space, where temperatures can be extremely low.

The results also show the potential for enabling applications of Si-based technologies in areas in which compound semiconductor technologies are dominant today.

Smarter caching
While computer chips keep getting faster because transistors keep getting smaller, the chips themselves are as big as ever, so data moving around a chip, and between chips and main memory, has to travel just as far. As transistors get faster, the cost of moving that data becomes, proportionally, a more severe limitation; caches are used to circumvent it. But the number of processor cores per chip is also increasing, which makes cache management more difficult. And as cores proliferate, they have to share data more frequently, so the communication network connecting the cores becomes the site of more frequent logjams as well.

To overcome this, researchers at MIT and the University of Connecticut have developed a set of new caching strategies for massively multicore chips that, in simulations, significantly improved chip performance while actually reducing energy consumption.

The researchers have reported average improvements of 15 percent in execution time and energy savings of 25 percent with the caching strategies.

Caches on multicore chips are typically arranged in a hierarchy: each core has its own private cache, which may itself have several levels, while all of the cores share the so-called last-level cache, or LLC.
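As a toy illustration of that lookup path (not the researchers' design, and omitting coherence protocols and the multiple private-cache levels real chips have), a request checks the private cache first, then the shared LLC, and only then goes to main memory:

```python
# Toy model of a private-cache / LLC lookup path. Class and method
# names are our own; real hierarchies are far more sophisticated.

class ToyHierarchy:
    def __init__(self):
        self.private = {}   # one core's private cache
        self.llc = {}       # last-level cache shared by all cores
        self.memory = {}    # backing main memory

    def load(self, addr):
        # Check the private cache first, then the shared LLC,
        # then fall back to main memory, filling caches on the way.
        if addr in self.private:
            return self.private[addr], "private hit"
        if addr in self.llc:
            self.private[addr] = self.llc[addr]
            return self.private[addr], "LLC hit"
        value = self.memory.get(addr, 0)
        self.llc[addr] = value
        self.private[addr] = value
        return value, "miss"
```

The first access to an address misses all the way to memory; a repeat access hits in the fast private cache, which is the whole point of the hierarchy.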

Caching protocols usually adhere to the simple but surprisingly effective principle of “spatiotemporal locality,” the researchers noted: if a core requests data from a particular location, it will probably need data from nearby locations soon, and will probably need the same data again soon after that. But there are cases in which this principle breaks down.
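A least-recently-used (LRU) eviction policy is the classic embodiment of temporal locality, and a minimal sketch of it also shows where the principle fails: when a working set is slightly larger than the cache and is swept cyclically, every access evicts exactly the line needed next, and the hit rate collapses. (Hardware uses cheaper approximations of LRU; this is purely illustrative.)

```python
from collections import OrderedDict

# Minimal LRU cache: recently used lines stay, the least recently
# used line is evicted when the cache is full.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, addr):
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)        # mark most recently used
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[addr] = True
        return hit
```

With capacity 2, re-touching an address hits, but cycling through three addresses misses every single time, the pathological “thrashing” case the MIT/UConn work targets.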

To mitigate this, the researchers created a hardware design in which, when an application’s working set exceeds the private-cache capacity, the chip simply splits it up between the private cache and the LLC. Data stored in either place stays put, no matter how recently it’s been requested, preventing a lot of fruitless swapping. At the same time, if two cores working on the same data are constantly communicating in order to keep their cached copies consistent, the chip stores the shared data at a single location in the LLC. The cores then take turns accessing the data, rather than clogging the network with updates.
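The split-and-pin idea can be sketched as follows. This is a hypothetical rendering under our own names and structure, not the actual MIT/UConn hardware: part of the working set is pinned in the private cache, the overflow is pinned in the LLC, and neither portion ever evicts the other, so cyclic sweeps no longer thrash.

```python
# Hypothetical sketch of splitting a too-large working set between
# the private cache and the LLC, with both portions pinned in place.

class SplitPlacement:
    def __init__(self, private_capacity):
        self.private_capacity = private_capacity
        self.private = set()
        self.llc = set()

    def place(self, working_set):
        ws = list(working_set)
        # Pin the first slice in the private cache, overflow in the LLC.
        self.private = set(ws[:self.private_capacity])
        self.llc = set(ws[self.private_capacity:])

    def access(self, addr):
        # Data stays where it was placed: no swapping between levels.
        if addr in self.private:
            return "private hit"
        if addr in self.llc:
            return "LLC hit"
        return "miss"
```

With a private capacity of 2 and a working set of 4 addresses, repeated sweeps over the whole set hit on every access (some in the private cache, some in the slower LLC), instead of missing on every access as strict LRU would.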