The Return Of Time Sharing

An explosion of data has changed how technology is used and prioritized.


As early as the 1960s, it wasn’t uncommon to hear predictions that transistors would one day be free. Those were bold statements at the time, considering that most computers cost $1 million, required special rooms, and budding computer scientists usually had to sign up for one-hour time slots on mainframes, often in the middle of the night or on weekends.

Still, those predictions proved right. Each new process node delivered far more transistors for the same price, effectively making the additional ones free. And as enough transistors became available in smaller form factors, prices dropped to the point where devices could operate independently of centralized data centers. From there it was a logical progression of smaller, faster, and cheaper, putting the equivalent of what once required a special room, millions of dollars of equipment, and a staff of programmers into a device that fits in your pocket and can be sold for a monthly fee that includes Internet access and off-site data storage.

The economics of this formula began changing at 28nm, though, when the percentage of free transistors began declining. Power-related issues became dominant: dynamic power density, leakage current, wire resistance, electromigration, thermal runaway, self-heating, and signal corruption.
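Why dynamic power density stopped scaling can be seen in the classic switching-power formula, P = αCV²f. The sketch below uses made-up round numbers (the capacitance, voltage, and frequency values are illustrative assumptions, not measured node data) to show that when supply voltage barely scales with each shrink, power falls much more slowly than area does, so power per unit area climbs.

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic switching power in watts: activity factor alpha,
    switched capacitance C, supply voltage V, clock frequency f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical older node vs. a shrink where V drops only 10%.
p_old = dynamic_power(0.1, 1.0e-9, 1.0, 2e9)  # 0.2 W
p_new = dynamic_power(0.1, 0.7e-9, 0.9, 2e9)  # ~0.113 W

# Area scaled by ~0.7^2 = 0.49, but power only fell ~43%,
# so power *density* went up, not down.
print(p_old, p_new, p_new / 0.49)
```

Under full Dennard scaling, voltage would have shrunk in step with the dimensions and power density would have held constant; the stalled voltage term in the middle of the formula is what broke the bargain.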

That occurred at the same time as a technological stall-out. Until a couple of years ago, no one was quite sure whether the IoT was hype, and almost no one thought fully autonomous cars would surface until at least the 2030s. In smartphones, meanwhile, the number of must-have new apps dropped significantly. The result was a period in which performance was deemed good enough for most applications, while extending battery life and lowering power became the key considerations for designers.

Power remains a big issue, and it will continue to be a gating factor in advanced designs. But given the amount of data that needs to be processed, performance is suddenly back in vogue alongside power. And that has big implications for computing architectures, process technologies, packaging, and future investments.

While it’s clear that device sizes will continue to shrink, it’s less clear where computing will be done. This is particularly true in automobiles, which increasingly look like neural networks on wheels (even the wheels have sensors). But the real compute horsepower goes to deciphering the images around a car as it moves down the highway at high speed. The data that is collected needs to be mined for aberrations and anomalies so that, even in corner-case conditions, cars can move safely and avoid accidents when something goes wrong.
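Mining sensor data for aberrations can be as simple, in principle, as flagging readings that fall far outside the recent norm. The sketch below is a deliberately minimal stand-in for the far richer models real vehicles use; the distance values and the z-score threshold are illustrative assumptions, not anything from a production system.

```python
import statistics

def flag_anomalies(readings, z_threshold=2.0):
    """Return indices of readings more than z_threshold population
    standard deviations from the mean. A toy outlier detector."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

# Mostly steady lidar-style distance readings with one spurious spike.
distances = [10.1, 10.0, 9.9, 10.2, 55.0, 10.1, 10.0]
print(flag_anomalies(distances))  # → [4]
```

A real pipeline would distinguish a faulty sensor from a genuine obstacle, fuse multiple sensor streams, and do it all under hard real-time deadlines, which is where the compute demand comes from.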

This kind of data mining requires massive compute farms, and it requires the same push toward the most advanced process nodes for server logic that has defined semiconductor progress for the past several decades. What’s changed is that not all of the processing will be done in one place. It will be split across multiple devices, processed as needed in the most efficient way, and combined and sorted on schedules that make sense and as resources become available.
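One way to picture that split is as a placement decision per task: keep latency-critical work on the device, and offload the rest to the shared compute farm when the network allows. The rule below is a toy sketch; the function name, parameters, and thresholds are all illustrative assumptions, not a real scheduling API.

```python
def place_task(deadline_ms, payload_mb, uplink_mbps, round_trip_ms):
    """Return 'edge' or 'cloud' for a single processing task,
    based on whether the remote round trip can meet the deadline."""
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000  # upload time
    if transfer_ms + round_trip_ms > deadline_ms:
        return "edge"   # can't meet the deadline remotely; run locally
    return "cloud"      # offload to the shared compute farm

# A 50ms braking decision stays local; overnight fleet analytics can wait.
print(place_task(deadline_ms=50, payload_mb=5, uplink_mbps=100, round_trip_ms=20))    # → edge
print(place_task(deadline_ms=5000, payload_mb=5, uplink_mbps=100, round_trip_ms=20))  # → cloud
```

Real systems weigh energy, cost, privacy, and link reliability as well, but the core idea is the same: the work migrates to wherever it can be done most efficiently within its deadline.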

This is a huge step forward in technology. Machines will be able to update machines and communicate with other machines. And after more than a half-century of progress in computer science and semiconductor design and manufacturing, it’s interesting to see how far we have come in perfecting the time-sharing model.

What wasn’t so obvious the first time around, though, is that sharing resources was merely a means to an end. After more than five decades, time-sharing computing models have matured enough that the focus can shift to the data itself, rather than the underlying technology or the odd use case. And that will have a huge impact on what the underlying technology looks like and how it is shared over the next half-century.
