Warp Speed Ahead

What can you do with orders of magnitude performance improvements?


The computing world is on a tear, but not just in one direction. While battery-powered applications are focused on extending the time between charges or battery replacements, there is a whole separate and growing market for massive improvements in speed.

Ultimately, this is where quantum computing will play a role, probably sometime in the late-2020s/early-2030s timeframe, according to multiple industry estimates. Still, although there has been some progress in room-temperature quantum computing, the bulk of that computing initially will be done at extremely cold temperatures inside data centers.

Between these two extremes, there is a growing focus on new architectures, packaging, materials and ever-increasing density to deal with massive amounts of data.

“If you look at anything around big data, all of these systems will become smarter and smarter,” noted Synopsys chairman and co-CEO Aart de Geus. “Over time the desire is not to get 2X performance, but 100X. The only way to get there is not by using faster chips, but by using chips that can only do a single task. In other words, algorithm-specific. By simplifying the problem, you can make things much more efficient.”

And this is where computing is about to take a big leap. In the past, the focus was on how to get more speed out of general-purpose processors, whether those were CPUs, GPUs or MCUs. Increasingly, processors are being designed for specific tasks. Rather than just one processor with many cores, there are multiple processors—some with just a single core, some with multiple cores—to handle very specific tasks and very specific data types.
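To make that shift concrete, here is a minimal sketch in C of how a runtime on such a heterogeneous device might route each data type to the engine built for it, falling back to a general-purpose core when nothing more specialized exists. The engine names and functions are entirely hypothetical placeholders, not any vendor's actual API.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical work item: the data type decides which engine runs it. */
typedef enum { WORK_VISION, WORK_AUDIO, WORK_CRYPTO, WORK_GENERIC } work_kind_t;

typedef struct {
    work_kind_t kind;
    const void *payload;
    size_t      bytes;
} work_item_t;

/* Stand-ins for task-specific engines; a real SoC would expose these
 * through vendor drivers, not plain functions. */
static void run_on_npu(const work_item_t *w)  { printf("NPU:  %zu bytes\n", w->bytes); }
static void run_on_dsp(const work_item_t *w)  { printf("DSP:  %zu bytes\n", w->bytes); }
static void run_on_asic(const work_item_t *w) { printf("ASIC: %zu bytes\n", w->bytes); }
static void run_on_cpu(const work_item_t *w)  { printf("CPU:  %zu bytes\n", w->bytes); }

/* Route each item to the narrowest engine that can handle it. */
static void dispatch(const work_item_t *w)
{
    switch (w->kind) {
    case WORK_VISION:  run_on_npu(w);  break;  /* single-purpose inference engine */
    case WORK_AUDIO:   run_on_dsp(w);  break;  /* programmable signal processor   */
    case WORK_CRYPTO:  run_on_asic(w); break;  /* fixed-function block            */
    default:           run_on_cpu(w);  break;  /* general-purpose fallback        */
    }
}

int main(void)
{
    work_item_t items[] = {
        { WORK_VISION,  NULL, 1 << 20 },
        { WORK_AUDIO,   NULL, 4096    },
        { WORK_GENERIC, NULL, 512     },
    };
    for (size_t i = 0; i < sizeof items / sizeof items[0]; i++)
        dispatch(&items[i]);
    return 0;
}
```

The point of the sketch is simply that the scheduling decision moves up into software: each engine does one narrow job well, and the general-purpose core becomes the exception path rather than the default.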

This puts new pressure on big chipmakers. Instead of spending years developing the next rev of a general-purpose processor, the future increasingly is about flexibility, choice, and a deeper level of customization. This is why Intel bought Altera, and it helps explain why all processor makers have been ramping up the number of chips they offer. The risk of not offering enough options is that customers begin architecting their own chips, which is already happening. Apple, Amazon, Google, Microsoft, Facebook and Samsung today are creating chips for specific applications. It’s also why so much attention is being focused on programmability and parallelism, whether that involves embedded FPGAs, DSPs, or hybrid chips that add some level of programmability into ASICs.

It also puts pressure on chipmakers to come up with new approaches for improving performance. Just turning up the clock speed is becoming more difficult at advanced nodes due to thermal limitations, which is why companies such as Cisco and Huawei began using 2.5D architectures to improve throughput between components. Interest in advanced packaging based on interposers, TSVs and lower-cost bridges continues to grow, along with more cache coherency to limit the amount of data that needs to be pushed back and forth between memory and the processor. There is even a resurgent interest in moving some processing into memory. And across the board, there is a growing emphasis on software-defined hardware, where the hardware and software are developed together much more closely than in the past.
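Why data movement, rather than raw clock speed, is the target of approaches like cache coherency and processing in memory can be shown with a toy back-of-the-envelope calculation. The numbers below are illustrative assumptions, not measurements: reducing a 1GB array to a single sum conventionally drags every byte across the memory bus, while a near-memory reduction only ships the result.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative only: sum-reduce a 1 GB array of doubles. */
    const double array_bytes  = 1024.0 * 1024.0 * 1024.0;
    const double result_bytes = 8.0;                /* one double out */

    /* Conventional path: every element crosses the memory bus
     * to the processor before it is consumed. */
    double bus_traffic_cpu = array_bytes;

    /* Near-memory path: the reduction happens next to the DRAM,
     * so only the result crosses the bus. */
    double bus_traffic_pim = result_bytes;

    printf("traffic ratio: %.0fx less data moved\n",
           bus_traffic_cpu / bus_traffic_pim);       /* ~134 million x */
    return 0;
}
```

Real workloads rarely reduce that cleanly, but the asymmetry is the reason so much architectural effort is going into keeping data where it already lives.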

So what can you actually do with a 100X improvement in performance? Or how about 1,000X? The answer isn’t entirely clear yet, but it does open up vast new possibilities that bode well for semiconductor design for many years to come.



3 comments

Hellmut Kohlsdorf says:

It is the consequence of the above that will impact how and what software gets written. The long-delayed i.MX8 family of devices from the former Freescale made clear that tier 2 and smaller customers cannot afford teams of specialists to cover each of the many functions those cores support. Software libraries that expose easy-to-use APIs are required to hide much of the specialized functionality. Some even argue that AI-supported software development, trained on the different technologies supported by a single i.MX8 device, might be the scheme to follow!

Eric Olsen says:

Teams for SoC software design? … We found that out as early as the i.MX6 and before! Why do you think Freescale collapsed? They needed to get a phone contract, and if you can’t spin SoC silicon fast enough, that’s what happens. It’s a mature market for big boys that can spend the money now required to develop commercially viable products. I just got back from CES and that’s the message I brought back. That being said, a lot of the big boys are making it easier to extend advanced AI technology to start-ups at no cost to help remedy this obvious situation. For example, Nvidia has an autonomous car “brain,” and if you can get an accelerator project going with Nvidia, you get access to the technology for free.

Eric Olsen says:

Ed, even a 1,000X increase in performance doesn’t provide the compute power needed to deploy the next wave of AI-capable consumer devices. At CES, it was clear the next trend in consumer electronics is to adapt voice assistant technology (available now from the large cloud providers, like AWS, Google, IBM, etc.) into these gadgets. That’s just the start. There are many AI services now available that are being rapidly deployed into applications across the globe, because these cloud companies have been pushing so hard to roll out their services and become the leader. That’s a lot of cloud computing capability that will be needed in the very short term, in fact. You might want to check your stock market investments. I still like Nvidia’s position in hardware. I cannot say who will win on software, but I’ll tell you … it’s the guy with the compute power. And finally, it’s the guy with the cheapest compute power, and that means efficiency.
