OPINION

Which Processor Is Best?

Intel’s support for RISC-V marks a technological and cultural shift.


Intel’s embrace of RISC-V represents a landmark shift in the processor world. It’s a recognition that no single company can own the data center anymore, upending a revenue model that has persisted since the earliest days of computing. Intel gained traction in that market in the early 1990s with the explosion of commodity servers, but its role is changing as processors become more customized and heterogeneous.

There are four major elements behind Intel’s strategy, each the culmination of multiple factors.

1. The scaling race is becoming background noise. Moore’s Law realistically ran out of steam somewhere in the pre-finFET era, as early as 90nm for DRAM and as late as 22nm for microprocessors, which by that time relied on other tricks, such as branch prediction and speculative execution, to continue improving performance per watt. The benefits of scaling diminished sharply at each new node after that. Cost per transistor continued to drop, but the value of packing more transistors into a given space didn’t rise as fast as in the past. While the number of transistors may have doubled, the total cost of scaling has increased faster than any benefits from density. Just packing in more transistors created problems with dynamic power, heat, noise, power delivery, and premature aging, not to mention layout, verification, simulation, and a variety of other physical/proximity effects that need to be addressed. Having 3 billion rather than 2 billion transistors on a die doesn’t mean much if you can’t use all of them. That’s not to say very dense chips or chiplets don’t have value, but that value needs to be assessed, and potentially adjusted, at a very granular level and in the context of the end application.

2. A general-purpose processor is not the best approach in the data center. The most efficient and effective solutions are customized, which is why companies like Amazon, Google, Facebook, Tesla, and Alibaba are all developing their own chips for their data centers. Intel’s fortunes have been heavily tied to the server market for the past three decades, just like IBM’s. And unless those incumbents can come up with a cheaper, better, more reliable solution than a custom-made one, taking into account factors such as performance per watt and overall power consumption, consumers of big iron will continue to develop their own custom processors. This hasn’t been lost on Intel, which now is moving quickly toward a chiplet approach. That has been years in the making, starting with acquisitions of companies like NetSpeed Systems, which developed network-on-chip (NoC) technology, along with massive internal development. Intel was slow to pull the trigger, although it started filing patents more than a decade ago with its EMIB bridge technology, and it has been shipping FPGA-based chiplets for six years. Rival AMD, meanwhile, has jumped into chiplet-based processors with both feet, and its acquisition of Xilinx will add more fuel in this area. Marvell likewise is heavily targeting edge servers with a chiplet strategy, making the most of its purchase of GlobalFoundries’ ASIC business, which GlobalFoundries originally acquired from IBM.

3. If you design it, build it, and manufacture it, you get money from all three buckets, plus other benefits. Intel always has benefited from having its own fabs, which allow it to get new designs into manufacturing for testing and tweaking. But the cost of building leading-edge fabs has become so astronomical that the foundry approach has largely supplanted the IDM model. To keep its fabs humming, Intel has begun seriously pumping resources into its Intel Foundry Services business. It bought Tower Semiconductor in Israel, and announced plans to build or expand fabs in Ohio, Arizona, Ireland, and Malaysia. With that much leading-edge capacity, it clearly is taking aim at Samsung and TSMC, as well as its processor rivals, and benefiting from a major supply chain glitch that continues to hamstring a variety of industry sectors, particularly automotive. This is accompanied by a global on-shoring wave, as various regions such as China and potentially Eastern Europe become increasingly insular.

4. New markets require fast solutions. The rule of thumb is that whoever wins the lion’s share of a new market will dominate that market for at least the first few years. This is why chiplets are so attractive, but the ability to create a chiplet-based solution that has been fully characterized and optimized is a massive challenge. Intel plans to leverage an open ISA, along with many other components it has developed and some it will license, to provide a menu of options and tradeoffs for customers. Whether this will win back systems companies remains to be seen, but it certainly is a powerful play for the whole edge build-out, and for companies looking for quick solutions that are more customized than previous off-the-shelf components. One of the biggest weaknesses in the early smart watch market, for example, was poor battery life, because companies rushed into that market using off-the-shelf processors. By leveraging tightly integrated software and various hardware modules it can build and test, Intel can churn out more customized solutions from its growing fab capacity at a variety of price points, within a short market window, and still do so profitably. This certainly applies to the data center world, but it also opens the door for Intel to play in more places than in the past.

Taken as a whole, Intel is benefiting from a variety of market shifts, a growing focus on electronic component production as an element of national security (not just for the U.S.), and a fundamental shift in the value proposition for chips themselves. And unlike in the past, when its mission was clouded by start-and-stop movements and lots of internal second-guessing, the company appears to have a unified focus about how it will move the ball forward.

The best processor is no longer defined by which one can run a standardized benchmark fastest. It’s now a race involving multiple elements — performance for a specific application or use case, power consumption, customization, and cost. And that now includes the ability to manufacture at multiple nodes, to field third-party IP and chiplets quickly, and to work with a variety of partners in areas such as packaging and design. For Intel, this also represents a cultural shift, but it’s one the current leadership seems to be able to articulate more effectively than at any time in the past.



2 comments

Rupert Baines says:

> The best processor is no longer defined by which one can run a standardized benchmark fastest. It’s now a race involving multiple elements — performance for a specific application or use case, power consumption, customization, and cost.

Absolutely agree.

PPA as a generic “one size fits all” metric must be viewed much more carefully. There will be times when a standard “off the shelf” core is a good fit, and perfectly fine.

But there are other times where differentiation, customisation and creating a match between HW resources & SW tasks for that unique application is critical.

In a world where Dennard scaling stopped years ago and Moore’s Law has pretty much stopped, progress must come from architectural innovation: heterogeneous compute, domain-specific acceleration, HW/SW co-design, etc.

The challenge is how to achieve that: efficiently generating those customized cores, getting the software toolchain and ecosystem in place, etc.

Of course, I would argue that Codasip has a solution to that challenge.

Kevin Cameron says:

RISC-V is an ISA, as are x86 and ARM, and an ISA is just a compiler target for machine code generation, not an actual processor architecture these days.
Whatever the best processor is, it’s probably not a machine code interpreter for one of those.
