RISC-V Pros And Cons

Proponents tout freedom for computing architectures, but is the semiconductor ecosystem ready for open-source hardware?


Simpler, faster, lower-power hardware with a free, open, simple instruction set architecture? While it sounds too good to be true, efforts are underway to do just that with RISC-V, the instruction-set architecture (ISA) developed by UC Berkeley engineers and now administered by a foundation.

It has been clear for some time that the standalone, general-purpose processor no longer offers the kind of benefits it did in the past, and that it no longer will be where the biggest innovations take place.

“The money shouldn’t be going into the processors, necessarily,” said Ted Speers, senior technical director, product architecture and planning for Microsemi’s SoC business unit, and board member of the RISC-V Foundation. “The processor cost should come down, and then you innovate on top of that with accelerators, new architectures, and so forth.”

Technically, the ability to manage complexities has expanded to the point where a 32-bit RISC microprocessor is not considered a complex object anymore, noted Drew Wingard, CTO of Sonics.

“The barrier to entry as a microprocessor instruction set architecture is all about the software and the ecosystem,” he said. “There’s no magic in the underlying technology for microprocessors, in general. RISC-V essentially takes that to the next logical level to say, ‘Let’s try to capture an instruction set architecture together with enough structure and automation that allows us to build families of processors much more easily. And, we’ll choose to distribute it as an open source piece of IP so that the community can add on.’ It has aspects of the open source movement, it has aspects of the configurable processor movement, and it has the opportunity to restructure how we think about the costs of microprocessor IP.”

The business end of this market will likely follow the same model as Linux, where commercial vendors add in their own IP and support. Commercial suppliers of RISC-V cores include Nvidia, Andes Technology, Cortus, and Codasip.


RISC-V-based Rocket core mapped to ZedBoard running Linux. Source: HotChips.

The main ISAs used today are x86, ARM, ARC, MIPS and PowerPC, along with other ISAs used under the hood in GPUs and DSPs. But RISC-V is starting to make some inroads. Nvidia announced that its SoCs will contain a RISC-V control processor. Andes Technology, a softcore supplier, likewise adopted RISC-V in its 64-bit architecture.

RISC-V, from an architecture standpoint, is both simple and elegant, said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. But there is more to a processor’s success than the processor itself.

“The key question is software ecosystems,” Mohandass said. “How are they going to develop? It’s the whole chicken-and-egg problem. There are more developers than there are designs. Somebody has to kick-start that process. This is the reason Intel dominates the datacenter space and why ARM dominates the mobile space. Yes, part of it is architecture. But part of it is the ecosystem. Once that gains momentum, it has to break the mold with new architectural stuff. RISC-V is hedging its bets in the emerging IoT space because there is no single, big unifying platform there. RISC-V has an opportunity there.”

Proponents of the RISC-V platform agree. Krste Asanovic is a professor at UC Berkeley who chairs the RISC-V Foundation and a co-founder of SiFive, which is commercializing its own version of the architecture (https://semiengineering.com/sifive-low-cost-custom-silicon/). “RISC-V is reasonably straightforward for a small group to implement, which opens up the possibility of many different RISC-V cores—so a lot more variety in the market,” he said. “Engineering teams doing a design that needs a processor can find a version that fits their needs from multiple vendors — even open source — or they can do a design themselves. Freedom is the biggest attribute here.”

Asanovic contends that RISC-V could level the playing field and allow providers to compete on the quality or customization of the implementation.

Challenges to adoption
However, with any new technology there are challenges. For RISC-V, one hurdle is keeping the ISA coherent as a single standard.

“If RISC-V fragments, it would just be a dozen different RISC-V ISAs that aren’t compatible, so the goal of the foundation is to make sure there is one standard,” Asanovic said. “Most of the core providers understand that the big benefit of RISC-V is the common software stack. The development cost of that far exceeds the development cost of any core. That’s a big attraction for other core providers. They don’t have to maintain the compilers, the linkers, the operating systems, everything else. It’s done by the community.”

That takes time to hit a level of maturity and trustworthiness, however.

“If you use one of the dominant instruction set architectures today, there aren’t five choices of debug environments,” said Sonics’ Wingard. “You name anything else that’s around the support world of this, and there are multiple choices from vendors who have long histories and well-understood business models. The RISC-V world is going to have to re-create all of that, or figure out how to adapt it to the most dominant ecosystem that’s around right now for chip design, which is the ARM ecosystem. The commercial suppliers of RISC-V cores must make their own determinations around this, which is a huge barrier to the RISC-V effort.”

Another barrier to RISC-V adoption is optimization of the implementation technology. “They have cores that work, they’ve proven that, but are they going to benchmark well versus a seventh-generation core implementing a commercial instruction set architecture? Probably not, not for a while,” Wingard said. “There are lots of corner cases to take care of that sometimes matter in applications. There’s substantial, important work that needs to go in. One can make an argument that in a large number of SoCs, the CPU should be called the control processing unit, not the central processing unit, and the actual throughput of that control processor may not matter. But for the people designing these chips, they’re never sure. It’s like a form of design margin they would rather have. For a given megahertz, they would rather have a higher-performing machine.”

And because the RISC-V instruction set can be extended by a user, some of those changes will affect how the core interacts with the rest of the chip. “There are a couple of classes,” he explained. “One is it adds new types of transactions that could come out onto a NoC, or it adds an ability to speak directly to a tightly coupled accelerator of some kind, like ARM’s DynamIQ technology, where they have an ability to do some directly attached AI coprocessor kind of approach.”
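As a rough illustration of what such a user extension looks like from the software side, here is a minimal sketch in C with GNU inline assembly. It emits a made-up accelerator instruction placed in RISC-V’s reserved custom-0 opcode space using the GNU assembler’s .insn directive; the function name, encoding fields, and the accelerator itself are hypothetical, and the point is only that a user-defined instruction can be reached from ordinary code without forking the toolchain.

```c
/*
 * Hypothetical sketch: a made-up two-operand accelerator instruction placed
 * in RISC-V's reserved custom-0 opcode space (major opcode 0x0B). The GNU
 * assembler's .insn directive emits the encoding without any toolchain
 * changes. The accelerator, function name, and funct3/funct7 values are
 * assumptions for illustration only.
 */
#include <stdint.h>

static inline uint32_t accel_op(uint32_t a, uint32_t b)
{
    uint32_t result;
    /* .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2 */
    __asm__ volatile (".insn r 0x0B, 0x0, 0x0, %0, %1, %2"
                      : "=r" (result)
                      : "r" (a), "r" (b));
    return result;
}
```

Whether an extension like this stays a private add-on or splinters the common software base is exactly the tension the foundation is trying to manage.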

Asanovic admits handling that fragmentation is a challenge. But again, the foundation was created to manage the standard and have everybody sign on. Companies desiring to use the RISC-V trademark must pass compatibility testing first.

Yet another challenge is dealing with patent issues, he said. “We were very careful in the design of the base ISA. It’s very simple. We like to call it a boring RISC, by going back to the original RISC principles. Dave Patterson did a genealogy search with undergraduate students [at UC Berkeley], and they basically showed the lineage of all the instructions. For the base ISA, they traced it back to the RISC I, RISC II, RISC III, RISC IV.”

Also in the membership agreement, the members agree not to sue each other based on the base ISA spec. If they do, they lose their rights. “However, if you look at the other proprietary ISAs in terms of patent challenges, you don’t really have much protection, and you’re seeing lawsuits where Company A sues Company B for using Company C’s IP. We’ve seen that with the graphics engines recently, so even if you buy from X, some company will stand behind it. The same is true for RISC-V. Companies are offering their cores, and they are indemnifying people under standard commercial conditions,” Asanovic pointed out.

Immature, but growing
RISC-V is still immature. “This is early days,” Asanovic said. “Not everything exists for RISC-V that exists for the other ones, but that is filling in at an incredible pace. The open-source community likes this idea, so the best and the brightest are volunteering to help us port things over.”

And while most projects using RISC-V today are in the microcontroller class, Unix-class applications processors are going to take a bit longer. The goal at the foundation this year is to have the standard Unix platform defined so engineering teams know what is needed for standard Unix builds.

“A big milestone is getting out the first Unix development board to developers so they can start porting Unix over,” he said. “For RISC-V, the insertion points are really at the low end and the very high end, where there is a lot of interest in new applications like machine learning accelerators, network processing or storage controllers, and even supercomputing. In that space, people are open to trying new ISAs. If you want to build your own chip, there’s very limited scope at the incumbents, so doing your own thing at the high end but having a decent port of the software is why people are interested at the high end. If you’re one of the big cloud providers, you want to do your own processor chip. RISC-V is something they may be very interested in. In three to four years’ time, there may even be adoption on big iron.”

Design flow impact
Practically speaking, from a microarchitecture perspective, RISC-V’s impact on the design flow may be significant. “If you’re going to not use the existing ecosystem of IP cores because you’re using different interfaces, then there is some significant disruption,” Wingard said. “At the level of synthesis, place and route, etc., there’s no impact. As we start to get up to the levels of the infrastructure needed for bringing the chip up—the debug infrastructure and all those things—yes, there’s major impact. And that’s an area where the RISC-V community is going to have to invest a lot of effort to become something that’s on par with the rich technologies that are available there. Then, at the software levels, there is an enormous amount of work to do around libraries and device drivers, among other things. For those stages of the design flow, it’s a bunch of work at this point.”

Mohandass sees only a question mark in the short term, as far as impact on the design flow. “You have a new ISA, you have a new processor. The short-term impact is that it has to be thoroughly verified. You’re seeing this play out in real time. People are going to question if this is robust. Is this solid? Is this going to work? And as it gets proven out in silicon and as it gets proven out in production, then those things disappear. Only then will you see the real benefits of an elegant architecture and a simple architecture.”

While RISC-V is not the first open source ISA, this particular initiative has come at an interesting time in the semiconductor industry that has been in consolidation mode for the last several years, noted Ravi Thummarukudy, CEO of Mobiveil. “As the industry matures, the focus has been on growing the business by consolidation, and smaller players find it difficult to replace the existing giants in most market segments also reaching maturity. Due to the increasing cost of semiconductor manufacturing, investment in small startups, especially with new CPU architectures, has dwindled. The only real possibility for breakthrough innovation in the CPU is through open source by pooling the collective creativity and available funds.”

At the same time, cloud computing and the IoT are driving semiconductor consumption. “On the datacenter side of things, Intel’s ISA rules the processor market, with ARM and other architectures holding minimal market share,” Thummarukudy said. “I don’t expect much change to this scene. However, it’s an entirely different story on the end point or sensor side. This is where the maximum innovation is taking place in the market today. Processor architectures for IoT devices need low-power, cost-effective CPUs that could give startups a path to innovate a variety of new SoCs with small budgets. This is perhaps the biggest benefit of RISC-V.”

At the same time, in a software-driven world, software support for the RISC-V ISA is paramount, and the success or failure of this ISA will depend on how soon a stable software ecosystem gets created and maintained to enable a number of new applications to be developed around RISC-V, he added.

Graham Bell, vice president of marketing at Uniquify, agreed that RISC-V will drive a new level of activity in the IoT space, particularly as it takes the scalable features seen in semiconductor design IP, such as memory compilers, and brings that to processor development without proprietary roadblocks. “RISC-V encourages definition of the instruction set that fits the problem to be solved, saving silicon and the associated cost, and allows for the right mix of low-power and process performance requirements. Being able to create functional silicon for projects with outlays of one to two hundred thousand dollars means the bar has been lowered dramatically for who can start to prototype projects. We will even see project funding that is crowdsourced for entrepreneurs outside of the traditional design community. Besides lowering the cost of entry, RISC-V eliminates royalty payments for proprietary CPU IP and keeps ongoing production costs lower, leading to more products being brought to market more quickly.”
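Bell’s point about fitting the instruction set to the problem comes from RISC-V’s base-plus-extensions structure: the same source code can target a minimal RV32I core or one that adds, say, the multiply (M) and compressed (C) extensions. Below is a minimal sketch of how software can adapt to the chosen subset, assuming the __riscv_mul macro that GCC and Clang predefine when the selected -march string (for example, rv32imc versus rv32ic) includes the M extension; the function and its fallback are purely illustrative.

```c
/*
 * Sketch of tailoring code to a chosen RISC-V subset. __riscv_mul is
 * predefined by the compiler when the -march string includes the M
 * extension; the explicit shift-and-add fallback avoids pulling in the
 * libgcc soft-multiply routine on a bare RV32I core that omits hardware
 * multiply to save silicon. Illustrative only, not from any real project.
 */
#include <stdint.h>

uint32_t scale(uint32_t x, uint32_t k)
{
#if defined(__riscv_mul)
    return x * k;               /* a single mul instruction with the M extension */
#else
    uint32_t acc = 0;           /* shift-and-add fallback for plain RV32I */
    while (k) {
        if (k & 1u)
            acc += x;
        x <<= 1;
        k >>= 1;
    }
    return acc;
#endif
}
```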

With key milestones for Linux support being hit, the RISC-V Foundation believes it is on pace for Linux 4.12 to support RISC-V, said Microsemi’s Speers.

Another consideration on the software front is the opportunity to use the switch in hardware to transition the software methodology, as well. “If I were a high-level engineering manager/director of engineering/vice president of engineering, I would use a switch to RISC-V to start transitioning my software methodology. You’ve got one transition going on, you’ve got a bit of a switch because people are going to have to potentially use new debuggers or other tools for RISC-V so start changing the methodology a bit, as well. It’s too good of an opportunity to pass up if you are an engineering manager,” said Larry Lapides, vice president of sales at Imperas.

Still, from a business model standpoint, RISC-V is disruptive, Mohandass said. “It’s the whole open-source way of doing things. It tries to undercut the ARM model of establishing how CPUs or other cores should work and how they should be valued.”

If RISC-V succeeds, Wingard believes it will look more like the Linux model than any of the other open source business models, because typically most open source projects have a very small number of companies behind them. “If you want to use open source code in a commercial environment, it’s not unusual that the group of people who contributed the most code into the open-source project end up becoming kind of a service company that makes people feel comfortable doing it in a commercial context. But in the Linux world there’s heavy competition around that role. There is no single company that has been the largest contributor into the Linux kernel, first of all. Second, the total number of the lines of code is massive. Third, an operating system by itself is not very interesting without a set of libraries and applications and building blocks and coded stuff, so there’s tons and tons and tons of features that are there. There are a number of organizations; Red Hat is the biggest, but they are by no means the only one. People get very excited about different variants of Linux, and they have somewhat different business models for how you pay, but essentially most of the non-desktop computing right now runs on Linux, and most of those machines are being used for commercial purposes. There is a fee being paid to a software services company for that. They’ve gotten to a level of ubiquity where this service-based model and enterprise licensing kind of model works. We could see that here.”

Related Stories
Alternative To X86, ARM Architectures?
Support grows for RISC-V open-source instruction set architecture.
Will Open-Source Work For Chips?
So far nobody has been successful with open-source semiconductor design or EDA tools. Why not?
SiFive: Low-Cost Custom Silicon
Company seeks to build solutions based on open-source processor cores.
System Bits: April 18
RISC-V errors; spin-wave logic gates; deep learning is old.



7 comments

Kent Dahlgren says:

The biggest issue in getting traction will be the hardware ecosystem around the core. The AMBA SoC interconnect specs are the de facto standard for this, and I expect to see ARM using these as a weapon.

Ann Steffora Mutschler says:

I agree that much work needs to be done in order to establish an ecosystem, so we will be watching for development in that area, and then adoption.

Karl Stevens says:

Is it worth the time, cost, and effort to establish an ecosystem? Is there real value or wishful thinking?

Ann Steffora Mutschler says:

Karl, my guess is that money will talk. When the OEMs decide RISC-V will provide benefit to the bottom line, the ecosystem will quickly emerge. As far as the value, it does seem like there is some momentum, and time will tell.

Karl Stevens says:

I think it is because “opensource” implies that there is a magic bullet in someone’s mind just waiting to be fired.
The “memory wall” exists because RISC is a load/store architecture. Also, heterogeneous systems work because they use memory efficiently (of course, not being hampered by an ISA helps).
A true RISC would do if/else, for, while, do and assignments.

Karl Stevens says:

Kent, right on. I spent my career in computers/systems so I know how computers work.
The basic difference among RISC ISAs is the load instruction addressing because data has to be loaded first before being used.
Therefore the key is access to memory, then data can be used in computation using the simple add, subtract, multiply, divide, and, or, xor instructions.
RISC came about because compiler writers could not figure out how to use more complex instructions (IBM 801, circa 1980). NOT because it was better, just a REDUCED instruction set.
Of course x86 has been used in a few CPUs recently.
ISA is a minor factor, but most talked about.

David Rayer says:

You seem to remember it quite differently to what I recall. I haven’t spent my life in computers because I was around before them – as a computer lecturer for IBM and now retired.
In 1980 there were only two PC computer frames available (outside of mainframes), and those were the Apple 1 and the TRS-80 by Radio Shack. There was no CISC or RISC because those terms had not been coined by then, and the 4040 had just become the 8080. In fact, as I remember it, most computers, with those two exceptions, were still on the amateur S100 bus and were only the province of amateurs, who programmed them through switches for the address and data with a push button to say “Input”. Hardly surprising, since they’d only just been released as chipsets mid to late 1978.
And as I see it, the growth in PCs has been entirely centric to the ISA (e.g., the intro of the FPU; FPU insertion into the CPU; the introduction of MMX 1, 2, 3 & 4; 4040 -> 8080 -> 80186 -> 80286 -> 80386 -> 80486 to the Pentium line, etc.). But that was Intel’s line only.
There WERE no higher-level PC languages (except in mainframes eg. COBOL, Fortran, etc) in 1980 except for the ROM based in-built BASIC on the Apple. But I confess I don’t know what Radio Shack used. RISC and CISC weren’t even thought of at that stage.
As far as the “glue,” I agree with Kent to the extent it can be inconvenient in the short term, but long term could see a whole new configuration (eg., distributed CPU slices embedded in memory – touted once before like the mangled “transputers” that are finally seeing the light of day some 35 years after Texas Instruments first marketed them – the “T” Series eg. T800.)
“Many roads lead to Rome”, embedded and finessed by necessity! RISC was not born out of ignorance, but ingenuity, research, and speed. Ask Intel, unless they’ve forgotten the lesson taught them by the AMD RISC inside a CISC husk in the 80486 time trials. And let’s face it! The CISC ISA is a complex mess.
