Will Open-Source Work For Chips?

So far nobody has been successful with open-source semiconductor design or EDA tools. Why not?


Open source is getting a second look from the semiconductor industry, driven by the high cost of design at complex nodes and by fragmentation in end markets, which increasingly means that one size or approach no longer fits all.

The open source movement, as we know it today, started in the 1980s with the launch of the GNU project, which was about the time the electronic design automation (EDA) industry was coming into existence. EDA software is used to take high-level logical descriptions of circuits and map them into silicon for manufacturing. Licenses start in the five digits even for the simplest tools, and the full suite needed to take a design all the way through to manufacturing tacks on another two or three zeros. On top of this, manufacturing costs start at several million dollars.

In addition, a modern-day chip, such as the one in your cell phone, contains hundreds of pieces of semiconductor intellectual property (IP cores or blocks), and each of these has to be licensed from a supplier, often with an up-front fee plus royalties on every chip manufactured. The best known are the processor cores supplied by ARM, but there also are memories, peripherals, modems, radios, and a host of other functions.

The industry would appear ripe for some open source efforts so that the cost of designing and producing chips could be lowered, or perhaps better designs could be envisioned by drawing on the creativity of a huge number of willing coders. But while some projects have existed in both EDA tools and IP, none have even dented the $5B industry.

Momentum is building again for change. At the Design Automation Conference (DAC) this year, a number of speakers, executives, and researchers addressed open-source hardware, and some new business models are emerging that may get the ball rolling. Some believe that once it gets started, it will be a huge opportunity for the technology industry, but most within the industry are doubtful. The problem is that without the full support of the fabs, IP suppliers, and the EDA industry, it is unlikely to happen.

The challenges
There are significant differences between hardware and software, even though the languages are similar. One language in particular, SystemC, was meant to finally close that divide by adding the constructs necessary to describe hardware to the C++ language. But even if SystemC were a perfect language, it would not enable software developers to create hardware, and those differences shape the way in which open-source hardware can be envisioned. “We can often do a lot of interesting hardware development at RTL, but when the rubber meets the road, there is a capital cost associated with going to the fab,” says John Leidel, a graduate student at the Data-Intensive Scalable Computing Laboratory at Texas Tech University. “This includes doing the masks and all of the verification. That capital cost is often not well understood, especially in academia and national labs.”

This is in stark contrast to open-source software. “In an open-source software environment you can develop software and it is just software,” Leidel notes. “If it doesn’t work, just rewrite it. There is only a human capital cost to do that. You don’t have to drop another million dollars on a mask set in order to fix your bugs.”
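To make the hardware/software divide concrete, here is a minimal sketch of what describing hardware in SystemC looks like. The module below is a 2-to-1 multiplexer expressed as concurrent logic that reacts to its inputs rather than as a sequential program; the module and signal names are invented for illustration, and the code assumes the standard SystemC library is available.

```cpp
#include <systemc.h>

// A 2-to-1 multiplexer described as hardware: a process that re-evaluates
// whenever any input changes, rather than a sequential program.
SC_MODULE(Mux2) {
    sc_in<bool>  a, b, sel;   // input ports
    sc_out<bool> y;           // output port

    void eval() { y.write(sel.read() ? b.read() : a.read()); }

    SC_CTOR(Mux2) {
        SC_METHOD(eval);              // register eval() as a hardware process
        sensitive << a << b << sel;   // ...triggered by any change on the inputs
    }
};
```

Even with a clean description like this, turning it into silicon still requires synthesis, physical design, mask sets, and a fab run, which is exactly the capital cost Leidel describes.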

There are also significant differences in business models. “Innovation starts out as being proprietary,” points out Paul Teich, principal analyst at Tirias Research. “The most radical innovations start out as an idea, and the way in which it succeeds is that someone makes a lot of money on it. Open source is inherently a following mentality. The reason why open source software is successful is that it is OpEx driven. People can easily donate time, and that is what we are talking about with open-source software. When we talk about open-source hardware, someone has to pay for electrons and bits. It is no longer just my personal effort for a common good.”

The timescales are also very different. “With silicon spins, it takes two years to get to market,” adds Teich. “No matter what you do, or how good the idea is, that is how long it will take to actually see functioning gates on the market.”

Moreover, the tool chains are different. “With open-source software we have compilers and fairly good management tools for source code,” says Aaron Sullivan, distinguished engineer at Rackspace. “For open-source hardware, we also need fabs. They are expensive and not nearly as plentiful or distributable. They require a lot of expertise to run and develop. That is a critical difference in the rate at which the community can mature.”

Randy Swanberg, distinguished engineer at IBM, agrees. “In open-source software there is a rich set of tooling, from run-times to compilers, libraries to build upon, Git repositories, storage management, and so on. The tooling attracts the developers. They have to be ubiquitous and accessible. This is a challenge for the open-source hardware movement.”

But to think of the two open-source areas as being the same is perhaps part of the problem. “As the semiconductor and IP industries have moved over the past 20 years, the analogy that we think about in the semiconductor industry is not a technology problem, but a content problem,” explains Warren Savage, general manager of the IP division of Silvaco. “Semiconductors is more like the record industry than a technology industry. People differentiate themselves by what is recorded on the vinyl rather than the vinyl itself. It is not about optimizing vinyl but what hit records designers are putting on them.”

Up until now, the cost associated with pressing vinyl has restricted the available content, but times are changing.

Change ahead
The semiconductor industry has been on a tear for five decades, something often referred to as Moore’s Law, and that torrid pace has defined how the industry operates. “Moore’s Law, as we see it today, is slowing down,” cautions Lucio Lanza, managing partner of Lanza techVentures, who also has sat on the boards of many EDA and IP companies. “We need to start accepting that and think about what is going to happen. We are looking at a slowdown of geometric shrinking and new requirements for the evolution of the nodes.”

Lanza believes the race for the new node was such a big driver that the EDA tools never had to perfect anything. “All you had to do was to come up with something that kind of works.” But the slowdown will force a significant change for EDA, and that perhaps presents opportunities.

It is that slowdown that will trigger big change. “The industry has been ramping up the next node and praying that they can get better performance than the next guy,” says Mark Templeton, managing director of Scientific Ventures. “If the node is going to stay around for a while, people are going to want more variation within that node. That will challenge everybody.”

And it may not just be within the node that we see variation. “Today, there is such an explosion of diversity coming,” says Savage. “If you have a node that remains stable from a geometry point of view, people are going to want to optimize it for IoT edge applications, for example, and that will cause a lot of fragmentation for markets with processors. This is a challenge because you will not be able to build a piece of IP and sell it 10 or 20 times. It will be more like once or twice, and so things start to become more service-like than in the past. We will see a lot more process variation, and getting IP mapped to those processes will cause change in things such as budgets for verification and qualification in silicon.”

Platforms and reference designs
Several projects have centered on hardware platforms, such as Motorola’s (now Google’s) Project Ara, which aimed to create a universal core to which components such as cameras and screens could be attached. As yet, no such phone has been released.

Perhaps the platform that garners the most attention is the field programmable gate array (FPGA). These are standard chips that can be reprogrammed to take on any logic function, and they are programmed in a manner similar to software. The hardware function is described in a software-like language, and a compiler turns that description into a bitstream that is downloaded into the FPGA. The FPGA then behaves according to that description, but not in the same way as a program running on a processor. Instead, the hardware emulates the function of the program, and it can do so much faster than a program running on a processor.

The downside is that it takes considerable time to compile and download the program, and the devices are expensive. However, “they do provide a semblance of a platform on which you can iterate without a huge expense,” says Rackspace’s Sullivan.

Swanberg adds that FPGAs “are starting to make progress because the major FPGA vendors have recognized this and are trying to create tooling that looks more like what a software developer is used to. These are C compilers with a few pragmas that can enable you to generate the bitstream.”
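As a rough sketch of what that looks like, the fragment below is ordinary C++ annotated with an HLS-style pragma hint. The function is invented for illustration, and the pragma spelling is only representative; each vendor’s tool defines its own directive names and options for steering how a loop is mapped into hardware.

```cpp
#include <cstdint>

// A fixed-size dot product written for an HLS-style flow: the body is plain
// C++, and the pragma is a hint to the HLS compiler about how to schedule the
// loop when it generates the FPGA bitstream.
void dot8(const int32_t a[8], const int32_t b[8], int32_t &result) {
    int32_t acc = 0;
    for (int i = 0; i < 8; ++i) {
#pragma HLS PIPELINE II=1   // illustrative hint: start one loop iteration per clock
        acc += a[i] * b[i];
    }
    result = acc;
}
```

A regular C++ compiler treats the unknown pragma as a no-op, which is part of the appeal: the same source can be tested as software and then handed to the HLS tool to produce hardware.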

The FPGA vendors make money from selling the devices and this drives the business model. “They have a motivation to get more designs on their platforms,” says Savage. “Verticalization of the design community is one important thing that had to happen to achieve the vision. The market structure doesn’t facilitate that today in the IC design flows. That is the big difference between the hardware and software. There has to be a financial incentive on the semiconductor side to get access to the huge market that could be available.”

New business model
There are signs that business models are beginning to change. Google and Facebook both have developed hardware architectures that are optimized to improve performance and reduce power, particularly in data centers.

Facebook got the ball rolling in 2011 when it created the Open Compute Project. The company made public what it was doing and offered up those advances to the rest of the industry. Since its inception, Microsoft, Google, and many other companies have joined forces to target improvements in the data center. That has extended beyond hardware into new areas such as deep learning and artificial neural networks, as well as telco and storage.

While this is at the system-level and the racks of hardware in the data center, could there be similar interest in redesigning the chips themselves in a cooperative manner?

There are some additional hurdles that would have to be overcome. “Having a neutral or not-for-profit structure where companies have clear rules around IP is a necessity,” says Chris Aniszczyk, executive director at Cloud Native Computing Foundation. “You cannot have problems about where the code is coming from, and there need to be rules about how engagements happen. You will see this pattern appear more as hardware companies start getting involved, sharing designs and collaborating. You will see foundations popping up all over the place.”

One such foundation is focused on an open-source processor architecture. RISC-V was originally designed to support computer architecture research and education, but it has now become an open architecture available under a BSD license, intended for industry implementations under the governance of the RISC-V Foundation. The instruction set was originally developed in the Computer Science Division of the EECS Department at UC Berkeley, and the RISC-V Foundation, established in 2015, now manages the standard, creates compliance tests, and organizes the community. It comprises industry leaders such as Google, HP, Microsemi, and Oracle, as well as academic partners.

As yet, RISC-V has little to offer above and beyond the capabilities of the Tensilica (Cadence) or ARC (Synopsys) extensible processors, which also come with full tool support and a solid history of implementations in silicon, except that it will be free.
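What “free” means in practice is that the instruction encodings are publicly specified and anyone may implement them without negotiating a license. As a small illustration, the sketch below decodes the ADD instruction from the published RV32I R-type encoding; the helper function and the sample instruction word are made up for this example.

```cpp
#include <cstdint>
#include <cstdio>

// Extract the bit field [hi:lo] from a 32-bit instruction word.
static uint32_t bits(uint32_t insn, int hi, int lo) {
    return (insn >> lo) & ((1u << (hi - lo + 1)) - 1);
}

// Decode the R-type ADD instruction per the openly published RV32I spec:
//   funct7 | rs2 | rs1 | funct3 | rd | opcode
bool decode_add(uint32_t insn, int &rd, int &rs1, int &rs2) {
    if (bits(insn, 6, 0)   != 0x33) return false;   // opcode OP
    if (bits(insn, 14, 12) != 0x0)  return false;   // funct3 for ADD/SUB
    if (bits(insn, 31, 25) != 0x00) return false;   // funct7 selects ADD
    rd  = static_cast<int>(bits(insn, 11, 7));
    rs1 = static_cast<int>(bits(insn, 19, 15));
    rs2 = static_cast<int>(bits(insn, 24, 20));
    return true;
}

int main() {
    int rd, rs1, rs2;
    // 0x002081B3 encodes "add x3, x1, x2"
    if (decode_add(0x002081B3, rd, rs1, rs2))
        std::printf("add x%d, x%d, x%d\n", rd, rs1, rs2);
    return 0;
}
```

Because the specification is open, a sketch like this can be checked directly against the published manual rather than against licensed vendor documentation.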

Open-source EDA
Even if all of the IP were available as open source, a design still requires a tool chain and fabs. So what about changes in business model that would enable tools to be rented or leased rather than purchased? “We have to determine the market elasticity associated with the cost of tooling,” says Savage. “If EDA companies would see a significant increase in revenues caused by an explosion in the number of designs being done, then they would be open to it, but that is not what we have been seeing in the past 10 years.”

Savage does see more hope in a different business model. “We need to see more of a royalty model, where EDA has a stake in the success of the product. If the success of the chip is tied to the quality of the tools, then EDA will want to place a lot of bets.”

And there are some companies that believe making the tools more open will grow the market. “Crowd-sourcing is used in a lot of industries and it helps in two ways,” explains Michael Wishart, CEO of efabless. “First, it brings the creativity of the whole community together, and that helps people solve problems. It is also a much more cost-effective model. We don’t want to lower the cost of design across the lifecycle. We don’t think that is where we want to go or you want to go. What we want to do is take the cost of innovation as close to zero as you can. Then we will share in the prosperity.”

Another company is trying to provide the missing link between open-source hardware and commercial silicon production. “Open-source hardware has primarily been limited to purely digital synthesizable logic, but any commercial SoC requires integration with analog and mixed-signal components, such as memory interfaces and high-speed serial links, and mapping of the complete design to a target foundry’s process technology,” says Krste Asanovic, chief architect for SiFive. “SiFive is a fabless semiconductor company that is developing open-source RISC-V-based SoC platforms that provide a rapid, low-cost path to production silicon by integrating customer-specific hardware with industry-standard physical interfaces, and producing high-quality layout for commercial silicon production at leading foundries.”

When we get to the point where there is a huge variation in design, you have to ask where all of the content is going to come from. “A community makes sense,” says Templeton. “We have seen that in software, for example GitHub, which is a community of hundreds of thousands of developers, and you can find anything you want there as a starting point. We may need to head in that direction, but it is very challenging. If you want to develop a complex piece of IP, where do we get tools? Cadence isn’t going to want to listen to you. In the analog space, you also have to work with the foundries. If you ask a foundry for the design rules for the latest process technology, it will take 12 months if they love you. This is a difficult thing to do. If we think about communities, it changes the way in which the foundries have to think about design. We need tool companies to create models that fit this world better.”

Wishart points to one possible answer to the foundry problem. “We have a set of open-source tools, and we have X-FAB as a foundry partner. They understood that by controlling the design environment within our site, we can obfuscate all of the underlying technologies. Thus the reason for them to be concerned goes away. Now they look at it as a business principle. As a foundry, they can get IP on demand and they can serve many small customers whom they otherwise would not have been able to afford to serve. We had to address the concerns at the source.”

Conclusions
There are still a lot of hurdles that have to be overcome before open-source hardware becomes a force within the semiconductor industry. “Semi has not changed the requirement of volume in order to be cost-effective,” says Lanza. “But you have to stop thinking that the only way to innovate is by creating a million chips with a billion transistors.”

Having access to an almost infinite pool of ideas and designs is an interesting concept. “If you limit yourself to what your organization can do, even an organization the size of IBM with research teams around the globe, it doesn’t compare with bringing in 200 OpenPOWER partners,” says Swanberg. “Within a year there are 60 innovations put on the table. That demonstrates the power of open hardware.”

Templeton sees similar opportunities. “If I am trying to design a chip for a camera, there may be a kid in Czechoslovakia with a great architecture and I would like to get access to that and even buy it from him. We need a world where we have a lot more variety of content coming from a lot more places and somehow finding the people that need it.”

So does Wishart. “We think there is a tremendous amount of inherent creativity that is not allowed to happen. It is that world that we are starting, and I am sure it will start small and as the community grows, the expertise will grow. It would be a bad bet to say that at the end of the day, semiconductors is not going to innovate like other markets. We cannot predict what it will finish up looking like, but we do believe that a community is part of that.”

And if the slowdown in Moore’s Law does increase the need for design and specialization, that too creates many opportunities. “It’s going to be huge,” says Savage. “When I talk about exploding, I am talking about the amount of design content that will be available. Today we see consolidation—semiconductor companies consolidating, EDA consolidating, IP consolidating. What is different about design and IP is that this will not follow the same trajectory of collapse. Instead, the IoT market and auto will create new applications we can’t envision today and this will require new design content. There will be a thousand such pieces.”




2 comments

Kev says:

Open-source in EDA just needs a good starting point; I’ll go with Xyce – https://xyce.sandia.gov/

Karl Stevens says:

The cost of chip fab gives a good reason to design in programmability, as in loading RAM quickly to define or change function. Obviously there has to be a one-time build, but the same chip can be used for different functions. FPGA configuration and bitstream load is still objectionable. However, the same approach can be taken for FPGAs; the chip just costs more, but the FPGA fab cost is spread over more designs.

The tool chains focus too much on building at the tiniest detail: gates, or an FPGA bitstream with multiple bits for each wire/net (connection).

CPUs/microcontrollers require memory, cache, and an MMU, which impacts cost and performance. Multi-core is limited by data dependencies and memory speed, and single-core is limited by a single ALU.

The idea of just putting many processors, OSs, memories, etc. on an SoC as IP will not work, because they are designed as stand-alones, so a network to interconnect them has to be designed, both HW and SW.

The key is to define the function logic for each module with the required inputs and outputs and let the compiler make the connections as is done in OOP programming. HW and OOP are both modular hierarchies.

Microsoft .NET and Visual Studio are a good reference, EXCEPT that the design community has shot itself in the foot because of hatred/distrust of Microsoft. Visual Studio is FREE, but the whole chip design approach is based on scripting and using whatever the greedy EDA tool providers offer. Of course the EDA suppliers are not experienced system designers, so what magic is expected?
It goes back to when logic design was abandoned in favor of HDL/RTL for design entry. So now synthesis time is needed to modify netlists before simulation, and even simulation may count picoseconds although static timing is also used.

Based on 30+ years in logic design and computer systems, I think this hole has been dug deep enough and it is time to “wake up and smell the roses”.
