Software Driving More Hardware Designs

Functions and new OSes become the starting point for SoCs, but hardware still matters.


The influence of software engineers is growing inside of chip and systems companies, reversing a decades-old trend of matching the software to the fastest or most power-efficient hardware and raising as-yet unanswered questions about what will change in SoC design.

The shift is particularly evident in chips developed for high-volume markets such as mobile phones and tablets. It’s also happening with high-value processors developed for servers and networks, and for some applications where power or performance is critical, such as cars and IoT/IoE devices. In all of those markets there are enough engineering resources and sufficient price resiliency to spend time getting the functionality right first, and then to figure out how best to tackle the engineering challenges.

This was the original idea behind hardware-software co-design, but as the semiconductor industry disaggregated into fabless chipmakers, beginning in the 1990s with the advent of commercial foundries, that approach never received the attention that companies such as CoWare expected. The recent spate of large acquisitions, coupled with the growth of fully integrated systems companies such as Apple, Samsung, Google, and Amazon, has reinvigorated this market, raising the value of software in these designs. And while hardware is still critical to making the system work—and arguably where tougher problems remain—software now has a much bigger stake in defining what functions need to be included and which ones are given priority.

“Software is driving hardware plans,” said Patrick Soheili, vice president of product management and corporate development at eSilicon. “We’re seeing demand for a lot more hardware accelerators. On top of that, it’s a lot easier to find people when they’re defining something in Python than in Verilog and VHDL. This is a huge shift, and it’s been happening for a while. The software is not replacing hardware, but it’s happening on top of hardware.”

This is evident in the funding for startups, as well. VC investments in software increased to $19.8 billion in 2014, up 77% over 2013, according to a report by the National Venture Capital Association. Semiconductor hardware didn’t make the list.

But those stats are somewhat deceptive, too. While there is scant interest among VCs in funding chip or EDA startups, there is a ready willingness to fund companies developing artificial intelligence and deep learning, for example. In those cases, the hardware is being built specifically for the algorithms that run on it, rather than the other way around.

This is true for data centers being developed by Amazon, Google and Facebook, as well. In Amazon’s case, the focus is on commerce. For Google, the emphasis is on search, and for Facebook the main criteria are graphics and facial recognition. Each requires a very specialized design.

“An ASSP can’t handle the differences, and neither can an ASIC,” Soheili said. “If you want incredibly fast searches, everything has to happen faster and faster. The complexity gets deeper, but the chips don’t get smaller or easier. If you want to manage power in and out of the data center, you have to take out the latency or the power.”

New OSes for specific markets
The flip side of this is that software needs to be created for specific purposes, as well, which explains why there are so many new operating systems being created—bare-metal, extremely efficient code written specifically for the IoT/IoE. ARM’s mbed introduction in October 2014 was the beginning of this trend (or at least its revival). In May, Google introduced Brillo and that same month Huawei introduced LiteOS. There are a number of other OSes gaining ground, as well, including BSD-based TinyOS, Cisco’s NX-OS, as well as some older real-time OSes from Intel and Mentor Graphics that are finding new applications in areas such as connected automotive infotainment.

“If you look at a smart watch, it has a small OS and the step tracker has a much smaller one,” said Frank Schirrmeister, senior group director for product marketing for Cadence’s System Development Suite. “So you have a low-level OS that is very specific to what the device needs to do, and you do all of this on bare metal. It’s much more specialized to focus on a limited number of things. So depending on the application domain, you design to that OS. It has the same level of abstraction, so you don’t have to look at the motion sensor to take advantage of it—you use the API.”
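
As a rough sketch of what that looks like in code, the fragment below reads a motion sensor through a thin bare-metal API rather than touching the hardware directly. Everything in it is hypothetical: the register layout, addresses, and function names are invented for illustration and come from no particular OS or part.

```cpp
#include <cstdint>

// Hypothetical memory-mapped accelerometer block. The base address, register
// layout, and bit positions are invented for illustration only.
constexpr uintptr_t ACCEL_BASE = 0x40020000u;

struct AccelRegs {
    volatile uint32_t ctrl;    // bit 0: enable the sensor
    volatile uint32_t status;  // bit 0: sample ready
    volatile int32_t  x, y, z; // latest sample, signed counts
};

inline AccelRegs* accel() {
    return reinterpret_cast<AccelRegs*>(ACCEL_BASE);
}

struct Sample { int32_t x, y, z; };

// The API layer in question: the application calls read_motion_sample() and
// never looks at the sensor registers directly. No OS, no threads -- on bare
// metal a simple busy-wait is often all the scheduling there is.
Sample read_motion_sample() {
    accel()->ctrl = accel()->ctrl | 1u;      // power the sensor block on
    while ((accel()->status & 1u) == 0) { }  // spin until a sample is ready
    return { accel()->x, accel()->y, accel()->z };
}
```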

He said there also is specialization developing alongside certain markets. So for health care there will be more security built in, while in other markets such as leisure there will be less focus on security.

The semiconductor and software industries have played tag for years, sometimes meshing more closely than at other times, and sometimes working so separately that software teams and hardware teams exchanged business cards for the first time in staff meetings.

“One part of this shift started around the smart phone,” said Drew Wingard, CTO at Sonics. “The OS environment dictated what hardware is useful. So you need a basic collection of hardware to support Android, for example. If you look back, though, the duopoly between Microsoft and Intel was probably as complete and equal as we’ll ever see. It was basically détente in that space.”

He said the shift away from that kind of co-development occurred when applications developers pushed to run their code on multiple chips rather than those made by just one vendor. While that made software more portable, it came at the cost of better performance and lower power—effects that could be neutralized by the progression of Moore’s Law and classical scaling prior to 90nm.

“The level of sloth we can tolerate has decreased in a constrained form factor and the IoT,” said Wingard. “The number of layers of software you can afford to run has to be scaled back, as well. But with the IoT, it’s more than just hardware and software. It’s also the system where you do computations.”

Shift up, down, left and right
Perhaps the most visible evidence of this shift on the hardware side is the increasing focus of the Big 3 EDA companies on software. Synopsys’ purchase of Coverity last year was the most prominent, but all of the large EDA companies have been dabbling in software one way or another for years.

“Coverity marked a very significant trend in the industry,” said John Koeter, vice president of marketing for IP and prototyping at Synopsys. “Software-driven design is a reality. It’s extremely rare to tape out a chip without running software on either emulators or FPGA prototypes. Customers ask for solutions for prototyping and software development platforms.”

What started largely as an experiment there is now almost ubiquitous. “A few years ago hardware emulation was used by hardware teams mainly, while software guys stuck to virtual platforms,” said Zibi Zalewski, general manager of Aldec’s Hardware Division. “Requirements to deliver platforms for software developers earlier resulted in hybrid solutions like a virtual platform connected to a hardware emulator, which allows software guys to work on the whole SoC using the most recent RTL sources implemented in the emulator. All of that is for software testing and earlier delivery of different software layers—drivers, middleware, OS framework, and of course apps. Also, looking from the hardware verification solution provider perspective, demands from customers are definitely driven by the software teams now. Questions like, ‘What’s the OS boot time achieved on your tool or co-emulation APIs,’ are quite common these days.”
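
To make the hybrid idea concrete, here is a minimal SystemC/TLM-2.0 sketch of the virtual-platform side. The CpuModel module and the device register at 0x1000 are invented for illustration; the binding of the socket to an actual emulator transactor is vendor-specific and omitted.

```cpp
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

// Sketch of the software side of a hybrid platform. A virtual-platform CPU
// model issues a TLM-2.0 transaction; in a real hybrid flow the initiator
// socket would be bound to a vendor transactor that forwards the access to
// the RTL running in the emulator.
struct CpuModel : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<CpuModel> socket;

    SC_CTOR(CpuModel) : socket("socket") {
        SC_THREAD(run_driver_code);
    }

    void run_driver_code() {
        uint32_t value = 0x1;  // hypothetical "enable" write a driver might do
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x1000);  // hypothetical device register
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(sizeof(value));
        trans.set_streaming_width(sizeof(value));

        // Blocking transport: the same call works whether the target is a
        // pure simulation model or a bridge into the emulator.
        socket->b_transport(trans, delay);
    }
};
```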

Zalewski noted that increased focus on software brings benefits to hardware, too. Earlier software testing allows more hardware testing, which in turn reveals problems that were not detected using typical hardware tests.

Just how big a shift this is, and when it began, is impossible to pin down in a disaggregated, global ecosystem. There are plenty of different opinions. But everyone agrees software will play a much bigger role in the future than it has in the past.

“We’re entering an era of software-defined hardware,” said Charlie Janac, chairman and CEO of Arteris. “If I have a business model where I’m sharing a service with a self-driving car and I want to build software to enable that, I’m going to specify to the hardware supplier what the hardware looks like. Instead of the hardware architecture being created first, the software defines what the hardware does.”

Changing definitions
It’s important to note here that definitions of software-driven design vary greatly. The terms software-driven design, software-defined hardware, and software-defined networking have been used interchangeably for some time. All of the major EDA companies promote early software development using hardware prototypes, with the bulk of that work focused on drivers.

The new wrinkle in this scheme is a more tailored hardware spec based upon functionality, which may be defined in the software as well as the hardware, depending upon the application and the focus of the company developing that spec. It also can involve more iterations up and down the software stack and back through the hardware design, providing there is enough volume and price elasticity.

How this plays out for individual companies, or even individual chip designs within those companies, varies greatly. Still, most experts agree there is a lot of low-hanging fruit even for companies that don’t completely change their methodology.

“Different companies have different emphasis,” said Russell Klein, program director for Mentor Graphics’ Emulation Division. “What’s clear is that in the software space you can impact power far more profoundly than by doing things in hardware. But we also don’t see people taking full advantage of software in lowering power consumption. On the software side, developers have not had to worry as much about power in the past because they didn’t have that much control. Now that they need to meet the specs on power consumption we’re seeing more of them bring their software into emulation and FPGA prototyping.”
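
A trivial sketch of the kind of power decision Klein is describing: software gating a peripheral’s clock when it goes idle. The register address and bit position below are assumptions made up for the example, not taken from any real SoC’s clock-control block.

```cpp
#include <cstdint>

// Hypothetical clock-gating register; address and bit assignment are invented.
constexpr uintptr_t CLK_GATE_REG = 0x40001000u;

inline volatile uint32_t& clk_gate() {
    return *reinterpret_cast<volatile uint32_t*>(CLK_GATE_REG);
}

// The kind of decision that now lives in software: gate a peripheral's clock
// the moment it goes idle instead of leaving it free-running.
void uart_set_active(bool active) {
    constexpr uint32_t UART_CLK_BIT = 1u << 3;    // hypothetical bit position
    uint32_t reg = clk_gate();
    clk_gate() = active ? (reg | UART_CLK_BIT)    // ungate: clock the UART
                        : (reg & ~UART_CLK_BIT);  // gate: cut dynamic power
}
```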

He noted that the bulk of the activity is still happening with drivers, because they are hard to write and debug even under ideal conditions. But as those drivers are developed, more software can be layered on top of them in virtual prototypes, because it typically is isolated from the hardware by those drivers.

“As a driver writer, you really want to bring in a sample application because you need realistic simulation to wring out all the bugs,” Klein explained. He said that it’s still not possible to have a full model of the entire hardware design and develop software for that model, because it takes too long to load applications and run them. But he said there are still enough gains being made at lower levels to warrant the time required to improve the interaction between hardware and software.
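
In that spirit, a driver routine is typically paired with a small sample application that pushes it into its corner cases. Both halves of the sketch below are hypothetical (the fifo_write() routine and the FIFO registers are invented), but they show why realistic traffic matters: the burst is sized to hit the FIFO-full stall path that a trivial test would never reach.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical FIFO peripheral; addresses and bits are invented.
constexpr uintptr_t FIFO_BASE = 0x40030000u;

struct FifoRegs {
    volatile uint32_t status; // bit 0: FIFO full
    volatile uint32_t data;   // a write pushes one byte
};

inline FifoRegs* fifo() { return reinterpret_cast<FifoRegs*>(FIFO_BASE); }

// Driver entry point: push a buffer, waiting out back-pressure.
void fifo_write(const uint8_t* buf, size_t len) {
    for (size_t i = 0; i < len; ++i) {
        while (fifo()->status & 1u) { } // spin while full; the realistic stall path
        fifo()->data = buf[i];
    }
}

// Sample application: bursts long enough to hit the FIFO-full corner case,
// which is exactly what a driver test on a prototype needs to reach.
void sample_app() {
    uint8_t burst[256];
    for (size_t i = 0; i < sizeof burst; ++i) burst[i] = static_cast<uint8_t>(i);
    for (int round = 0; round < 100; ++round)
        fifo_write(burst, sizeof burst);
}
```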

Conclusion
While some of this focus on software first may seem like déjà vu, the reality is that change is underway across the ecosystem. Operating systems that were created for computers and even for mobile phones are bulky and slow, just as general-purpose processors no longer are considered the fastest and most efficient way to execute certain code.

Change is constant on all sides, but it is more apparent on the software side because the inefficiencies have only sporadically been addressed there. As those inefficiencies are ironed out, more changes in the hardware will be required. And as the software is more closely tied to specific functions, particularly with a very limited battery life hanging in the balance, the changes made in software will begin to have much greater impact on the hardware designs.

“The semiconductor industry is growing because all of this software needs to sit on something,” said eSilicon’s Soheili. “But we’re seeing deeper complexity than wide complexity, which is driven by different functions. It’s like comparing a general practitioner to a neurologist. Both are knowledgeable, but they deal with different types of complexity.”



2 comments

Karl Stevens says:

Has anyone considered hardware structured like OOP objects so that parameters are passed by value but the data is passed by reference?

Then the function can be done in either hardware or software, because the logic function to be done is what matters.
Compiling source code to run on a CPU is only one way to execute it, and also the slowest and most expensive.
The basic flow of a program is to do a compare, then enable one or more assignments, then repeat at the next compare. Designing hardware to do this more efficiently than a CPU is straightforward.
The important thing is that the logic function can be expressed as program flow and either run as a program or be implemented in hardware, because the modular structure of an OOP program is analogous to the modular structure of hardware.

garydpdx says:

Karl, designing in C/C++ with IEEE 1666 (the SystemC library and TLM-2.0 interface) will get you the design abstraction that you are describing. And for system-level design with hardware-software partitioning, moving across the boundary during design exploration was figured out at Polytechnique Montreal (the U de Montreal’s Polytechnical Institute) and is now at the heart of Space Codesign’s technology. Please visit our site and take advantage of the Resources section for background info and white papers, plus demos and presentations on our YouTube channel. (Also SlideShare, Twitter and LinkedIn.)
