Multicore SoCs are creating complicated software architectures—and longer design cycles. Change is inevitable.
By Ed Sperling
As the number of processors and processor cores in an SoC increases, so does the amount of software. But unlike hardware, which grows linearly, software frequently grows exponentially.
The great advantage of software is configurability—both before and after tapeout—yet it adds many more possible permutations and interactions that need to be worked out. And unlike the old PC model, which had a processor, a single operating system and multiple applications that fed into the application programming interfaces of the OS, new SoC designs feature multiple processors, multiple OSes, software and firmware code below those OSes, as well as the usual applications and virtualization software on top. Moreover, they run all of this across multiple cores, in multiple hardware modes, and usually in conjunction with complex partitioning schemes.
As you might expect, this has become a nightmare for engineering teams, which now include both hardware and software engineers.
“One survey showed there is a 300% increase in projects taking more than 24 months to complete,” said Andrew Caples, senior product manager for Nucleus in Mentor Graphics’ Embedded Software Division. “There’s a lot more complexity. Even with something as straightforward as WiFi drivers, if they want to certify a WiFi part it adds on time for people who haven’t done it before to complete all the back-end testing. On top of that there’s a greater need for power management. There’s a huge investment on the hardware side for power-saving features, but they’re not useful unless the software takes advantage of them. We’ve seen code bloat from the power-saving features alone.”
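To make that last point concrete, consider a minimal sketch of what “the software taking advantage” of a power-saving feature can mean in practice. On a Linux-based SoC, the cpufreq framework exposes each core’s frequency governor through sysfs; the snippet below is illustrative only, and it assumes the standard sysfs path exists on the target. All it does is ask the kernel to favor a power-saving policy on one core, which is the kind of hook the silicon’s voltage and frequency scaling depends on.

```c
/* Illustrative sketch: a power-saving feature in silicon (DVFS) only helps
 * if software actually engages it. On a Linux-based SoC the cpufreq
 * framework exposes per-core governors through sysfs. */
#include <stdio.h>

static int set_cpu_governor(int cpu, const char *governor)
{
    char path[128];
    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);

    FILE *f = fopen(path, "w");
    if (!f)
        return -1;   /* core may be offline, or cpufreq may not be built in */

    fprintf(f, "%s\n", governor);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Ask the kernel to favor low power on core 0; the hardware's
     * voltage/frequency scaling does the rest. */
    return set_cpu_governor(0, "powersave");
}
```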
Rethinking software
The PC model, and even early smartphones, used a single OS. Current designs mix and match far more embedded software with the central OS, which could be Linux, Android, or Windows. But they also may run security features in isolation using an embedded operating system, or run certain features on an operating system that is considered part of a secure “trust zone.” This is particularly true for automotive and medical devices, where security depends on a limited body of executable code serving a very specific and narrowly defined purpose.
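As a rough sketch of how that isolation looks from the application side, the GlobalPlatform TEE Client API (used, for example, by OP-TEE on Arm TrustZone) lets normal-world code hand work to a trusted application running under the secure-world OS. The trusted-application UUID and command ID below are hypothetical placeholders, not taken from any real product.

```c
/* Sketch of the "trust zone" split: the rich OS asks a trusted application
 * in the secure world to perform a sensitive operation. UUID and command ID
 * are hypothetical. */
#include <stdint.h>
#include <tee_client_api.h>

#define TA_SIGN_UUID { 0x12345678, 0x0000, 0x0000, \
    { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } }  /* hypothetical */
#define CMD_SIGN_DIGEST 0                                  /* hypothetical */

int sign_in_secure_world(void)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_UUID uuid = TA_SIGN_UUID;
    TEEC_Operation op = { 0 };
    uint32_t origin;

    /* Connect to the secure-world OS running alongside Linux/Android. */
    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return -1;
    if (TEEC_OpenSession(&ctx, &sess, &uuid, TEEC_LOGIN_PUBLIC,
                         NULL, NULL, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return -1;
    }

    /* The key material never leaves the isolated environment; the normal
     * world only sees this narrow command interface. */
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_NONE, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    TEEC_InvokeCommand(&sess, CMD_SIGN_DIGEST, &op, &origin);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return 0;
}
```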
Until very recently, the focus has been just getting the software to work with reasonable performance. But with limited battery life in mobile devices and intense competition to win sockets, the onus is on design teams to develop hardware and software in sync and make the necessary tradeoffs at the architectural level to optimize power and performance.
“Both the hardware and the software need to be architected together because you’re trading off function, software, and resource-sharing,” said Marc Serughetti, director of product marketing for system-level solutions at Synopsys. “You need to figure out what goes into hardware, what goes into software, and how to optimize the SoC. Software can affect system performance, system power and overall cost, but if you start to design in the software later in the cycle you get into serious trouble.”
He noted that the initial assessment needs to include how many processors are needed, what functions will live in hardware versus software, how the software will be mapped across those processors, and what the API access will look like.
“The challenge is not in ARM processors running Linux,” Serughetti said. “It’s whether you execute software on a set of cores or use a hardware accelerator with a lot of performance and different software APIs. In the past, there were a lot of hard-coded accelerators. But if you want to trade off flexibility and speed, you may need to run an algorithm that’s still evolving. So then you have to go back to the tradeoffs for hardware and software.”
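A minimal sketch of that tradeoff, with entirely hypothetical function and structure names, might look like the following: probe for the accelerator at run time, and fall back to a software path that can keep pace with an algorithm that is still changing after tapeout.

```c
/* Sketch of the accelerator-versus-software tradeoff. The accel_* driver
 * entry points and the job descriptor are hypothetical; the stubs exist
 * only so the example compiles. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct codec_job {
    const uint8_t *in;
    size_t         in_len;
    uint8_t       *out;
    size_t         out_len;
};

/* Hypothetical driver entry points for a hard-coded accelerator block. */
static bool accel_available(void)                      { return false; /* probe the SoC in practice */ }
static int  accel_submit(const struct codec_job *job)  { (void)job; return 0; }

/* Software path: slower and more power-hungry, but updatable in the field. */
static int  sw_encode(const struct codec_job *job)     { (void)job; return 0; }

int encode(const struct codec_job *job)
{
    if (accel_available())
        return accel_submit(job);   /* fast, fixed behavior */
    return sw_encode(job);          /* flexible, tracks an evolving algorithm */
}
```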
Hardware or software?
Those tradeoffs are coming under new scrutiny, too. Complexity in software comes at a high cost in terms of performance and power.
“In the hardware world, we get trend information in nanoseconds,” said Kurt Shuler, vice president of marketing at Arteris. “That’s how fast we know when something has changed. If you rely on the kernel of an operating system, it’s making changes in milliseconds. So if you have to make quick decisions, you may have to do it in hardware.”
This is a pretty simple decision when it comes to a couple of processors. It’s a lot more difficult in an SoC that has dozens of processors, accelerators, more dark silicon to preserve battery life, and excess margin designed in as a buffer for multiple possible usage models.
It’s also more difficult in wearable devices, where heat is a major consideration. While users are comfortable with a warm phone in their pocket or hand, for example, they have a completely different reaction if they’re wearing it on their wrist or next to their head, as with Google Glass. Getting sufficient performance in that case may require some rather fancy engineering footwork, such as distributing processing around an SoC based upon heat profiles.
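One hedged illustration of that idea: on a Linux-based SoC, software can read the standard thermal-zone sensors in sysfs and steer work toward whichever cluster is currently cooler. The zone-to-cluster mapping below is an assumption made for the sake of the example; on real silicon it would come from the platform description.

```c
/* Sketch of heat-aware placement: read two thermal zones and pin this
 * process to the cooler CPU cluster. The zone-to-cluster mapping is a
 * hypothetical example. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static long read_temp(int zone)
{
    char path[64];
    long millideg = -1;

    snprintf(path, sizeof(path),
             "/sys/class/thermal/thermal_zone%d/temp", zone);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &millideg) != 1)
            millideg = -1;
        fclose(f);
    }
    return millideg;
}

int main(void)
{
    /* Assumed mapping: zone 0 covers cores 0-3, zone 1 covers cores 4-7. */
    int first_core = (read_temp(0) <= read_temp(1)) ? 0 : 4;

    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = first_core; c < first_core + 4; c++)
        CPU_SET(c, &set);

    /* Steer this process toward the cooler cluster before doing real work. */
    sched_setaffinity(0, sizeof(set), &set);

    /* ... run the compute-heavy loop here ... */
    return 0;
}
```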
“Global scheduling will include more than just a CPU,” said Shuler. “Right now that scheduling is very local. In an OS kernel, it may be a question of core number one, two or three, and when to start or stop threads. But everything is based on the CPU, even though you may want to run something on another CPU, a GPU or a DSP.”
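Frameworks such as OpenCL already give software an explicit way to target a GPU or DSP-class accelerator rather than defaulting to the CPU. The sketch below only selects a device; a true global scheduler of the kind Shuler describes would also weigh load, power, and thermal state before dispatching anything.

```c
/* Sketch: prefer a non-CPU compute device when one is exposed through
 * OpenCL, and fall back to the CPU otherwise. Device selection only;
 * kernel creation and enqueue would follow. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;

    /* Prefer a GPU or accelerator (e.g., a DSP exposed through OpenCL);
     * fall back to the CPU if neither is present. */
    if (clGetDeviceIDs(platform,
                       CL_DEVICE_TYPE_GPU | CL_DEVICE_TYPE_ACCELERATOR,
                       1, &device, NULL) != CL_SUCCESS &&
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU,
                       1, &device, NULL) != CL_SUCCESS)
        return 1;

    char name[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Dispatching kernel to: %s\n", name);

    return 0;
}
```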
How it’s built
These are complex problems, and given enough time and money, engineers can accomplish astounding feats. The problem is that market windows for consumer devices are short, and that’s where most of the volume is going. If software has to be developed in sync with hardware and optimized for that hardware and specific uses, then the only way to do that is to pre-develop integrated blocks of hardware and software.
Many chipmakers and engineers have billed subsystems alternately as the future and as hype about the future. While customized development frequently can be better optimized for power and performance, the process takes longer. That’s one of the reasons subsystems have existed in the military and aerospace markets for decades. But moving them into commercial production for consumer devices is new, and it’s a direction the big IP vendors consider inevitable.
“This will really facilitate code re-use,” said Mentor’s Caples. “It’s a big improvement in time, and accelerates the move from one design to the next.”
Subsystems, which include hardware and software developed in sync, have big implications for the entire value chain within designs. From an economic standpoint, being able to deliver solutions to specific markets in tighter timeframes has far-reaching implications for everything from where the perceived value is in designs to inventory management. But it also means the role of the operating system and various other software layers will have to change radically.
“The role of the operating system will be focused more on the application side as a way to abstract out software development,” said Serughetti. “But there’s also a layer below the operating system with device drivers that will need to be optimized in conjunction with the hardware. That’s so complex it no longer can be done at the end of the chain.”
Subsystems are a way of commercializing that change in manageable pieces. And while hardware engineers see the changes in design in terms of hardware complexity, it could well be the software that has the most direct impact on what their job entails in the future.