Software Modeling Goes Mainstream

More chipmakers turn to software-hardware interaction for performance, power, security.


Software modeling is finally beginning to catch on across a wide swath of chipmakers as they look beyond device scaling to improve performance, lower power, and ratchet up security.

Software modeling in the semiconductor industry historically has been associated with hardware-software co-design, which has grown in fits and starts since the late 1990s. The largest chipmakers and systems companies were the first to adopt it, particularly after their customers began demanding that chips include drivers and other embedded code because of the growing complexity of SoCs. But beyond the biggest chipmakers, few other companies paid much attention until power and thermal issues began impacting scaling.

Several factors have changed this equation on the technology front:

• Mainstream process nodes are now 65nm and below, at which point leakage current becomes a bigger problem.
• More devices are mobile, where the primary power source is a battery.
• More chips are being used in heterogeneous systems where the overall power budget is tighter than in the past, in part because there is significantly more communication to the outside world and in part because there is much more data to process.

The initial strategy for dealing with these issues was to increase the number of cores, but there is a limit to how many cores are effective for most applications. In many cases, the optimal number is two to four cores, even with multithreading and an increased emphasis on parallel programming.

But the software also can be changed in terms of how it interacts with the hardware. It can be written to do more work per processor cycle, and it can be threaded across at least two cores, and sometimes as many as eight, particularly if some of the work can be partitioned into independent operations. In addition, the software can be made much more resilient and secure, provided it is developed earlier in the design cycle, and it can be cleaned up before it reaches the market rather than patched repeatedly post-production.
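As a rough illustration of that kind of threading, here is a minimal C++ sketch, with made-up data and a fixed worker count, that splits an independent summation across a small pool of threads. It is not drawn from any chipmaker's code; it only shows the structure described above.

```cpp
// Minimal sketch: fan an independent workload out across a few cores.
// The data, worker count, and the reduction itself are placeholders.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<uint32_t> samples(1 << 20, 3);   // stand-in for real input data
    const unsigned workers = 4;                  // two to four cores is often the sweet spot
    std::vector<uint64_t> partial(workers, 0);
    std::vector<std::thread> pool;

    const std::size_t chunk = samples.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const std::size_t begin = w * chunk;
            const std::size_t end = (w + 1 == workers) ? samples.size() : begin + chunk;
            partial[w] = std::accumulate(samples.begin() + begin,
                                         samples.begin() + end, uint64_t{0});
        });
    }
    for (auto& t : pool) t.join();               // wait for all workers

    std::cout << std::accumulate(partial.begin(), partial.end(), uint64_t{0}) << "\n";
}
```

Any C++11-or-later compiler with the platform's thread library (for example, -pthread on Linux) will build this.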

“The goal is to model how hardware and software interact,” said Frank Schirrmeister, group director for product marketing for Cadence’s System Development Suite. “That could involve reshuffling of sequences so caches do not reload as much, which can have an impact on power consumption.”
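A simplified example of that kind of reshuffling is reordering memory accesses so they walk cache lines sequentially instead of jumping across them. The matrix sum below is illustrative only and is not taken from Cadence's tools; both functions compute the same result, but the second touches memory in the order it is laid out, so each fetched cache line is fully used before it is evicted.

```cpp
// Illustrative only: the same computation ordered two ways over a
// row-major matrix stored in a flat vector.
#include <cstddef>
#include <iostream>
#include <vector>

constexpr std::size_t N = 1024;

// Column-by-column walk: consecutive accesses land N elements apart,
// so cache lines are evicted and refetched repeatedly.
double sum_cache_hostile(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t col = 0; col < N; ++col)
        for (std::size_t row = 0; row < N; ++row)
            s += m[row * N + col];
    return s;
}

// Row-by-row walk: consecutive accesses hit the same cache line,
// cutting memory traffic and the energy that goes with it.
double sum_cache_friendly(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            s += m[row * N + col];
    return s;
}

int main() {
    std::vector<double> m(N * N, 1.0);
    std::cout << sum_cache_hostile(m) << " " << sum_cache_friendly(m) << "\n";
}
```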

Modeling that interaction also can have a huge impact on the design process. Improving software can affect what kinds of resources are needed in a design, from memories to types of processors and hardware accelerators. It also can serve as a model for improving the hardware so that the software and hardware interact more efficiently.

“A lot of this is about system-level hardware-software tradeoffs,” said Mike Gianfagna, vice president of marketing at eSilicon. “Power continues to be a major concern for a lot of companies. They are doing a lot of analysis around power. There is a lot of interest in understanding how the software impacts the hardware, which is one reason why emulation is so popular these days.”

Gianfagna noted this appears to be helping because there are far fewer “fire drills” than in the past. “For the large designs, customers are very sophisticated. They are very clear about system requirements and they have crisp specifications on silicon-level power dissipation. That is driven by a lot of sophisticated hardware-software co-design, which is a byproduct of better analysis.”

Rethinking the system
Behind the focus on software is a shift toward more system-level design, which has been underway in both hardware and software for some time. In many cases, though, these have been parallel efforts rather than a combined one. The IoT is forcing some changes there, in large part because resources are limited and driving down the cost of many of these devices is critical. A key way to do that is to improve the efficiency of the systems themselves, allowing architects to substitute lower-cost microcontrollers for processors and to shift the mix of memory types.

“This is not about the coolest and fastest microcontroller. It’s about what you can do with it,” said Darrell Teegarden, business unit director for the cloud-based systems and analysis group at Mentor Graphics. “We’re seeing this in end markets where they don’t sell a jet engine anymore. They sell engine hours or thrust or power generation. There are higher and higher levels of platforms, and this is giving way to platform wars. Those platforms include compute hardware, sensor and actuator hardware, software to control those sensors and actuators, and software to connect to the stream of sensor data to analytic engines. So you need to keep the big picture in mind, and solve the individual pieces along the way.”

Cadence’s Schirrmeister agrees. “The brute force approach may work for certain applications, such as where you have limitations on power to transmit. But for other applications, it comes down to where in the chain the data processing is done. If you’re looking at the edge of the clouds to do computation, you may need a model of the software, a model of the hardware, or a model of the hardware and software together.”

That kind of analysis and hardware-software pathfinding has been available for some time, but widespread adoption is a more recent trend.

“There are many companies creating application workload models, where the purpose is to capture the processing and communication requirements of the application,” said Pat Sheridan, product marketing senior staff for virtual prototyping at Synopsys. “That includes parallelism of tasks such as dependencies, the processing cycles each task takes, and the memory accesses, which includes reads and writes and to what regions.”

One of the benefits of these kinds of tools is they can raise the level of abstraction so the functionality does not need to be modeled for the software. “The key is being able to separate the application workload requirements from the resources,” said Sheridan. “You can map the tasks to the resources, and you can play around with different CPUs and accelerators, all of which are represented by generic models.”
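As a minimal sketch of what such a workload model might look like, the code below defines hypothetical task and resource descriptions and produces a crude latency estimate for one mapping. None of the names or numbers come from Synopsys' tooling; they are placeholders that show how workload requirements can be kept separate from the generic resource models they are mapped onto.

```cpp
// Hypothetical workload model: tasks carry cycle counts and memory traffic,
// resources carry a clock rate and memory bandwidth, and a mapping yields
// a rough, serialized latency estimate. All numbers are made up.
#include <cstdint>
#include <iostream>
#include <string>

struct Task {
    std::string name;
    uint64_t cycles;         // processing cycles the task needs
    uint64_t bytes_read;     // reads to its mapped memory region
    uint64_t bytes_written;  // writes to its mapped memory region
};

struct Resource {
    std::string name;
    double clock_hz;         // clock rate of the generic CPU/accelerator model
    double bytes_per_sec;    // sustained bandwidth of the attached memory
};

// Crude estimate for one task mapped onto one resource. Real tools overlap
// compute and memory and model contention; this simply adds them.
double estimate_seconds(const Task& t, const Resource& r) {
    double compute = static_cast<double>(t.cycles) / r.clock_hz;
    double memory  = static_cast<double>(t.bytes_read + t.bytes_written) / r.bytes_per_sec;
    return compute + memory;
}

int main() {
    Resource cpu{"generic CPU", 1.0e9, 4.0e9};
    Resource accel{"generic accelerator", 2.0e9, 8.0e9};
    Task filter{"fir_filter", 2000000, 512000, 512000};

    std::cout << filter.name << " on " << cpu.name << ": "
              << estimate_seconds(filter, cpu) * 1e3 << " ms\n";
    std::cout << filter.name << " on " << accel.name << ": "
              << estimate_seconds(filter, accel) * 1e3 << " ms\n";
}
```

Swapping in different resource entries, or splitting a task across several of them, is the kind of what-if exploration Sheridan describes.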

IoT, security and safety
Tool vendors say demand for these kinds of tools initially was confined almost entirely to the mobile market. It has since spread to other markets, including automotive, IoT applications and servers.

Of particular interest in the IoT world is software modeling for security reasons.

“The software content has been driven down from the high end to the low end, and methodologies once reserved for the high end are now migrating to simpler devices,” said Bill Neifert, senior director of market development at ARM. “In the Internet of Things, this is being driven by security requirements. The software has to be robust enough to stop an attack.”

Security is a global issue in design. It affects every level of software, from the full communications stack to IP and I/O, as well as every hardware component, including the bus and memory both inside and outside of a chip.

“There are two ways to look at security,” said Mentor’s Teegarden. “One is that you don’t want people messing with your stuff. The other is that if you charge for thrust hours on a jet engine, you need to be able to reliably measure and track consumption of resources. That has to be secure. In this kind of scenario, block-chain technology can be important, whether it’s keeping track of jet engine hours or bitcoin. It becomes a currency of its own. Every platform has to enroll, provision and authenticate. Software is an important part of that because you’re dealing with analog/mixed signal for sensors and actuators, a control algorithm, and all of that requires modeling and simulation.”
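As a toy illustration of the metering side of that idea, the sketch below chains usage records together with a hash so that tampering with an earlier entry breaks the chain. The field names are hypothetical and std::hash is a stand-in only; a real deployment would use a cryptographic hash, signatures, and the enrollment and provisioning steps Teegarden mentions.

```cpp
// Toy hash-chained usage ledger. Not secure as written: std::hash is not
// cryptographic and records are unsigned; it only shows the chaining idea.
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct MeterRecord {
    std::string platform_id;  // the enrolled, provisioned device
    double units_consumed;    // metered resource (engine hours, kWh, ...)
    std::size_t prev_hash;    // link to the previous record
    std::size_t record_hash;  // hash over this record's fields plus prev_hash
};

std::size_t hash_record(const std::string& id, double units, std::size_t prev) {
    return std::hash<std::string>{}(id + ":" + std::to_string(units) + ":" +
                                    std::to_string(prev));
}

int main() {
    std::vector<MeterRecord> ledger;
    std::size_t prev = 0;
    for (double units : {1.5, 2.0, 0.75}) {      // hypothetical meter readings
        std::size_t h = hash_record("engine-42", units, prev);
        ledger.push_back({"engine-42", units, prev, h});
        prev = h;
    }

    // Verification: recompute every hash and check that the links line up.
    bool intact = true;
    std::size_t expected_prev = 0;
    for (const auto& rec : ledger) {
        if (rec.prev_hash != expected_prev ||
            rec.record_hash != hash_record(rec.platform_id, rec.units_consumed, rec.prev_hash))
            intact = false;
        expected_prev = rec.record_hash;
    }
    std::cout << (intact ? "ledger intact\n" : "ledger tampered\n");
}
```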

Security and efficiency cross paths in new ways in the IoT, and that intersection is showing up in a new breed of microcontrollers. Those microcontrollers are finding their way into more complex devices under extreme cost pressure, yet they require the same kind of attention to power and performance normally reserved for more complex CPUs, with far fewer resources available.

“The demand for software modeling for microcontrollers has scaled up radically in the last year,” said ARM’s Neifert. “You need to ensure you have the correct power/performance envelope and not waste cycles. Additionally, you still need to include the security aspects.”

Exactly how much efficiency software modeling and hardware-software co-design can add to a design isn't clear. One of the few data points in this area is a case study written by Samsung a decade ago, in which the company took a piece of hardware it had created and revamped the software to better utilize it. The result was an improvement in system performance of more than 50%, according to the paper. Much has changed since then, but the basic principles still apply.

Security is one side of the equation. As cars and medical devices are connected to the Internet, and to each other, safety is being modeled in software, as well.

“With safety, you’re building redundancy into everything, which can add 50% to the circuitry,” said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “In the past you would build a design, but now you need to decide what ASIL (Automotive Safety Integrity Level) version you’re going to use and match that with your cost and performance goals. This is truly a software-hardware interplay. You determine the right amount of safety and you back it up in hardware.”
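A simple way to see the cost side of that tradeoff is to tag which blocks in a design get a redundant copy and total the extra circuitry that choice implies. The block names and area figures below are hypothetical; the point is only that the safety level settled on during system-level analysis translates directly into hardware overhead.

```cpp
// Hypothetical area-overhead tally for a redundancy decision. Block names
// and sizes are made up; duplication here models a full redundant copy.
#include <iostream>
#include <string>
#include <vector>

struct Block {
    std::string name;
    double area_mm2;
    bool duplicated;  // true if this block gets a redundant (e.g. lockstep) copy
};

int main() {
    std::vector<Block> design = {
        {"cpu_cluster", 4.0, true},    // assumed safety-critical
        {"interconnect", 1.5, true},
        {"video_codec", 3.0, false},   // assumed outside the safety path
    };

    double base = 0.0, extra = 0.0;
    for (const auto& b : design) {
        base += b.area_mm2;
        if (b.duplicated) extra += b.area_mm2;  // cost of the duplicate copy
    }
    std::cout << "base area " << base << " mm^2, redundancy adds " << extra
              << " mm^2 (" << 100.0 * extra / base << "%)\n";
}
```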

Data centers and high-performance computing
Software modeling has become increasingly important in the data center, as well, driven by the need to process, store and retrieve skyrocketing quantities of data in the same or less time, while using the same or less power.

“We’re seeing a focus on workload optimization for very specific tasks involving big data,” said Cadence’s Schirrmeister. “That’s a combination of hardware and software modeling and smarter ways of partitioning things. There is a lot of discussion about using FPGAs for this, where you do hardware-software co-design without committing to the chip in advance. You can add a special data crunching algorithm and offload that into the hardware fabric. There’s also a lot happening around memory, where you re-architect the memory architecture. It’s all about how the data comes in and out, which you address in the modeling of the software.”

FPGAs have become an active area of research in the data center because they can be used as a custom acceleration platform, according to Steven Woo, distinguished inventor and vice president of marketing solutions at Rambus. “The advantage of FPGAs is that they have large amounts of memory and they fit into a rack-scale architecture. With larger data centers, there are thousands of servers, and there is better total cost of ownership if they match those with the compute tasks. With legacy systems, there was a fixed CPU, I/O, disk, and memory. That may be okay if you have a small number of servers, but when there are thousands of servers and the workloads don’t perfectly match, it’s very inefficient.”
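A back-of-the-envelope sketch of that partitioning decision appears below, with made-up profile numbers: offloading a data-crunching kernel to the FPGA fabric only pays off once the compute time saved outweighs the fixed setup cost and the time spent moving data across the link.

```cpp
// Hypothetical offload decision: compare keeping a kernel on the CPU with
// shipping its data to an FPGA and processing it there. All numbers are
// illustrative profile figures, not measurements.
#include <iostream>

struct Kernel {
    double cpu_seconds_per_gb;    // time to process 1 GB on the host CPU
    double fpga_seconds_per_gb;   // time to process 1 GB in the fabric
    double link_gb_per_sec;       // transfer bandwidth to the accelerator
    double offload_setup_seconds; // fixed cost: configuration, buffers, launch
};

double cpu_time(const Kernel& k, double gb) {
    return gb * k.cpu_seconds_per_gb;
}

double fpga_time(const Kernel& k, double gb) {
    return k.offload_setup_seconds + gb / k.link_gb_per_sec + gb * k.fpga_seconds_per_gb;
}

int main() {
    Kernel scan{0.80, 0.10, 12.0, 0.5};
    const double sizes[] = {0.5, 4.0, 64.0};
    for (double gb : sizes) {
        double c = cpu_time(scan, gb), f = fpga_time(scan, gb);
        std::cout << gb << " GB: CPU " << c << " s, FPGA " << f << " s"
                  << (f < c ? "  -> offload\n" : "  -> keep on CPU\n");
    }
}
```

At small data sizes the setup and transfer dominate and the work stays on the CPU; at larger sizes the fabric wins, which is the kind of workload-to-resource matching Woo describes.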

The inefficiency of mismatched servers and workloads is measured in millions of dollars per month in power and cooling costs. But tackling the hardware is only one side of the problem. By more closely matching the hardware with the software, and taking advantage of better interaction between the two, data centers can achieve enormous savings. That explains why companies such as Google and Facebook are developing their own hardware architectures. Google has its Tensor Processing Unit; Facebook is developing data centers based on the Open Compute Project.

Conclusion
Hardware-software co-design has been expected to become a mainstream design approach for nearly two decades. From a technology standpoint, it is now much more difficult to get enough performance and power benefits out of a new design without using this approach.

But there is another factor at play here, as well, one that is much harder to quantify. Inside many chip companies, software engineers now outnumber hardware engineers, and they have been in place long enough to be taken seriously and to influence which tools are bought to improve power, performance and cost.

“The goal is to decrease time to money, and one way you do that is to make sure all of the drivers are ready when the hardware comes back from the fab,” said NetSpeed’s Mohandass. “But it’s more than just supplying models. It’s understanding what the end goal is. Maybe it’s performance. But what are you trying to get out of your efforts?”

In this case, software models may be a means to an end, but they’re certainly being recognized more frequently as an important way of getting there.

Related Stories
Cars, Security, And HW-SW Co-Design (Part 2)
Standards are helping address some issues in concurrently designing hardware and software. More challenges are ahead as automotive electronics and cybersecurity issues enter into the equation.
Bridging Hardware And Software
The need for concurrent hardware-software design and verification is increasing, but are engineering teams ready?
Cars, Security, And HW/SW Co-Design (Part 3)
Hardware and software engineers are discovering the advantages of close collaboration, something that is critical in the automotive industry and other areas.


