Where Is The Software For Shift Left?

Shift left requires abstractions on which early analysis can be performed. For architectural analysis, abstractions are required for both hardware and software.

Co-development of hardware and software has been a dream for a long time, but significant hurdles remain. Neither domain is ready with what the other requires at the appropriate time.

The earlier something can be done in a development flow, the less likely it is that problems will surface later, when they are more difficult and expensive to fix. Achieving this may require both tool and methodology changes, so that each process step has the necessary information earlier, along with tools to estimate the impact of potential changes. This approach generally is referred to as shift left, and it presents significant challenges when it comes to hardware/software co-design and integration.

Software integration traditionally happened after first silicon was available, but there is growing pressure to perform it pre-silicon. Post-silicon integration can lead to surprises, and at that point it is too late to modify the hardware. The result is costly software rework, which can significantly impact time to market, performance, and power.

This does not always mean that production software must be available during hardware development. It may be possible to define realistic software workloads so the hardware architecture can be optimized. Software from previous products or prototype software may exist, and while this isn’t necessarily the same software that ultimately will run on the hardware, it could be helpful for many hardware design tasks. This could include identifying traffic congestion or peak power, so that the power delivery network can be properly designed.
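
Such a proxy workload does not need to be production software at all. Even a simple traffic generator that approximates the expected access pattern can reveal which time window stresses the interconnect and power delivery network hardest. The C++ sketch below is a hypothetical illustration, and its burst sizes, idle gaps, and window length are invented placeholders for whatever the architects estimate about the real application:

```cpp
// Hypothetical proxy workload: emit DMA-like memory-traffic bursts that
// approximate the expected application, then report the busiest time
// window. All parameters are invented placeholders, not real estimates.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <random>

struct Txn { uint64_t cycle; uint64_t addr; unsigned bytes; };

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<uint64_t> addr(0, 1u << 20);
    std::map<uint64_t, unsigned> bytesPerWindow;   // window index -> traffic
    const uint64_t windowCycles = 1000;

    uint64_t cycle = 0;
    for (int burst = 0; burst < 500; ++burst) {
        cycle += 200 + rng() % 800;                // random idle gap
        for (int beat = 0; beat < 64; ++beat) {    // 64-beat burst
            Txn t{cycle + beat, addr(rng), 64};
            bytesPerWindow[t.cycle / windowCycles] += t.bytes;
        }
    }
    auto peak = std::max_element(
        bytesPerWindow.begin(), bytesPerWindow.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });
    std::cout << "peak window " << peak->first << ": "
              << peak->second << " bytes\n";
}
```

The numbers are beside the point. The value is that even a crude statistical stand-in lets peak-demand windows be located long before real firmware exists.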

In other cases, the real software already exists and the hardware is being designed specifically to accelerate that function. For example, video encoding is a critical task for streaming services, and because it runs at enormous scale, power saved on that one task translates into substantial efficiency gains. In the case of hardware for AI/ML, not only is software available before hardware, but it is almost certain the software will have advanced significantly by the time the hardware is ready.

All of these factors are causing companies to reassess their methodologies for hardware/software development and integration. An increasing number of tools are becoming available to help, but in some sense tool development suffers from the same problem. Hardware tool developers and software tool developers tend to be different companies, and they have yet to come together to solve some of these problems.

Whichever comes first, hardware or software, ways are needed to bring the two together virtually. “The way to get things early is by abstracting,” says Marc Serughetti, vice president of product management and applications engineering at Synopsys. “It’s no different than other domains. You’re abstracting because you can get to that information early, without knowing everything. The disadvantage of abstracting is you are removing information, and if that information ends up having an impact, you are losing that piece. That’s why these approaches can only work if you have a methodology that looks at it as being iterative and having milestones along the way.”

Availability of software
There are many types of software, and each has to be considered separately. “SoCs are loaded with micro-code,” says Jeff Roane, director of product management and marketing at Cadence. “This is software that’s developed by the semiconductor vendor, and it won’t be modified by the end customer. Application code rarely exists before the SoC is designed, with the exception of end-user application code. The Android software development kit (SDK) and iOS SDK are examples of those, and they’ve got the whole world developing apps. It is their job to make sure their SoCs are 100% compatible with the body of work, the legacy work, and the new development that’s taking place. It’s that application code for a new SoC where there is a chicken-and-egg problem.”

This is where a multi-pronged strategy often is required. “Software/hardware co-design is often practiced, in which both the software and hardware evolve iteratively,” says Andy Nightingale, vice president for product management and marketing at Arteris. “Collaboration between hardware and software teams ensures the software is ready, or adaptable, by the time the hardware is finalized. In many cases, early-stage code bases that focus on core functionalities can be made available. Alternatively, test software can be designed to mimic realistic workloads and benchmarks.”

There are other approaches being taken in some industries. “Within the automotive industry, they are trying to separate software to make it independent of the hardware,” says Synopsys’ Serughetti. “There always will be some software that will be dependent on the hardware, but you can start separating those pieces. Shift left starts by asking what can be done to start developing the application software independently of the hardware. For lower-level software, it’s how to start developing that without the hardware. This is where the concepts of virtual prototyping are playing a role. One of the concepts in shift left is to avoid big-bang integrations when two things come together. As you go toward this continuously iterative process of software development, of validation, of verification, you start validating and verifying around different aspects.”
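
One common way to achieve that separation is to write application code against a narrow hardware abstraction that can be backed by a virtual prototype before silicon exists and by a real driver afterward. The C++ sketch below is illustrative only; the sensor interface is invented, not any vendor’s API:

```cpp
// Illustrative hardware-abstraction layer: the application depends only on
// this interface, so it can run against a virtual-prototype backend before
// silicon exists and against real hardware later, unchanged.
#include <cstdint>
#include <iostream>

struct ISensor {                        // hypothetical device interface
    virtual ~ISensor() = default;
    virtual uint32_t readTemperature() = 0;
};

struct VirtualSensor : ISensor {        // backend for pre-silicon work
    uint32_t readTemperature() override { return 25; }  // modeled value
};

// A RealSensor backed by memory-mapped registers would be swapped in once
// hardware arrives, with no change to the application code below.

void application(ISensor& s) {          // hardware-independent logic
    std::cout << "temp = " << s.readTemperature() << "\n";
}

int main() {
    VirtualSensor model;
    application(model);                 // same call works on either backend
}
```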

The earlier the integration can start, the more valuable it becomes. “Shifting left, and having things available in a software environment, allows the system designer or the system architect to be able to anticipate problems earlier,” says Nilesh Kamdar, senior director and portfolio manager for the RF/microwave, power electronics & device modeling EDA businesses at Keysight. “There may still be things that escape, where an issue gets observed much later on, but the opportunity exists for a lot of things to be addressed and found much earlier. For wireless communications, there’s code development while the hardware is being developed. It doesn’t always get completely finished, with some code development continuing later, but a lot of this happens earlier.”

In other industries, this alignment is not yet in place. “The availability of software during the design phase remains a major issue,” says Andy Heinig, head of efficient electronics in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “The purpose of the digital twin is to develop the software in parallel with both the design and manufacturing processes. However, often only software for the older generation of hardware is available, which serves as a starting point but does not consider new features and requirements.”

The ultimate goal is for hardware and software to influence each other so that the best solutions can be deployed. That means making abstractions of the hardware available earlier for software development, and establishing software development practices that make the important pieces of the software available early in hardware development.

Hardware for software development
No single approach provides a perfect solution. As with all forms of abstraction, you give up some combination of accuracy, visibility, and performance. “What’s needed is a combination of cloud-native simulations, an extensive model library, integrated tool chains, and hybrid (virtual/RTL) environments, creating a highly efficient and effective ecosystem for modern hardware-software development,” says Jeff Hancock, senior product manager at Siemens EDA. “It includes merging virtual and hybrid models together. This holistic approach not only accelerates the development process, but also enhances the quality and performance of the final product. It allows your software team to get started months, or possibly even years, ahead of where they would have been in the past. This helps customers get to market faster, and with better quality.”

Virtualization can provide the highest performance. “In the automotive industry, SOAFEE, a collaboration involving more than 140 members in the automotive and software ecosystem, has worked to define an architecture to support in-cloud development using modern virtualization technologies,” says Robert Day, director of automotive go-to-market in North America for Arm. “It enables seamless deployment to the vehicle. This approach allows automakers to port the same software across different hardware platforms and ensures that software can be tested before the hardware is available.”

Many hardware development teams start by defining a high-level model. “If you have a virtual model — a digital twin — you have a high-speed model that’s written in C++ and you can do some level of software development on it,” says Cadence’s Roane. “It comes down to how fast you can simulate enough patterns to do anything meaningful. Applications that manipulate images or audio streams tend to take millions of cycles. You get to a point where you can only do so much, even if you’re developing C models. Those models aren’t fast enough to do any meaningful software development.”
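
Such a model is often little more than a functional register map. The minimal sketch below, with invented register offsets, shows the level of abstraction involved; it is fast precisely because it models no timing, and limited for the same reason:

```cpp
// Minimal sketch of a loosely timed C++ peripheral model: a functional
// register map with no cycle accuracy, sufficient for early software
// bring-up. The register offsets are invented for illustration.
#include <cstdint>
#include <iostream>
#include <string>

class UartModel {
public:
    static constexpr uint32_t DATA   = 0x0;   // write: transmit a byte
    static constexpr uint32_t STATUS = 0x4;   // read: TX-ready flag
    uint32_t read(uint32_t offset) {
        return offset == STATUS ? 1u : 0u;    // the model is always ready
    }
    void write(uint32_t offset, uint32_t value) {
        if (offset == DATA)
            std::cout << static_cast<char>(value);  // byte leaves the "wire"
    }
};

int main() {                                   // driver-level code under test
    UartModel uart;
    for (char c : std::string("hello\n"))
        if (uart.read(UartModel::STATUS) & 1)
            uart.write(UartModel::DATA, c);
}
```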

That only provides confidence in functional correctness. “Another solution is mapping the design to an FPGA,” adds Roane. “That can run north of a megahertz. The challenge is if you have a design that’s really targeted for an ASIC or IC process node, then mapping that to an FPGA is a completely new and different design, because now you’re mapping to different structures — complex logic blocks, as opposed to base-level cells. The timing is completely different. You’ve got to redesign that whole thing just to put it into an FPGA.”

In addition, mapping to an FPGA can limit visibility. “For enhanced visibility, or meaningful numbers for performance and power, you’re going to need the actual RTL that runs on an emulator,” says Serughetti. “This can be slow. Then you ask, ‘Do I really need to have the Arm processor in the emulator? I could put this in a virtual platform.’ The hybrid world enables you to look at your development process as being iterative as the software grows. It also enables you to cut the problem into pieces. When people talk about shift left, there are two elements. One is starting earlier. The other is the ability to do it piecemeal. To bring up an operating system, you don’t need a full SoC. To develop a driver for the GPU, you just need the GPU and maybe some host core on the other side.”

The other side to this is what hardware can provide to enable software to be more aware of things like performance and power. “I don’t know that it’s being done yet,” says Roane. “At what stage is feedback being given to the software developer on the power consumption characteristics of this device that they’re developing software for? Software developers interact with a debugger that provides a purely functional view. They are not used to looking at hardware structures. And they are not used to looking at power consumption profiles for those hardware structures. There’s a team approach when it comes to resolving the issues. The software developer is still focused on functionality, and also performance of that functionality. But when it comes to optimizing for power, it’s still the hardware team.”

This is where services like continuous integration and continuous deployment (CI/CD) provide value. “They enable developers to manage a continuously evolving software workload,” says Arm’s Day. “CI/CD pipelines help track software changes and ensure they do not negatively impact performance or power consumption. These services can provide automated testing and deployment, maintaining the integrity of the software throughout the lifecycle of both the software and the vehicle.”

CI/CD provides a valuable feedback loop. “By regularly integrating software changes into test environments, you can assess the impact on performance and power,” says Arteris’ Nightingale. “Regression runs can identify changes against established baselines. This provides an important guide for software development, ensuring it aligns with hardware constraints.”
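
Such a gate can be as simple as comparing fresh benchmark measurements against stored baselines and failing the pipeline when a threshold is crossed. The sketch below is a hypothetical illustration; the metric names, baseline values, and 5% threshold are all invented:

```cpp
// Sketch of a CI/CD regression gate: compare new benchmark results against
// stored baselines and fail the pipeline on a significant slip. The metric
// names, values, and threshold are illustrative, not from any real project.
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, double> baseline = {
        {"frames_per_second", 60.0}, {"avg_power_mw", 350.0}};
    std::map<std::string, double> current = {     // from the latest test run
        {"frames_per_second", 56.0}, {"avg_power_mw", 352.0}};

    bool failed = false;
    for (const auto& [metric, base] : baseline) {
        double delta = (current[metric] - base) / base;
        bool worse = (metric == "frames_per_second") ? delta < -0.05
                                                     : delta > 0.05;
        if (worse) {
            std::cerr << metric << " moved " << delta * 100
                      << "% from baseline\n";
            failed = true;
        }
    }
    return failed ? EXIT_FAILURE : EXIT_SUCCESS;  // non-zero fails the gate
}
```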

What is important is that you understand the objectives for each solution. “There is no silver bullet, no panacea, where you can model everything, create a virtual twin that will run at the gigahertz speed and give you complete visibility into everything,” says Roane. “These are all solutions that get you part way down the path. But there’s a reason why you’re designing that silicon. That’s the thing that’ll give you enough juice to run everything.”

Software for hardware development
Different levels of software need to be available for each hardware development task. “At the architecture exploration phase you can often use software that’s represented by your previous generation,” says Serughetti. “Alternatively, software can be represented by a task graph. This is not the actual functional behavior of the software, but a representation of the events that are happening in the software. What you’re trying to do is optimize your interconnect. You’re trying to optimize the memories. And for this you don’t need all of the software.”
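
A task graph in this sense captures only events, dependencies, and traffic volumes, not functional behavior. A minimal, hypothetical sketch of the idea:

```cpp
// Sketch of a task-graph workload abstraction: nodes carry only the traffic
// they generate and their dependencies, which is enough to exercise an
// interconnect or memory model. All numbers are invented placeholders.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Task {
    std::string name;
    uint64_t computeCycles;      // delay before the task's traffic appears
    uint64_t bytesToMemory;      // traffic injected when the task completes
    std::vector<int> deps;       // indices of prerequisite tasks
};

int main() {
    std::vector<Task> graph = {  // tasks listed in topological order
        {"capture", 1000, 4096, {}},
        {"filter",  5000, 4096, {0}},
        {"encode",  9000, 1024, {1}},
    };
    // Naive replay: each task starts once all of its prerequisites finish.
    std::vector<uint64_t> done(graph.size(), 0);
    for (size_t i = 0; i < graph.size(); ++i) {
        uint64_t start = 0;
        for (int d : graph[i].deps) start = std::max(start, done[d]);
        done[i] = start + graph[i].computeCycles;
        std::cout << graph[i].name << " emits " << graph[i].bytesToMemory
                  << " bytes at cycle " << done[i] << "\n";
    }
}
```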

When it comes to power optimization, more accurate representations of software are required. “If you want to do power optimization, you have to be optimizing for realistic or real workloads, which means you have to go to the software team,” says Roane. “In the past, people tried to develop vectors that represented those meaningful workloads. Given the amount of software and how critical it is, it is far better just to go to the source of those workloads. It’s much more realistic than any test an SoC designer would concoct to wiggle the gates in the design.”

It is important for teams to continually assess their processes to work out if better solutions may exist. “If you go back 10 years, electromagnetic simulations were very expensive in terms of compute times,” says Keysight’s Kamdar. “You could only run them on a single computer and take up all the memory. This impacted the pace at which hardware decisions were made, and it impacted the accuracy that designers had to live with. Then came multi-threaded CPUs and hyper-threaded options, and then came massively parallel compute options. Instantly you are able to get answers faster. Now you have more predictive ways to do hardware design. It makes you realize that previously you were cutting down the problem to something that you could manage. Now you can handle much bigger problems.”

Today, many problems have to be cut down to remain manageable. “If you have a workload, it probably spans billions of cycles,” says Roane. “You are not going to throw billions of cycles at an optimization engine and say, optimize for this workload. There’s not enough time. The challenge becomes how to pick a meaningful subset of that software and use that to drive optimization. What you’re doing is saying, ‘I am going to design my power supply, my power grid, specifically to accommodate this worst case. This is only possible with a subset.'”
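
Selecting that subset is essentially a windowed search over a long activity trace. The sketch below illustrates the idea on a synthetic trace, keeping the fixed-length window with the highest total activity as the worst case to design against:

```cpp
// Sketch of worst-case subset selection: slide a fixed-length window over a
// long per-cycle activity trace and keep the interval with the highest
// average, which then drives power-grid design. The trace is synthetic.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> powerPerCycle(1'000'000, 1.0);  // baseline activity
    for (std::size_t i = 600'000; i < 600'500; ++i)     // injected hot spot
        powerPerCycle[i] = 8.0;

    const std::size_t window = 500;                     // cycles to extract
    double sum = 0.0, best = 0.0;
    std::size_t bestStart = 0;
    for (std::size_t i = 0; i < powerPerCycle.size(); ++i) {
        sum += powerPerCycle[i];                        // rolling sum
        if (i >= window) sum -= powerPerCycle[i - window];
        if (i + 1 >= window && sum > best) {
            best = sum;
            bestStart = i + 1 - window;
        }
    }
    std::cout << "worst-case window starts at cycle " << bestStart
              << ", average power " << best / window << "\n";
}
```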

Process first
One thing is clear — this is primarily a methodology issue. The big bang that used to happen when software and hardware came together is no longer viable. An iterative approach is required, where both the hardware and software teams have to coordinate to ensure each is meeting the needs of the other.

It all starts with the specification. “The virtual prototype is a representation of something,” says Serughetti. “Even just a few years ago, most companies didn’t have a written specification. It was very hard to get started on the virtual prototype. It’s a process problem when a specification is written only once something is available. Many companies decided they needed to have better specifications up front. That doesn’t mean the specification is fully right. But the advantage of having a virtual prototype is that I can validate if the specification is correct. There are assumptions being made in the specification. But how do I validate they are right? With the virtual prototype, the first phase is actually clarifying the specification, ensuring a common understanding between the hardware and software folks and the person who wrote the specification.”

This is a very real problem, and one that needs to be addressed. The solution involves a methodology, along with the tools and techniques to support it. It also is necessary for the makers of those tools to work more closely with their counterpart companies, because the solutions being provided today do not fully address the issues.

Related Reading
Shift Left Is The Tip Of The Iceberg
A transformative change is underway for semiconductor design and EDA. New languages, models, and abstractions will need to be created.


