Toward Continuous HW-SW Integration

Increased complexity and heterogeneity are prompting new methods that can avert surprises at the end of the design cycle.

Hardware is only as good as the software that runs on it, and as system complexity grows, that software is lagging behind.

The way to close that gap is to improve the methodology for developing that software in the first place. That includes making sure updates are verified and tested before being pushed out to devices, adding the same kinds of detailed checks that chipmakers have used to develop hardware in the past.

Trying to shift software development further left isn’t a new idea, of course. A number of approaches have been developed over the years to solve this problem. Agile methods such as pair programming, for example, attempt to reduce errors by having two developers work simultaneously on the same code. Continuous integration, meanwhile, addresses the problem from a different angle. In essence, code is checked into a shared repository or development branch continuously, and then verified by frequent automated builds so that problems surface early.

“More and more development teams are using continuous integration as a means to streamline the overall development process, and to avoid unpleasant surprises during the integration phases of development,” said Warren Kurisu, director of product management in the embedded division at Mentor, a Siemens Business. “The approach of model-based design supports the process by doing much of the work up front through simulation and automatic code generation.”

With model-based design, a digital twin is developed alongside the real machine, ideally from the initial concept. The approach also allows development teams to work in greater isolation, so that continuous integration applies at the system level and becomes a question of when the code is ready to integrate into the system.

“This approach encourages the model of continuous integration, since the methodology enables the validation of the design earlier, and allows developers to validate their code and test system configurations against the digital twin models,” Kurisu said. “As a simple example, consider Linux processes. This architectural design enables application developers to create applications on their desktop development systems, and as long as they adhere to the Linux programming model, these applications can be integrated in the final stage, or even loaded after a system has been deployed. If system architects require a more stringent separation model, they can use self-contained execution environments such as Linux containers or Docker containers. And this model is not just for Linux. Real-time operating systems include a process model that enables separation much like Linux processes.”
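A minimal sketch of that process model, assuming a hypothetical two-process application: the “application” and the “system” side run as separate Linux processes whose only coupling is a defined interface (here, a pipe carrying an agreed-upon message struct). The struct layout and values are invented for illustration.

```c
/* Two separate Linux processes coupled only by a defined interface:
 * a pipe carrying an agreed-upon message struct (hypothetical). */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

struct sensor_msg {            /* the agreed-upon interface contract */
    unsigned int id;
    float        value;
};

int main(void)
{
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: the "application" */
        struct sensor_msg m = { .id = 42, .value = 3.14f };
        close(fd[0]);
        if (write(fd[1], &m, sizeof m) != (ssize_t)sizeof m)
            return 1;                      /* speak only the interface */
        close(fd[1]);
        return 0;
    }

    /* parent: the "system" side of the integration boundary */
    struct sensor_msg m;
    close(fd[1]);
    if (read(fd[0], &m, sizeof m) == (ssize_t)sizeof m)
        printf("received id=%u value=%.2f\n", m.id, m.value);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Because the child adheres only to the interface, it can be developed and tested on a desktop and integrated, or replaced, later.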

Once the code that runs in processes or partitions is available, it can be included in the continuous integration stream. While this may seem like an obvious step, it can quickly get out of hand with a mix of heterogeneous components.

“For example, take the Xilinx UltraScale+ MPSoC, which integrates quad ARM Cortex-A53 cores, Cortex-R5 cores, and an FPGA fabric, with multiple power planes enabling functional separation,” Kurisu said. “One would expect multiple development teams to be writing code for this SoC—one team developing Linux on the application cores, another team developing safety applications on the real-time cores, and another implementing algorithms on the FPGA fabric. Architecturally, each of these application areas might communicate over a defined interface. Although each of these individual teams might use a continuous integration methodology to build the code that runs on their cores, the main part of the integration begins once all the parts of the system are available. Here again, the question of continuous integration is tied to the question of when code is ready for full-system integration.”
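In practice, the defined interface between such teams is often just a header file that everyone compiles against. Below is a hedged sketch of what that contract might look like; the name ipc_msg_t, the field layout, and the sender IDs are invented for illustration and are not taken from any Xilinx or ARM deliverable.

```c
/* ipc_interface.h -- a hypothetical shared "defined interface" header
 * that the Linux/A53, safety/R5, and FPGA teams all compile against. */
#ifndef IPC_INTERFACE_H
#define IPC_INTERFACE_H

#include <stdint.h>

#define IPC_VERSION 1u

/* Fixed-width types keep the layout unambiguous across the different
 * compilers and cores used by each team. */
typedef struct {
    uint32_t version;     /* must equal IPC_VERSION */
    uint32_t sender_id;   /* 0 = A53/Linux, 1 = R5/RTOS, 2 = FPGA */
    uint32_t command;
    uint32_t payload_len; /* bytes used in payload[] */
    uint8_t  payload[48];
} ipc_msg_t;

/* Each team can build and unit-test against this contract on its own;
 * full-system integration then tests the cores against each other. */
_Static_assert(sizeof(ipc_msg_t) == 64, "ipc_msg_t layout changed");

#endif /* IPC_INTERFACE_H */
```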

Underlying problems
Complexity has been growing steadily, in part because no one is quite sure what kinds of chips or functionality will be required across a wide swath of nascent markets, such as virtual/augmented reality, automotive, medical, industrial IoT and deep learning. A common approach has been to throw multiple processor types and functionality onto a chip, because that is a cheaper alternative than trying to put everything into a single ASIC, and then glue it all together with software.

“Final configurations are created based on the target market or customer’s requirements,” said Zibi Zalewski, general manager of Aldec’s Hardware Division. “Scalability of subsystems allows you to grow the size and complexity very quickly. It is not a problem to scale from dual- to quad-core these days, for example, but it may be an issue to catch up with the proper tools. In addition, the hardware part of the project is no longer the dominating element. The software layer is adding significant complexity to the project. So it’s not only about the number of transistors. It’s also about the target function itself.”

That has a direct bearing on overall system quality, which ultimately is a measure of the effectiveness of various methodologies used to create that system.

“The challenge when you look at quality is that it’s not just the quality of the chip,” said ARM’s director of models technology. “It’s the overall system. And that creates a segmentation problem. So there are things you care about in each design, but for automotive, industrial and enterprise computing those are all different. Some involve different hardware. Others involve different applications of the same hardware. And if you’re dealing with safety-related issues that require documentation, a solution in that space is about software processes as well as the underlying hardware. Quality is only as good as you can prove it to be.”

Minimizing the pain points
No matter how good the methodology, though, something always goes wrong. The goal here is to minimize the surprises at the end, but nothing will ever completely eliminate them.

“When you write applications, you really do have to run them on the real target because you might not have had the right memory allocation, or you might have binary-only libraries that are only available on an ARM, while a lot of people try to test embedded systems on an x86,” said the CEO of Imperas Software, whose simulator runs continuously underneath the code. “Jenkins, an open-source automation server, comes up often here. When the code is touched, it can be run on ARM, MIPS or Renesas, and you get a ‘Yes, it passed’ in a few minutes without any management of the jobs. With Jenkins you can say, ‘Here are my machines. I’ve got four machines you’re allowed to run the tests on,’ and other people can be sharing the resources. It can help even the simplest program. Even if you have a program of just one file, which is just an algorithm, it allows you to build a little bit of automation so that when you change it, it fully tests it for you and you’ve got records of it. Yes, you have to learn how a product like Jenkins works or how a simulator works, but for a few thousand dollars you can get an extremely efficient system for compiling, building, testing and validating, even on small projects.”
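At the single-file level, that “little bit of automation” can amount to keeping the algorithm and its self-test in one translation unit and letting the exit status report the result. Here is a sketch under that assumption; the saturating-add function and its test vectors are hypothetical, and any job runner that records exit codes, Jenkins included, could consume it.

```c
/* sat_add.c -- a one-file algorithm with its self-test built in.
 * The function and test values are hypothetical examples. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* saturating 8-bit add: clamps at 255 instead of wrapping */
static uint8_t sat_add_u8(uint8_t a, uint8_t b)
{
    uint16_t sum = (uint16_t)a + (uint16_t)b;
    return sum > 0xFF ? 0xFF : (uint8_t)sum;
}

int main(void)
{
    static const struct { uint8_t a, b, want; } cases[] = {
        {   0,   0,   0 },
        { 100, 100, 200 },
        { 200, 100, 255 },   /* must saturate, not wrap */
        { 255, 255, 255 },
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        uint8_t got = sat_add_u8(cases[i].a, cases[i].b);
        if (got != cases[i].want) {
            printf("FAIL: %u + %u -> %u, want %u\n",
                   (unsigned)cases[i].a, (unsigned)cases[i].b,
                   (unsigned)got, (unsigned)cases[i].want);
            failures++;
        }
    }
    return failures ? 1 : 0;   /* nonzero exit turns the job red */
}
```

The same binary can be compiled and run, natively or on a simulator, for each target the automation covers, with the exit code logged as the record of the run.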

The challenge is being able to test that software on an embedded target, and this is where a solid methodology comes into focus.

“The software we produce consists of tools that allow people to develop software,” said Allen Watson, MetaWare product manager at Synopsys. “For example, our customers take our tools and write their embedded software using them. But our tools are software themselves. There are some differences, but at the end of the day we are all writing software, and it’s pretty complex software. Within our group we do use continuous integration. We don’t see an alternative. We have multiple developers writing software for what ends up as one product, but they are all working on different things. Typically, they each work on their own piece of the software, do some local unit testing, and when they are ready they check it into the mainline development branch.”
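That local testing step can be as lightweight as a handful of assertions compiled and run on the developer’s desktop before anything touches the mainline branch. A minimal sketch, assuming a hypothetical clamp() function as the piece of software under test:

```c
/* A hypothetical pre-check-in unit test: clamp() stands in for the
 * developer's piece of the product. Any failed assertion aborts
 * with a nonzero exit status. */
#include <assert.h>
#include <stdio.h>

static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
    assert(clamp( 5, 0, 10) ==  5);   /* in range: unchanged  */
    assert(clamp(-3, 0, 10) ==  0);   /* below: clamped to lo */
    assert(clamp(42, 0, 10) == 10);   /* above: clamped to hi */
    puts("local unit tests passed");
    return 0;
}
```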

Not everyone is on board with this concept, though. Graham Bell, vice president of marketing at Uniquify, believes the focus should be on continual rather than continuous integration.

“Continuous means unceasing, without a break, while continual means recurring, with pauses between activities,” he said. “Certainly, integration efforts for embedded software go through a loop of integration-testing-revision until the feature creep ends or the bug count reaches sign-off level. That is where there is a pause in the process. This integration loop happens again and again as the design moves from a silicon virtual prototype to hardware prototype to final silicon to product in the hands of the consumer. Some might argue that these stages of integration are now overlapping, because various design teams need knowledge of the working design as early as possible to accelerate sign-off at their levels of abstraction. If this is the case, then maybe we can say these simultaneous efforts are leading to a continuous integration of embedded software.”

This is more than just hair-splitting over a definition. It affects the fundamental methodology, and companies continue to line up on both sides of the continuous-versus-continual divide.

Sanjay Pillay, CEO of Austemper Design Systems, believes that continuous integration is the only way to achieve optimal time to market. “Current complex SoCs and their development timelines cannot be realized using a serial development approach of hardware followed by firmware and software. Engineering groups now must include both hardware developers and embedded software designers, who often outnumber their hardware counterparts. They start on the project together and work side-by-side throughout the entire project cycle.”

But as the number of heterogeneous elements in a design continues to expand, continuous integration is just one more option for simplifying the development process.

“One could argue that today’s state of the art in both software and hardware would preclude the need for continuous integration,” said Mentor’s Kurisu. “But model-based design actually motivates the approach, and in fact creates leverage for a continuous integration methodology.”


