Hardware-Software Rift Persists

Chip and software engineers are supposed to be speaking the same language, but reality isn’t quite so simple.


Last month Semiconductor Engineering published an article about power optimization and the roles of the hardware and software teams in reducing energy consumption. The article portrayed the hardware team as adding a wealth of power-reduction capabilities while the software team failed to make full use of them.

That article made the rounds in a couple of LinkedIn forums populated primarily with embedded software engineers. They were quick to respond and to point out some things that could help the hardware industry understand the problems and form better ties between the two groups. The big question is, “Are the two teams working together or is the divide as large as it has ever been?”

To find the answer, it is sometimes necessary to read between the lines and to understand the complete team dynamics. “It’s not about the software or the hardware, it’s about the system,” said Jon McDonald, technical marketing engineer for design and creation at Mentor Graphics.

It is at the system level that many of these decisions are being made, and by system we are not talking about high-level models of the hardware.

“More functionality is moving into the software part of a system, but that doesn’t make the hardware design decisions less important,” points out Tom De Schutter, product marketing manager at Synopsys. “It does, however, mean that the two pieces—hardware and software design—cannot happen completely out of context of each other.”

While this is true, it does not mean that the two teams have to get closer together. “In order to enable the development of the two pieces to happen in parallel, operating systems and frameworks are being used to abstract the hardware from the software,” says Mike Thompson, senior product marketing manager for ARC Processors and Subsystems at Synopsys.
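
As a rough illustration of that abstraction, consider a minimal hardware abstraction layer for power control. This is only a sketch; the interface, type, and function names here are hypothetical rather than drawn from any particular operating system or framework.

```c
/* power_hal.h -- hypothetical power-management abstraction layer.
 * Application and middleware code calls only these functions; each
 * board or SoC port supplies its own implementation underneath. */

typedef enum {
    PWR_MODE_RUN,      /* full speed                          */
    PWR_MODE_IDLE,     /* core clock gated, peripherals live  */
    PWR_MODE_SLEEP,    /* RAM retained, most clocks off       */
    PWR_MODE_OFF       /* power removed, state lost           */
} pwr_mode_t;

/* Request a low-power mode; the port decides how (or whether) to honor it. */
int pwr_set_mode(pwr_mode_t mode);

/* Let a driver veto modes it cannot survive (e.g. during an active DMA transfer). */
int pwr_register_constraint(pwr_mode_t deepest_allowed, const char *owner);
```

Because the application only ever sees pwr_set_mode(), the silicon team can add deeper sleep states underneath without forcing an immediate rewrite of the code above the layer.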

Software teams have been larger than hardware teams for a long time, and reuse has long been part of both disciplines. A smartphone, for instance, contains millions of lines of code, and the goal is to reuse as much of that code as possible whenever a new hardware platform becomes available. Add in the operating system and middleware, each of which may follow a different reuse cycle, and we start to see where many of the problems lie. These disconnected update cycles are one reason it can take considerable time before hardware features, such as power reduction, get used effectively.

In the hardware world, IP represents a large portion of the system and, most of the time, every effort is made to ensure there are no changes made to that IP. Making any change, no matter how small, would complicate the verification process and reduce the value of reuse.

The same is true for the software. Just because a new feature, such as power-reduction circuitry, is added to the hardware does not mean that existing software modules will be updated to use it immediately. Making changes to stable code could have disastrous consequences, so these software updates have to be scheduled well in advance.

As much as hardware features may drive software features, the reverse is also true. “As the basic software stack stabilizes, it creates opportunities to optimize the hardware underneath the abstraction layer at which software operates,” says Pranav Ashar, chief technology officer at Real Intent. “We have seen, for example, non-parallel software written in the late ’80s that can still be compiled with adequate optimality on CPUs of recent vintage even though the CPUs themselves are unrecognizable from their precursors. Similarly, as new application platforms develop and as new architectures get traction, new software stacks will be developed and there will be initial developmental challenges leading to subsequent hardware optimization challenges. Hardware and software alternate in terms of posing challenges.”

Rob Neff, a software engineer at RF Ideas, responded to the original article. He wrote, “We contemplated using the ARM sleep or hibernate functions, but that involved more software work and reliability testing than my boss was willing to put in. Instead, we just turned the device off entirely and rebooted on power-up. The same would be true for changing the clock speed on the fly — a team needs to reduce risks and make sure the whole multi-tasking system can handle those changes.”
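
Neff’s trade-off is easy to picture in code. The sketch below contrasts the two approaches on a Cortex-M-class part; the __WFI() intrinsic is standard CMSIS, but the board_* functions and the wake-up configuration are hypothetical placeholders for whatever a real product would do.

```c
#include "cmsis_compiler.h"   /* provides __WFI(); exact header varies by vendor pack */

/* Hypothetical board-support hooks, declared here only to keep the sketch self-contained. */
extern void board_cut_power(void);            /* drive the supply enable pin low */
extern void board_config_wakeup_sources(void);/* e.g. RTC alarm, wake-on-GPIO     */
extern void board_restore_clocks(void);       /* re-lock PLLs, re-init peripherals */

/* Option 1: the low-risk path Neff describes -- no sleep logic at all.
 * Cut power completely and rely on a clean reboot when power returns. */
static void enter_off_state(void)
{
    board_cut_power();
    /* never returns; the next event is a cold boot */
}

/* Option 2: use the core's sleep support. Cheaper to wake from, but every
 * driver and task must tolerate clocks stopping and restarting, which is
 * the extra software work and reliability testing he mentions. */
static void enter_sleep_state(void)
{
    board_config_wakeup_sources();
    __WFI();                   /* wait-for-interrupt: the core halts here */
    board_restore_clocks();
}
```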

Others decry the lack of tools that would help the software team understand power and optimize their code accordingly. While the EDA industry is looking to virtual prototypes to fill this role, that appears to be another disconnect with the software community, which wants development systems built on real hardware.

Johan Dams, CTO of WRD Systems, wrote, “With systems that are power-sensitive, you build a development system first where every single power domain is accessible for measurement. This has to be part of the design.”

Dams went on to describe a system his team had put together that contained multiple ARM cores with variable frequencies and onboard peripherals such as a VPU, IPU, NEON unit, and GPU core. “Every single software routine was monitored for power overhead. Optimizations for speed were compared to the impact they had on power.” His conclusion was, “We knew exactly where to optimize, what impact the software routines had, and even where the hardware could be improved.”
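
A hedged sketch of that kind of instrumentation is shown below. The energy-counter calls and the domain identifier are hypothetical stand-ins for whatever read-back the measurement hardware on such a board actually exposes; the point is only to show per-routine power accounting wrapped around normal code.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical board-support hooks: start and stop an energy counter on one
 * separately measurable power domain and read back the microjoules consumed
 * between the two calls. */
extern void     board_energy_counter_start(int domain);
extern uint64_t board_energy_counter_stop_uj(int domain);

/* Wrap a routine with an energy measurement so speed optimizations can be
 * compared against their power cost, as Dams describes. */
#define PROFILE_ENERGY(domain, name, call)                        \
    do {                                                          \
        board_energy_counter_start(domain);                       \
        call;                                                     \
        uint64_t uj = board_energy_counter_stop_uj(domain);       \
        printf("%s consumed %llu uJ on domain %d\n",              \
               (name), (unsigned long long)uj, (domain));         \
    } while (0)

/* Example use, with a hypothetical DSP routine and domain number:
 *   PROFILE_ENERGY(2, "fir_filter", fir_filter(buf, len));
 */
```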

While the hardware and software teams may not really be getting closer together, or speaking the same language, it is clear that systems depend on both to be successful. Moreover, the tradeoffs between what is implemented in each are being considered far more than in the past. At the end of the day, decisions have to be made about where time and money are spent, and ensuring those decisions are aligned between hardware and software may be the most important system development function.


