Inaccurate Assumptions Mean Software Issues

Embedded software development is no longer a one-dimensional problem, and tools need to adapt.


It doesn’t seem that long ago that features and functionality were being added to next-generation processors and SoCs ahead of demand.

Actually, I recall that when new processors were released, embedded software developers were forced to think of innovative ways to exploit the new features in order to differentiate their products and avoid being left behind.

Today, in many respects it seems as if the tables have turned – and in a BIG way. Software engineers now have an insatiable demand for more and more compute power. How many cores can be stuffed into a silicon package? Autonomous driving, artificial intelligence, and 5G are just a few of the applications for which software developers are pushing (actually, demanding) semiconductor providers to deliver additional compute capability.

The vision for these vertical markets appears to be so well laid out that it is as if software engineers have already consumed not only the current capacity of the multicore SoCs available today, but also the planned capacity of the SoCs that will be available in the foreseeable future. Transformative shifts, such as the move from Classic AUTOSAR to Adaptive AUTOSAR, in which distributed computing will give way to centralized computing in the vehicle, are forcing semiconductor providers to deliver the equivalent of embedded supercomputers just to keep up with next-generation systems.

For embedded developers, the challenge will be optimizing existing systems through silicon consolidation and code refactoring while also generating the code needed for next-generation functionality. And it is not as if the existing systems being consolidated were not complex – they were – just ask the software engineer responsible for software integration.

As more cores are added to SoCs and software is consolidated, software developers will be challenged to optimize the system. After all, more cores provide more options for distributing processes, tasks and threads. I guess one can reach deep to recall the formula for combinations:

nCr = n! / (r! (n - r)!)

Without going through the math, trust me, the different combinations can grow quickly… into the thousands depending on the number of cores available and the software tasks to be mapped.
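
For anyone who would rather check than trust, here is a minimal sketch in C (my own illustration; the task and core counts below are invented, not data from any project) that simply evaluates the combinations formula above for a few hypothetical cases:

#include <stdio.h>

/* Evaluate nCr = n! / (r! (n - r)!) incrementally so the
 * intermediate values stay exact and do not overflow early. */
static unsigned long long choose(unsigned n, unsigned r)
{
    if (r > n)
        return 0;
    if (r > n - r)
        r = n - r;                          /* C(n, r) == C(n, n - r) */
    unsigned long long result = 1;
    for (unsigned i = 1; i <= r; i++)
        result = result * (n - r + i) / i;  /* exact at every step */
    return result;
}

int main(void)
{
    /* Hypothetical examples: n schedulable tasks, r grouped per core. */
    const unsigned examples[][2] = { {12, 4}, {16, 4}, {20, 8}, {24, 8} };
    for (unsigned k = 0; k < sizeof(examples) / sizeof(examples[0]); k++)
        printf("C(%2u, %u) = %llu\n",
               examples[k][0], examples[k][1],
               choose(examples[k][0], examples[k][1]));
    return 0;
}

Even these small, made-up numbers run from 495 combinations for C(12, 4) to well over 700,000 for C(24, 8).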

However, the issue will not be the mapping itself; it will be testing the mapping. Verifying that the right task(s) are mapped to the right core(s) will require sophisticated tools: tools that can take into consideration task dependencies, memory access burdens, channel overhead, software dependencies and more. This is not a zero-sum game.

Moving a task from one core to another can add latency associated with memory overhead, channel burden, variable access, and many other dependencies. The result can be performance problems, missed events, race conditions and more.
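
To make that concrete, here is a deliberately crude sketch built on a cost model I have invented purely for illustration – a fixed remote-memory penalty and a flat inter-core channel cost – with every number made up. A real analysis would need measured figures and a far richer model:

#include <stdio.h>

#define NUM_TASKS 4

/* Hypothetical cost model: each task has a base execution time, an extra
 * memory-access cost if it runs away from the core that holds its data,
 * and a channel penalty for every dependency that crosses cores.
 * All values are illustrative microseconds, not measurements. */
struct task {
    unsigned base_us;        /* compute time                          */
    unsigned remote_mem_us;  /* extra cost if data is on another core */
    int      data_home;      /* core whose local memory holds its data */
    int      depends_on;     /* index of producer task, -1 if none    */
};

static const unsigned CHANNEL_US = 15;   /* assumed inter-core message cost */

static unsigned task_latency(const struct task *t, const int *core_of, int self)
{
    unsigned us = t[self].base_us;
    if (core_of[self] != t[self].data_home)
        us += t[self].remote_mem_us;                 /* memory overhead */
    int dep = t[self].depends_on;
    if (dep >= 0 && core_of[dep] != core_of[self])
        us += CHANNEL_US;                            /* channel burden  */
    return us;
}

int main(void)
{
    struct task tasks[NUM_TASKS] = {
        { 100, 40, 0, -1 },   /* t0: producer, data on core 0 */
        {  80, 30, 0,  0 },   /* t1: consumes t0              */
        {  60, 20, 1, -1 },   /* t2: independent              */
        {  90, 25, 1,  2 },   /* t3: consumes t2              */
    };

    int before[NUM_TASKS] = { 0, 0, 1, 1 };  /* original mapping      */
    int after [NUM_TASKS] = { 0, 1, 1, 1 };  /* "what if t1 moves?"   */

    for (int i = 0; i < NUM_TASKS; i++)
        printf("t%d: %3u us -> %3u us\n", i,
               task_latency(tasks, before, i),
               task_latency(tasks, after, i));
    return 0;
}

In this made-up case, moving t1 to the second core adds both the remote-memory penalty and a channel hop for the data it consumes from t0, so its latency grows from 80 to 125 microseconds while nothing else improves.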

It seems as if software developers are at somewhat of a disadvantage relative to their hardware counterparts: EDA tools solved the equivalent problems for hardware engineers, increasing their productivity even as designs became exponentially more complex. For the software engineer, however, it is as if the tools have stood still in time.

To that end, I still see task mapping and scheduling being derived using a spreadsheet. For a one-dimensional problem, the spreadsheet is probably perfect. The problem, however, is that the problem is no longer a linear, one-dimensional one… it is more like a Rubik’s cube. And yet the tools available today are the same linear tools that were used in the past.

I guess that gets to my point. As with every design, assumptions have to be made. Whether those assumptions are documented is academic: assumptions that are not documented are still assumptions. In the past, assumptions could be validated before committing to the design – arguably, the tools were available to validate them. Today, the software developer is hard pressed to validate assumptions because the tools are not there to do the multicore “what-if” analysis needed to test task mapping and scheduling against the overhead burdens of memory accesses, communication channels and more.

Relatively simple questions (the operative word being “relatively”) can be difficult to answer with empirical data: Do I need 4 or 8 cores? How should the tasks be distributed to improve performance, packet handling and processing? For the embedded software developer, the tools that could possibly be used are either too costly, too slow, not scalable, or not available.
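
As a thought experiment only, here is what even a toy version of that “what-if” analysis might look like. Every value in it is an assumption I made up for illustration – eight tasks, at most one upstream dependency each, a flat channel cost, and worst-core load as the only metric – which is nothing like what a production tool would need, and that is rather the point:

#include <stdio.h>

#define NUM_TASKS 8
#define MAX_CORES 8

/* Illustrative, invented numbers only: per-task execution cost in
 * microseconds and at most one upstream dependency (-1 = none). */
static const unsigned cost_us[NUM_TASKS] = { 120, 80, 80, 60, 60, 40, 40, 30 };
static const int      dep[NUM_TASKS]     = {  -1,  0,  0,  1,  2, -1,  5,  5 };
static const unsigned CHANNEL_US = 20;    /* assumed cross-core message cost */

/* Makespan of one mapping: the load of the busiest core, where a dependency
 * that crosses cores adds a channel penalty to the consuming core. */
static unsigned makespan(const int *core_of, int num_cores)
{
    unsigned load[MAX_CORES] = { 0 };
    for (int t = 0; t < NUM_TASKS; t++) {
        load[core_of[t]] += cost_us[t];
        if (dep[t] >= 0 && core_of[dep[t]] != core_of[t])
            load[core_of[t]] += CHANNEL_US;
    }
    unsigned worst = 0;
    for (int c = 0; c < num_cores; c++)
        if (load[c] > worst)
            worst = load[c];
    return worst;
}

/* Exhaustively try every assignment of NUM_TASKS tasks to num_cores cores
 * (num_cores^NUM_TASKS mappings) and report the best makespan found. */
static unsigned best_mapping(int num_cores)
{
    int core_of[NUM_TASKS] = { 0 };
    unsigned best = ~0u;
    for (;;) {
        unsigned m = makespan(core_of, num_cores);
        if (m < best)
            best = m;
        /* advance the mapping like a base-num_cores counter */
        int t = 0;
        while (t < NUM_TASKS && ++core_of[t] == num_cores)
            core_of[t++] = 0;
        if (t == NUM_TASKS)
            return best;
    }
}

int main(void)
{
    printf("best makespan on 4 cores: %u us\n", best_mapping(4));
    printf("best makespan on 8 cores: %u us\n", best_mapping(8));
    return 0;
}

Even this toy sweep has to evaluate 4^8 = 65,536 mappings for four cores and 8^8 = 16,777,216 for eight – with a laughably simple cost model – which is roughly why a spreadsheet stops being the right tool.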

I don’t have any market or empirical data to back this up; however, I have talked with enough embedded developers, and heard enough anecdotal evidence, to conclude that a lot of software issues trace back to inaccurate assumptions made during system design. Unfortunately, the issues that result from inaccurate assumptions tend to manifest late in the project development cycle, often during software integration.

Wrong assumptions lead to wrong design decisions, which lead to system issues that may not appear until late in the design cycle… when issues are most costly to resolve.


