Embedded Evolution

When a car is hacked or data is stolen, who gets blamed? Usually the software, but it is hardware that really created the problem.


The design of embedded systems has changed drastically from the days when I was directly involved with them. My first job after leaving college was to design aircraft control systems. I had the dubious honor of working on the first civilian fly-by-wire aircraft – the Airbus A310. I say dubious because we had so many eyes trained on us, and because the system contained so much redundancy that the space we had to fit the electronics into became a crazy restriction.

Hardware at that time was very limited in terms of CPU performance and memory. Part of the reason was that everything came in separate packages and board space was at a premium. Plus, of course, the integrated circuit was only a couple of decades old at that point. We were restricted to using components that had received a certain level of certification, and that put us another generation of parts behind other electronic systems.

Software had a very specific function to perform, often with hard deadlines, which meant that you counted bytes and you counted clock cycles. You could not afford to miss your deadline and had to prove the longest path through the code. Constructs such as caches would never have been allowed (even if they had been invented) because they made timing uncertain. But then again, the thought of having memory on-chip at that time would have been science fiction.
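
To make that discipline concrete, here is a small illustrative sketch in C (my own, not from any certified codebase). The first routine has a single path, so its cycle count can be summed by hand from the instruction timings; the second branches on the data, so proving its longest path means bounding what the data can do.

```c
/* Illustrative only: two ways to scale a fixed-size block of samples. */
#include <stdint.h>

#define N 32

/* Single path: trip count and body are fixed, so the worst case is the
 * only case and can be added up by hand. */
void scale_fixed(int32_t out[N], const int32_t in[N], int32_t gain)
{
    for (int i = 0; i < N; i++) {
        out[i] = in[i] * gain;
    }
}

/* Data-dependent paths: the saturation branches mean the longest path
 * depends on the input, which the analyst must explicitly bound. */
void scale_saturating(int32_t out[N], const int32_t in[N], int32_t gain)
{
    for (int i = 0; i < N; i++) {
        int64_t v = (int64_t)in[i] * gain;
        if (v > INT32_MAX)      out[i] = INT32_MAX;
        else if (v < INT32_MIN) out[i] = INT32_MIN;
        else                    out[i] = (int32_t)v;
    }
}
```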

Fast forward through a couple of decades (OK, three) and things are very different. Processing power is almost unlimited, if you are willing to accept multi-core architectures. On-chip memory exists by the gigabyte, and even off-chip memory takes almost no space at all. So much can be integrated into a single die that the need for off-chip components is a fraction of what it used to be. But within those changes we have transferred a large part of the problem from hardware to software.

Sure, software engineers have almost unlimited horsepower and memory, and huge libraries of software lying around for them to pick up and integrate, but they are probably less equipped to deal with some of the challenges they face today than they were 30 years ago. Ask a software engineer how long a task will take and it is unlikely that he can tell you the maximum time. He probably cannot even tell you an average time with a reasonable level of confidence.
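
To put a number on that uncertainty, the only tool most teams reach for is measurement. The sketch below is my own illustration, with a placeholder workload called do_work; it times the task thousands of times and reports the spread. Note that the observed maximum is simply the worst case seen so far, not a bound anyone could certify.

```c
/* Hedged sketch: measure a task's timing spread; measurements, not guarantees. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Stand-in for the task under test; replace with the real workload. */
static volatile int sink;
static void do_work(void)
{
    int acc = 0;
    for (int i = 0; i < 1000; i++) acc += i * i;
    sink = acc;
}

static double elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    const int runs = 10000;
    double min = 1e18, max = 0.0, sum = 0.0;

    for (int i = 0; i < runs; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        do_work();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = elapsed_us(t0, t1);
        if (us < min) min = us;
        if (us > max) max = us;
        sum += us;
    }

    /* The "observed max" says nothing about the true worst case on a
     * cached, multi-core machine. */
    printf("min %.1f us  avg %.1f us  observed max %.1f us\n",
           min, sum / runs, max);
    return 0;
}
```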

If that task utilizes more than one processor, or if the system is composed of heterogeneous processing elements, the chances of knowing even elementary timing become slimmer still. Software takes advantage of these architectures, but it does so in a very ad-hoc manner. Hardware advanced in the way it did because it was reasoned that it was better not to upset the software any more than absolutely necessary.

What we have created is a mess. We could have provided most of those gains in a much more controlled manner if we had imposed some changes or restrictions on software. Just consider what would have happened if we had said that a software task intended to run independently could not share memory with another task. No ad-hoc sharing of information; it all had to be explicitly defined. Hardware systems could have contained various mechanisms to perform this exchange of information, either through small amounts of a controlled shared memory space or through message-passing.

Message-passing systems could have been much faster, and a small, controlled shared memory space would have eliminated the need for highly complex and costly cache coherence schemes.
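
As a rough illustration of the programming model this implies (a minimal sketch using POSIX threads, not the hardware mechanism itself), the two tasks below exchange data only through a single, explicitly declared mailbox; there is no other shared state between them.

```c
/* Minimal sketch: explicit message-passing instead of ad-hoc shared memory. */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
    int             value;
    int             full;        /* 1 if a message is waiting */
} mailbox_t;

static mailbox_t box = {
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    0, 0
};

static void mailbox_send(mailbox_t *m, int v)
{
    pthread_mutex_lock(&m->lock);
    while (m->full)
        pthread_cond_wait(&m->not_full, &m->lock);
    m->value = v;
    m->full = 1;
    pthread_cond_signal(&m->not_empty);
    pthread_mutex_unlock(&m->lock);
}

static int mailbox_receive(mailbox_t *m)
{
    pthread_mutex_lock(&m->lock);
    while (!m->full)
        pthread_cond_wait(&m->not_empty, &m->lock);
    int v = m->value;
    m->full = 0;
    pthread_cond_signal(&m->not_full);
    pthread_mutex_unlock(&m->lock);
    return v;
}

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        mailbox_send(&box, i);
    mailbox_send(&box, -1);      /* explicit end-of-stream message */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);

    int v;
    while ((v = mailbox_receive(&box)) != -1)
        printf("received %d\n", v);

    pthread_join(t, NULL);
    return 0;
}
```

On hardware built around a message-passing fabric, that mailbox would map onto the fabric rather than onto a mutex over shared memory, but the discipline for the software is the same: every exchange of information is declared.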

The solutions that were chosen were easy to implement at the time and appeared to have minimal impact. Both decisions turned out to be anything but easy or cheap. They have brought us to where we are today: processing systems that are very difficult to utilize well, insecure systems that allow one task to spy on another, high overheads in silicon, and a lack of tools that would enable software engineers to produce quality software.

We are again trying to fix those flaws with yet more bandages: a little bit of hardware and some more software to try to recreate what could have been done in the first place. Hypervisors are placing restrictions on software so that tasks cannot share memory. And while today that is at a coarse level of granularity, how long will it take before the industry agrees this was good and should be propagated further through the system? When will hardware do what it should have done from the beginning and create secure memory architectures and multi-processor cores built around high-speed message-passing?

Sure, it would take time before the software engineering community was ready to fully take advantage of these new capabilities. But let’s face it, they still cannot use the “features” we gave them 20 years ago. It’s time for the hardware industry to accept they are the ones responsible for data breaches and hacked cars and stop blaming the software. It is time to start designing complete systems, rather than relying on the wall between hardware and software to make half the job easier at the expense of the other half.



1 comment

Karl Stevens says:

Did you notice that the GPU was created because the existing HW is junk?
AMP is alive and now it is time to quit worshipping RISC and move on to HW that does if/else, for, while, do, and computation.
