Design For Airborne Electronics

Safety, security and traceability are required in avionics, but more complex chips are exposing gaps in the tool chain and in engineer training.

The Next Generation Air Transportation System (NextGen), an FAA-led modernization of America’s air transportation system meant to make flying more efficient, predictable and safer, is currently underway as one of the most ambitious infrastructure projects in U.S. history.

This is not just a minor upgrade to an aging infrastructure. The FAA and partners are in the process of implementing new technologies and capabilities to shape a modern, resilient, and secure National Airspace System that serves more than 2.7 million passengers and 44,000 flights per day.

Aircraft and avionics OEMs like Airbus and Boeing, along with aerospace and defense electronics suppliers such as BAE Systems, L3Harris, and Northrop Grumman, are familiar with NextGen. They also are well-versed in the RTCA/DO-254 standard, which is the means of compliance for the development of airborne electronic hardware containing FPGAs, PLDs, and ASICs.

Avionics equipment contains both hardware and software, and each is critical to the safe operation of the aircraft. DO-254 governs the hardware, while the software must adhere to DO-178C. There are five levels of compliance, A through E, which depend on the effect a failure of the hardware would have on the operation of the aircraft. Level A is the most stringent, defined as a “catastrophic” effect (such as the loss of an aircraft), while a failure of Level E hardware will not affect the safety of the aircraft. Meeting Level A compliance for complex electronic hardware requires a much higher level of verification and validation than Level E compliance.
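To make the level structure concrete, here is a minimal sketch, in Python, of how a project's tooling might encode the design assurance levels and the failure-condition severity each one corresponds to. The enum values and the helper function are illustrative assumptions, not definitions taken from the standard's text.

```python
from enum import Enum

class DesignAssuranceLevel(Enum):
    """DO-254 design assurance levels, from most to least stringent.

    The failure-condition labels follow the common A-E classification;
    the exact wording is illustrative, not quoted from the standard.
    """
    A = "Catastrophic"      # e.g., loss of the aircraft
    B = "Hazardous"
    C = "Major"
    D = "Minor"
    E = "No safety effect"

def verification_rigor(level: DesignAssuranceLevel) -> str:
    """Hypothetical helper: higher levels demand more verification evidence."""
    if level in (DesignAssuranceLevel.A, DesignAssuranceLevel.B):
        return "independent, exhaustive verification with full traceability"
    if level is DesignAssuranceLevel.C:
        return "requirements-based verification with coverage analysis"
    return "basic verification evidence"

print(DesignAssuranceLevel.A.value)               # Catastrophic
print(verification_rigor(DesignAssuranceLevel.A))
```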

“FPGAs are mainly used in DO-254 electronics today,” observed Louie De Luna, director of marketing at Aldec. “But now we have SoC FPGAs coming up, so they’re becoming popular in avionics. When it comes to certification for those types of devices, the hard embedded processors in SoCs are mainly Arm-based, and they are treated as COTS components with separate guidance. The software apps running on the processor need to comply with DO-178C, while the hardware IP on the FPGA fabric needs to comply with DO-254. When we’re talking about DO-254 for SoC FPGAs, we’re really talking about the hardware IP, peripherals, and custom hardware logic on the FPGA fabric that have to be verified as part of the entire system with the processor.”

In DO-254, requirements-based physical test is the preferred verification approach, as opposed to simulation. While simulation plays a big part, the physical test is king, and the avionics adage, “Hardware flies, not simulations,” still applies. “Simulations are just based on models, so they’re not really a 100% representation of the system in terms of safety,” De Luna said. “The results from the physical test are still the preferred approach. This is where the main challenges come from.”

Lots and lots of paperwork
Steve Carlson, director, aerospace and defense solutions at Cadence, stressed that it’s the requirements themselves that are the biggest headache. “The DO-254 game is all of the paperwork that goes alongside the design. It really doesn’t affect the design much at all. It just affects what artifacts you keep along the way. They want a reproducible path at the end, so you have to have all the scripts, etc., to recreate the design. Also, you must have the traceability between your top down from the requirements, and bottom up from the test that you do to prove that the requirements have been met. Other than that, it’s a ‘regular design job’ where the incremental steps may have to do with safety mechanisms, fail-safe kind of operation, and different hardware security measures that you might put into place.”
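As an illustration of the traceability artifacts Carlson describes, the following is a minimal sketch of a requirements-to-test trace matrix generator. The requirement IDs, wording, and CSV format are invented for the example; real programs typically keep this in dedicated requirements-management tools.

```python
import csv
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    tests: list = field(default_factory=list)   # test IDs that claim credit

# Illustrative records only; IDs and wording are made up.
requirements = [
    Requirement("HW-REQ-001", "The receiver shall flag words with bad parity.",
                ["TC-0101", "TC-0102"]),
    Requirement("HW-REQ-002", "The watchdog shall reset the fabric within 10 ms.",
                []),  # no linked test yet -> traceability gap
]

def write_trace_matrix(reqs, path="trace_matrix.csv"):
    """Emit a top-down/bottom-up trace matrix and report uncovered requirements."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Requirement", "Description", "Linked tests", "Status"])
        for r in reqs:
            status = "covered" if r.tests else "GAP"
            writer.writerow([r.req_id, r.text, ";".join(r.tests), status])
    return [r.req_id for r in reqs if not r.tests]

gaps = write_trace_matrix(requirements)
print("Requirements with no linked test:", gaps)   # ['HW-REQ-002']
```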

Still, there are gaps in the tools. “There’s a lot of room for improvement in verification,” Carlson said. “When we talk about metrics for verification, you get some people who still talk about line coverage. One guy in a space program talked not about line coverage, but the number of tests. So you have over 100,000 tests. What do they test? The idea of using formal verification as much as possible, along with simulation, emulation, and early software bring-up for the hardware/software integration issues, speaks to the notion of Shift Left that is foreign to most of them in practice, if not in concept.”
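Carlson's point about raw test counts can be made concrete with a toy example: counting tests says nothing about what is actually covered. The sketch below assumes a hypothetical per-test results log and reports requirements and line coverage alongside the test count; the log format and names are made up for illustration.

```python
# Hypothetical per-test results: (test_id, requirements exercised, source lines hit)
results = [
    ("TC-0001", {"HW-REQ-001"}, {"rx.vhd:12", "rx.vhd:13"}),
    ("TC-0002", {"HW-REQ-001"}, {"rx.vhd:12"}),    # re-tests the same requirement
    ("TC-0003", set(),          {"top.vhd:40"}),   # exercises code, traces to nothing
]
all_requirements = {"HW-REQ-001", "HW-REQ-002"}
all_lines = {"rx.vhd:12", "rx.vhd:13", "top.vhd:40", "top.vhd:41"}

tests_run = len(results)
reqs_hit  = set().union(*(reqs for _, reqs, _ in results))
lines_hit = set().union(*(lines for _, _, lines in results))

# A large test count can still hide gaps in requirements and line coverage.
print(f"Tests run:            {tests_run}")
print(f"Requirements covered: {len(reqs_hit)}/{len(all_requirements)}")
print(f"Line coverage:        {len(lines_hit)}/{len(all_lines)}")
```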

That’s only part of the problem. For DO-254, there is no such thing as a DO-254-certified tool, De Luna said. “However, there is a tool assessment and qualification process, with separate guidance, for the tools used for design and verification. If a tool will be used for design purposes — and design means it could introduce faults into the design — or if a verification tool could fail to detect errors during verification, then both design and verification tools have to go through tool assessment and qualification.”

Even so, this just pertains to the tools, not the end product. The end product typically is a PCB with an SoC FPGA on it. DO-254 applies to any airborne electronic hardware, and especially to complex hardware that includes a processor, such as an FPGA or an SoC FPGA.

“Aviation companies traditionally use methods that include the PCB, and they have real systems driving the inputs of the PCB,” De Luna said. “Then they capture the outputs of the PCB. But DO-254 is applied at the chip level, which means they don’t have any access into the FPGA, so they can’t probe it. They can’t do the requirements-based testing on the FPGA chip, which is the main concept of DO-254, and the majority of design and verification challenges originate from that.”

Pick your methodology
Avionics electronics engineers are continually challenged with deciding which methods are best-suited for claiming credit during requirements-based testing.

“In a perfect world, a customer would test all requirements on the airborne target — in other words, the final hardware,” said Jacob Wiltgen, functional safety solutions manager at Mentor, a Siemens Business. “This includes ensuring all lines of code are executed, all functional behavior is demonstrated, the assumptions between hardware and software interfaces are exercised and the list goes on. Practically, this is very difficult to complete when taking into account the available technologies and schedule and budget pressures. Also, as it stands right now with the current technology, it is rarely feasible to satisfy all of the DO-254 verification objectives with on-target testing. As a result, customers must be innovative in defining the methods and platforms which provide adequate testing for each requirement.”

Many platforms exist, including simulation, emulation, and prototyping devices with or without the target PLD, he noted, and each platform has its own pros, cons, and acceptability for the requirements being tested.

“Tradeoffs in controllability, visibility, and debug exist and must be evaluated when deciding on a test approach,” said Wiltgen. “For example, simulation can be deployed to collect code and functional coverage, and an argument can be made for the use of simulation for requirements-based testing on logic internal to the FPGA. However, some level of hardware target verification is still performed to verify interfaces, and to ensure that synthesis and place-and-route have translated the RTL correctly into the design programming file. Simulation provides a high degree of visibility and controllability, and expedites debug of failures.”
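As a sketch of what a requirements-based simulation test might look like when tagged back to its requirement, here is a minimal cocotb-style testbench (assuming cocotb 1.6+ and a hypothetical receiver DUT with clk, rst_n, data_in, and parity_err ports). The requirement ID in the docstring is the hook a traceability flow would harvest; none of the names come from the article.

```python
# Minimal requirements-based simulation test, cocotb 1.6+ style.
# DUT ports and the requirement ID are hypothetical stand-ins for a real receiver block.
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def test_hw_req_001_parity_flag(dut):
    """HW-REQ-001: the receiver shall flag a word with bad parity."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

    # Apply and release reset.
    dut.rst_n.value = 0
    await RisingEdge(dut.clk)
    dut.rst_n.value = 1

    # Drive an illustrative word with deliberately bad parity.
    dut.data_in.value = 0x80000001
    await RisingEdge(dut.clk)
    await RisingEdge(dut.clk)

    assert dut.parity_err.value == 1, "HW-REQ-001 not met: parity error not flagged"
```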

Nevertheless, hardware-based testing for interface requirements, or those involving external components, works best when matched up with the airborne target, because the interface and electrical signaling must be accounted for. So even though a PLD image is exercised, visibility into design behavior is often limited.

“Overall, customers who are successful in completing requirements-based testing on time use a variety of methods and platforms, identifying requirements that can be verified using off-target verification platforms versus requirements that must be tested on the airborne target,” Wiltgen said.

The remaining unknowns
DO-254 is a very strict functional safety standard that is linked to a regulated process of auditing and certification.

“One key challenge is independent verification of design transformations, like synthesis,” said Sergio Marchese, technical marketing manager at OneSpin Solutions. “A way to reduce risk is to use old, ‘proven’ versions of implementation tools and switch off optimizations. This leads to additional design effort, as hardware engineers cannot make the most of the available technology. But what is worse is that nobody really can measure how this approach reduces risk. Gate-level simulation can provide additional confidence, but is slow, effort-intensive, and provides very little coverage. Lab tests are fast to run, but debugging is tricky and iteration loops are very slow.”

As a result, many companies are now moving to formal equivalence checking to get objective, exhaustive verification that synthesis and place-and-route design transformations have not introduced bugs. “This has been the standard approach in ASIC flows for a long time,” Marchese said. “But now there are commercial tools specifically targeting FPGA flows, which dominate aerospace applications. Tools here can provide massive effort savings and deliver exhaustive verification, something that is crucial to achieve DO-254 certification predictably.”
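Commercial equivalence checkers rely on formal engines rather than enumeration, but the question they answer can be illustrated at toy scale: do the pre- and post-synthesis views of a combinational block compute the same function for every input? The sketch below uses two made-up 3-input majority functions, standing in for the RTL and the synthesized netlist, and compares them exhaustively; real tools prove the same property with SAT or BDD techniques at full design scale.

```python
from itertools import product

def rtl_majority3(a: int, b: int, c: int) -> int:
    """'Golden' RTL view: 3-input majority vote."""
    return 1 if (a + b + c) >= 2 else 0

def netlist_majority3(a: int, b: int, c: int) -> int:
    """'Synthesized' view: the same function expressed as AND/OR gates."""
    return (a & b) | (a & c) | (b & c)

def equivalent(f, g, n_inputs: int) -> bool:
    """Exhaustively compare two combinational functions over all input vectors.

    Equivalence checkers answer this question formally, which scales far
    beyond what enumeration can handle.
    """
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n_inputs))

print(equivalent(rtl_majority3, netlist_majority3, 3))  # True -> the views match
```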

“Everyone wants safe flying structures that we can depend upon, and DO-254 is a piece of the overall puzzle in providing safe air travel,” observed Cadence’s Carlson. “DO-254 fits within the larger certification process administered by the FAA, looking at the electronic components, where the higher the level of assurance, the more serious the reporting you have to do. DO-254 is mostly about tracking what happened. All the verbiage that you read says it’s preventing errors, but it really doesn’t. There are humans involved, and they make mistakes. What the DO-254 process really brings is a level of traceability. This includes traceability back to the requirements, but also traceability in a process such that if something goes wrong and they diagnose a failure, they have the traceability back to the person who made the mistake. So it’s a finger-pointing mechanism for future use, but it really doesn’t prevent problems.”

The DO-254 process goes something like this: there’s a designated engineering representative (DER) acting on behalf of the FAA, with whom you work as part of the certification process.

“Here, what’s important is to get a plan up front, because it gets really messy and costly if you try to retrofit DO-254 tracing after the fact,” Carlson explained. “You really want to plan for DO-254 up front, have the certification officer in on the planning, and then it becomes an automated process. Upfront you should meet with the DER and agree on the verification approach, since verification is really the key to this whole thing, and then build a detailed verification plan. This is very much like a functional verification plan that you would do for a commercial project. There are other aspects having to do with functional safety and security that get built in, like in automotive. But there are features that need to be verified. If you take these plans, you can institutionalize them in planning and management tools.”

All of this upfront planning work can be embedded into commercial tools from Aldec, Cadence, Mentor, OneSpin and others so the tracking and report generation are automated. Customized reports can be generated for the compliance officer.
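As a rough illustration of what institutionalizing the plan might look like, the sketch below keeps the agreed verification method and status for each requirement in a small data structure and generates a plain-text compliance summary. The fields and report format are invented for the example and do not reflect any particular vendor's tool.

```python
# Hypothetical verification-plan records: requirement, DER-agreed method, status.
plan = [
    {"req": "HW-REQ-001", "dal": "A", "method": "simulation",     "status": "pass"},
    {"req": "HW-REQ-002", "dal": "A", "method": "on-target test", "status": "open"},
    {"req": "HW-REQ-007", "dal": "C", "method": "analysis",       "status": "pass"},
]

def compliance_summary(plan):
    """Generate the kind of status report a compliance officer might review."""
    lines = ["Req         DAL  Method           Status",
             "----------  ---  ---------------  ------"]
    for item in plan:
        lines.append(f"{item['req']:<10}  {item['dal']:<3}  "
                     f"{item['method']:<15}  {item['status']}")
    open_items = [i["req"] for i in plan if i["status"] != "pass"]
    lines.append(f"\nOpen items: {', '.join(open_items) if open_items else 'none'}")
    return "\n".join(lines)

print(compliance_summary(plan))
```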

Electronic warfare requirements
Another aspect of DO-254 involves the interest Congress has shown in the Department of Defense’s (DoD’s) electronic warfare (EW) portfolio. That portfolio is a key component of the U.S. national defense strategy to counter heavy investments by China and Russia in electronic warfare-related systems over the past two decades. In a public report based on unclassified sources, the Congressional Research Service estimated that the DoD is seeking approximately $10.2 billion in FY2020 funding for EW programs.

Over the past two decades, China and Russia have come to view U.S. military command and control, intelligence, surveillance, and reconnaissance (C2ISR) networks as a critical capability, and they are developing capabilities to compete with them effectively.

Cadence’s Carlson said there is a renaissance in government interest here because of the realization that they really are behind. “What’s fun about this is we’ve been able to work our way up and down the chain to affect policy that goes into law and changes the way people do design.”

Some of this stems from the timeframe during which IBM sold some of its foundry business to GlobalFoundries, which included the trusted foundry part of the business that supplies the U.S. government. Because GlobalFoundries is owned by Mubadala, Abu Dhabi’s state investment company, that set off some initial red flags, which prompted a widespread review of U.S. electronics policy. An investigation examined activity by potential adversaries, spurring a number of national initiatives centered around trust and assurance, along with who would serve as a trusted foundry partner for advanced-node production and for key components in those chips.

“There’s a whole trust and assurance program that’s being funded to try and figure out how we could use TSMC in a trusted way so we don’t have to worry,” Carlson said. “And there are other competing proposals to build a fab as a national asset. The DoD, the Department of Commerce, and Homeland Security all have their own fab proposals.”

Going forward, hopefully this will help to avoid the issues that have plagued the development of the Lockheed Martin F-35 Lightning II stealth multirole combat aircraft, which ran years behind schedule and billions of dollars over budget.

“There are so many things that went wrong with the F-35,” he said. “The opposite of Shift Left is what was happening. They would actually do a software update, load it onto a plane, put the plane up in the air, and then watch the blue screen of death appear. They called it fly-fix-fly. It’s something like $10,000 a minute to fly one of those F-35s. It’s extremely expensive. Finally, Rear Admiral Edward G. Winters III said not to waste another drop of fuel debugging software. What we were able to do is to get into policy the concept of emulate before you fabricate. That’s basically a Shift Left idea, where you get the hardware and the software and integrate them before prototypes are made — or heaven forbid, the production articles. That can really make an impact in terms of timely delivery, and also reducing the cost. There are a whole bunch of folks out there who want to do the right thing, but they haven’t done an SoC for 20 years. It wasn’t even called an SoC the last time they did one, so they need help. This is a real opportunity for the entire semiconductor ecosystem to help the U.S. get more efficient and make sure that we’re competitive.”


Fig. 1: F-35 fighter jet. Source: DoD

More SoC-based funding is expected as the U.S. government sees the gaps in its electronics resources. “When you start talking about the requirements for edge-based AI, autonomy, and vision, along with all of the new sensor fusion technology, you can’t send all that data over the air,” he said. “You’ve got to process locally, and that means it’s going to be SoC kinds of systems. Many of these will be in mission-critical systems that will require DO-254-like compliance.”
