Medical, Industrial & Aerospace IC Design Changes

Strict quality, safety and security requirements and increasing complexity are pushing companies to adopt some leading-edge commercial approaches.

Medical, industrial and aerospace chips are becoming much more complex as more intelligence is added into these devices, forcing design teams to begin leveraging tools and methodologies that typically have been used only at the leading-edge nodes for commercial applications.

But as with automotive, the needs of these systems are changing quickly. In addition to strict quality, safety and security regulations, they also now require the latest design processes and technologies. Put simply, product development of a medical device is no easy feat, and it’s getting much harder. Modern expectations for medical device product development include a faster time to market, safer and smarter devices, and a built-in assumption that these devices will continue to work as expected throughout their lifetime.

“This expectation is set against a backdrop of regulatory requirements that deeply interface with the design process itself, known as design controls,” according to Ryan Bauer, director of medical device and pharmaceutical solutions at Siemens Digital Industries Software. “Today, electrical engineers increasingly find themselves at the nexus of cross-domain development decisions. They both impact and are impacted by other product development work streams, such as mechanical, software, clinical and regulatory.”

While the concept of design controls is not new, the spirit of the regulations requires close coordination of all development activities, including design reviews, change management and risk management. And that has opened up a whole set of tools and methodologies developed in the commercial IC space that were never applied in this market.

“Historically, in practical terms, this meant a lot of time-consuming manual coordination across teams with meetings and documents, and ultimately delays as serial decisions were made,” Bauer said. “The advance of design authoring and data management software applications for the product development process is a game changer for medical device companies that embrace a new way forward. Design authoring tools that provide faster, streamlined design capabilities, along with collaboration methodologies, are in high demand.”

For example, electrical and mechanical teams require frequent communication as they work out physical, thermal, and electrical challenges in a design. “Also, a live exchange of a comprehensive digital twin of the design can benefit much more than just clearance checks to avoid geometry collisions,” he said. “It can provide copper and flex circuit details, or even provide for real-time design reviews with collaborative proposals and decision-making directly in a common model. This built-in evidence of collaboration is useful for regulatory compliance, and it makes the teams more efficient, as well.”

Simulation studies benefit from this higher-fidelity, more accurate definition of the design across domains. In fact, regulatory agencies are increasingly aware of simulation and are promoting its use as good design practice. They also are accepting simulation results as digital evidence to support submissions and decision-making.

Similarly, close coordination early in the design cycle between electrical and software teams is necessary for efficient product development. “A common development environment facilitates the live coordination of requirements, specifications, testing, and defects. Teams benefit from a real time environment for speedy issue resolution and working from a common set of data. This is a regulatory expectation as well – the rigorous management of issues during development,” he continued.

Digital twins
A digital twin in the medical space is, at the highest level, a digital representation of what is done in the physical world, with data as the connecting piece that runs between the two, said Frank Schirrmeister, senior group director for solutions marketing at Cadence. “What you’re actually twinning is the device itself. Just like you would provide a digital representation of a cellphone or a plane or a device in a plane, you would now run a virtual version of it. And the reason you do this is actually for safety and security reasons, for certain things you actually don’t want to do.”

Companies such as Ansys, Cadence and Siemens, among others, offer 3D simulation technology that can be used here to advance surgical procedures, such as heart surgery, by simulating a stent being placed and its impact on the blood flow and surrounding tissues.

According to Ansys, implantable cardiovascular devices — stents, coils, heart valves and pacemakers — are complex as a result of exacting product and regulatory (FDA) specifications. The study of hemodynamics is critical to cardiac device engineering, which can benefit greatly from engineering simulation and advanced fluid–structure interaction modeling.

This type of technology also can be used to simulate the operational logistics of the hospital environment, which is extremely valuable during a pandemic. “Thinking about hospital beds and air filters, another digital twin in this domain — which also applies to industrial, aerospace and automotive — is not the device itself, but the operational logistics of the environment,” Schirrmeister said. “How many hospital beds do I need at which time? An operational digital twin in a healthcare environment is the digital twin of the whole hospital and its processes. What you see in these scenarios now is the simulation of how things spread. Where do I put which patient so that I’m being careful? Digital twins are perfect for this. The whole notion of digital twinning, the whole notion of simulation and trying these things out before you expose them to the real body, will be key going forward.”
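The kind of bed-capacity question Schirrmeister raises can be explored with even a very small operational model. The Python sketch below uses purely illustrative assumptions (arrival rate, length of stay, and bed counts are invented, not real hospital data) to compare capacity scenarios before committing real resources:

```python
# Tiny operational "digital twin" sketch: simulate daily patient arrivals
# against a fixed number of beds to estimate overflow. All numbers are
# illustrative assumptions, not real hospital data.
import random

def simulate(beds, days=365, mean_arrivals=8.0, stay_days=5, seed=1):
    random.seed(seed)
    occupied = []          # remaining stay length for each occupied bed
    turned_away = 0
    for _ in range(days):
        occupied = [d - 1 for d in occupied if d > 1]          # discharge finished stays
        arrivals = int(random.expovariate(1 / mean_arrivals))  # crude daily demand
        for _ in range(arrivals):
            if len(occupied) < beds:
                occupied.append(stay_days)
            else:
                turned_away += 1
    return turned_away

# Compare capacity scenarios: more beds should turn away fewer patients.
for beds in (30, 50, 70):
    print(beds, simulate(beds))
```

Because the same random seed drives every scenario, the arrival stream is identical across runs, so the comparison isolates the effect of bed count alone.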

Case in point: 15 years ago, hearing aid companies were using virtual prototypes, even in small designs, because extensive simulation was needed for the software and the filtering acoustics. In fact, Medtronic has said publicly that it uses emulation in the development of the chips in its pacemakers.

“Techniques such as emulation and virtual prototyping are applied to do enough testing, and do enough to Shift Left in the wider sense,” Schirrmeister said. “Often, Shift Left refers to software development. But in the case of medical, it’s really used for testing. So the tests I’m running on the real device later on, I’m running them earlier. Those pieces are key, and they are revolutionizing things.”

There are different standards for every domain, but at the core they’re doing similar things. “Just like the digital twin that can predict the maintenance aspects of an airplane, such as when parts need to be changed, it can also predict when a human being’s heart will be overloaded,” he said. “Here, tools such as formal verification can be applied for security and safety so that you’ll never get into a state that will be critical.”

Avionics design requirements
There is significant crossover in other markets such as aerospace, where reliability and safety need to be proven just as in medical equipment.

“A significant aspect of the criteria here is traceability, which also entails a requirement-driven process,” said Louie De Luna, director of marketing at Aldec. “For example, starting from aircraft-level functions like a landing gear function, traceability must be established from there to the system level, and then down to the board level. And if they’re using an FPGA, they have to have the FPGA requirements too based on the function, and then traced to the VHDL source code of the FPGA, then to the tests and test results. Full traceability must be established downstream from the aircraft level function, down to the test results and upstream as well, and that’s hard to do. Downstream could be easy but upstream may be difficult.”

Functional failure path analysis also is required, and traceability is used to ensure it’s done correctly, De Luna said. “If a function at the aircraft level fails, then they know what the causes are for that and what elements of the aircraft would have caused that.”
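Conceptually, the traceability De Luna describes forms a linked chain of artifacts that must be walkable in both directions, from aircraft-level function down to test results and back. A minimal Python sketch (hypothetical artifact names, not any qualified DO-254 tool) shows the idea:

```python
# Minimal sketch of bidirectional requirements traceability (hypothetical
# model, not a certified tool). Each artifact links down to the artifacts
# that implement it; upstream links are derived automatically.
from collections import defaultdict

class TraceGraph:
    def __init__(self):
        self.down = defaultdict(list)   # artifact -> implementing artifacts
        self.up = defaultdict(list)     # derived reverse links

    def link(self, parent, child):
        self.down[parent].append(child)
        self.up[child].append(parent)

    def _walk(self, start, edges):
        seen, stack = [], [start]
        while stack:
            for nxt in edges[stack.pop()]:
                if nxt not in seen:
                    seen.append(nxt)
                    stack.append(nxt)
        return seen

    def downstream(self, artifact):
        """Everything below, e.g. aircraft function -> test results."""
        return self._walk(artifact, self.down)

    def upstream(self, artifact):
        """Everything above, e.g. a test result back to the function."""
        return self._walk(artifact, self.up)

g = TraceGraph()
g.link("aircraft: landing gear function", "system req: gear control")
g.link("system req: gear control", "board req: gear controller PCB")
g.link("board req: gear controller PCB", "FPGA req: gear FSM")
g.link("FPGA req: gear FSM", "VHDL: gear_fsm.vhd")
g.link("VHDL: gear_fsm.vhd", "test: tb_gear_fsm")
g.link("test: tb_gear_fsm", "result: tb_gear_fsm PASS")

print(g.downstream("aircraft: landing gear function")[-1])  # result: tb_gear_fsm PASS
print(g.upstream("result: tb_gear_fsm PASS")[-1])           # aircraft: landing gear function
```

The downstream walk is the easy direction De Luna mentions; the value of keeping explicit reverse links is that the upstream query, from a failed test back to the affected aircraft function, costs nothing extra.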

There’s also an overlap between avionics and satellites in terms of single-event upsets (SEUs), such as when a radioactive particle flips a bit. “These are rare phenomena, but they do happen. There are some regulations when it comes to single-event upsets, whereby some air framers are not permitted to use SRAM-based FPGAs because those are susceptible to single-event upsets. SEUs are considered catastrophic failures, so careful attention is paid to this. There are different ways to mitigate SEUs, including redundancy at the system level or at the FPGA level. Even inside the FPGA, there may be redundancy on the circuits themselves,” he explained.
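One common form of the redundancy De Luna mentions is triple modular redundancy (TMR): three copies of the same logic compute in parallel, and a majority voter masks a single upset copy. Real designs implement this in the FPGA fabric itself; the Python toy below is only to build intuition for the voting step:

```python
# Toy illustration of triple modular redundancy (TMR) masking a single-event
# upset. Three redundant copies of the same logic run in parallel; a bitwise
# majority vote masks one corrupted output. For intuition only -- avionics
# designs implement this in hardware.
def logic(x):
    return x + 1  # stand-in for some combinational function

def tmr_eval(x, upset_copy=None):
    """Evaluate three redundant copies; optionally flip bit 0 of one copy's
    output to model an SEU, then take the bitwise majority vote."""
    outputs = [logic(x) for _ in range(3)]
    if upset_copy is not None:
        outputs[upset_copy] ^= 1          # single-event upset flips a bit
    a, b, c = outputs
    return (a & b) | (a & c) | (b & c)    # bitwise majority of the three

print(tmr_eval(41))                # 42, no upset
print(tmr_eval(41, upset_copy=1))  # still 42: the voter masks one bad copy
```

The majority expression works per bit, which is why TMR tolerates any single upset but not two simultaneous upsets in the same bit position of different copies.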

Traditionally, even though commercial tools exist to help with traceability, it has been tracked in Excel. “Usually only the largest companies have the budget to purchase the tools that maintain traceability. There are also some small suppliers to the Tier One companies, but they may not have the tools, so they stick to using Excel,” he continued. Still, managing traceability in Excel is very difficult, and De Luna doesn’t recommend it.

Additionally, there are some special considerations when it comes to IP in the avionics market due to safety requirements in DO-254. Depending on the classification of the IP (soft, firm or hard), users of the IP may not be able to perform the required tests. “The hard IP can’t really be controlled. If it’s firm IP, there are limits as to the things that can be done. Soft IP has the most guidance. If you’re using soft IP, you need to make sure that it goes through a linting process, as far as the DO-254 standard is concerned. But if you’re purchasing commercial IP, which is encrypted, you can’t do linting on that. What most companies, like Dallas Avionics, L-3 Avionics and others, do is create their own IP. They stay away from purchasing commercial IP because of that fact. There are a lot of things they can do then with the source code, one of which is linting. Otherwise they wouldn’t be able to meet the requirements of DO-254. On top of that, close to 100% code coverage is also required. You can’t do that with commercial IP because it’s encrypted,” De Luna said.

Industrial flexibility
For other companies, flexibility is the name of the game in developing embedded processor IP, which can be maintained going forward.

For example, Trinamic, a supplier of embedded motor and motion control ICs and microsystems, uses an embedded RISC-V-based processor for that reason. The company’s motion control products go into multiple markets, including laboratory and factory automation, semiconductor manufacturing, textiles, robotics, ATMs, and vending machines, as well as applications requiring reliable positioning, such as 3D printing, medical pumps, and security cameras.

RISC-V provides significant flexibility because engineering teams can alter the source code, as long as it remains compliant with the instruction set architecture. But this also creates its own set of challenges, particularly around verification and support, which is particularly important in applications where safety is involved.

“You’ve got four options here,” said Jerry Ardizzone, vice president at Codasip. “First, you can buy a core from Arm. You know it works and it’s supported by the best tools in the world. There are plenty of third-party tools and software. The second option is to build your own core, which implies you know how to do that. Building a small processor isn’t that hard, but you never want to sell a chip where the processing element locks up. The third is to use an open-source core such as RISC-V, and with 175 companies backing this, it’s really happening. The fourth option is to buy a commercially available RISC-V chip, which is also possible.”

For complex designs today in any vertical segment, but especially in markets with stringent design requirements for safety, security and quality, close collaboration between the various parts of the design team is crucial.

To this point, collaborative design is at its best when supported by a common product data and workflow control environment, typically known as product lifecycle management (PLM). Data integrity is key, and a PLM system maintains this integrity while providing a collaboration engine across the broader organization. Taken together, PLM, advanced design authoring tools, and simulation can significantly increase product development efficiency while generating better outcomes, all while maintaining regulatory compliance, said Siemens’ Bauer.
