Product Lifecycle Management For Semiconductors

The C-Suite wants the chip industry to use PLM, but are their issues different enough that a more specialized black-box approach would be better?


Product lifecycle management (PLM) and the semiconductor industry have always been separate, but pressure is growing to integrate them. Automotive, IIoT, medical, and other industries see that as the only way to manage many aspects of their business, and as it stands, semiconductors are a large black box in that methodology.

The technology space is driven by a mix of top-down and bottom-up processes. Bottom-up tends to be preferred in areas driven by innovation, where being fast and nimble is considered more important than being on time and on budget. Top-down processes are predominant in large systems and industrial development, and they are becoming the preferred way to control products within a company — especially when dealing with large derivative portfolios.

Semiconductor hardware and software development have long fallen into the bottom-up category, even though they have introduced a lot more formalism than existed in the past. Software also has been changing with the incorporation of Agile methods. Requirements management, verification management, revision control and bug tracking are all in place and being extended over time. Feedback loops exist for many parts of the development and manufacturing cycle so that systemic issues can be fed back through the process and corrected.

The growing use of chips in the automotive, medical, and mil/aero industries has added new types of requirements to the development flows. While often seen as a burden by semiconductor development teams, PLM processes are being incorporated and treated as a prerequisite for winning the business.

When looking at large systems that have many complex parts, and when companies need to make sure they meet the earnings expectations given to Wall Street, knowing exactly where everything is becomes crucial. The C-suite wants to be able to see progress toward goals in every aspect of the company, and some think the semiconductor industry has gotten away with its sloppy processes for too long. “This is usually a top-down decision,” says Mark Hepburn, product management and engineering group director for Allegro Pulse at Cadence. “It rarely starts from engineering. The C-level says we need better control, better visibility on our overall solution to be competitive. And then it’s basically given to the electronics folks and they have to be compliant.”

The recent supply-chain disruptions have added more fuel to the fire as management looks at redistributing manufacturing operations, such as moving semiconductor operations from one foundry to another, or to a different country. According to a Kalypso publication for Dassault Systèmes, “To achieve true innovation results transformation, all semiconductor firms must now move beyond product data management (PDM) and place more emphasis on first-pass design success, efficient product introductions, and manufacturing flexibility.”

What is PLM?
Cadence’s Mark Hepburn provides a primer. “PLM is a very nebulous beast. It’s not a tool. It’s more of a process and methodology that a company may put in place. Even to this day, a large number of the business systems inside some of the large enterprises are homegrown. PLM is focused on how a product is realized inside a company. The terminology often used is that ‘it acts as the single source of truth,’ and it basically represents a complete picture of a product. It goes from product requirements to the packing peanuts, literally. Some of the most important things around traditional PLM are the business aspects of the product and how pieces in the chain interact. They have this picture where if something changes in any one of the domains, such as a requirement, or a problem that is found in manufacturing, they can see how that change ripples through in a traceable and controlled way through the organization. It provides a picture of the impact associated with a change, the cost, the risk to the product, etc.”

PLM generally does not get into the functional domain, except through interfacing into systems like application lifecycle management (ALM). This is a parallel track to PLM, which often is used for the software side of the product. Those systems are usually integrated.
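The change “ripple” Hepburn describes is, at its core, a walk over a graph of trace links between artifacts. A minimal sketch in Python, where the artifact names and dependency links are entirely invented for illustration and do not come from any real PLM product:

```python
from collections import deque

# Hypothetical artifact dependency graph: each key is a PLM artifact,
# and its value lists the downstream artifacts derived from it.
DEPENDS = {
    "REQ-101 (latency requirement)": ["SPEC-7 (SoC spec)"],
    "SPEC-7 (SoC spec)": ["RTL-3 (interconnect block)", "FW-2 (driver)"],
    "RTL-3 (interconnect block)": ["MFG-9 (mask set)"],
    "FW-2 (driver)": [],
    "MFG-9 (mask set)": [],
}

def impact_of_change(artifact):
    """Breadth-first walk returning every downstream artifact affected
    by a change to `artifact` -- the 'ripple' a PLM system reports."""
    affected, seen = [], set()
    queue = deque(DEPENDS.get(artifact, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        affected.append(node)
        queue.extend(DEPENDS.get(node, []))
    return affected

# Changing one requirement surfaces every spec, block, and mask set it touches.
print(impact_of_change("REQ-101 (latency requirement)"))
```

A real PLM system would attach cost and risk data to each affected node, but the traversal itself is this simple in principle.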

“Systems are hierarchical, so you have systems of systems,” Hepburn said. “Consider a large telecoms company that builds 5G networks. That’s a system. It’s got RF, it’s got backhaul for networking, it’s got antenna design. You have to bring it all together or they don’t have a product. Their product isn’t an antenna. The product is the whole infrastructure they sell to a carrier. They have to model everything, and that’s why PLM is critical to them. Plus, they have high product variability. They have to create systems that can work on buildings, work in forests, work in subways. They want to share IP as much as they can to get efficiency of scale.”

Quality is essential, and it needs to be extended well beyond a single vendor because these systems are not all homogenous or developed by a single vendor. So multiple vendors need to coordinate what they’re developing, and that data is then stored in PLM.

“A lot of these systems companies are really pushing to bring the electronic record directly in, because so much of their content is reliant on that electronic use definition,” Hepburn said. “Today, on the electronics and even more so on the IC side, it’s a handoff. At some point, somebody in PLM creates a document and gives it to engineering. Engineering goes and builds something based off that specification, and then they have a manual handoff back to PLM to record what actually got built. That’s the typical flow in electronics, and that’s very much the flow in IC today that I’ve seen.”

Integrating PLM and semiconductors
Can PLM successfully envelop semiconductors? “In industries like automotive, where ISO 26262 is driving reliability and functional safety requirements, everything has to be traced,” says Simon Rance, head of marketing at Cliosoft. “There has to be full accountability such that if anything goes wrong, either in production or even in use in the field, there has to be full traceability back from every point throughout the entire system. That could be back to a single chip or a piece of IP in the chip. This is where hardware design data management and PLM systems come together. They don’t want to have all of these separate systems because it is difficult to manage. They want to try to come to one single data management solution where somebody can go in and put in a search query. You can start investigating a failure and see what it is associated with in this entire process — not just in the chip design itself, but all the way through to manufacturing.”
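The failure investigation Rance describes is the same traversal run in reverse: from a field failure, walk the trace links upstream until you reach a single chip or piece of IP. A minimal sketch, again with invented artifact names and links rather than any real data model:

```python
# Hypothetical trace links: each artifact points to the artifact(s)
# it was derived from. All names are illustrative only.
DERIVED_FROM = {
    "FIELD-FAILURE-42": ["DIE-LOT-17"],
    "DIE-LOT-17": ["CHIP-A (SoC)"],
    "CHIP-A (SoC)": ["IP-CAN-CTRL (licensed IP)", "SPEC-12"],
    "SPEC-12": ["REQ-ISO26262-9 (safety requirement)"],
    "IP-CAN-CTRL (licensed IP)": [],
    "REQ-ISO26262-9 (safety requirement)": [],
}

def trace_back(artifact, path=None):
    """Depth-first walk upstream, yielding every chain from `artifact`
    back to a root artifact (a requirement or a licensed piece of IP)."""
    path = (path or []) + [artifact]
    parents = DERIVED_FROM.get(artifact, [])
    if not parents:
        yield path
    for parent in parents:
        yield from trace_back(parent, path)

for chain in trace_back("FIELD-FAILURE-42"):
    print(" <- ".join(chain))
```

The point of unifying design data management with PLM is that these links exist in one queryable store instead of being scattered across spreadsheets and paper records.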

This was a key consideration when Siemens’ PLM Division bought Mentor Graphics in 2017. Siemens is one of the largest players in the systems market and has been investing heavily in PLM processes and tools. If you want to effectively get PLM into the semiconductor industry, you have to understand it. There is no better way to do that than to be on both the inside and outside of the problem. Then, you may be able to see how to make it work and to create any necessary transition aids.

That integration may be slow, and the semiconductor industry continues to create internal tools that utilize specific knowledge of the domain. “By creating and maintaining traceability between disparate systems for requirements, specifications, EDA and hardware designs, software code, and documentation, engineers know immediately when a change occurs and the effect of that change on other design artifacts and parts of the system,” says K. Charles Janac, president and CEO of Arteris IP, about the company’s new trace application. “Unlike application lifecycle management (ALM) and product lifecycle management (PLM) solutions that require engineers to use a single environment that is not best-in-class in any one aspect, domain-specific solutions create a system-of-systems that allow complete visibility of requirements traceability through the entire SoC design flow and product life cycle.”

But there are areas in which electronic design has lagged. One of the issues is that development teams have placed a large focus on verification, which attempts to ascertain that what has been built matches the specification. You only need to look at any of the familiar V diagrams, as depicted in Fig. 1, to see the annotation on the top right, which says validation. That is where you ascertain that what you designed was the right thing. This is a late point at which to find out that the specification was wrong, and when the chip is used as part of a larger system, that can seriously impact the business. The only way to fix this is to use a top-down, virtual development environment before any detailed design work starts.

Fig. 1: Classic V diagram for verification and validation. Source: Semiconductor Engineering

While the industry has developed virtual prototyping tools, most of those are only used for the early development and bring-up of software, and some companies will develop partial models to help them analyze performance bottlenecks.

It is possible to perform validation first. “Several years ago, the architecture group of a large processor IP company wanted to add virtualization,” says Simon Davidmann, CEO for Imperas Software. “They modified the existing processor model and got the software up and running. Initially, they found things didn’t work very well, so they changed the architecture in the behavioral model. Eventually they got a very efficient simulation, and then used that to design the RTL. This move to higher-level simulation helps you shift left and parallelize things. That can help optimize the software and the performance of it, as you’re designing the RTL to meet that spec.”
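The validation-first flow Davidmann describes can be caricatured in a few lines: software is brought up against a fast behavioral model, and architectural changes are made in the model long before any RTL exists. The toy accumulator ISA below is purely illustrative; it is not Imperas’ model or any real instruction set:

```python
# Minimal sketch of validation-first design: a behavioral model of a
# hypothetical two-register ISA. Software runs against this model, and
# the architecture is changed here, not in RTL.
def run(program):
    regs = {"a": 0, "b": 0}
    for op, *args in program:
        if op == "li":        # load immediate: ("li", reg, value)
            regs[args[0]] = args[1]
        elif op == "add":     # add src register into dst register
            regs[args[0]] += regs[args[1]]
        elif op == "swap":    # an architectural experiment: a new
            regs["a"], regs["b"] = regs["b"], regs["a"]  # instruction
    return regs

# Bring up a small "software" workload on the model. If "swap" turns out
# to be a bad idea, it is deleted from this file, not from silicon.
program = [("li", "a", 2), ("li", "b", 3), ("add", "a", "b"), ("swap",)]
print(run(program))
```

In a real flow the model would be cycle-approximate and instrumented for performance analysis, but the shift-left principle is the same: the spec is exercised and corrected while it is still cheap to change.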

Both shift left and concurrent development depend on effective digital models. “We are taking many of the same techniques that have been used on the IC side and applying them upstream into a system context,” said Fram Akiki back in 2019, when he was vice president for electronics and semiconductor industry for Siemens PLM Software. “It is not just about linking digital twins and digital threads into a digital fabric, but being able to deploy certain techniques, because from a verification standpoint and digital model development the semiconductor industry is probably one of the most mature and has the best expertise of any industry. Being able to take that expertise and deploy it upstream into a system is proving to be powerful. There are some reverse techniques that are also happening, where some of the ways that Siemens has looked at certain issues from a system perspective in a behavioral model are having a lot of applications in the SoC development, particularly as it relates to things such as functional safety.”

Simulation models and digital twins probably need to come into alignment. “There is likely to be convergence around the digital twin,” says Cliosoft’s Rance. “What may drive this to happen even faster are chiplets. Design data management, traceability, reuse, as well as failure analysis have to happen across the entire system flow. Chiplets make it a lot more difficult to track data. Without the right tools in place, you’re tracking things like license agreements and data about system integrations, almost at a paper level or spec level. It is quite archaic when you think of how our world is now. We’re very digital. But this is how a lot of this stuff is still being tracked and managed. It falls through the gaps because, depending on who you are — if you are the systems integrator or the design engineer — you certainly don’t have some of those pieces of paper sitting on your desk.”

Silo busting
The problem with the semiconductor flow is that it has been heavily siloed in the past. Attempts are being made to break some of those silos down. “Take a look at PCB design,” says Hepburn. “You may think that is one homogenous solution. It isn’t. There are design engineers that do logical design, and a different team of design engineers that do physical design. Those are two different systems. And they pass information back and forth. One of the things that’s occurring now is that companies have realized a lot of problems are happening because of that. They can’t see the cycles, the handoffs, or the problems that occur. The people pushing for PLM are basically saying that while your independent systems work well enough at the scale that you’re at right now, they don’t scale to the systems level. And we need things tied into one environment.”

2.5D and 3D systems are putting increasing pressure on those silos, which now spread across PCB design, IC design and manufacturing, and packaging. The required systems are not yet in place, and they are one of the limiters to more widespread adoption.

There are other areas where semiconductors are certainly different from other technology domains. For example, semiconductor manufacturing is often a race to utilize the latest technology, which means additional risks are being accepted.

Designs are often done on pre-release versions of models, where there is a known degree of uncertainty and unexpected problems can turn up at any time.

“This is where you have the transistor architects and the process integrators feeding into the people who are doing the first libraries, who are creating the first ring oscillator and getting an early preview of what a block is going to look like,” says Aveek Sarkar, vice president of engineering for Synopsys’ Custom Design Group. “If I were to lay out this particular sample circuit, from a PPA point of view, are there certain things that we should be doing? The notion of design technology co-optimization is becoming even more important. How are we able to influence the different pieces that have resided in different teams within the organization? We are attempting to bring all of them together, to have an early preview of these effects, and provide that feedback to the process engineers and the architects on the left-hand side of the equation. And we are trying to help them with the right-hand side in a more efficient manner.”

The semiconductor industry has created many processes to help assess the stage of development, the level of risk, the quality, and more. But it has had to constantly update and alter those processes because they originally were developed for individual silos. The industry is increasingly finding that they limit what can be done. Change is happening.

“The electronics world has been from the inside looking out, and PLM has largely been something they are not concerned with,” says Hepburn. “That’s probably going to change.”


Kurt Shuler says:

Good article, Brian!

I’ve tried to address the “silo busting” part of the semi/EDA PLM issue in this video by Ed.

Our customers have requirements to continue to use the “best tool for the job” within their silos while linking the changes/deltas between them to establish and maintain traceability. Semi has so many complex and industry-specific requirements that I doubt the leading “generic” PLM solutions companies such as Dassault, Siemens, PTC, etc., would find it economically viable to serve the semi market. But I could be wrong!
