Engineering Change Orders Revisited

More third-party IP helps in some cases, but it creates problems in others.


By Ed Sperling
The perennial nightmare of the marketing head reporting that a customer will buy a design—but only if it fits into a specific power envelope or has better performance or I/O—is all too familiar to engineering teams.

In theory, using more third-party IP should help alleviate this problem because the IP can be changed out relatively easily. The reality, though, is that swapping out IP is not so easy. Moreover, the push to add more functionality into SoCs has created dependencies and interrelationships between blocks that frequently create ripple effects throughout the rest of the design. And because of the close link between design and manufacturing these days, those ripple effects can extend well beyond tapeout.

Issues at the front end
Choosing IP always has been a tradeoff between a proven track record and differentiation, as well as what companies already have developed in-house and don't have to pay for. As more functionality is added into SoCs, though, the amount of third-party content is rising. That has left many companies in a quandary about which IP to use in designs, both from a power/performance standpoint and from the standpoint of the best way to win a socket in a design with the least likelihood that an ECO will force an IP swap.

“The big vendors sell more standards-based IP in higher volume, and with that you generally get higher quality and lower risk,” said Hans Bouwmeester, director of IP at Open-Silicon. “But IP that’s sold in less volume is more differentiating and can provide a greater value add. IP also is sold by a spectrum of companies. There’s custom IP, which is more like a design service and often not standards-based. On the other side, you have companies like Synopsys and ARM, which operate more like a grocery-store model. The flexibility for customization is lower, but it’s easier to integrate.”

Once an engineering change order is handed down, the choice still can go either way. If time is critical, the most likely option will be the more standardized IP—but not necessarily. Differentiation can win additional business, which favors customized IP.
And despite the widespread deployment of standard IP, neither choice is a guarantee that an SoC will work using the new IP.

“We don’t verify other companies’ IP, but there are situations where they’re using IP from Synopsys and someone else’s IP on the physical layer is supposed to talk to it,” said Navraj Nandra, senior director of marketing for DesignWare Analog and MSIP at Synopsys. “This is where the concept of a subsystem comes in. You’ve got various pieces in there—physical, software and MAC (media access control)—and how you make sure the pieces work together is by having interfaces understand what they’re connected to.”

But not every ECO is a significant problem. A standard IP swap out in a design with well-documented pins and interface standards, for example, may be relatively straightforward. Adding features to a design is definitely not.

“One of the keys is that you’ve got to have very experienced engineers to adapt to the known and the unknown,” said Nandra. “Young engineers, no matter how good they are, tend to get bogged down in details. You need to understand what’s really needed and then you need to be able to articulate the benefits to the customer. There needs to be quite a bit of dialog between the R&D team and the customer on tradeoffs, and those conversations have to start way early with the fab and the customer. The customers expect IP to work on the first instantiation. It requires collaboration. It takes a village to get all of this working.”

For one thing, ECOs used to be considered place-and-route problems. That description no longer applies for advanced SoCs, where simply swapping out one IP block for another doesn’t work.

“One of the big problems is that many times assumptions are not written down for designs,” said Pranav Ashar, Real Intent’s CTO. “Unless you document everything clearly, the chip might malfunction. You need to make sure all of the parameters are known up front or you can create a verification nightmare. If you’re doing a wholesale ECO, you need full-chip signoff. All of the assumptions you’re making need to be factored into verification. Assumptions are typically where corner cases fail.”
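Ashar's point about undocumented assumptions can be made concrete: if each IP block exports its interface parameters in machine-readable form, a simple connection check can flag mismatches up front instead of letting them surface as verification corner cases. The sketch below is illustrative only; the field names and parameter values are hypothetical, not drawn from any real IP catalog or tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortSpec:
    """Interface assumptions an IP block exports (all fields hypothetical)."""
    protocol: str       # e.g. "AXI4"
    data_width: int     # bus width in bits
    clock_domain: str   # named clock this port assumes

def check_connection(a: PortSpec, b: PortSpec) -> list:
    """Return a list of violated assumptions for a proposed connection."""
    issues = []
    if a.protocol != b.protocol:
        issues.append(f"protocol mismatch: {a.protocol} vs {b.protocol}")
    if a.data_width != b.data_width:
        issues.append(f"width mismatch: {a.data_width} vs {b.data_width}")
    if a.clock_domain != b.clock_domain:
        issues.append(f"undeclared clock-domain crossing: "
                      f"{a.clock_domain} vs {b.clock_domain}")
    return issues

# A swapped-in block after an ECO doubles the data width without warning.
old_ip = PortSpec("AXI4", 64, "clk_sys")
new_ip = PortSpec("AXI4", 128, "clk_sys")
print(check_connection(old_ip, new_ip))  # → ['width mismatch: 64 vs 128']
```

The value is less in the check itself than in forcing the assumptions to be written down at all, which is exactly the gap Ashar describes.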

Issues at the back end
The trouble continues at the back end. What used to be a linear progression of steps—passing from one step in the flow to the next—is now much more concurrent due to time-to-market concerns. Software is developed almost from the outset using virtual prototypes, and design for manufacturing links a foundry’s capabilities and restrictions to the design teams.

That sounds straightforward enough, but ECOs can turn this finely tuned machine into a nightmare. Changes at the front end need to be propagated to the back end with an awareness of what's happening there, and typically that's not the case.

“The problem we see with ECOs is that you didn’t design it to do that,” said Michael Buehler-Garcia, director of Calibre Design Solutions marketing at Mentor Graphics. “Just look at it from the physical perspective. You’ve got hard IP that’s certified early in the process. It’s certified early because you need to build the test chips. But they’ve waived some DRC and DFM rules before the IP was frozen and certified. That fires up errors, but it’s okay because the foundry waived it and there’s a tracking system for that.”

So what happens when other parts of the design are changed at the last minute? Those waivers either no longer apply, or they don’t necessarily get considered further upstream.
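One way to keep a waiver from silently outliving the design state it was granted against is to tie each waiver to a fingerprint of the waived block, so any last-minute edit invalidates it automatically. This is a minimal sketch of that idea, assuming a hypothetical tracking scheme; the rule names and block data are invented for illustration.

```python
import hashlib

def fingerprint(layout_blob: bytes) -> str:
    """Content hash of a block's physical data at the time a waiver was granted."""
    return hashlib.sha256(layout_blob).hexdigest()

# Waiver records: rule name -> fingerprint of the block state the foundry reviewed.
waivers = {"DRC.M2.spacing": fingerprint(b"ip_block_v1_gds")}

def waiver_still_valid(rule: str, current_blob: bytes) -> bool:
    """A waiver applies only if the waived block is byte-identical to what was reviewed."""
    granted = waivers.get(rule)
    return granted is not None and granted == fingerprint(current_blob)

print(waiver_still_valid("DRC.M2.spacing", b"ip_block_v1_gds"))  # unchanged block
print(waiver_still_valid("DRC.M2.spacing", b"ip_block_v2_gds"))  # last-minute ECO edit
```

With a check like this in the signoff flow, a late change forces the waiver back through review rather than letting it be carried forward by default.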

Buehler-Garcia noted that stacking die—particularly once platforms are available—could help alleviate some of the problems with ECOs. But he said the infrastructure to make that happen will take years to develop. “This is really in its infancy right now.”

Issues in the middle
A third piece that is affected by ECOs is the part between the front-end design and the back-end manufacturability—the bus or on-chip network that serves as the signal path for an SoC.

“We find that about halfway to two-thirds of the way into the design process, when the marketing guy comes in wondering, ‘If only we could do this,’ that’s the most damaging to design teams,” said Drew Wingard, CTO at Sonics. “The cost to get through those changes is huge. It starts with the hardware, because SoCs are limited by the memory system. So if they’re sharing memory, what else might be damaged? The whole goal is to allow designers to minimize the ripple effect. That means quality of service allocates bandwidth, using firewalls to protect data, and an arbitration topology that is separate from the physical topology.”
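Wingard's point that quality of service allocates bandwidth separately from the physical topology can be illustrated with a toy arbiter: per-initiator weights live in a table that an ECO can retune without touching any wiring. The initiator names and weights below are hypothetical, not taken from any real SoC or from Sonics' products.

```python
def build_schedule(weights: dict) -> list:
    """Expand per-initiator QoS weights into a static arbitration schedule.

    Each initiator gets a number of memory-access slots proportional to its
    weight; changing the dict changes bandwidth shares, not the topology.
    """
    slots = []
    for initiator, weight in weights.items():
        slots.extend([initiator] * weight)
    return slots

# CPU gets half the memory bandwidth, GPU a third, DMA the rest.
schedule = build_schedule({"cpu": 3, "gpu": 2, "dma": 1})
print(schedule)  # → ['cpu', 'cpu', 'cpu', 'gpu', 'gpu', 'dma']
```

A real NoC arbiter would interleave the slots and handle backpressure, but the separation of concerns is the same: the QoS table absorbs the marketing-driven change, and the physical network is untouched.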

In the past, one approach to dealing with ECOs was the superchip concept, in which the final feature set is arrived at by subtracting existing features rather than adding new ones. But that approach doesn't work anymore, because the extra margin eats up both power and performance and adds to the overall cost: a triple whammy for the entire power-performance-area equation.

Subsystems will help to some extent. Better partitioning also can help. But an ECO can affect multiple layers in a stack, which is one reason the on-chip network concept has taken root at both Sonics and Arteris. By establishing a network topology, the design can be reconfigured more easily, provided the changes come early enough. Too late in the process and the price goes up, the layout gets messy, and market windows are missed. After that, where the blame falls depends on the organization.
