Version Control Nightmares

The rampant re-use of IP and the growing reliance on software to smooth over glitches is creating a nightmare in version control of everything from IP blocks to EDA tools.

By Ed Sperling
Version control has always been a problem in SoC design, of course. Tools have to be in sync with engineering teams that are spread across multiple continents and working on different pieces of the design either concurrently or in tandem. But the problem has become much worse as companies stress the re-use of IP and the integration of third-party IP, and as design teams are forced to deal with more complex issues such as power and proximity effects and the integration of more software throughout the process.

“The more programmable a device is, and the more operating modes it has, the more combinations you have to deal with,” said Chris Rowen, chief technology officer at Tensilica. “There’s a combinatorial explosion of how the pieces fit together. You have upgrades on everything, and then you simultaneously change some things and have to guarantee that all the new things will work with the old things.”
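Rowen’s point is easy to quantify. A toy sketch (all component names and version counts below are invented for illustration) shows how quickly the space of configurations that must be qualified grows:

```python
from itertools import product

# Hypothetical upgrade matrix: each entry lists the versions of one
# piece of the design that must be proven to work with the others.
versions = {
    "cpu_ip":     ["v1.0", "v1.1"],
    "firmware":   ["fw_a", "fw_b", "fw_c"],
    "power_mode": ["sleep", "active", "boost"],
    "eda_tool":   ["2011.03", "2011.06"],
}

# Every combination is, in principle, a configuration to validate.
combos = list(product(*versions.values()))
print(f"{len(combos)} combinations to qualify")  # 2 * 3 * 3 * 2 = 36

# Adding one more two-version component doubles the space again.
```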

The problem is exacerbated by the addition of standards and open source, as well.

“This gets doubly hard when hardware, firmware or some tool or module gets added in later,” said Rowen. “All the interfaces are now standardized, but sometimes they don’t conform to the standard. You have to try them all out and test them, and if something doesn’t work you have to modify it. Open source contributes to this, too. You must stay current in open source but you don’t always know what changed. It’s written in some note somewhere, but that’s not always easy to find.”

Derivatives
Version control is particularly difficult in the realm of complex SoCs where derivative chips are now an economic necessity. The cost of developing chips at advanced process nodes can run as much as $100 million. It’s almost impossible to generate enough volume from a single chip to pay for those development costs, but it can be feasible if there are also derivative chips. Unfortunately, it also makes keeping track of different versions of just about everything much more difficult.

“We’re doing an ASIC with 25 sub-chips,” said Marco Brambilla, ASIC design manager at STMicroelectronics. “To do this you start with the 95% netlist, and then after one week you have another version of block one and another version of block two. Those versions might change your floor planning significantly, though. At the end, all you can do is hope you didn’t mix anything up. Once you get to a 100% netlist you do a formal check against your golden RTL. If anything slipped through the cracks you’re doomed. You may have to re-do a block. Especially with the back-end data, it’s very difficult.”
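Part of the bookkeeping Brambilla describes can be reduced to a simple discipline: record a fingerprint of each sub-chip netlist at sign-off and flag anything that has drifted since. A minimal sketch, with invented file and manifest names (this catches drift before tape-out; it is not a substitute for the formal check against the golden RTL):

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    """Content hash of one sub-chip netlist file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_against_golden(netlist_dir: Path, manifest_file: Path) -> list[str]:
    """Return the sub-chips whose current netlist no longer matches the
    version recorded when the floorplan was last signed off."""
    golden = json.loads(manifest_file.read_text())  # {"block01.v": "<sha256>", ...}
    return [name for name, sha in golden.items()
            if digest(netlist_dir / name) != sha]

# Hypothetical usage: 25 sub-chips recorded in golden_manifest.json
stale = check_against_golden(Path("netlists"), Path("golden_manifest.json"))
for block in stale:
    print(f"re-verify floorplan impact of {block}")
```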

The problems don’t stop there, either. Large chip developers often try multiple different options in developing blocks.

“You may have someone working on five different versions of a block,” Brambilla said. “We have to run it in parallel with Synopsys, Cadence and Mentor and pick the one that gives us the best result. With each of those, you try a couple of floor plans, too.”
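A rough sketch of that selection loop (vendor names invented, and a random number standing in for the real quality-of-results parsing) shows how the parallel variants multiply into artifacts that all need tracking:

```python
import itertools
import random

# Stand-in for launching one synthesis/place-and-route run and parsing
# its QoR report; a real version would shell out to the vendor tools.
def run_flow(vendor: str, floorplan: str) -> dict:
    return {"vendor": vendor, "floorplan": floorplan,
            "timing_slack_ps": random.randint(-50, 120)}  # fake QoR number

vendors = ["vendor_a", "vendor_b", "vendor_c"]  # invented names
floorplans = ["fp1", "fp2"]

# Every (vendor, floorplan) pair is a separate result tree to archive and
# version -- six variants for a single block in this toy case.
results = [run_flow(v, f) for v, f in itertools.product(vendors, floorplans)]

best = max(results, key=lambda r: r["timing_slack_ps"])
print(f"keep {best['vendor']}/{best['floorplan']}: "
      f"slack {best['timing_slack_ps']} ps")
```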

Tool updates and dependencies
Making matters worse is the steady stream of updates that are released for all software, including EDA tools. Existing tools need to be modified to deal with such changes as 3D stacks and new manufacturing processes, and point tools typically are modified multiple times until all the bugs are worked out.

“I have seven ASICs in development and we buy the Text Find from Cadence,” said Brambilla. “So now Cadence finds a bug in their tool and sends us a release. This affects IP in a repository here, and designers need to check in the version they checked out. But I also want to make sure that whenever a modification was done somewhere else it gets modified here, too.”
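One common remedy, sketched below with invented project and tool names, is to pin each project to a specific tool release and flag any IP check-in built against a superseded one, so a vendor’s bug-fix release forces the affected blocks to be rebuilt and re-checked in everywhere they live:

```python
# Hypothetical manifest pinning tool releases per ASIC project.
PINNED_TOOLS = {
    "asic_a": {"synthesis": "10.1.2"},
    "asic_b": {"synthesis": "10.1.3"},
}

# IP checked into the repository records the tool release it was built with.
CHECKINS = [
    {"ip": "usb_phy",  "project": "asic_a", "synthesis": "10.1.1"},
    {"ip": "ddr_ctrl", "project": "asic_a", "synthesis": "10.1.2"},
]

def stale_checkins(pins, checkins):
    """Flag IP built with a tool release other than the project's pin."""
    for c in checkins:
        pinned = pins[c["project"]]["synthesis"]
        if c["synthesis"] != pinned:
            yield c["ip"], c["synthesis"], pinned

for ip, used, pinned in stale_checkins(PINNED_TOOLS, CHECKINS):
    print(f"{ip}: built with {used}, project pins {pinned} -- rebuild and re-check in")
```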

The same applies to IP. Soft IP, in particular, is updated regularly. In a global design operation, keeping track of different versions of IP across different IP repositories is no trivial matter. Changes in any part of that IP may affect the overall chip’s functionality, or it may have no effect until something else is changed.

“If you shake something on this tree, what else shakes?” asked Mike Gianfagna, vice president of marketing at Atrenta. “What’s linked to what? And you need to think about these things in a framework that’s multinational and multi-company. There are companies that have built tools like this for large-scale development projects, but they haven’t typically been used for chips. Maybe the time has come.”
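Gianfagna’s tree-shaking question maps directly onto a dependency graph: record which blocks consume which IP, and a change to one node yields the transitive set of everything that must be re-verified. A minimal sketch, with invented block names:

```python
from collections import deque

# Hypothetical "what is linked to what" edges: IP -> blocks that consume it.
DEPENDENTS = {
    "pll_ip":     ["clock_tree"],
    "clock_tree": ["cpu_subsys", "ddr_ctrl"],
    "cpu_subsys": ["top_soc"],
    "ddr_ctrl":   ["top_soc"],
}

def shake(node: str) -> set[str]:
    """Everything that shakes when `node` changes: a breadth-first walk
    over the transitive dependents."""
    seen, queue = set(), deque([node])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(shake("pll_ip"))  # {'clock_tree', 'cpu_subsys', 'ddr_ctrl', 'top_soc'}
```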

He noted that these kinds of tools have been used effectively by other industries such as nuclear power plant design, where there are typically thousands of pages of documentation.

“If something changed on page 700, what does that mean? Up to now, a lot of chip design has been ‘shoot from the hip,’ with the understanding that the chip is only out for six months and then you move on to the next one. That has to change. Someone has to put an infrastructure in place, and to manage it and pay for it,” he said.

Who’s responsible?
But who’s actually going to pay for developing that kind of system? In the EDA industry there has never been an economic return on such tools. While chipmakers will pay for tools to design and verify their chips, far fewer companies are willing to invest in complex inventory management tools.

“There’s been a lot of money lost in EDA trying to build a configuration management capability,” said Charlie Janac, president and CEO of Arteris. “We’re on the receiving end of this. The network on chip is one of the first things defined. There is memory interleaving and all these kinds of architectural issues. Then it’s the last thing that gets modified, because all this space left over from the IP is allocated to the interconnect. So when people make mistakes in configuration management, it’s cheaper to get Arteris to fix it and to make accommodations than to fix the IP.”

Conclusions
Version control problems will become much more democratized over the next few years. It won’t be just the largest companies that are wrestling with these issues, particularly as 3D stacking becomes mainstream and as more software content and third-party IP become required in developing chips.

The challenge, however, will be keeping track of all the changes and what effects those changes will have on the functioning of a chip. The complexity is reaching far beyond the capabilities of a spreadsheet and even the most organized oversight, and the problem will continue to grow until the pain becomes almost universal. At that point, perhaps, there will be enough perceived opportunity to develop these kinds of tools. But even that is far from certain.


