Automatically integrating a dozen pieces of IP from disparate sources still isn’t possible, but it’s getting easier.
What exactly does it mean to automate integration? Ask four people in the industry and you’ll get four different answers.
“The key issue is how you can assemble the hardware as quickly as you can out of pre-made pieces of IP,” said Charlie Janac, chairman and CEO of Arteris.
To Simon Rance, senior product manager in the systems and software group at ARM, IP integration simply means putting the pieces together in whatever way they need to be connected.
For others like Drew Wingard, CTO of Sonics, automating IP integration comes down to the need to be able to treat IP blocks as black boxes. When they are described well enough, “if I instantiate them as directed, they just work. And I don’t have to understand the details about how they work.”
Alternatively, Frank Schirrmeister, senior group director, product management in the System & Verification Group at Cadence, said the simplest definition of IP integration automation involves defining the topology of the system at a higher level, together with specifying the registers and interfaces of the blocks. That makes it possible to automatically generate and configure the top-level description for the design.
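As a rough illustration of that definition, the sketch below expands a small, declarative topology description into a top-level RTL wrapper. The data model, instance names, and emitted Verilog are hypothetical simplifications, not any vendor's actual flow; in particular, each "net" here stands in for an entire interface bundle.

```python
# A minimal sketch of topology-driven top-level generation.
# The Block model and the emitted Verilog are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str                                   # instance name
    module: str                                 # RTL module to instantiate
    ports: dict = field(default_factory=dict)   # port -> net

def generate_top(top_name, blocks):
    """Emit a top-level wrapper that instantiates and wires the blocks."""
    nets = sorted({net for b in blocks for net in b.ports.values()})
    lines = [f"module {top_name};"]
    # Each wire stands in for a whole interface bundle (e.g., one AXI port).
    lines += [f"  wire {net};" for net in nets]
    for b in blocks:
        conns = ", ".join(f".{p}({n})" for p, n in sorted(b.ports.items()))
        lines.append(f"  {b.module} {b.name} ({conns});")
    lines.append("endmodule")
    return "\n".join(lines)

blocks = [
    Block("u_cpu", "cpu_core",   {"m_axi": "cpu_to_noc"}),
    Block("u_noc", "noc_fabric", {"s0_axi": "cpu_to_noc", "m0_axi": "noc_to_ddr"}),
    Block("u_ddr", "ddr_ctrl",   {"s_axi": "noc_to_ddr"}),
]
print(generate_top("soc_top", blocks))
```

The point is not the generator itself but the single source of truth: change the topology data, regenerate, and the top level stays consistent.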
Janac pointed out that it is already possible to automate IP integration using existing interconnect technology, which simplifies the process even though it isn’t as simple as pushing a button. The interconnects themselves have become de facto standard approaches to this problem, even though there are no official standards in place.
Mike Gianfagna, vice president of marketing at eSilicon, agreed. “If we are talking about the automation of the generation of IP for chip assembly or place and route, that’s pretty much a solved problem. Everyone today has the equivalent of what looks like a make-file that assembles the chip together and stitches it and pulls from different sources – all that works pretty well. But this wasn’t always true. The place and route flows have become a lot more hierarchical and a lot more automated, so this concept of a centralized cookbook that you can put together with UNIX commands that will pull the right pieces and assemble the chip — that works pretty well.”
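Gianfagna’s “centralized cookbook” amounts to little more than a manifest plus a script. A minimal sketch of the idea, with a made-up manifest format and hypothetical paths:

```python
# A sketch of manifest-driven chip assembly: list where each IP deliverable
# lives, then pull everything into one build area. Paths, IP names, and the
# manifest format are illustrative assumptions, not a real flow.
import shutil
from pathlib import Path

MANIFEST = {
    # ip_name: (source_path, expected_deliverables)
    "usb3_phy": ("/ip/vendor_a/usb3_phy/v2.1", ["rtl", "lib", "lef"]),
    "ddr_ctrl": ("/ip/vendor_b/ddr_ctrl/v5.0", ["rtl", "lib", "lef"]),
    "noc":      ("/ip/internal/noc/latest",    ["rtl"]),
}

def assemble(build_dir="build"):
    build = Path(build_dir)
    for ip, (src, deliverables) in MANIFEST.items():
        for d in deliverables:
            src_dir = Path(src) / d
            if not src_dir.is_dir():
                # Fail loudly if an expected deliverable is missing.
                raise FileNotFoundError(f"{ip}: missing {d} at {src_dir}")
            shutil.copytree(src_dir, build / ip / d, dirs_exist_ok=True)
    print(f"assembled {len(MANIFEST)} IP blocks into {build}/")

if __name__ == "__main__":
    assemble()
```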
John Swanson, senior manager, marketing and business development of DesignWare IP and tools at Synopsys, noted that certain levels of automated assembly are pretty straightforward. “There are common interfaces that can be defined – you can automate that. If you can define connectivity with a rule set, you can automate that, and that gets you a pretty big head start and eliminates a lot of the more tedious parts of a design.”
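A toy example of the kind of rule set Swanson describes: here a single rule, “connect each m_axi_&lt;name&gt; master to the s_axi_&lt;name&gt; slave carrying the same protocol,” does the tedious wiring. The port records and the naming convention are illustrative assumptions.

```python
# A sketch of rule-driven connectivity: automate stitching wherever the
# connection can be expressed as a rule, and flag whatever falls through.
import re

ports = [
    # (instance, port, protocol, direction)
    ("u_cpu", "m_axi_cpu", "axi4", "master"),
    ("u_dma", "m_axi_dma", "axi4", "master"),
    ("u_noc", "s_axi_cpu", "axi4", "slave"),
    ("u_noc", "s_axi_dma", "axi4", "slave"),
]

def connect_by_rule(ports):
    """Pair m_axi_<name> masters with s_axi_<name> slaves, same protocol."""
    slaves = {(re.sub(r"^s_axi_", "", p), proto): (inst, p)
              for inst, p, proto, d in ports if d == "slave"}
    nets = []
    for inst, p, proto, d in ports:
        if d != "master":
            continue
        key = (re.sub(r"^m_axi_", "", p), proto)
        if key in slaves:
            s_inst, s_port = slaves[key]
            nets.append((f"{inst}.{p}", f"{s_inst}.{s_port}"))
        else:
            print(f"warning: no slave matches {inst}.{p}")  # needs a human
    return nets

for m, s in connect_by_rule(ports):
    print(f"{m} -> {s}")
```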
Where he sees engineering teams really benefiting from the automation now is if they have a subsystem put together targeted at a specific market. “They can start playing with it and trying different things; different configurations, different hierarchies to look at timing closure — and that is where you see more of the benefit from automation. [However,] being able to take 12 IP blocks from various IP vendors including internally, having a tool magically put it together and have a working design is not there yet.”
Companies such as ARM are trying to get there, however. Starting with its own IP, the company has technology to automate integration, and it is working with standards organizations to include IP from other sources.
“Depending on what type of IPs those are and how they need to be connected together, there is another level of decision making around that IP integration as to whether a type of interconnect needs to be inserted between the IPs being integrated, or whether it is simple enough that it can just be wire-to-wire stitching,” Rance explained. “When you look at IP integration from an interconnect standpoint, it is critical to determine the coherent and the non-coherent aspects, as there are architectural decisions that have to take place right up front before integrating IP together. Which IPs sit in the coherent datapath, and which ones sit in the non-coherent datapath? And then, what types of interfaces are typically good at handling cache coherency for both types of datapaths?”
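A sketch of that up-front sort, assuming the conventional AMBA choices (ACE for caching masters that share data, ACE-Lite for I/O-coherent masters, plain AXI4 for the non-coherent datapath); the IP list and its attributes are invented for illustration.

```python
# A sketch of the coherent / non-coherent datapath decision: classify each
# IP from two attributes and pick an interface family. The IP records and
# the protocol assignments are illustrative assumptions.
ips = [
    {"name": "cpu_cluster", "caches": True,  "shares_data": True},
    {"name": "gpu",         "caches": True,  "shares_data": True},
    {"name": "dma",         "caches": False, "shares_data": True},
    {"name": "uart",        "caches": False, "shares_data": False},
]

def plan_datapaths(ips):
    plan = {}
    for ip in ips:
        if ip["caches"] and ip["shares_data"]:
            # Caching IP that shares data needs full hardware coherency.
            plan[ip["name"]] = ("coherent", "ACE")
        elif ip["shares_data"]:
            # Non-caching but sharing: I/O-coherent access is enough.
            plan[ip["name"]] = ("coherent", "ACE-Lite")
        else:
            plan[ip["name"]] = ("non-coherent", "AXI4")
    return plan

for name, (path, iface) in plan_datapaths(ips).items():
    print(f"{name:12s} -> {path:13s} datapath, {iface} interface")
```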
Interconnect as automation methodology
Given that the semiconductor industry works with pre-made blocks, the fastest way to get them to work is if those blocks can hook into the interconnect, Janac said. “That’s why there should be one or two, but certainly no more than three interconnect infrastructures. It’s a hard technical problem to solve, and if the interconnect doesn’t work the chip doesn’t work. It takes time to get the interconnect right, and it takes money and customer experience. You have to become very good at IP protocol conversion.”
This protocol challenge is no small task. Jason Polychronopoulos, product manager for Verification IP at Mentor Graphics, noted that protocols such as PCIe and AMBA have been around for a long time, but engineering teams tended to go their own way on the smaller buses. “There were some fairly old and fairly long-in-the-tooth standards, but they were simple enough interfaces that it didn’t matter too much. You could get by using those. But the increase in complexity and the increase in the amount of reuse of IP and IP integration between companies, combined with the need for lower power and higher efficiencies, brought people back together away from some proprietary things they may have had cooking within their own project groups to doing things like the MIPI group did, where they standardized on things.”
Polychronopoulos said that ultimately, for IP blocks to work together, the interface has to be defined. “That’s just fundamental. Beyond that, there are things that you need to do to make sure that behaviorally they work together, and then just plug them together. There are tools that can help with plugging them together, and people use things like XML to define the configuration of things.”
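For instance, a simplified, IP-XACT-flavored XML description (not the real schema) can be read back by a few lines of tooling; the component names and attributes below are invented for illustration.

```python
# A sketch of XML-driven configuration: describe each component's bus
# interfaces in metadata, then let a script extract them for stitching
# or checking. The XML format here is a simplified stand-in for IP-XACT.
import xml.etree.ElementTree as ET

DESIGN = """
<design>
  <component name="uart0" module="uart">
    <busInterface name="apb" type="APB4" role="slave"/>
  </component>
  <component name="bridge0" module="axi2apb">
    <busInterface name="axi" type="AXI4" role="slave"/>
    <busInterface name="apb" type="APB4" role="master"/>
  </component>
</design>
"""

root = ET.fromstring(DESIGN)
for comp in root.iter("component"):
    for bus in comp.iter("busInterface"):
        print(f'{comp.get("name")}.{bus.get("name")}: '
              f'{bus.get("type")} {bus.get("role")}')
```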
Even so, when it comes to automating IP integration, Sonics’ Wingard asserted that it does work, and that this is the idea behind the on-chip interconnect. He pointed to the work Sonics did originally with the VSIA organization, which resulted in the OCP-IP consortium – a bit like VSIA, but more practically usable. “All the work around that was about how to enable our customers to create libraries of truly reusable, automatically integratable IP – and we made it a pretty long way down that path. We got a chance to pioneer some pretty interesting stuff. What the SPIRIT Consortium ended up standardizing in terms of IP-XACT was the concept of meta-data representation of the choices the designer makes in constructing a system or an SoC, or in configuring an IP block, and all those kinds of things. We were doing that in 1997/1998 because we were building configurable IP and configurable interfaces. In the OCP spec, there was a whole lot of stuff around how to do packaging, and the trade organization was relatively unique in that its main deliverables beyond the specification were protocol checkers – what we now call transactors – for doing verification, and an ESL library of the first transaction-level models for doing higher-level system modeling.”
Mindset change required
This approach falls short beyond the digital realm, however. And the idea that IP blocks can be grabbed, integrated once, and then the job is done requires a different mindset. What’s even harder to grasp is that the IP can be updated throughout the design project and the integration still works.
“Typically there are multiple versions of a newly developed IP block, or one being modified, that are going to be delivered for integration at different stages of the project as that hardware piece matures,” said Wingard. “Usually, there are going to be multiple iterations of the top level of the design that are done while a lot of the components are almost black boxes, but people want to start early floorplanning so they can try to figure out what the patterning is going to look like because of the package to be designed.”
At the end of the day, what the engineering team cares about is being able to prove that the integration is correct, that the different modes work, and that the different register maps work, said Schirrmeister. “System integration automation really means being able to generate the configured RTL topology for the system from a higher-level description of the system’s topology and a reusable specification of its blocks.”
He said this is where you solidify the block diagram you put on your whiteboards. “On your whiteboard you have the block diagram. On the chip design side, we had a Word document called interface.doc, which was only accessible by two people on my team. It would define the interface for each block and describe which interfaces talk to each other. That’s what IP integration automation means to me.”
This definition may be different from an implementation perspective because some of the IP may be hardened and only available as layout. “The challenge here is that you want a top-level description where you avoid having somebody type it in, mistype a name, and then have to go find that bug. And you find it because simulation doesn’t start up, because some acknowledgement signal isn’t exactly right. These are the ones you want to avoid,” Schirrmeister continued.
This fits into the idea of a verification continuum (sometimes referred to as Shift Left, or as a superset of Shift Left), and how to assemble the design and reassemble the environment it is connected to as the design moves through that continuum. “It might be as simple as saying I assemble the design and build the verification environment on top of it semi-automatically. Then when you run verification you’re not always running verification on the full chip – you’re typically only taking the portion of the chip that is relevant for that verification task. Now you want to configure this in a way that you can take snapshots of it and of different verification aspects, and you want to be able to reconfigure it to build your new test case. But in terms of automation of IP integration, it’s actually more complicated than just that. For each configuration you want to be able to pick up the right verification environment. That’s where people come to the idea of a simulation-less integration of the whole environment, where you essentially check with tools, if the IP is described correctly, that all the interfaces and the assembly are actually correct, and you don’t find simple bugs in a very time-consuming fashion.”
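A minimal sketch of such a simulation-less check, assuming each IP ships with interface metadata: the tool pairs up connections and flags protocol, width, or role mismatches statically, before any simulation bring-up. The data model, instance names, and rules below are illustrative assumptions.

```python
# A sketch of static interface checking: catch the "acknowledgement signal
# isn't exactly right" class of bug from metadata, not from a dead simulation.
interfaces = {
    # (instance, port): (protocol, data_width, role)
    ("u_cpu", "m_axi"):  ("AXI4", 128, "master"),
    ("u_noc", "s0_axi"): ("AXI4", 128, "slave"),
    ("u_noc", "m0_axi"): ("AXI4", 64,  "master"),
    ("u_ddr", "s_axi"):  ("AXI4", 128, "slave"),   # width mismatch below
}

connections = [
    (("u_cpu", "m_axi"),  ("u_noc", "s0_axi")),
    (("u_noc", "m0_axi"), ("u_ddr", "s_axi")),
]

def check(connections, interfaces):
    errors = []
    for a, b in connections:
        pa, wa, ra = interfaces[a]
        pb, wb, rb = interfaces[b]
        if pa != pb:
            errors.append(f"{a}->{b}: protocol mismatch {pa} vs {pb}")
        if wa != wb:
            errors.append(f"{a}->{b}: width mismatch {wa} vs {wb}")
        if {ra, rb} != {"master", "slave"}:
            errors.append(f"{a}->{b}: needs one master and one slave")
    return errors

for e in check(connections, interfaces):
    print("ERROR:", e)   # found statically, in seconds rather than in debug
```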
Ultimately, the question in all of this boils down to what users value most. “Typically they don’t value the editor too much, where you do all the design entry. They value the results you achieve with it. This is all about the database being able to regenerate different configurations of a design fast, and being able to layer things like the verification environment on top of it,” Schirrmeister concluded.
Again, while the definitions vary, the industry is converging around the same general principles for automated IP integration. Ultimately, the engineering users will decide what sticks.
Ann – good topic and one that has been around for a while, as many of your commenters have observed. One topic I didn’t see mentioned was the ROI for automating assembly. Back when lots of big semis were building platforms, this seemed like a good bet. Many planned to build a superset SoC which accommodated every likely configuration, then use knobs on the “configurator” to dial in what you wanted for the next market variant you wanted to target. But that sort of fell apart when it became apparent there were only going to be 2-3 winning application processors and everyone else had to rethink where they fit. Tag onto the Apple or Samsung mobile solutions? IoT? Automotive? Go analog?
Add to that some significant complexity around the bits that are still very much in-house: a lot of the bus, test/debug, power management, IO, … Many of these have custom in-house scripting solutions that work well today with conventional flows. Why re-architect all of these and re-train the integration teams to a new methodology (probably not without bugs) when everyone is scrambling to make sure they remain relevant in a radically changed market? I know of several big design companies who were wholly committed to a more standardized assembly automation who have either canned those programs or are seriously rethinking the value.
Don’t get me wrong – automation would be a wonderful thing in the right market – a stable, slowly evolving market where we know pretty much what the next 5-10 years look like and at least some leaders have time to re-tool to a new methodology. But that isn’t today’s market. In the list of priorities that keep a typical CEO awake at night, I don’t imagine improving integration productivity ranks very high.