IP integration drives SoC methodology shift; interconnect challenges abound.
By Ann Steffora Mutschler
With the number of IP blocks being integrated in SoCs today – in some cases as many as 100 blocks in a single chip – SoC design methodologies are shifting to address the new challenges this complexity brings. The good news is that these integration challenges have put a spotlight on the issues—along with the skyrocketing development costs for the creation, qualification, acquisition and integration of IP, which can account for as much as 25% of the total hardware design budget.
“There is an increasing amount of external IP being used and things are moving toward a printed circuit board-type methodology where we have large customers that are making several IPs and buying the rest, but we also have some smaller customers that only make one IP (the differentiator of their chip) and everything else is bought,” noted Charlie Janac, chairman, president and CEO of Arteris.
Design project managers are struggling to know how quickly an SoC can be assembled, what the assembly costs, and how quickly verification can be completed.
“The challenge is to quickly integrate IP that has multiple protocols. IP can be wrapped to change a protocol, but that introduces unwelcome latency and risk. So you really want to use the native IP protocol that the IP comes in because that’s what is proven,” Janac said.
Further quantifying the situation today, Neil Hand, director of product marketing at Cadence Design Systems, pointed out that “at 65nm about a quarter of a $45 million design spend was spent on qualification of IP, which seems a little big, but other customers tell us that for every dollar they spend on acquisition of IP they are spending $2 to $3 to make it work.”
The problem is “there is still a complete lack of consistency between the IP providers. There is no consistent set of standards for deliverables or consistent standards for quality or even what it means to be IP. Some vendors will say it has to be silicon proven when all they’ve actually done is put it in a test environment,” he said.
“Designers spend a lot of time creating the extra views, the extra models, and the extra things to integrate into their design before they can even get it to work. Even if you’ve got ‘silicon-proven IP,’ it doesn’t mean it all works well together,” Hand pointed out.
Mike Gianfagna, Atrenta’s vice president of marketing, believes this trend spells opportunity for EDA. “The shift from authoring to integration demands a new set of design tools to support reuse and integration. IP-XACT is one standard that is helping to drive this. There are others. At the center of the shift is the need for a rapid assembly, prototyping and validation tool set that works at a high level of abstraction on designs that are not yet complete. The need to interface to the software developer is also present here. These new tools will be in high demand, and should command a good average selling price. EDA hasn’t seen new budget dollars for quite a while – this new trend will break that streak.”
Cadence’s answer to this is its Open Integration Platform, whose stated goal is to reduce SoC development costs, improve quality and accelerate production schedules by concentrating on an application-driven development process and encouraging open, standards-based collaboration within an ecosystem of production-proven semiconductor design companies, IP providers, foundries, service providers, EDA vendors and assembly houses. It is part of the company’s EDA360 view of next-gen, application-driven development. Cadence’s recent acquisition of Denali Software fits into this, as well.
The platform includes integration-optimized IP from the company and its ecosystem partners, an Integration Design Environment along with integration services. Cadence mixed-signal (analog and digital) design, verification and implementation products and solutions are the underpinning of the Open Integration Platform, the company said.
At the same time, while it has not been stated directly, Synopsys, with its acquisition of Virage Logic, is also expected to come out with IP subsystem products of its own at some point, possibly arranged around a sophisticated interconnect. ARM and Posedge also provide IP subsystems.
What does complexity mean for the interconnect?
While design teams have been using third-party and internally-developed IP in SoC designs for at least 10 years, what’s changed is that over time they have put more and more IP blocks onto the SoC. Today for high-end parts there could be upwards of 100 blocks on the SoC, according to Rich Wawrzyniak, senior market analyst for ASIC and SoC at Semico Research.
“The connectivity between the IP blocks is absolutely critical because the advantage of the SoC may be that you can put all of these blocks on the same chip to get the performance, but if the interconnect is incorrect, improper or not efficient enough you lose all of the advantages that you just gained. The biggest issues on these things are the type of bus architecture you’re going to be using,” said Patrick Soheili, vice president of marketing and business development at eSilicon.
Technical issues surrounding the interconnect abound, including IP re-use; efficient transport (how quickly data can be moved around the chip); memory bandwidth; the number of gates needed; and routing congestion.
There is also the issue of SoC services. Here, Janac believes the industry has been confused about where those services belong – in the IP, the memory controller or the interconnect. “The network-on-chip (NoC)-type interconnect handles the data transactions (signal packetization), which get the SoC data onto the network, and the wiring and transport services, which move the data all over the chip,” he said, noting that Arteris’ NoC supports SoC services—higher-level functions that control the operation and performance of the SoC. SoC services include quality of service, security, power domain management, frequency domain management and software debug.
“These higher-level functions have an impact on SoC performance, power consumption, security and software quality and belong in the SoC interconnect because they represent SoC system-wide functions that need to go to many parts of the chip. In a multicore SoC, the interconnect is the only part of the chip that sees all of the data traffic and thus is the best place to consolidate these higher level service functions,” Janac explained.
“The NoC-type interconnect is ideal for implementing these services because it implements predictable networking techniques with a relatively modest number of interconnect wires and communications control logic. Individual IPs such as memory controllers and processors no longer see all of the data operations that occur in an SoC, and so they are not the best place to try to implement these types of SoC-wide functions,” he added.
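The quality-of-service function Janac describes can be pictured with a toy model. The sketch below is purely illustrative and hypothetical – it is not Arteris’ implementation, and the `NocRouter` and `Packet` names are invented for this example – but it shows the basic idea: because the interconnect sees every transaction, a router can arbitrate buffered packets by a QoS priority carried in each packet.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Packet:
    """A NoC packet; only priority and arrival order drive arbitration."""
    priority: int                    # lower value = higher QoS priority
    seq: int                         # tie-breaker: preserves arrival order
    src: str = field(compare=False)  # initiating IP block (illustrative)
    dst: str = field(compare=False)  # target IP block (illustrative)

class NocRouter:
    """Toy NoC router: forwards the highest-priority buffered packet first."""
    def __init__(self):
        self._buffer = []      # min-heap ordered by (priority, seq)
        self._seq = count()    # monotonically increasing arrival counter

    def inject(self, src, dst, priority):
        """Accept a packet from an IP block into the router's buffer."""
        heapq.heappush(self._buffer, Packet(priority, next(self._seq), src, dst))

    def arbitrate(self):
        """Select the next packet to forward, or None if the buffer is empty."""
        return heapq.heappop(self._buffer) if self._buffer else None

# Usage: latency-critical display traffic preempts bulk CPU traffic.
router = NocRouter()
router.inject("cpu", "dram", priority=2)
router.inject("display", "dram", priority=0)  # latency-critical
print(router.arbitrate().src)  # "display" is forwarded first
```

In real NoC hardware, arbitration happens per cycle in each router with bounded buffers and additional mechanisms (virtual channels, credit-based flow control); the heap here simply stands in for priority-ordered selection.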
Particularly on the road to 28nm, interconnect issues are a key roadblock along with IP readiness. “What’s challenging about 28nm is obviously the complexity, the readiness, the availability of the IP; the readiness of that IP becomes a major Achilles heel of getting to 28nm,” Soheili said.
In order to reduce the risks, costs and time to market for its customers headed for 28nm, eSilicon is planning to take a platform-based approach in which it would pre-design SoCs.
“This works in particular vertical markets where a customer can look to this platform as a certain percentage of the design already completed. Then the customer’s value-add is in selecting and providing the software stack, the applications, and the drivers all the way up to the system level, along with marketing, channels and winning customers. This could be done in very large-volume markets, in smaller-volume/higher-margin markets, or anywhere in between. The smarts go into finding a superset and going after vertical markets so you do the job once, you get through the silicon validation process and the design process, and then harden it as much as you can before the customer comes in. Now the NRE is down, the engineering time and the risk are down, and the time to revenue is down.”
To be sure, this is an interesting time in the industry as design complexity and vendor consolidation bring up new challenges to address.