First of two parts: Systems vendors used to take their lead from chipmakers. Not anymore.
Throughout the PC era and well into the mobile phone market, it was semiconductor companies that called the shots while OEMs followed their lead and designed systems around chips. That’s no longer the case.
A shift to reverse that trend has been underway for the past half decade, and it continues today. The OEM — or systems company, as it is more commonly called now — determines what features go into a system, often based on what software will be needed or what application it will serve. Only then is the hardware specification developed, including the performance characteristics, the power budget and the cost target.
This is a significant change, and it is unfolding over a period of years—sometimes unevenly, sometimes in unexpected ways, and usually in small pieces that don’t provide a clear overall picture of what’s happening. And while it doesn’t necessarily make hardware any less valuable — semiconductors are at the center of every major development and shift in technology — it does affect ecosystems for everything from IoT appliances to consumer electronics to automotive subsystems, and the underlying IP and design strategies that are used to create them.
“We’ve been expecting that as the pendulum swings back, from standard-product silicon companies that control everything and almost hollow out the OEMs, to OEMs having to be responsible for more of the hardware content of their devices, it would bring forth a bunch of new ways of thinking about what we used to call ASIC technology,” observed Drew Wingard, CTO of Sonics.
Apple is the first company that comes to mind as an example of this kind of change—a systems company that also develops its own chips—but the recent wave of consolidation is an indication that far more change is on the way.
“Apple is at the thin edge of that wedge, but they did it in such a fashion that they basically absorbed the ecosystem that they created,” said Wingard. “For the rest of the world, we see systems companies all over the place that are finding they can’t quite get to what they want by buying what’s off the shelf or just coming off the shelf from their favorite semiconductor vendors. That puts some stresses into their systems because many of those companies had gotten rid of their ability to design and make chips, so there’s this very open question about whether they are [getting a return on investment]. Many of them already were working with semiconductor companies in a collaborative fashion around the specifications for a chip. But they were buying that chip as a standard product, which meant that they didn’t necessarily get a better price than anybody else on it, and then their competition could buy it.”
The initial solution to the standard-product conundrum was what was known as the super-chip concept, in which certain features of a “standard product” were turned on for some customers and left off for others. The successor to that is a base platform, developed by a design services company, that becomes the starting point for a more customized solution.
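In practice, the super-chip model often amounts to a single die that contains every feature, with per-customer enablement handled through fuses, straps or configuration registers at production time. The following is a minimal C++ sketch of that idea; the register layout, feature names and SKU policy are hypothetical, invented purely for illustration.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical feature-enable mask, modeled loosely on the eFuse/strap-pin
// schemes used to differentiate "super-chip" SKUs. Bit positions and
// names are invented for illustration.
enum FeatureBits : uint32_t {
    FEATURE_GPU    = 1u << 0,
    FEATURE_CRYPTO = 1u << 1,
    FEATURE_HSIO   = 1u << 2,  // high-speed I/O lanes
};

// On real silicon this would read a fuse or OTP register programmed at
// final test; here it is simply a constant standing in for one SKU.
uint32_t read_feature_fuses() { return FEATURE_GPU | FEATURE_HSIO; }

bool feature_enabled(uint32_t fuses, FeatureBits f) { return (fuses & f) != 0; }

int main() {
    uint32_t fuses = read_feature_fuses();
    // Firmware gates each block on the fuse mask, so one die can ship
    // as several differently priced standard products.
    if (feature_enabled(fuses, FEATURE_CRYPTO))
        std::puts("bringing up crypto engine");
    else
        std::puts("crypto engine fused off for this SKU");
    return 0;
}
```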
“There is another business model where the systems companies grow their own silicon design capability,” Wingard said. “The companies embracing this approach tend not to want to do it in the same fashion that the fabless chip companies do it, where they try to do everything — the notable exception being Apple. They tend to do it in a fashion where they are really trying to bring in the front end design of the chip and the system design, making sure that the chip as it is delivered is going to plug well into a board and into a case with the appropriate cooling technologies and all those things, and to have the appropriate system power consumption. That one is most exciting because it has the maximum freedom, but now the question is, are those players going to build a $100 million capability? They probably don’t want to. We find the value of abstractions, the automation, the productivity, the visualization — we find those technologies to be very important in helping people do that class of design.”
What does the customer really want?
Shifting business models change the support ecosystem, as well. They affect the chip design strategy, and they affect the entire design flow — and they raise a lot of unanswered questions.
“We are putting ourselves in our customer’s shoes and trying things out,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “We try to replicate that internally, so we have as part of our agreement with ARM, for example, access to their internals so that we can do validation of our tools with it.”
Cadence has a team building designs for the specific purpose of validating tools and infrastructure. What they’ve learned is that a fair number of different suppliers must be considered both on the IP side and the tool side. Schirrmeister noted that in collaborating with ARM, for instance, Cadence has been able to see how the ARM Fast Models connect into simulation and emulation as a way to make sure that those interfaces are pre-validated.
Simon Rance, senior product manager, systems and software group at ARM, agrees that relationships between ecosystem players have changed, particularly as systems companies are taking back more of the responsibility for the system integration.
ARM’s IP comes in a number of different flavors, but the company also develops what it calls “system IP.” That is deliberately named, he explained, “because if you look at that IP by itself, it’s either a subsystem or it would be tightly integrated with some surrounding IP in the overall system. In the system IP aspect we’re looking for two things — what’s contained in a bag of IP components all integrated together in a particular architecture to create some sort of system IP, and then how would that be associated with its surrounding IP in the next level of system hierarchy.”
Making IP more configurable has been both a statement of direction and a challenge for IP companies. It’s tough to do because IP vendors don’t know how and where their IP will be used, which makes it difficult to optimize and to characterize — but it’s also considered a big opportunity.
“The difficulty for providers of hardware IP is they often have to create this IP and supply it not knowing how it’s going to be used in some end system by, let’s say, a Qualcomm or an Avago,” said Rance. “They’re literally creating all of these configuration options and then saying to those types of companies, ‘These are all of your options. Go configure it how you want for the rest of the system, with all of the other IP that surrounds it.’ That’s when you start getting into all of those levels of software-type configurations that go with the hardware IP. It’s becoming a challenge. We’re having to provide more guidance on what the valid configurations are, because otherwise the IP just may not work the way it’s intended in the overall system with the other IP around it.”
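One plausible way to “guide valid configurations,” as Rance describes, is for the vendor to ship the configuration space with machine-checkable cross-parameter constraints rather than a flat menu of options. The sketch below illustrates that general idea in C++; the interconnect parameters and rules are hypothetical, not any vendor’s actual deliverable.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical configuration record for a configurable interconnect IP.
// Parameter names and legal ranges are invented for illustration.
struct InterconnectConfig {
    int data_width_bits;   // e.g., 64 or 128
    int num_initiators;
    int num_targets;
    bool coherent;
};

// The vendor encodes cross-parameter rules once, so integrators learn
// *why* a configuration is invalid instead of discovering it in silicon.
std::vector<std::string> validate(const InterconnectConfig& c) {
    std::vector<std::string> errors;
    if (c.data_width_bits != 64 && c.data_width_bits != 128)
        errors.push_back("data_width_bits must be 64 or 128");
    if (c.num_initiators < 1 || c.num_initiators > 16)
        errors.push_back("num_initiators must be 1..16");
    if (c.num_targets < 1 || c.num_targets > 32)
        errors.push_back("num_targets must be 1..32");
    if (c.coherent && c.data_width_bits < 128)
        errors.push_back("coherent mode requires a 128-bit data path");
    return errors;
}

int main() {
    InterconnectConfig cfg{64, 4, 8, /*coherent=*/true};
    for (const auto& e : validate(cfg))
        std::printf("invalid config: %s\n", e.c_str());
    return 0;
}
```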
Steve Roddy, senior group director for the Tensilica unit at Cadence, noted there has been a shift from focusing on speeds and feeds to application-focused engagements. “Whether it’s a semiconductor company trying to move upstream or a systems box guy trying to do their own silicon for differentiation reasons, they’re trying to make decisions at the architecture time about a vertical-market-specific or application-specific major subsystem. So let’s say it’s a radar vision ADAS system in a car, or it’s a vision system in your phone, or you’re watching TV. What people are looking for is, ‘How can I do stereoscopic vision on your platform Cadence?’ They’re not coming to the table and saying, ‘Tell me about how many MACs per second your DSP can do,’ or how many transactions per second your DDR interfaces can do. They are looking for a layer higher up.”
Because of this, ecosystem players rallying around the OEMs are pulling together silicon reference platforms. In some cases those are FPGA prototypes. They’re also pulling in the algorithm and systems software providers to cut development effort up front.
“Whether it be the processor side where we work to port application level software to our various specialty DSPs, or whether it be on the interface side, we are doing proof of concept development chips and reference designs with a variety of partners – both hardware and software,” Roddy said. “Those are necessary so that the Chinese company with a lot of money to spend and a desire to build custom ASICs can see beforehand if the risks they’re taking are measured. They’ve got a pretty good idea that they’ll be able to do what they want to do at the system level from Day One.”
The complexity factor
Developing silicon for systems vendors is a whole different kind of problem, though. On one hand, the chips and the IP inside them are better defined. On the other, the systems are much bigger and more complex, requiring much more disciplined integration methodologies and better integration of tools within the design chain. EDA companies have argued for years that separate design and verification methodologies no longer work with real system-level specifications. Now their customers are starting to recognize it.
“Projects are big and complicated, handled by both software and hardware teams, and the software teams usually either wait for hardware or work with virtual platforms based on high-level models,” said Zibi Zalewski, hardware division general manager at Aldec. “To verify the whole SoC, and to bring forward the moment when the different teams can work on the whole project, it is necessary to integrate different tools and IP into one verification ecosystem.”
That’s evident with hybrid verification environments used for SoC testing.
“A prototyping board is used to port transaction-level emulation platforms (SCE-MI-based) with the ability to connect with the external world on the hardware side, via speed adapter IPs, and on the software side, via transactor IPs,” Zalewski said. “The hardware interface allows you to connect with real streams of data, such as an Ethernet speed adapter, while the software interface provides the connection to a virtual platform via a TLM interface used by the software developers. Such a hybrid platform allows you to verify the whole SoC early in the project timeline and integrate different teams to work on the same project sources during development and testing.”
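For a flavor of what the software side of such a transactor can look like, here is a minimal SystemC/TLM-2.0 sketch, assuming the standard Accellera SystemC and TLM libraries: a loosely timed initiator stands in for the virtual platform, and a target module stands in for the transactor that would, in a real hybrid platform, bridge transactions onto a SCE-MI-style channel into the emulator (stubbed here). Module names and the address map are invented.

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

// Stands in for the virtual platform side: software models issue
// TLM-2.0 generic-payload transactions through this initiator socket.
struct VirtualPlatformSide : sc_module {
    tlm_utils::simple_initiator_socket<VirtualPlatformSide> socket;
    SC_CTOR(VirtualPlatformSide) : socket("socket") { SC_THREAD(run); }
    void run() {
        unsigned char data[4] = {0xDE, 0xAD, 0xBE, 0xEF};
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x1000);  // hypothetical device address
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);  // blocking transport call
        sc_assert(trans.is_response_ok());
    }
};

// Stands in for the transactor: in a real hybrid platform this would
// serialize the payload onto a SCE-MI-style channel into the emulator.
struct EmulatorBridge : sc_module {
    tlm_utils::simple_target_socket<EmulatorBridge> socket;
    SC_CTOR(EmulatorBridge) : socket("socket") {
        socket.register_b_transport(this, &EmulatorBridge::b_transport);
    }
    void b_transport(tlm::tlm_generic_payload& trans, sc_time&) {
        // Stub: a real bridge would forward to the emulated RTL here.
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

int sc_main(int, char*[]) {
    VirtualPlatformSide vp("vp");
    EmulatorBridge bridge("bridge");
    vp.socket.bind(bridge.socket);  // virtual platform <-> transactor
    sc_start();
    return 0;
}
```

The value of standardizing on the TLM boundary, as the quote suggests, is that the same software-side models keep working whether the other end of the socket is a simulation, an FPGA prototype or an emulator.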
That kind of integration will become essential as designs are integrated more tightly with larger systems.
Keystone players
Some things remain fixed, though, even as the context changes from chipmakers calling the shots to systems vendors taking control. That’s particularly true for certain pieces of IP. For example, no one wants to develop a processor core or a standard interconnect if they’re already commercially available and don’t offer enough differentiation to warrant the development cost, effort and time.
“There are keystone players in the various pieces of those solutions — both in the IP you incorporate into your system and in the tools you use to put those together and build and verify those chips in the system,” said Phil Dworsky, director of strategic alliances at Synopsys. “IP players like ARM and Imagination are certainly key players when it comes to processors and graphics. If you look at the interfaces around the chip, you’ll find Synopsys is one of those key players in the majority of chips that incorporate frequently used interfaces. Almost any market you look at still has the same rough composition: processors, graphics or video, audio, special-function computation, and the interfaces around them. And you need to pull them together to build these systems. What becomes more complicated in the design process is the amount not only of hardware but especially of software in these systems. To be able to do that kind of software design, you need a model of the system and the ability to run the real software: bring it up, write it, debug it, way in advance of the hardware being available. That’s a key driver.”
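One common, low-cost form of running real software “way in advance of the hardware” is to compile driver code against a register model rather than a physical address. The sketch below is a simplified illustration of that pattern; the UART register block and bit layout are hypothetical.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical UART register block. On silicon this struct would be
// overlaid on a fixed physical address; here it is an ordinary object,
// so the same driver code runs on a host machine pre-silicon.
struct UartRegs {
    volatile uint32_t data;
    volatile uint32_t status;   // bit 0: TX ready (invented layout)
};

static UartRegs sim_uart{0, 1};  // simulated device, TX always ready

// The driver only sees a UartRegs*; swapping the pointer between the
// simulated block and a real MMIO address is the whole porting step.
void uart_putc(UartRegs* u, char c) {
    while ((u->status & 1u) == 0) { /* spin until TX ready */ }
    u->data = static_cast<uint32_t>(c);
    // This print is the model's observation hook, not driver logic.
    std::printf("[model] tx byte 0x%02x\n", (unsigned)u->data);
}

int main() {
    uart_putc(&sim_uart, 'A');
    return 0;
}
```

In a full flow the same driver would be linked against a virtual prototype or emulator model rather than the trivial struct shown here.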
Other things remain the same, too—the design kits, tools, virtual prototypes, simulation and emulation, and an emerging class of additional tools that are increasingly standards-based.
An interesting twist to these system-level activities at the semiconductor-company level is the fact that they now want to share these tools and technologies with their customers as a way to enable them. This creates another level of ecosystem engagement, said Marc Serughetti, director for business development with the system-level solutions group at Synopsys.
“In our work with Freescale, which brought together IP from many sources, Freescale wanted to make that platform available to their customers and have them use it within the customers’ own flow. So there’s another entire ecosystem of tools that those customers are using to do their development, in addition to what Freescale and their customers are looking at.”
What do the systems companies want?
Determining just where to put resources to meet the needs of the OEM is a challenge for ecosystem players. ARM’s Rance stressed that this depends on the systems company, what it is trying to accomplish and how it partitions its design groups. “I’ve noticed you can go to one system house and then go to another, completely different system house, and how they tackle a system-based design and partition hardware, software and verification teams is completely different. That level of variation just in how they tackle the design task adds an extra layer of unpredictability in integrating this stuff together, because everybody is integrating using a different method.”
This has some rather interesting ramifications for every facet of the semiconductor supply chain.
“It used to be that when you looked at a system, a lot of it was IP they had designed somewhere in the company worldwide, so they had some level of familiarity,” Rance said. “If you start looking at their system-based designs now, I would say close to 70% or 80% of the IP is not even their own. It comes from some other provider, and they are adding their own 20% to 30% of IP to differentiate from their competition. Those IPs just don’t play well together because they’re all described differently. Some are very hardware-centric, some are very hardware-defined. It’s a tough task.”
Part two of this report will address the growing software challenge in the system-level ecosystem today.