What started in the processor and OS world is now migrating to more complex systems, driven by time to market and the need to reduce complexity.
By Ed Sperling
Platforms are attracting far more attention from makers of SoCs because they are pre-verified and can speed time to market, but the shift isn’t so simple. It will spark major changes in the way companies design and build chips, causing significant disruption across the entire SoC ecosystem.
Platforms are nothing new in the processor and software world. Intel, IBM, AMD, and Nvidia have been developing hardware platforms for years, and operating system and middleware vendors have created software platforms to take advantage of them. In the SoC world, the farthest this type of approach has progressed has been subsystems, which are being offered by IP vendors such as Tensilica, and more recently by Synopsys and Cadence.
That’s beginning to change, however, and the momentum will grow as the push for more derivatives from a single design and the use of stacked die gain in popularity. Complexity, cost, and the need for re-use and flexibility of designs have raised the stakes in developing complex SoCs. Those factors have even provided a big push to the stacking of die, particularly 2.5D approaches with an interposer.
But stacked die add another wrinkle to SoC development. Rather than ensuring that all the pieces work together at the same process node, design teams can “bolt on” pre-verified platforms as needed, each developed in whatever process makes the most sense. That is a huge advantage in the race to bring chips to market more quickly, as well as in targeting specific markets with an ecosystem of platforms. Memory, logic, and analog all are being viewed as potential platforms.
“What makes platforms so attractive is the ability to quickly swap in different blocks for a particular market segment or application, and then to create derivatives using different platforms,” said Steve Erickson, vice president and general manager of IP and product development at Open-Silicon. “We’re already starting to see this with some ARM designs. There is a lot of software re-use and lower cost.”
The timing is particularly good for a couple of reasons. First, companies have continued downsizing their engineering organizations, which means that for each new chip they are looking for two or three versions built off the main platform. These are more than just simple derivatives. There are multiple derivatives from each of those platforms. And second, there is a large group of companies trying to get to market quickly with proven quality and good performance.
“Most of the CPU performance is based on an overall infrastructure that does not change very much,” said Erickson. “Usually there is a part of the SoC where there are segments you can’t afford to build yourself. And if you can re-use half or more of it and the verification goes faster, then it’s a different story.”
Software
Less well understood by hardware design teams are software platforms. While there have been efforts to co-design SoC software for specific hardware, work is only beginning to develop and optimize software platforms for SoCs. Linaro, for example, is developing an optimized version of Linux for the ARM core, while STMicroelectronics is working with Google on a variation of Android that is much more power-aware and efficient, according to Philippe Magarshack, group vice president for technology research and development at STMicroelectronics.
“This kind of work isn’t going to happen at the application level,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “There are 1.9 million applications for mobile platforms. They don’t care about every hardware platform and they don’t care about most portions of the design.”
The bottom line is that it will be up to chipmakers and platform vendors to make all of this work better. The first examples of this are subsystems from companies such as Synopsys (audio), Tensilica (audio and video) and Cadence (PCIe). Texas Instruments has done a variety of implementations with its OMAP platform, as well, where the operating system is ported to the platform and optimized.
“Optimization has to happen at various levels,” said Schirrmeister. “Ideally, you’d like power to be optimized at the highest level. But there’s also a disadvantage to abstracting away the underlying hardware and just outsourcing a task to middleware. That’s why you see Linaro dealing with a lot of lower-level hardware issues. If there is new functionality in the core it may not be coming through to the full extent. Linaro’s goal is to optimize the software for different hardware architectures and make that into Linux distributions over time.”
Looked at differently, the challenge is how to best map applications onto platforms—and whose responsibility that actually is.
“You can do this two ways,” said Johannes Stahl, director of product marketing for system-level solutions at Synopsys. “One is to have rich APIs. The other is to add a lot of intelligence into the hardware. But what’s still unknown is how much needs to be done in the operating system versus the OEMs doing it. The OEM with the semi supplier has to optimize key applications for the platform. But there are also applications that the consumer downloads. Will the application developer have the benefit of tools to optimize for that architecture? We don’t know, and even if they do will they use them? It’s uncertain how much they’re willing to take on.”
Divide and conquer
The push toward platforms also raises the discussion about ‘divide and conquer’ approaches. If a platform really can stand on its own, connected potentially by richer application programming interfaces in the software and by discrete, pre-verified and pre-integrated assemblages of hardware, then the next challenge is to put all of these pieces together in interesting ways—and still be able to provide enough differentiation to make the approach palatable to design teams.
The divide and conquer approach is particularly applicable in 3D stacks, which are just now under development at major chipmakers.
“We’ve talked with customers working with TSVs,” said Kurt Shuler, vice president of marketing at Arteris. “The conclusion is that each die is just as complicated as it was before, but now you have pillars in the middle of the chip. That makes real estate even more precious, and it makes the back end a lot more complicated. Right now there is one clock tree, but in the future there may be multiple sources for a clock.”
Shuler predicts a shakeout of companies that can do all the pieces, all the way across the supply chain. Some foundries may prove better than others, and some packaging companies may be better than others. “The big question for everyone will be who’s calling the shots going forward. It won’t necessarily be the semi guys.”
That plays handily into the network-on-chip approach from both Arteris and Sonics, which unbundles the pieces and puts them back together quickly.
“There are people in different stages of looking at things to unbundle,” said Jack Browne, senior vice president of marketing at Sonics. “Historically what companies have done is look at the sweet spot for a product—what will give you the best results for a design. This is why ARM has been offering performance optimization kits, so that all pieces working with ARM cores can get the desired performance result. But there has been a lot of community pushback, because that makes it harder to differentiate.”
He said platforms are somewhat straightforward to create. The bigger question is whether the partitioning has been done right at the system level. For example, is it better to run one graphics processor that does eight windows, or eight GPUs doing one window each?
“Right now the pain point is scaling,” he said, noting that platforms can help significantly with that.