The New Platform-Based Design

What we call ‘platform-based design’ is evolving to allow for better time-to-market and optimized integration of specialized IP blocks.


By Ann Steffora Mutschler
Driven by the continued explosion in design costs, the term ‘platform-based design’ is evolving.

A platform used to be viewed as an actual chip with some configurability on it that a semiconductor company promoted. Their customers would buy that chip in volume, configure it to their requirements, and sell it inside their end devices. The definition has become broader since then.

“Today, a platform has a connotation both in terms of a broad implementation pathway that you’re going to work with—so a 2.5D or maybe FPGA-based platform—and a fabric definition like that,” observed Pranav Ashar, chief technology officer at Real Intent. “Also, a platform has come to mean something a little more virtual. A lot of big companies in the mobile space are designing their own applications processors, but the application processors across these companies look very similar. They have a certain number of ARM cores and GPUs, and a certain variety of radios and other. What these companies are competing on is the integration of all these soft blocks and the execution of that integration. Today in the mobile space, for example, a platform and a reference design have come to mean something very similar.”

A slightly different take on the platform had been predicated on the idea that by making a set of choices, by building a set of deliverables, the first chip on a platform could be built and then derivative chips could be spun out based on the same platform.

“What many in the industry have learned is that the total cost of ownership of each of those derivative chips is just still fundamentally too high,” noted Drew Wingard, chief technology officer at Sonics. “And those costs are probably not where the rest of us expected them to be. We thought if we could make the silicon development costs okay that everything else would just fall into line, but it turns out that so many of these SoCs are being built in a model where the semiconductor company doesn’t just have to produce the chip. They don’t just have to produce the software that runs on the chip. They actually have to produce the complete reference platform that could be shipped directly by an OEM. What we’re told by some of these guys is it ends up being the cost in producing that reference platform and in building that extra software.”

However, there has not been the same level of investment in platform-based benefits for either software or reference platform creation.

“That’s really very interesting and sad at some level, because you’d think in both domains it would be an easier problem to solve,” said Wingard. “But the way the economics of the semiconductor industry work today, they make as few investments in software as they can. They make as few investments in reference platforms as they can because it’s all stuff they have to give away to try to sell the chip, which is how they make their money.”

Still, there is a lot of interest in reference designs. “When we are talking about platform-based design we are talking about system-on-chip platforms as maybe a representative CPU subsystem with a fairly standard set of peripherals and a fairly standardized way to manage the clock and power and a fairly standardized way to stitch everything together,” said Jeff Scott, principal SoC architect at design services provider Open-Silicon. “In our view there is also a fairly standardized way to incorporate specialized or custom IPs into the system.”

Of course, this platform includes the verification plan as well. This has been a new way of thinking for the Open-Silicon team, he said, since it’s only been in the last 18 months that they’ve focused on systems on chip. Previously, the company had been working more on custom and networking-type designs, where it either contributed a piece of the IP or did custom types of designs.

“We are trying to position a reference design with a standard set of practices, not only to instill confidence in the customer that we know what we are doing, but also so that we have a jumpstart on the design,” Scott said. “The real effort is integrating their specialized pieces into it, and we know how to do it. We’ve done it more, and it is low risk. It’s really a matter of how quickly we can incorporate their special sauce into our standard reference design.”

Similarly, Wingard has seen the platform questions shift to, ‘If you can’t build those derivatives, what do you do? Do you abandon those markets?’

“No, you don’t abandon those markets,” he said. “You end up trying to make this one chip design cover more sub-markets than you would have otherwise done. Basically, it means that you overdesign the chip. So that first chip now becomes even more expensive. You’ve saved some of the overhead of wanting to build the platform that you can re-use, but what you gained there you’ve massively lost, because now you suddenly have an extra 50% of different kinds of I/O resources you may need, and you can’t afford to bond out all of them. You end up multiplexing all these I/O devices onto the precious number of pads you have to the outside of the chip. You end up with all this extra complexity and all this extra overdesign. So you end up with these superchips.”

Keeping the focus

As a result of the growth of these reference designs and the ‘reference design as a platform,’ the attention of the companies making the SoCs is being focused on the problems that really matter in the integration of these sub-blocks.

“As you work on iterations of the same reference design over time, clearly you build up some legacy code for blocks that are going to be used again and again and then your verification requirement, which is a critical part of the ability to execute efficiently,” said Ashar. “It’s focused on things like the interfaces between these blocks, the power management of these blocks, and so on. The fact that this baseline sort of design is now being carried over and is being evolved incrementally is focusing the attention of these companies on the real problems that are to be addressed in the interest of executing efficiently on these SoCs. It’s almost like creating abstraction levels to be used in both design and in verification.”

Overall, this shift is a reminder that time-to-market is always a differentiator. “If you are the first one out, that certainly helps position your product,” Scott said. “I have run into situations where people want to do their own thing and they want everything differentiated, but with the complexity of systems now and the time-to-market pressures, redesigning everything from scratch doesn’t make sense in most cases. You really just want to differentiate on the thing that you are good at.”


