More Rigor, Please

With more semiconductor companies embracing a single-platform strategy for their SoC designs, a stricter methodology can enable designers to focus on innovation.


By Ann Steffora Mutschler

Semiconductor companies are embracing a single-platform strategy for their SoC designs, but sifting through the options can be quite a challenge. While not wildly different from the traditional derivative approach, a single-platform strategy can mean different things to different companies.

Sometimes it refers to a platform that is already successful in one application and is then leveraged to work in a different, often related application, explained Jeff Scott, principal SoC architect at Open-Silicon. In this case, the original platform is typically modified to some extent for the new application, but the design effort is significantly less than it would be if the design were started from scratch. He noted that the only way this differs from a derivative strategy is the move from one application area to another.

Another type of single-platform strategy is to consciously design the chip from scratch to include functionality that can support multiple applications, sometimes referred to as the ‘superchip’ approach. “The challenge here is deciding what functionality should be included. In most cases, one application will require hardware that may not be needed in another. Therefore, the single-platform concept is not typically optimized for cost and power for any particular application. The reduction in NRE, along with the increase in production volume, has to be carefully considered against the increase in individual part costs. Time-to-market can also be a consideration,” Scott said.
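
As a back-of-the-envelope sketch of that tradeoff (all of the numbers below are invented for illustration, not from Open-Silicon), the effective cost per part is amortized NRE plus unit cost, so a superchip pays off only while the NRE it shares across applications outweighs its higher per-part silicon cost:

```c
#include <stdio.h>

/* Invented numbers to illustrate the NRE-vs-unit-cost tradeoff:
 * one superchip serving two applications vs. two chips each
 * optimized for its own application. */
int main(void)
{
    double nre_super = 40e6, unit_super = 9.0;  /* one design, bigger die */
    double nre_each  = 30e6, unit_opt   = 7.0;  /* per optimized design   */
    double volume    = 2e6;                     /* units per application  */

    /* Effective cost per part = amortized NRE + silicon cost. */
    double super = nre_super / (2 * volume) + unit_super; /* NRE shared   */
    double opt   = nre_each  / volume       + unit_opt;   /* NRE per chip */

    printf("superchip cost per part: $%.2f\n", super); /* 40e6/4e6 + 9 = 19.00 */
    printf("optimized cost per part: $%.2f\n", opt);   /* 30e6/2e6 + 7 = 22.00 */
    return 0;
}
```

At lower volumes the comparison flips, which is why the decision has to be weighed case by case rather than assumed to favor the superchip.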

The superchip approach to building platforms is hardly new. “The easiest example to describe is probably the application processor business for phones where people were building these enormously capable things that were incredibly highly integrated,” said Drew Wingard, CTO at Sonics. “A significant chunk of that integration had to do with deciding not to decide how much I/O resource was going to be needed. Those chips had dozens and dozens and dozens of different types of interfaces and multiple copies of certain kinds. It got so ludicrous that if you asked those companies how many of those interfaces could actually be used simultaneously, they would say, ‘I only have enough pins to actually run about a third of them at the same time.’”

What drove that approach was the belief that the cost of building the core functions of the chip (the CPUs, the GPUs, the display controllers, and so on), along with design, verification and software development, was so high that many engineering teams decided it was economically cheaper to overdesign the chip in the I/O area, which was considered to be somewhat less expensive, he continued. “Adding a bunch of extra I/O basically cost them a little bit of silicon area and some really simple logic for multiplexing the pins, but added some difficult decisions about which ones. In some cases they ended up shooting themselves in the foot by making that choice wrong, but other than that, it was a way of being able to say, ‘Yes, I’ve got that,’ to more customers. When the final story on that is written it will turn out that some of the savings they believed they got were false savings. Verification challenges, for instance, don’t scale very nicely in this space. It’s much more complex at the system level to verify something that has a whole bunch of extra ports on it that are sometimes there and sometimes not there.”
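
A minimal sketch of the pin multiplexing Wingard describes (the register, field layout and peripheral names here are all hypothetical): several on-die controllers share one group of package pins, and a mux select field decides which controller owns them at any moment, which is why only a fraction of the interfaces can run at the same time:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical pin-mux sketch. A static variable stands in for the
 * memory-mapped mux register so the sketch runs anywhere; the point is
 * that package pins, not on-die controllers, bound how many interfaces
 * can be active simultaneously. */

enum pad_owner { PAD3_GPIO, PAD3_UART2, PAD3_SPI1, PAD3_I2C0 };

static uint32_t pinmux_reg; /* stand-in for the hardware mux register */

/* Route pad group 3 to one controller; the others are cut off. */
static void pad3_select(enum pad_owner fn)
{
    /* 2-bit select field chooses which controller drives the pads. */
    pinmux_reg = (pinmux_reg & ~0x3u) | (uint32_t)fn;
}

int main(void)
{
    pad3_select(PAD3_SPI1); /* SPI1 gets the pins... */
    /* ...so UART2 and I2C0, though present on the die, cannot be used
     * on these pins at the same time. */
    printf("pad group 3 driven by function %u\n",
           (unsigned)(pinmux_reg & 0x3u));
    return 0;
}
```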

The bottom line is that some of the economies are just not there the way many expected them to be. Yet in some market segments, the superchip approach has become the dominant way chips are designed.

Along these lines, Scott said that from Open-Silicon’s vantage point, different customers often have similar platform requirements. If those customers could agree on what functionality needs to be included, they could share NRE and benefit from lower part costs due to increased volume. But each company has its own schedule and its own sensitivities to cost and power consumption, so it would be very difficult to get them all to agree on the definition of the platform and a common schedule.

Mix and match
To build in more flexibility, Wingard said what many engineering teams wish they had is the ability to mix and match features. This is something microcontroller companies have had for years. “If you look at how many different line items are on the menu at a typical microcontroller company, they’ll have hundreds of part variations. Now those don’t all have different silicon behind them—sometimes it is just a bond-out option, sometimes it is just a fuse that’s blown somewhere that turns off some function—but they definitely are able to build designs at much less cost per design and with much less software investment for each design point.”
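
A rough sketch of how one die becomes many line items (the fuse bits and read function below are invented for illustration): production test blows fuses per part number, and boot code exposes only the peripherals whose fuse bits say they are present, even though the silicon is identical on every die:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fuse word: the same silicon ships as different part
 * numbers, with features disabled by blowing bits at test time. */
#define FUSE_CAN_PRESENT  (1u << 0)
#define FUSE_USB_PRESENT  (1u << 1)
#define FUSE_ADC_PRESENT  (1u << 2)

/* Stand-in for reading the one-time-programmable fuse bank. */
static uint32_t read_fuses(void)
{
    return FUSE_CAN_PRESENT | FUSE_ADC_PRESENT; /* this part lacks USB */
}

int main(void)
{
    uint32_t fuses = read_fuses();

    /* Boot code enables only the blocks this part number includes. */
    printf("CAN: %s\n", (fuses & FUSE_CAN_PRESENT) ? "enabled" : "fused off");
    printf("USB: %s\n", (fuses & FUSE_USB_PRESENT) ? "enabled" : "fused off");
    printf("ADC: %s\n", (fuses & FUSE_ADC_PRESENT) ? "enabled" : "fused off");
    return 0;
}
```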

He noted that much of the work around chip-level platforms hasn’t left in enough flexibility to allow people to rapidly re-target what they need, which is what’s behind the superchip concept. “It also makes those superchips not flexible enough. What’s really bad about that situation is when you spent all that extra time and money to build a superchip and it turns out that you didn’t have the right collection of things anyway,” he explained.

As such, Sonics and others promote a design style that allows designers to be much more agile by looking at which decisions design teams want to defer. This is effective because “most of the decisions you would like to change have relatively modest software implications—they don’t change that there is a microprocessor,” he said. “They might change which version of a microprocessor is used or how big its caches are, but that doesn’t normally have a big impact on which software runs. The challenge is getting engineers to believe that sometimes isolating things makes the system better, and that by building a decoupled architecture where we are isolating the components from each other, we can build something that’s more composable and therefore more reliable,” Wingard added.

Don’t forget verification
Many times, large chipmakers will use a single-platform strategy whereby the architecture team designs a big ‘superchip’ in one location, from which three, four, five or six other chips are then derived, observed Kurt Shuler, vice president of marketing at Arteris. “What happens is even with that single platform chip, there are parts of it—like the graphics are done in one country, the DSP stuff is done at another site, the CPU complex is done at headquarters—and when they do some of these derivatives they actually farm some of that out to foreign design centers. They basically slice up the block diagram and give the parts out to different people, and then they bring it back in and put it all together. The teams all individually verify the chunks they are working on, and when they get it back they still have to do the whole SoC verification. The big thing they wish they knew is how tough it is to take everything back and verify it all together.”

Had they thought about the verification challenge up front, he said, they might have approached the design differently from the outset. “When they started trying to do this, they would have a big fur-ball interconnect block—a bunch of crossbars or tiered bus thing—and they would cut it up. But they were never really successful. They never got the time-to-market benefits because it took so long to put all this together.” That changed, he said, only when they put in more rigor in the form of a network-on-chip (NoC).

Over the past year, Arteris has worked to make its NoC smarter about keeping track of all the disparate pieces. “What happens is, with the graphics guys for instance, their subsystem actually has part of the NoC and that subsystem NoC has to connect to the top-level NoC, and the same thing with the [teams] who are doing video. Everybody has got these subsystems, and what happens is that as they are working on their own stuff they may change registers or addresses or different things. When the design is put back together, unless you have a way to keep track of everybody’s changes, you’ve got to do a lot of manual futzing, which brings on verification problems. Putting everything back together again for verification of not just the initial platform but all of the other things based upon it—that is where you’re going to save your time.”
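
One low-tech way to picture that bookkeeping problem, sketched here as a shared address map with compile-time overlap checks (the addresses and names are invented, and this is not Arteris’ actual mechanism): if every subsystem team pulls its base addresses from one tracked source, a change to one team’s register window breaks the build immediately instead of surfacing during full-SoC verification:

```c
/* Single-file sketch of a shared memory-map header that every
 * subsystem team would include. If the video team grows its register
 * block past its allotted window, the build fails at compile time
 * rather than the overlap turning up in full-SoC verification. */
#include <assert.h>   /* static_assert (C11) */

#define GPU_BASE    0x50000000u
#define GPU_SIZE    0x00100000u
#define VIDEO_BASE  0x50100000u
#define VIDEO_SIZE  0x00080000u
#define DSP_BASE    0x50180000u
#define DSP_SIZE    0x00040000u

/* Compile-time checks that the subsystem windows tile without overlap. */
static_assert(GPU_BASE + GPU_SIZE <= VIDEO_BASE,   "GPU overlaps video");
static_assert(VIDEO_BASE + VIDEO_SIZE <= DSP_BASE, "video overlaps DSP");

int main(void) { return 0; }
```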

At the end of the day, like it or not, chip design today requires more rigor, and that’s where things are headed. Whether it’s called a single-platform design, a network-on-chip or a flexible platform, the overarching idea is the same: find the common denominator, enable adjustments in a flexible way, keep track of the changes, and focus on innovation. As the technology in this space continues to evolve, it will free design engineers to unleash even more creativity within a framework of optimum performance, throughput and yield.


