Who’s Calling The Shots

Second of two parts: Software remains a growing challenge.


As discussed in part one of this report, OEMs are making more of the decisions about what goes into a system design.

A large part of this shift involves software, which falls on many plates throughout the ecosystem. Making sure all of the layers of software interoperate and integrate well together is no small feat, and it is growing in complexity at every turn as systems become more sophisticated.

“Even if you look back 10 or 20 years ago, there was already a lack of communication between hardware and software teams,” said Simon Rance, senior product manager in the systems and software group at ARM. “Now, the communication is almost completely broken or segmented because there’s an imbalance. There are four to six times as many software designers as there are hardware designers. Hardware is designing their portion of it with no idea how software is going to go in and program it all. And when something goes wrong for the software engineer, the hardware engineer has no idea what the heck they were trying to do in the first place, so they can’t help them debug. It’s these types of issues that are causing system schedules to get longer, not shrink.”

Going forward, a complete solution from the system level down will include the quality and security of the software, noted Phil Dworsky, director of strategic alliances at Synopsys. This is evidenced by a number of acquisitions in this area by Synopsys and other players, as well as increasing discussion about software quality in general.

Software challenges are daunting
Andrew Caples, senior product manager for the Nucleus product line in Mentor Graphics’ Embedded Systems Division, observed that software and hardware either collide or converge. “Right now, there’s so much capability on the hardware. There are all sorts of accelerators, crypto acceleration, GPUs, and all sorts of connectivity. But if you look at the capabilities of the silicon in today’s devices, it requires an awful lot of low-level code to make all of that work. Just bringing up GPU support for graphics acceleration for a display is more difficult. Displays used to be cool and novel. Now compelling displays are necessary. Everybody wants their device to look like an iPad, with all sorts of cool displays and icons and lots of capabilities, and those require that type of graphics support.”

Supporting the various graphics engines out there requires a tremendous amount of expertise, he explained. “After that, you can extrapolate out. Everything is connected, we already know that. If you look at the connectivity, supporting 802.11 is kind of passé now. But there are lots of chipsets out there from companies like Broadcom or TI or Qualcomm or Bluegiga, where the amount of low-level expertise needed to write 802.11 wireless drivers with the functionality you need, whether it is enterprise support, soft AP support, security, or various test modes, and just bringing up that connectivity, becomes very challenging. There are certifications that can be required from the WiFi Alliance to ensure conformance and compliance. And then you add in Bluetooth and 802.15.4 and Zigbee, and it becomes really quite difficult to provide all of this support in the chipsets.”

It’s not uncommon today for devices to have many cores and require a substantial amount of low-level support for bring-up. In fact, many of these devices contain much of what is found in a PC. This changes the relationships among different providers in the ecosystem because it requires more support from vendors.

“It’s harder to provide comprehensive support for all of the boards out there and all of the SoCs out there, all the processors out there — you really have to make your bets on which boards and processors are going to be widely embraced,” said Caples. “The amount of effort to support these boards with the different peripheral support can be many man-years of engineering effort. It requires comprehensive BSPs (board support packages) with device driver support that provides the graphics and the wireless connectivity and multicore support and support for the accelerators.”

The response by OEMs in some cases is to conform to standards and standardized testing and certifications. But just throwing more bodies at a problem isn’t necessarily the best strategy.

ARM has been wrestling with this issue for quite some time, and recently rolled out technology that aims to address it by giving every designer, whether hardware, software, or verification, the same view of the system design information, Rance said.

“It’s not doing it from a hardware point of view, or a software point of view or a verification point of view,” he explained. “It’s saying, ‘This is all of the system level information, now use the tool. Depending on who you are, whether you are a hardware or software engineer, go use the tool your way for what you need it to do, and it will represent that same system data in your kind of viewpoint.’”

ARM is using the IP-XACT schema to describe a system or IP, he said. “By leveraging that, it’s completely agnostic as to whether you’re a hardware or software designer. And we can add more and more information about the system, the constraints on the system, performance type of expectations on the system, and then depending on whether you’re a hardware or software engineer, you can take that information in the tools and let the tool munge that information and put it into the format they need it in. That way it is always consistent across. It’s almost like a communication aid, really, between these different groups.”
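The mechanism Rance describes can be sketched in a few lines. The fragment below is a deliberately simplified, hypothetical IP-XACT-style component description (a real IEEE 1685 document uses XML namespaces and carries far more detail), and the component and register names are invented for illustration. The point is the workflow: one neutral system description, rendered by a tool into the format one audience needs — here, a firmware engineer’s C register map.

```python
# Sketch: rendering a "software view" from an IP-XACT-style description.
# The XML is a simplified, illustrative fragment, not a conformant
# IEEE 1685 document; names (uart0, CTRL, etc.) are hypothetical.
import xml.etree.ElementTree as ET

IPXACT_XML = """<component>
  <name>uart0</name>
  <memoryMaps><memoryMap>
    <addressBlock>
      <baseAddress>0x40001000</baseAddress>
      <register><name>CTRL</name><addressOffset>0x00</addressOffset></register>
      <register><name>STATUS</name><addressOffset>0x04</addressOffset></register>
      <register><name>TXDATA</name><addressOffset>0x08</addressOffset></register>
    </addressBlock>
  </memoryMap></memoryMaps>
</component>"""

def software_view(xml_text):
    """Emit C-style #define lines from the shared system description."""
    root = ET.fromstring(xml_text)
    block = root.find("./memoryMaps/memoryMap/addressBlock")
    base = int(block.findtext("baseAddress"), 16)
    comp = root.findtext("name").upper()
    lines = []
    for reg in block.findall("register"):
        offset = int(reg.findtext("addressOffset"), 16)
        lines.append(f"#define {comp}_{reg.findtext('name')} 0x{base + offset:08X}")
    return lines

for line in software_view(IPXACT_XML):
    print(line)  # e.g. #define UART0_CTRL 0x40001000
```

A hardware or verification view would walk the same tree but emit RTL parameters or testbench constants instead, which is what keeps the groups consistent: everyone consumes one description rather than hand-copying addresses between teams.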

This type of technology does mean that as more design teams take this approach, EDA tools will need to work and play nicely with each other, Rance said. And clearly they will have to, because the systems companies require it.

“It’s a fine line. You use something like IP-XACT, which is a standard for describing this, but then you still have Company A or Company B trying to stretch that standard to create their own solution. It takes the big systems houses to really push and to emphasize it. Where these systems houses are pushing for more openness, there lies the problem: these technologies can be easily replaced and switched out for something else. You can continue to fight companies working together or tools working together, but in the end, they have to in order to solve today’s system challenges,” he added.

At the end of the day, in the gargantuan effort of designing, integrating, and verifying an elegant, sophisticated electronic system today, the systems OEMs are in the driver’s seat. Whether it is choosing partners, IP, foundry, or packaging, the OEM is also driving openness and interoperability among all the players in the game. The successful players will learn where their pieces fit, and how to ease their integration into the system.


RKP says:

Interesting and well-written article. I would like to draw your attention to two developments within Accellera. The first is IP-XACT: while it has addressed the challenges of IP integration, it still lacks a software view of an IP. The second is the Portable Stimulus initiative, which approaches the problem from system-level verification and is, in a way, also trying to address the software view of an SoC. I see a path where both of these initiatives will converge on common ground in the long term. As a contributing company at Accellera, Vayavya Labs has been trying to address this specific challenge of software, whether it is used for validation or as production-ready software.
