Custom Versus Platform Design

From automotive to consumer markets, strategies are all over the map when it comes to future design.


The increase in SoC complexity is being mirrored by a rise in complexity within the markets that drive demand for those chips. The upshot is that the push toward greater connectivity, lower power and better performance, all at minimal cost, has turned weighing the pros and cons of custom design vs. platforms and superchips into a murky decision-making process.

For the past decade, the focus of large semiconductor companies has been on applications processors for the mobile handset market. That market has matured and consolidated significantly during that period, forcing chipmakers to look for new revenue sources. They have found a number of them, including automotive, medical, industrial, and the home market, all of which will be connected to other devices via the communications infrastructure.

But because all of these markets are changing so quickly, with the communications infrastructure still under construction and a steady stream of standards being proposed, there is no clear indication of what will resonate with consumers, what is the best strategy for developing chips, and whether what works today will be seen as adequate in 12 or 18 months. And while that might make marketing departments excited about upcoming opportunities, it is creating chaos in the semiconductor world where strategies for tackling markets are in almost constant flux.

Drivers wanted
Consider what’s happening in the automotive market, for example. Companies are rushing to provide chips for electronic control units (ECUs), which are basically the automotive equivalent of a subsystem, combining a processor or microcontroller, memory, network interface chips, and a sensor or actuator. They also are supplying the SoCs that go into the infotainment systems inside cars.
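As a rough illustration of that composition, the sketch below models a generic ECU in C. All of the names, memory sizes, and bus choices are hypothetical stand-ins for what a real supplier would specify, not a description of any particular part.

```c
/* Hypothetical sketch of the ECU building blocks described above. */
#include <stdint.h>

typedef enum { BUS_CAN, BUS_LIN, BUS_FLEXRAY, BUS_ETHERNET } bus_type_t;

typedef struct {
    const char *mcu;        /* processor or microcontroller */
    uint32_t    flash_kib;  /* on-chip program memory */
    uint32_t    sram_kib;   /* on-chip working memory */
    bus_type_t  network;    /* network interface to the rest of the vehicle */
    const char *sensor;     /* attached sensor, if any */
    const char *actuator;   /* attached actuator, if any */
} ecu_t;

/* One invented example: a brake-monitoring ECU. */
static const ecu_t brake_monitor = {
    .mcu = "safety MCU", .flash_kib = 2048, .sram_kib = 256,
    .network = BUS_CAN, .sensor = "wheel-speed sensor", .actuator = "brake valve driver"
};
```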

But what are carmakers looking for? The answer depends on the company, the reputation it is trying to promote with a particular model, and the price range.

“When we were at 1 micron and 0.5 micron, there was a lot of reuse of chip design, which was pretty limited,” said Jim Buczkowski, director of electrical and electronics systems research and advanced engineering at Ford Motor Co. “Today, whether the design is for a consumer application or an industrial or automotive application, there are far fewer design changes. And most of it is modeled way ahead of time on a process that can support from minus 40 degrees Celsius to 125 degrees Celsius.”

That’s the standardized side of things, which is effectively a platform process. But this entire market is far more nuanced than it might appear from the outside, and not so easy to crack for companies not used to doing business there. Automotive suppliers tend to view things in terms of functions. One function, such as braking, may involve several ECUs: one to monitor the brakes, another to determine tire slippage, and a third for engine braking.

“The old way of thinking about it is that one company would get a contract for the ECU and build that,” said Serge Leef, vice president of new ventures at Mentor Graphics. “Now, the only question is which MCU is used, and that varies widely depending on the company and the function. So for power doors and windows, which are the least demanding, they may use a design dating back to the 1980s. Engine control is ultra sophisticated with very sophisticated timing units.”

But there’s another consideration here, as well—the car company’s real value statement. A luxury performance carmaker may splurge on components for the suspension and engine, but cut back on other parts, while another carmaker might value the interior electronics and infotainment more than the performance. That could mean choosing a cheaper MCU or CPU in one area, and developing a more custom part for another. And some of this may be standardized while other parts may not.

“About 20% to 30% of new cars are based on AUTOSAR (Automotive Open System Architecture),” said Leef. “The software and hardware are not interlinked, which means you can recompile software from one ECU to another. This shifts control from the supplier to the OEM (carmaker) because the software will work the same on multiple devices.”
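The portability Leef describes can be sketched in code. If the application logic is written against an abstraction layer rather than against a specific MCU’s registers, the same source can be recompiled for a different ECU by swapping out the low-level implementation. The interface below is a simplified, invented stand-in for that idea, not the actual AUTOSAR API, and the register addresses are made up.

```c
/* Minimal sketch of hardware/software decoupling in the spirit of AUTOSAR.
 * The interface names are invented for illustration; real AUTOSAR APIs differ. */
#include <stdint.h>

/* --- Abstraction layer: declared once, implemented per MCU by the supplier. --- */
uint16_t io_read_wheel_speed(uint8_t wheel);
void     io_set_brake_pressure(uint8_t wheel, uint16_t kpa);

/* --- Application logic: no register addresses or MCU-specific code here,
 *     so the OEM can recompile it unchanged for a different ECU. --- */
void anti_slip_step(void)
{
    for (uint8_t w = 0; w < 4; w++) {
        if (io_read_wheel_speed(w) == 0) {   /* crude wheel-lock detection */
            io_set_brake_pressure(w, 0);     /* release pressure on that wheel */
        }
    }
}

/* --- One possible MCU-specific implementation (register addresses are fictional). --- */
#define WHEEL_SPEED_BASE ((volatile uint16_t *)0x40001000u)
#define BRAKE_PRESS_BASE ((volatile uint16_t *)0x40001100u)

uint16_t io_read_wheel_speed(uint8_t wheel) { return WHEEL_SPEED_BASE[wheel]; }
void io_set_brake_pressure(uint8_t wheel, uint16_t kpa) { BRAKE_PRESS_BASE[wheel] = kpa; }
```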

Critical or not?
Semiconductor vendors have been pushing in the opposite direction, though, particularly for connected systems. The argument is that hardware is less likely to fail than software, and it’s more secure because hardware is more difficult to hack.

“They are duplicating things at the hardware level, particularly when it comes to functional safety,” said Kurt Shuler, vice president of marketing at Arteris. “This is showing up in ADAS (advanced driver assistance systems). They have the same architecture. Then they compare results, and if they’re different it can retry and reboot.”

The automotive electronics world is full of standards, and their number and breadth increase whenever safety issues are involved. The Automotive Safety Integrity Levels (ASILs) are a risk classification scheme for electronic parts defined by the ISO 26262 standard. They range from A to D, with A, the lowest level, suitable for systems such as infotainment, while D is reserved for the most safety-critical systems. There are even groups, such as Yogitech, TUV, and Exida, that will certify parts according to these standards. Shuler noted that ASIL levels C and D require hardware duplication of a CPU and on-chip memory.
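In software terms, the duplicate-and-compare behavior Shuler describes looks roughly like the sketch below: the same computation runs on two redundant units, the results are compared, a mismatch triggers a retry, and persistent disagreement forces a safe state and reset. This is only an illustration of the concept. ASIL C and D parts implement it in hardware, cycle by cycle, and the functions here are invented placeholders.

```c
/* Simplified sketch of duplicate-and-compare (dual modular redundancy).
 * The "units" below are hypothetical software stand-ins for redundant hardware blocks. */
#include <stdint.h>

static uint32_t run_on_unit_a(uint32_t input) { return input * 2u; }  /* placeholder computation */
static uint32_t run_on_unit_b(uint32_t input) { return input * 2u; }  /* same work on the redundant unit */
static void safe_state_and_reboot(void) { /* enter a safe state, then reset (placeholder) */ }

uint32_t checked_compute(uint32_t input)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        uint32_t a = run_on_unit_a(input);
        uint32_t b = run_on_unit_b(input);
        if (a == b)
            return a;                     /* results agree: accept */
        /* mismatch: retry in case the fault was transient */
    }
    safe_state_and_reboot();              /* persistent disagreement */
    return 0;                             /* never reached in this sketch */
}
```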

Certification costs money, and parts that are developed according to standards such as these—in automotive, industrial and medical markets—are more expensive to develop and easier to commoditize because of the standards. That has opened the door to global competition on hardware. And it has made it far more attractive to add in programmability whenever possible, because standards are constantly being updated as more sensors and communication are added into everything.

“The increase in programmable engines is a continuing trend,” said Drew Wingard, chief technology officer at Sonics. “It’s not just IP blocks, either. It’s entire subsystems. Some markets can still afford a superchip approach. But others need to learn from the microcontroller world, where they can do chips with 8 to 10 people in a couple of months. We’re going to see a lot more of that kind of approach to design. You can’t expect to develop one device and sell 200 million units. So what comes next is all about controlling costs.”

Bigger LEGOs
One way to control costs is with configurable snap-in parts. The third-party IP industry has been built on this concept, and all of the major IP vendors—ARM, Imagination Technologies, Synopsys, Cadence, and Mentor Graphics (embedded software)—are building subsystems with integrated IP blocks and embedded software, and those in turn are being crafted into even larger subsystems by service vendors such as eSilicon, Open-Silicon, Invecas, and Global Unichip.

Nothing is ever quite snap-in, though, particularly in complex SoCs where characterizing all of that IP can be an enormous task, which is why there is growing interest in subsystems. Mike Muller, CTO at ARM, is credited with calling this approach “bigger LEGOs.”

“We’re doing more preconfigured subsystems than in the past,” said Vasan Karighattam, senior director of architecture for SoC and SSW engineering at Open-Silicon. “You also can do an SoC and preconfigure it with applications or groups of applications, or you can do that with a subsystem. You still want to add programmability into this because you don’t want it all hard-wired. Then you build it according to where you can get the highest return and tie it to multiple applications. So you may add in something like facial recognition, but you also can use the same SoC for different applications such as a sensor hub.”

That’s good news for IP vendors, because it allows them to charge more for pre-integrated components. “Most IP fits into many markets,” said Ron Lowman, IoT strategic marketing manager at Synopsys. “The big challenge is to add in energy efficiency, and there the question is how far customers are willing to go on their designs. If you look at 55nm, traditionally that ran at 1.2 volts, but we can decrease it down to 0.9 volts.”
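The payoff of that voltage drop can be estimated from the usual first-order rule that dynamic power scales with the square of supply voltage (roughly C·V²·f). Dropping from 1.2 volts to 0.9 volts at the same clock frequency cuts dynamic power to about 56% of its original value, a reduction of roughly 44%, ignoring leakage and any frequency change. The snippet below simply works through that arithmetic.

```c
/* Back-of-the-envelope dynamic power scaling: P_dyn is proportional to C * V^2 * f.
 * Leakage and frequency effects are ignored; only the voltage term is illustrated. */
#include <stdio.h>

int main(void)
{
    double v_old = 1.2, v_new = 0.9;
    double ratio = (v_new * v_new) / (v_old * v_old);   /* (0.9 / 1.2)^2 = 0.5625 */
    printf("Dynamic power at 0.9 V is about %.0f%% of the 1.2 V figure, "
           "a reduction of roughly %.0f%%, all else being equal.\n",
           100.0 * ratio, 100.0 * (1.0 - ratio));
    return 0;
}
```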

That can reduce the overall cost of a device because it can be developed at an established process node where process technology is mature, yields are proven and leakage is lower. Looked at from the standpoint of re-usability, this becomes one of the less expensive options with much quicker time to market than advanced-node designs. It’s a different way of carving up the same problems of time to market, controlling costs and leveraging what already has been developed.

A different approach is to develop for specific markets with longer-term design-ins. “This works in some parts of automotive,” said Mike Gianfagna, vice president of marketing at eSilicon. “It also works for implantable medical devices as well as non-invasive medical devices, and for things like smart cities and smart grids. For those it’s more about security. Security is the design issue.”

Conclusion
While each market has its own unique opportunities and challenges, the amount of change in almost all markets is enormous. And the longer those changes continue and the more pervasive they become, the more the strategies behind designing chips will remain in flux.

There always have been complaints by device vendors that chips take too long to develop, and almost perpetual complaints by chipmakers that EDA tools don’t allow them to meet deadlines. But the reality is that the whole industry is in this together, and no one is quite sure how these trends will shake out. It will take time to figure out how the Internet of Things will affect design, how the increase in electronics in automobiles will play out, and where other markets fit into this picture. Until then, the flow of new ideas, as well as the recycling of old ones, will continue in force, with both predictable and unpredictable results.


