Starting Point Is Changing For Designs

Market-specific needs and rules, availability of IP, and multiple ways to solve problems are having a huge effect on architectures.

The starting point for semiconductor designs is shifting. What used to be a fairly straightforward exercise of choosing a processor based on power or performance, followed by how much on-chip versus off-chip memory is required, has become much more complicated.

This is partly due to an emphasis on application-specific hardware and software solutions for markets that either never existed before, or which were so immature that developing customized solutions was not profitable. Driver assistance technology in cars, for example, only became a priority a couple of years ago when Tesla stunned a plodding automotive industry with a vehicle capable of driving itself. And cloud computing has grown significantly over the past several years (see Fig. 1 below), dwarfing growth in traditional data centers and changing the overall flow of data.


Fig. 1: Growth projections for cloud-based data storage and processing. Source: SEMI/Cisco VNI Global IP Traffic Forecast

Add to that list machine learning/AI, which is a horizontal market solution that utilizes vertical-market algorithms. There also has been steady growth in the IoT/IIoT, as well as virtual and augmented reality. Put these together and the emphasis on market-specific solutions begins coming into focus. In many of these segments, this is less about choosing a processor or memory than catering to the unique needs of a market, or even a slice of a market.


Fig. 2: Augmented reality market growth. Source: Grand View Research

“We’re seeing a lot more segment-specific designs,” said Wally Rhines, president and CEO of Mentor, a Siemens Business. “Segment-specific development in standard and custom products changes the game because they understand a specific industry. The benefits to that are higher market share in a particular segment, and higher profits as a semiconductor company. The more diverse the product base, the less likely you are to have a 35% to 40% operating profit.”

Some of these segments, particularly those involving safety, have their own certification requirements. Others are brand new and evolving, so a design built predominantly from off-the-shelf parts may not be successful. This was evident with the first batch of smart watches, which flooded the consumer electronics market several years ago. Because those devices were built with general-purpose, off-the-shelf parts, battery life was so limited that many consumers found them more of a nuisance than a useful tool.

“The starting point varies greatly from market to market,” said Bill Neifert, senior director of market development at ARM. “As the importance of automotive electronics grows, companies that develop this technology are ramping up their methodologies to catch up with other markets. Their needs are different from companies outside of automotive, where there is a big push for custom accelerators that can fit their exact needs while still using standard programming. There is always innovation at the bleeding edge, but that’s not where most companies are. Most are attacking niches, so they need better speed or lower power. IoT drives a lot of this. But if you’re trying to scale a general-purpose processor, it puts you out of the price point that’s acceptable.”

The flip side of that approach is building a more customized solution, but that adds its own set of constraints. Economics do not favor customized designs for many new markets, either. This is especially evident where the price of a device determines consumer buy-in, and where the need to get something out the door quickly demands more engineering resources and different design methodologies.

“Designers are squeezed on two sides,” said Marc Greenberg, product marketing group director for Cadence’s IP Group. “On one side, there is an increase in available transistors and the need to complete new design steps in advanced processes. On the other, the market windows stay the same. The end result is that people attempt the implementation of standards-based memory and interface IP earlier and earlier in the design cycle. Not too long ago, it was unusual for a new company with little silicon experience to attempt a chip on an advanced node or with the latest off-chip interface standards. Today, it’s commonplace to see young companies attempt advanced node design and new standards simultaneously.”

Process node confusion
This is where commercial IP should play a big role. But increasingly, it’s not that simple.

“Selection of IP is becoming more and more complicated because for many designs or redesigns, the IP is not qualified or has not been integrated into a design,” said Ranjit Adhikary, vice president of marketing at ClioSoft. “For a lot of these companies, the real key is time to market. It’s about which node they will use. But their IP selection may be restricted by internally developed IP versus third-party IP. Using commercial IP increases the cost. Some companies may use it to hit the market first and then decrease the cost later by developing their own IP.”

To complicate this decision further, there are enough differences between foundry processes that an IP block’s characterization may look rather different for two 10nm processes, for example. And what works best at one node may not be the best choice at another.

“There are multiple choices where you can trade off power and performance, but it has to be available for every process node that you’re working at,” said Mary Ann White, director of product marketing at Synopsys. “So you’ll always have foundation IP, which includes memory. And that’s made up of standard cells, which may be high-density or high-performance versus low-power. After that, it gets more difficult.”
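
To make that tradeoff concrete, consider a minimal sketch, with entirely invented characterization numbers, of why the same block can lead to different choices on two nominally similar 10nm processes:

```python
# Hypothetical characterization data for the same IP block on two
# different 10nm processes. Every number here is an invented
# placeholder, not real foundry data.
#
# (block, process) -> (dynamic power in mW, fmax in GHz, area in um^2)
characterization = {
    ("sram_64kb", "foundry_A_10nm"): (4.1, 2.0, 12000),
    ("sram_64kb", "foundry_B_10nm"): (3.6, 1.7, 13500),
}

def pick_process(block, fmax_floor_ghz):
    """Return the lowest-power process that still meets the fmax floor."""
    candidates = [
        (power, proc)
        for (blk, proc), (power, fmax, _area) in characterization.items()
        if blk == block and fmax >= fmax_floor_ghz
    ]
    return min(candidates)[1] if candidates else None

print(pick_process("sram_64kb", 1.8))  # foundry_B misses timing -> foundry_A_10nm
```

Relax the timing floor and the lower-power process wins instead, which is why the same block may be the right answer at one node or foundry and the wrong one at another.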

In an increasing number of cases, IP may not be available at all. Foundries have been flooding the market with new processes, both at new and established nodes, forcing IP vendors to choose where to put their resources. This is becoming a big problem for companies looking to race ahead with a new design at the latest node.

“When you look at a design, you have to determine whether the IP is even available,” said Lisa Minwell, eSilicon’s director of IP marketing. “What is the limiting factor? That may depend on the market you’re going after. So if it’s a networking design, what is the limiting factor there? We’ve been feverishly working so that we have control over this. You have to have physical IP that is available.”

That view is being echoed across the industry. “If a node changes enough, it may take us six months to do a port with a few engineers,” said Geoff Tate, CEO of Flex Logix. “For most IP, though, the port time is much more substantial. We use digital design rules, so there is lower porting time in man months. But we do see an issue for customers with other IP. If you’re developing a phase-locked loop, for instance, the cost is high enough that you’re going to need one or two customers to get your money back. The only way to do that is to stick with the big nodes like 16/14 and 7nm. The 10, 12 and 22nm nodes are intermediate nodes.”
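
The break-even arithmetic behind Tate's point is easy to sketch. The figures below are hypothetical placeholders, not Flex Logix numbers, but they show why a port only makes sense on nodes with enough prospective customers:

```python
import math

# All figures are hypothetical placeholders chosen only to illustrate
# the break-even arithmetic behind an IP port decision.
port_cost = 6 * 3 * 50_000   # 6 months x 3 engineers x $50k per engineer-month
license_fee = 500_000        # assumed one-time license fee per customer

customers_needed = math.ceil(port_cost / license_fee)
print(f"Port cost ${port_cost:,} -> break-even at {customers_needed} customer(s)")
# -> break-even at 2 customers, which only pencils out on nodes with
#    enough design starts. Intermediate nodes rarely clear that bar.
```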

Couple that with advanced packaging options (not all IP can be plugged into a fan-out or 2.5D design), and just keeping track of the available IP becomes a challenge.

Infrastructure out
IP isn’t the only starting point. In some markets, chipmakers are now starting the design from the electronic plumbing and developing the architecture from there.

“Unless they are explicitly trying to do a general-purpose processor, most things start from the pins in,” said Ty Garibay, CTO at ArterisIP. “So they may need ‘this much’ bandwidth or PCIe gen 4, or DDR3, 4 or 5. Then how many different channels do you need from there and where do you get all of these PHYs. It pushes inward from there. You have ‘this much’ data. Where are you going to process it? Do you need a GPU or a dedicated-purpose accelerator? For a single market, there may be a specific type of data. And then you may have processing left over, or you keep some extra CPU cycles on the side for things you don’t know about.”
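
As a rough illustration of that pins-in tally, consider the following sketch. The interface mix and per-lane rates are illustrative assumptions, not figures from the article:

```python
# Approximate peak bandwidth per lane/channel, in GB/s (one direction).
# These rates are rough public figures used only for illustration.
INTERFACE_RATES_GBPS = {
    "pcie_gen4_lane": 2.0,       # ~16 GT/s with 128b/130b encoding
    "ddr4_3200_channel": 25.6,   # 64-bit channel at 3200 MT/s
}

def total_bandwidth_gbps(interfaces):
    """Sum peak bandwidth over (interface_kind, count) pairs."""
    return sum(INTERFACE_RATES_GBPS[kind] * count for kind, count in interfaces)

# Example: a hypothetical design exposing 8 PCIe Gen4 lanes and 2 DDR4 channels.
design_pins = [("pcie_gen4_lane", 8), ("ddr4_3200_channel", 2)]
print(f"Aggregate off-chip bandwidth: {total_bandwidth_gbps(design_pins):.1f} GB/s")
# The architecture is then pushed inward: the interconnect, memory and
# any accelerators must be sized to keep up with this figure.
```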

This is consistent with observations by other companies. Anush Mohandass, vice president of marketing and business development at NetSpeed Systems, said he saw the first hint of this shift about six months ago, when a large customer started with the interconnect and only then began looking at the process.

“Once you step back and think about it, it makes sense,” Mohandass said. “You need to know the bandwidth and the latency needs so you can plan out the path between the different elements. But it surprised me because we hadn’t seen this in the past. The first time it happened, we thought it was an exception. But since then it has become more of a pattern, particularly for automotive and hyperscale storage. What’s happening is we’re seeing a shift from a CPU-centric to a memory-centric design. From there you draw a blueprint for how you move data and how quickly the system needs to respond. Figuring that out is critical, because it determines how fast a sensor needs to send data.”
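
One way to picture such a blueprint is as a latency budget worked backward from the system's response deadline. The deadline and per-hop shares below are invented for illustration:

```python
# Apportion an end-to-end deadline across the sensor -> interconnect ->
# memory -> compute -> output path. All shares are invented placeholders.
DEADLINE_US = 1000.0  # e.g., a 1 ms sensor-to-response requirement

budget_shares = {
    "sensor_capture": 0.10,
    "interconnect_transfer": 0.15,
    "memory_access": 0.25,
    "compute": 0.40,
    "response_output": 0.10,
}
assert abs(sum(budget_shares.values()) - 1.0) < 1e-9

for hop, share in budget_shares.items():
    print(f"{hop:>22}: {share * DEADLINE_US:6.1f} us")
# Working backward from these budgets fixes how fast each sensor must
# send data and how much interconnect bandwidth to provision.
```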

Safety-critical markets
The automotive, avionics, industrial and medical sectors add their own set of issues. IP needs to be available, but it also needs to be qualified for the relevant standards for a market.

In avionics, the solution has been for big companies to develop their own IP rather than waiting for commercial off-the-shelf (COTS) to be qualified for a particular node or process. “There are different approaches,” said Louie de Luna, director of marketing at Aldec. “One is to reverse engineer IP, do more test and provide more documentation so it adheres to the standards. That’s what big avionics companies are doing. They don’t buy commercial IP. They make their own. But we’re also seeing a trend where IP vendors are now labeling their IP DO-254 compliant.”

Commercial avionics IP falls into three categories: soft IP, which is synthesizable HDL source code; firm IP, which is basically a netlist that needs to be placed and routed; and hard IP, such as an FPGA with an embedded ARM core. All of that requires extensive documentation. But avionics product cycles average two to five years, so the added cost of developing IP isn't significant. Automotive is a whole different story.

“We’re seeing this shift more and more in segments that are new, such as automotive, hyperscale storage and 5G,” said NetSpeed’s Mohandass. “Those are all emerging segments with little legacy, and the thinking is that whoever gets to market first will capture the market.”

Unknowns
Keeping up with all of these markets isn’t simple for chipmakers. The newness of some segments and the rapid shifts in established markets, such as automotive, are causing regular disruptions. Protocols and standards are in a state of almost constant evolution, which means a chip design started today may be outdated by the time it reaches the market.

This always has been one of the hazards at the leading edge of design, but the problem is spreading. For one thing, many of these new markets do not require advanced-node technology. Moreover, many of these devices are being connected inside much bigger systems, so protocol and standard changes are not confined to finFET-class processes. They affect even analog devices built at older nodes.

There are a couple of ways to deal with this. One is to add margin into chips to absorb any changes. The other is to build programmability into designs, which is where the embedded FPGA market has staked its claim.

“If you’re focused on 5G, the digital front end may look very different over time,” said Steve Mensor, vice president of marketing at Achronix. “You know the function, but you don’t know the filters because those are still being developed. The only way to deal with that effectively is with an FPGA. It’s the same in machine learning. You know there will be a high number of matrix multiplies, but you don’t know the specs for the neuron weights. On top of that, the algorithms change, and you don’t know what the algorithms will be. But if you have a convolutional neural network and you’re doing a 17 x 17 multiplier, that may be overkill. Maybe you only need an 8 x 8. That allows you to significantly decrease the area and decrease the power.”
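
The area arithmetic behind Mensor's example is simple to sketch, using the textbook approximation that an array multiplier's area scales with the product of its operand widths. The model is an approximation, not an Achronix figure:

```python
def relative_multiplier_area(width_a, width_b):
    """Array-multiplier area relative to a 1x1 cell (approx. width_a * width_b)."""
    return width_a * width_b

full = relative_multiplier_area(17, 17)   # what the hardened datapath provides
needed = relative_multiplier_area(8, 8)   # what the quantized network needs
print(f"8 x 8 needs ~{needed / full:.0%} of the 17 x 17 multiplier area")
# ~22% -- which is why being able to resize the datapath, an eFPGA's
# strength, translates directly into area and power savings.
```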

Another issue is figuring out which approach is best because there are so many possible ways to solve a problem and new ways to look at old problems. “We’re seeing a lot of new players, and new offerings from existing players for markets like autonomous driving and artificial intelligence,” said ArterisIP’s Garibay. “There is a new sub-market developing there. There also is a rapidly changing grouping of IPs that are now being made available to the general market. The question is how they will fit into the tool chain.”

This is evident even with microcontrollers, which used to be considered standard parts. The old 8-bit MCU with on-chip memory has given way to 16- and 32-bit devices with connections to external memory, formerly the differentiator between an MCU and a CPU.

“If you talk with the FPGA companies about how their programmable SoCs are used, it could be an embedded CPU or it could be used in a custom MCU,” said Neifert. “That provides huge processing power and additional programming options. The folks developing MCU devices more and more treat it like a CPU with its own ecosystem. One driver of that is the desire to enable the end customer to put more software on there. On top of that, there are security concerns. You want to be able to let the customer run their own software without compromising security. That also requires new tooling, because adding these features without new tooling is pretty worthless. You need software modeling capabilities, compilers and debuggers.”

Conclusion
Changes that are underway throughout the semiconductor ecosystem today are indicative of a fundamental shift in how chipmakers are approaching markets and what is important to them. There are several forces at work here:

• There is a growing recognition that the best way to address a market is not necessarily with the fastest or most power-efficient general-purpose processor. This has prompted a shift toward more heterogeneous processing, advanced packaging, and a focus on how and where to process growing quantities of data.
• IP vendors are becoming increasingly cautious about which process nodes will be lucrative enough to warrant an investment of their resources, because each one is now different enough that it cannot just be moved from one foundry’s process to another.
• There is enough uncertainty in end markets that architects are rethinking what will cause the least amount of disruption to a design if it needs to be tweaked for evolving protocols or standards or market needs.

Taken as a whole, these represent significant shifts in how companies are approaching design, and ultimately they will have a big impact on tooling, IP choices, and the architecture of the design itself.

Related Stories
Rethinking Processor Architectures
General-purpose metrics no longer apply as semiconductor industry makes a fundamental shift toward application-specific solutions.
Heterogeneous System Challenges Grow
How to make sure different kinds of processors will work in an SoC.
Tuning Heterogeneous SoCs
Just adding more cores doesn’t guarantee better performance or lower power.
Homogeneous And Heterogeneous Computing Collide
Part one in a series. Processing architectures continue to become more complex, but is the software industry getting left behind? Who will help them utilize the hardware being created?


