Racing To Design Chips Faster

As the market begins shifting toward more vertical solutions, methodologies, tools and goals are changing on a grand scale.

A shift is underway to develop chips for more narrowly defined market segments, and in much smaller production runs. Rather than focusing on shrinking features and reducing cost per transistor by the billions of units, the emphasis behind this shift is less about scale and much more about optimization for specific markets and delivering those solutions more quickly.

As automotive, consumer electronics, the cloud, and a number of industrial market slices edge toward more targeted designs, the metrics for success are shifting. Large SoCs glued together with software will continue to push the limits in high-volume markets such as mobile phones, but it’s becoming clear that this is no longer the only successful path forward. Squeezing every last penny out of the design and manufacturing processes is proving less important in some markets than providing a compelling solution within a given time frame.

This is hardly a straight line, though. Customization—or at least partial customization—is proving to be a different kind of technology and business problem. The emphasis more often than not is on how chips behave in the context of a system and within acceptable use-case parameters, rather than just meeting specs for power, performance and cost. And it has set in motion a series of changes in how design tools are used, in the tools themselves, and in how companies prepare for these new market-specific opportunities.

“In the past, the semiconductor industry was all about finding a market where you could make one design for a huge market that gave you economies of scale,” said Zining Wu, chief technology officer at Marvell. “Two things are changing. First, as we get further along in Moore’s Law, cost is becoming a problem. So you have to do things differently. And second, the trend that is becoming apparent with the Internet of Things is that different segments require different products. So you’ve got a prohibitive cost on one side, and more fragmented markets on the other.”

As the ante at the leading edge of Moore’s Law goes up, the number of chipmakers pushing to the next node is diminishing. Acquisitions and consolidation are a testament to just how difficult it has become to stay the course of shrinking features. But even some of the die-hard followers of Moore’s Law, as well as a number of smaller players, are exploring other options in different markets. Chips developed at the most advanced process nodes still account for the largest volumes of chips produced from complex designs, but they no longer represent the largest number of designs.

Jean-Marie Brunet, marketing director for the Emulation Division at Mentor Graphics, calls it the beginning of the Applications Age. “There is no limit to verticalization. Hardware alone is no longer the differentiating factor.”

Brunet said the key concern on the design side as these new vertical market slices are created is eliminating risk. “You want lower risk at tapeout. If you have a new application and you address a solution to new users, that is about decreasing risk.”

Part of that risk is missing market windows. Tom De Schutter, director of product marketing for virtual prototyping at Synopsys, said there is time-to-market pressure building in markets that never grappled with this issue before. In automotive, for example, the average design time used to be seven years, with a five-year turnaround considered to be aggressive. In some cases, notably in infotainment systems, that window has shrunk to as little as a year because anything longer than that is outdated technology.

“One of the big changes is a shift from design to integration,” De Schutter said. “Design is still important because you need something to integrate. But the focus on integration is new. This is not just about developing a massive SoC. There are a lot of different platforms and subsystems, where you provide the application needed for a specific market.”

This certainly doesn’t mean the market for advanced SoC designs is weakening. But it does mean there are many more opportunities developing around the edges that are focused on different goals.

“Things are changing for a subset of the market,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “If you look at the ITRS data, cost is still a basic hurdle to overcome and tools are developed as a way to make designers more productive. That will continue to happen. But there also are a larger number of smaller designs coming to market, and the challenges there are much different.”

Faster, more integrated tools

One of the benefits of this shift is that faster and better-integrated tools are being used across a much wider class of designs. This has always been a core consideration for the EDA market, which is why emulation sales now represent a sizeable portion of revenue at all of the large EDA vendors. But emulators are now being sold in conjunction with FPGA prototypes, and they’re being integrated with tools from across the flow. At the same time, EDA vendors are investing more in updating and expanding what their tools can do and how quickly they can do it. This is obvious for finFET-based designs, where there is more to analyze, but it is being applied to other sectors, as well.

“As you move from 22nm to 16nm, line-widths change, electromigration rules change, and as you go from 2D to finFET, drive strength changes,” said Aveek Sarkar, vice president of product engineering and support at Ansys. “You have local self-heat. There are additional metal layers, so the heat trapped in dielectrics increases. All of this can worsen the life of a chip, so you need a better temperature profile.”
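
The temperature dependence Sarkar describes is typically modeled with Black’s equation for electromigration lifetime. The short Python sketch below is illustrative only, with placeholder constants rather than foundry-calibrated values, but it shows why even a few degrees of local self-heat take a visible bite out of projected interconnect lifetime.

```python
# Illustrative sketch only: Black's equation for electromigration mean time to failure
# (MTTF), showing why a hotter temperature profile shortens interconnect lifetime.
# The constants A, n and Ea below are placeholders, not foundry-calibrated values.
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf(current_density, temp_celsius, a_const=1e10, n_exp=2.0, activation_ev=0.9):
    """MTTF = A * J^(-n) * exp(Ea / (k*T)), with T in kelvin and J in A/cm^2."""
    temp_kelvin = temp_celsius + 273.15
    return a_const * current_density ** (-n_exp) * math.exp(
        activation_ev / (BOLTZMANN_EV_PER_K * temp_kelvin))

# A few degrees of local self-heat make a visible dent in projected lifetime.
baseline = em_mttf(1e6, 85)
for t in (85, 95, 105):
    print(f"{t} C: {em_mttf(1e6, t) / baseline:.2f}x of the 85 C lifetime")
```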

That requires much faster system-level simulation, whether the system is defined as a chip or as a larger system that includes that chip. Either way, there are increasingly contextual considerations, such as where and how a chip will be used, along with a list of physical and electrical conditions for the entire system. As chips are targeted at new markets, those factors need to be considered on a much larger scale than before, particularly when chips are being designed for automotive, medical and industrial applications.

“What we’re finding is that everything is changing at once,” said Bill Neifert, director of models technology at ARM. “A lot of it is in the larger scale applications of things we’ve been doing in the past. So with front-end processing, there is an acceleration of capabilities. There are more subsystems where you bundle IP with software and that has to be validated. Almost every company now has at least one emulator. Virtual prototyping used to be a ‘nice-to-have’ but now it’s a ‘must-have.’ And it’s not just for software development as companies are using virtual prototypes at multiple points in the design cycle. There has been an expansion of all these things.”

Neifert added that these demands have spread well beyond the mobile market into custom applications and chips. “Even with all of the consolidation, we have not seen a decrease in overall activity.”

Part of the effort to speed up designs also involves more expertise from more sources. One criticism that has been made repeatedly by EDA insiders is that customers do not have the expertise to take full advantage of the tools. That has generated two different approaches. First, services are frequently offered, and sometimes bundled, with tools and IP.

“The key thing for modern verification platforms is scalability and reusability to operate with different flows, languages and vendors,” said Zibi Zalewski, general manager of the Hardware Division at Aldec. “The more complicated the project, the bigger the number of different elements that need to talk to each other. All those elements result not only in tight tool integrations and optimal data exchange, but also in very close cooperation between design companies and EDA tool providers. It is no longer just a tool deal. It is widely understood to include engineering services, the tools, IP, automation and overall expertise to help the partner with the project challenges. Tight schedules and multi-discipline projects force team members to focus on their own part and use the wisdom and experience of others. An EDA tool provider must be ready not only to help with tool operation, but also to customize the tool and IP for the ongoing project, becoming an active participant in the project with a strong influence on schedules and deliveries.”

The second approach is to make the tools more reflective of how engineers actually use them rather than how tools vendors think they should be used. “Most emulator use models are RTL, and you go from RTL to gate with a synthesis process,” said Mentor’s Brunet. “But when you get an ECO because the silicon comes back with a bug, you make a fix to the netlist but you don’t go back to RTL. So you need a robust gate-level flow.”
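
To make that concrete, here is a minimal, hypothetical sketch of what a gate-level ECO amounts to: patching cells and connections directly in the netlist rather than regenerating it from RTL. The netlist format and gate names are invented for illustration and are not tied to any vendor’s flow.

```python
# A minimal, hypothetical illustration of a gate-level ECO: the fix is applied to the
# netlist (cells and their connections), not to the RTL. Gate names, cell types and
# the patch format are invented for illustration, not tied to any vendor flow.

netlist = {
    "U1": {"cell": "AND2", "inputs": ["a", "b"], "output": "n1"},
    "U2": {"cell": "OR2",  "inputs": ["n1", "c"], "output": "out"},
}

def apply_eco(netlist, gate, new_cell=None, rewired_inputs=None):
    """Patch one gate in place: swap its cell type and/or reconnect its inputs."""
    entry = netlist[gate]
    if new_cell is not None:
        entry["cell"] = new_cell
    if rewired_inputs is not None:
        entry["inputs"] = list(rewired_inputs)
    return netlist

# Silicon came back with a bug: U2 should be a NOR2 driven by n1 and d, not n1 and c.
apply_eco(netlist, "U2", new_cell="NOR2", rewired_inputs=["n1", "d"])
print(netlist["U2"])
```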

Beyond correct by construction
One way to cut time to market and reduce costs is to get a chip right the first time. Re-spins are expensive in terms of engineering resources, but they are even more costly in highly competitive markets, where a few months can determine who wins and who loses a deal. This has given rise to the often-cited “correct by construction” idea, which in theory sounds great. Reality is usually rather different, particularly in complex designs, where correct by construction rarely happens on the first try. One chipmaker insider described it as “a fantasy.”

Nonetheless, there are things that can be done to minimize the impact of engineering change orders (ECOs) and bugs that are found too late in the process to effectively do anything about them in hardware. That is getting a second look, both by chipmakers and tools companies.

“One of the most neglected areas is architecture,” said Sundari Mitra, CEO of NetSpeed Systems. “Right now, EDA starts at RTL. What’s missing is automation and algorithmic content. Companies have been used to taking spreadsheets and verifying if they’re correct and whether they meet performance. That does nothing for the SoC construct, bandwidth and latency. Those need to be part of the architectural design.”
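
The kind of pre-RTL check Mitra is pointing at can be sketched in a few lines. The traffic flows, capacities and numbers below are invented; the point is simply that bandwidth and latency budgets can be validated at the architectural level, before any RTL exists.

```python
# A hedged sketch of the kind of pre-RTL architectural check described above: verify
# that each traffic flow in an SoC fits its bandwidth and latency budget. The flows,
# capacities and numbers are invented for illustration.

flows = [
    # (initiator, target, required bandwidth in GB/s, maximum latency in ns)
    ("cpu_cluster", "ddr",  12.0, 150),
    ("gpu",         "ddr",  25.0, 400),
    ("camera_isp",  "sram",  4.0,  80),
]

link_capacity_gbps = {"ddr": 34.0, "sram": 16.0}
link_latency_ns    = {"ddr": 120,  "sram": 40}

def check_budgets(flows):
    # Sum the demand on each target and compare it against capacity.
    demand = {}
    for _, target, gbps, _ in flows:
        demand[target] = demand.get(target, 0.0) + gbps
    for target, total in demand.items():
        status = "OK" if total <= link_capacity_gbps[target] else "OVERSUBSCRIBED"
        print(f"{target}: {total:.1f} of {link_capacity_gbps[target]:.1f} GB/s -> {status}")
    # Flag any flow whose latency budget the interconnect cannot meet.
    for src, target, _, max_ns in flows:
        if link_latency_ns[target] > max_ns:
            print(f"latency violation: {src} -> {target}")

check_budgets(flows)
```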

Mitra noted that innovation in the mobile market has been an evolutionary engineering challenge. “With the IoT and automotive, it’s revolutionary. If you look at a car, it will be able to sense if a driver is asleep and it will be able to sense that at different frequencies and transmit that to automated controls. And it will all be merged into one or two chips. We need to change how we think about putting chips together.”

This is important for another reason, as well. Consumers and businesses are demanding the same kinds of capabilities in other devices that are now available in smartphones and tablets: the ability to stay current through regular updates.

“People are selling cars sooner just to get new features,” said Kurt Shuler, vice president of marketing at Arteris. “That requires much more flexibility in hardware and software. This used to be a waterfall development process, where you went from one design process to the next. That’s starting to change everywhere.”

Different packaging options
One related change is in the packaging, and this is happening across high-volume mobile markets as well as vertical markets. Big processor companies such as IBM and Intel, as well as a number of networking chip companies, have all publicly embraced 2.5D packaging as a way of modularizing and re-using design components. Even Apple reportedly is using a fan-out package for its next-generation iPhone, and AMD has been selling a 2.5D graphics chip since last year.

But this will take time to catch on beyond early adopters in price-insensitive markets. “What we’re finding is that customers don’t want to take on too many new things at once,” said Mike Gianfagna, vice president of marketing at eSilicon. “So they may move to 2.5D using an interposer, or they’ll do monolithic cores on a substrate. They might try one new thing, but they’re not going to do them all. So they might use a different packaging strategy or they might decide to use multiple cores on a single chip. But throwing in everything at once is too risky.”

Gianfagna noted that one of the big drivers for this change in the high-volume markets is that the 28nm design flow isn’t working for 16/14nm. “Verifying 2.5D is not all that complex,” he said. “But if you’re doing 16nm chips—and we’re working on those now—it requires substantially more resources. Those are larger and more complex designs. You’ve got double and triple patterning, timing closure issues, different parasitic effects.”

Marvell has taken a different slant on this problem, developing its own serial interconnect and software and providing customers with a menu of modular chips (MoChi’s) that can be customized quickly for various vertical markets, including those where semi-custom chips will be sold in lower volumes.

“The challenge was to make the serial IP robust enough that it could support different MoChi’s in the same package or across a PCB in two chips,” said Wu. “The software is related, but the underlying physical IP is transparent to the software. The MoChi’s are connected at the bus level. So to make the system work, you have a northbridge (CPU communication) and several southbridges (I/O).”

He noted this will work in 2.5D through an interposer, or in a fan-out package.
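
As a purely illustrative sketch (the names and structure below are hypothetical, not Marvell’s actual interface), the topology Wu describes can be modeled as one northbridge die and several southbridge dies connected over a serial link, regardless of whether they sit on an interposer, in a fan-out package, or across a PCB.

```python
# Hypothetical model of a modular-chip (MoChi-style) topology: one northbridge die
# handling CPU communication, several southbridge dies handling I/O, all connected
# at the bus level over a serial interconnect. Names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class Die:
    name: str
    role: str            # "northbridge" or "southbridge"
    functions: list = field(default_factory=list)

@dataclass
class Package:
    interconnect: str    # "2.5D interposer", "fan-out", or "PCB"
    dies: list = field(default_factory=list)

    def validate(self):
        # The system needs exactly one northbridge and at least one southbridge.
        north = [d for d in self.dies if d.role == "northbridge"]
        south = [d for d in self.dies if d.role == "southbridge"]
        assert len(north) == 1 and len(south) >= 1, "invalid MoChi-style topology"
        return True

pkg = Package(interconnect="fan-out", dies=[
    Die("cpu_die", "northbridge", ["CPU cluster", "coherent fabric"]),
    Die("io_die_0", "southbridge", ["PCIe", "USB"]),
    Die("io_die_1", "southbridge", ["Ethernet", "SATA"]),
])
print(pkg.validate())
```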

Conclusions
Targeted solutions, with semiconductors as the core components, will continue to enable more vertical markets as the economic and time-to-market equations shift. Massive changes are underway across many market segments, and they will drive sales for existing chipmakers, tools companies, packaging houses and foundries, as every facet of the industry begins to adapt to new opportunities.

However, this isn’t entirely a big chipmaker’s game anymore. The price of entry is no longer based on the ability to develop a finFET at 16/14nm. Increasingly, it will include the ability to leverage market expertise and knowledge about specific vertical needs, using pre-developed subsystems or platforms, new ways to put them together, and perhaps even the most advanced tools delivered as a service. Shrinking features and cramming everything onto a single die is one strategy, but it’s no longer the only one. And that will become increasingly clear as new market solutions are developed faster than ever before.



6 comments

Roger Sundman says:

Ed,

An excellent article at the right time, very interesting. The “pre-developed” subsystem/concept will become a necessity for smaller companies in need of an ASIC at a cost they can afford. According to an earlier article in Semiconductor Engineering, Marvell’s CEO claimed a cost of $40 to $50 million for a 28nm project, which requires sales of many millions of devices, i.e., more than any typical IoT device would sell. A majority of all companies may produce 10k units/year, plus or minus several thousand, but the number of companies is almost uncountable. That’s why the annual shipment of processors can reach 15 billion to 20 billion a year. In perspective, you need 1,500,000 companies each buying 10k units to reach those billions, a substantial army even after deducting mobile phones and smart cards. Therefore, nearly 100% of all companies can’t take on the burden of the extreme NRE cost associated with bleeding-edge nodes. However, this new war theater will create a very dynamic environment, and new business models must come into place to take advantage of the new challenges. The creativity of processor architects, design engineers and marketers will all contribute to this vibrant market.

As predicted by processor architect Dr. Nick Tredennick and Brion Shimamoto almost 13 years ago, the use of reconfigurable processors will finally emerge.

Roger Sundman
Imsys

Karl Stevens says:

Very interesting for sure.
Building at or near the transistor level cannot be done quickly, simply because of the number of wires/connections required. That leads to a need for programmability. However, embedded cores with their memory, caches and peripheral connections are not the solution either.
For some reason, no one seems to be interested in microcode control. One complaint is that the control word is very wide. Well, that is a non-issue inside a chip. Further, it does not mean a control bit for every flip-flop, and some fields can be encoded.
GPUs are given blocks of data that are kept local while processing.
FPGAs use LUTs for logic, but they still have to be placed and wired, so they are not a total solution. FPGAs are good for DSP, though, as they can process input on the fly without using memory. DSP blocks could also be used on ASICs.
So why not consider using microcode to execute if/else, loop, and assign statements directly? No, it is not C to hardware. It is done using memories that can be loaded very quickly, and the footprint is small, so functions can be dedicated to reduce RTOS usage.
This is a design problem, not an EDA issue.

Roger Sundman says:

Microcode is what we do at Imsys, and it’s about architecture. We are very interested in it; we achieve the best energy efficiency among existing processors, and offer the best code density and smallest silicon area, along with a number of other interesting features.

Karl Stevens says:

@Roger: IBM started using microcode control in the IBM System/360 processors about 50 years ago and continued through several generations of systems.
That was an engineering choice, not an EDA creation.
It is sad that today’s designers and managers believe that EDA companies will create the magical tools that will solve today’s issues. The EDA companies do not engineer/design systems or products. They will just continue to dig the same hole deeper.

Roger Sundman says:

Karl, yes, and I’m one of the happy guys who was actually using what you are mentioning. I had access to an IBM/360/40 with an IBM 1401 emulator in it. This was a very clever solution to go from one architecture to another without too much pain. One has to remember that computers at that time were rented from IBM, very clever of them. The monthly fee (176 hours) was extremely high. A tape station, the size of a fridge, cost about my monthly salary. Tape density was 556/800 bpi! That was some history, and almost by chance, many years later, I learned about micro-coding and how efficient it can be, both in terms of speed and power efficiency. I don’t do any micro-coding myself but like to talk about it ;)
Regards

Karl Stevens says:

Hi, Roger. The thing that micro-code must have is local memory. The key to flexibility in chips is using memory to evaluate Boolean expressions and using memory instead of flip-flops.
So far there is no interest in my project, which uses three dual-port memories to execute C source code directly.
The control is an evolution of that Mod 40’s, and memory in the form of FPGA LUTs is used for the ALU and counters.
