What Makes A Chip Design Successful Today?

Maximum flexibility is no longer a reliable path to product success. Flexibility must serve a purpose; without one, it can become a liability.


“Transistors are free” was the rallying cry of the semiconductor industry during the 1990s and early 2000s. That is no longer true.

The end of Dennard scaling made it difficult to power all of a chip's transistors at once, but the transistors themselves remained effectively unlimited. This led to an era in which large amounts of flexibility could be built into a chip. It didn’t matter whether all of it was used; greater flexibility made the total market opportunity larger.

Fast forward another decade and unused transistors are becoming a lot more expensive—and in some cases a liability.

With scaling no longer happening for many companies, competitiveness now comes from better design, better performance and lower power. More demands are also placed on each of those transistors. They have to implement functionality that is secure, safe, able to remain in service much longer, tolerant of harsh environments and, in many cases, updatable in the field. This means that devices often have to be tuned for a specific end application.

This specificity imposes a greater cost per transistor for design, verification, manufacturing and test, and it is causing many business decisions to be rethought.

“To build a product that appeals to as broad an audience as possible, people used to build in a lot of flexibility,” says Jeff Miller, product marketing manager for Tanner products within Mentor, a Siemens Business. “We want all the features that everyone is going to want. That comes with a certain overhead. One of the great challenges of engineering, especially engineering as it intersects with marketing, is to build a product with the right feature set to appeal to enough markets so that it is a successful product without compromising itself and collapsing under its own weight.”

Products have to balance costs and flexibility. “While designing SoCs, flexibility to support a large subset of applications remains the key for design companies to succeed, especially when volumes are low,” points out Ranjit Adhikary, vice president of marketing for ClioSoft. “They are all too aware of the cost of the silicon and take the necessary measures to decide which features they must retain in order to achieve the desired price, performance and area.”

At the newest nodes, the equation is a little different. “The economics of semiconductors are changing,” says Geoff Tate, CEO and co-founder of Flex Logix. “There are escalating mask costs, die in the hundreds of square millimeters, design teams of hundreds of engineers and IP budgets in excess of $10M. Transistors may remain cheap, but it’s the development cost and time that is the growing problem. Of course, a design done from scratch, one that is finely tuned to include only what is needed, is better than one that is bloated. But few companies can afford to do everything from scratch anymore because of money or calendar time.”

Fig. 1: Has EDA failed to keep up with Moore’s Law? Source: DARPA

Selecting IP
The industry got to where it is today thanks to reuse and a growing supply of third-party IP. This significantly reduced time to market and the size of engineering teams. The same market forces influenced the development of IP—maximize the feature set so that it would appeal to the greatest possible market.

The introduction of the RISC-V open-source ISA calls that wisdom into question. “For some applications, cost is very important,” says Krste Asanović, chief architect for SiFive. “Cost is a project-by-project decision. Flexibility is more of a C-suite decision. If you are going to innovate, do you want flexibility? That will cost you, because you have to tool up your engineering. So cost is not the primary issue there. You are addressing a market that you couldn’t address before, you have some competitive advantage, and maybe you will charge more for the chips because they are more capable than others.”

Having flexibility is the key. “Freedom of design to do what you want without lock-in is an interesting business proposition,” says Martin Scott, CTO for Rambus. “Enrolling the academic community, as well as the commercial industry, to add extensions to the ISA that inherently make crypto and security easier is a hard problem, but it needs to be done. Designing with security expertise and making those tradeoffs architecturally is also important. This may support why we are seeing traction earliest in smaller designs and IoT designs. It is simply easier to take complete architectural control of those devices rather than large multi-threaded processors that have to interact with large applications.”

But flexibility requirements also depend on the maturity of the end market. “Devices and chips that start off as a new technology tend to be very broad,” says Miller. “There is a large NRE and tooling costs involved in bringing a chip to market, so people want them to be as broad as they can be. But over time, these are optimized to particular use cases and environments.”

There are other inherent advantages of smaller, simpler products. “Simpler products offer a smaller attack surface, thus improving security,” points out Sergio Marchese, technical marketing manager for OneSpin.

From a security standpoint, simplicity is very attractive. “Being simple and starting from a simpler base makes issues such as verification and security more tractable,” Asanović adds.

That doesn’t eliminate the need for verification of pieces that aren’t needed, however.

“It is not that easy,” explains Adnan Hamid, CEO for Breker Verification Systems. “If functionality exists in a design, it has to be verified, regardless of whether you use it for the intended application. That functionality could be activated through a security vulnerability, and you may need to ensure that a product remains safe under all conditions.”

And while standards have continued to grow and become more complex over time to meet new demands being placed on them, the complexity of some designs and the need for more granularity goes well beyond those standards. “We had to redesign our USB 2 core to fit into a smaller area on a 40ULP type of IoT device,” explains Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “To get to smaller area and lower power, there is a tradeoff in some of the features. Some features were removed and others, such as battery charging, have been added. For IoT edge devices, they still want USB 2, but they do not need 480Mb/s speed. They care about optimum power and area for the data speeds that they need.”
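The tradeoff Nandra describes can be thought of as a budgeting exercise. The sketch below is purely illustrative and not Synopsys’ actual configuration flow; the feature names and gate counts are invented for the example.

```python
# Hypothetical illustration of trimming optional features from an IP block to
# meet an area budget. Feature names and gate counts are invented.
FEATURES = {
    "high_speed_480mbps": {"gates": 30000, "required": False},
    "full_speed_12mbps":  {"gates": 8000,  "required": True},
    "battery_charging":   {"gates": 5000,  "required": False},
    "otg_host_mode":      {"gates": 12000, "required": False},
}

def configure_core(gate_budget, wanted):
    """Keep required features, plus requested optional ones that fit the budget."""
    chosen, used = [], 0
    # Required features are always included.
    for name, f in FEATURES.items():
        if f["required"]:
            chosen.append(name)
            used += f["gates"]
    # Add optional features in the caller's priority order while budget allows.
    for name in wanted:
        f = FEATURES[name]
        if not f["required"] and used + f["gates"] <= gate_budget:
            chosen.append(name)
            used += f["gates"]
    return chosen, used

# An IoT edge device that wants battery charging but cannot afford 480 Mb/s:
cfg, gates = configure_core(15000, ["battery_charging", "high_speed_480mbps"])
```

In this toy configuration, the high-speed block is dropped because it blows the budget, mirroring the point that edge devices keep USB 2 but shed the features they don’t need.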

New optimization challenges
Automotive and IoT add requirements that have not existed in the past. Both of these markets are looking for product deployment times that far exceed those of commercial products, and this makes product updates in the field necessary.

“You need to preserve enough flexibility so that they can be useful well into the future and adopt different mission profiles as industries evolve,” explains Miller. “For IoT and businesses of that type, it is about the service and value you are delivering. It is no longer about the unit costs of the CPU or an AI processor. It is about the most effective way to deliver the service, which provides the revenue. This challenges the more traditional semiconductor way of looking at things, such as selling chips. Chips are being designed to sell services.”

Machine learning also is creating new demands for flexibility. “AI algorithms are continually changing,” says Mike Gianfagna, vice president of marketing for eSilicon. “This leads to hardware acceleration that is typically done with GPUs or FPGAs, which allow for field updates as algorithms evolve. This trend has its limitations, however. A custom AI accelerator in the form of an ASIC will always deliver superior power, performance and total cost of ownership when compared with more general and programmable approaches. What is needed is a flexible ASIC architecture with the ability to quickly adapt to changing algorithms.”
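One common way to reconcile a fixed-function accelerator with evolving algorithms is to run supported operators on hard-wired logic and fall back to a programmable path for everything else. The sketch below is illustrative only; the operator names and the two execution paths are invented, not any vendor’s architecture.

```python
# Illustrative dispatch between fixed-function and programmable execution.
# Operators frozen in silicon at tape-out:
HARDWIRED_OPS = {"conv2d", "matmul", "relu"}

def run_layer(op: str, accel_log: list) -> str:
    """Route an operator to the accelerator if it exists in hardware."""
    if op in HARDWIRED_OPS:
        accel_log.append(op)
        return "accelerator"    # fast, fixed-function path
    return "programmable"       # slower DSP/FPGA/firmware fallback

log = []
# "swish" post-dates tape-out in this sketch, so it takes the fallback path.
paths = [run_layer(op, log) for op in ["conv2d", "swish", "matmul"]]
```

The design choice here is the essence of the tradeoff Gianfagna describes: the common, stable operators get ASIC efficiency, while newly published operators still run, just less efficiently, until the next silicon revision.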

Marc Naddell, vice president of marketing for Gyrfalcon Technologies, agrees. “A generic implementation, a chipset with comprehensive software engineered to accommodate any use, would be costly overkill. No manufacturer really wants to deploy that because it will be expensive and not energy-efficient. This is particularly true for edge devices. Companies are trying to introduce incrementally advantageous solutions and to make sure they fit all of the core requirements that will make them a successful mass-market product. They are being very specific in terms of the benefits they are trying to deliver on the edge. They have to make sure the device is still comparably small, because customers want small devices.”

A balance has to be found. “New algorithms and research papers are coming out frequently,” says Martin Croome, vice president of business development at GreenWaves Technologies. “We cannot be specialists in optimizing last year’s problem. We need an element of programmability. Some people take the approach of building a chip for a specific subject. That works for something like keyword wake, where there are enough products that it makes sense. But in the more general market for sensors and edge devices, there is a huge amount of segmentation. If you build to one specific job, you will not have enough volume.”

The same reasoning applies for security. “As things become connected, concerns about security become paramount,” explains Miller. “Security is an incredible challenge to get right the first time. What most people rely on is the ability to field upgrade their devices to overcome the latest discovered threats. The security aspect of flexibility becomes very important. If you have all of your functionality fixed in hardware, you will have a hard time sending out an update when a security vulnerability is found, versus having more embedded processors that can be firmware upgraded in the field.”
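The field-upgrade capability Miller describes only improves security if the device can tell a legitimate patch from a malicious one. A minimal sketch of that gatekeeping step follows; it is hypothetical and not tied to any vendor’s update mechanism, and an HMAC stands in here for the asymmetric signature a real device would typically verify.

```python
# Minimal sketch of signed firmware updates: the device applies only images
# whose signature verifies against a key provisioned at manufacture.
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"  # placeholder secret for the sketch

def sign_image(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Compute the image's authentication tag (stand-in for a real signature)."""
    return hmac.new(key, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Install new firmware only if the signature checks out."""
    if not hmac.compare_digest(sign_image(image), signature):
        return False  # reject tampered or unsigned images
    # ...write image to flash and reboot (omitted)...
    return True

patched = b"firmware-v2-with-vulnerability-fix"
ok = apply_update(patched, sign_image(patched))   # legitimate update
bad = apply_update(patched, b"\x00" * 32)         # forged signature, rejected
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: a naive byte-by-byte check can leak timing information that helps an attacker forge a valid tag.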

Embedded FPGA capability is a new type of flexibility being added to some devices. “FPGA onboard the device is a really useful way to provide flexibility in an economic environment like the IoT,” says Gordon Allan, product manager for Questa at Mentor. “Everyone needs 5G, everyone needs WiFi, everyone needs high-definition video, audio, voice recognition—all of those standard functions—so where does someone differentiate their product? It can either be from software or by programmable hardware such as an embedded FPGA block.”

Reducing design costs
If design costs could be reduced, it would have a significant impact on the number of products that could exist in the market. The specificity of those products would lead to less waste in terms of area and power and result in more optimized products.

As the race for the next manufacturing node slows down, could we expect more optimized tools for mature nodes? “The latest nodes have challenges like double and triple patterning that don’t exist with the larger feature sizes,” says Miller. “So there are some features that would not benefit someone using a larger feature size. The challenge is to ensure the folks operating at 180nm are not subsidizing the needs of the 7nm customers, and that they pay a fair rate for tools that meet their needs. The tools are similar in many ways. There are capabilities that make more sense for people designing sensors and power chips at larger feature sizes, and there is a distinct set of capabilities that people at advanced nodes will need.”

Tools with greater focus can help. “What is more important is focusing the technology on the specific problem at hand,” says OneSpin’s Marchese. “If you try to create a model that is good for many different problems, that might save some development effort but might not be optimized for any specific application. This is especially true in formal tools, where small changes in the internal model can have a huge impact on performance. A similar argument can be made for tool ease of use. The key here is to present the user with only the options and features that are relevant for the task at hand.”

Reworking tools is problematic. “It is difficult for EDA tool providers to remove features from a tool,” says ClioSoft’s Adhikary. “For tools that designers have used for a long time, the ramifications of any changes can potentially affect a lot of designs. Scripts that once worked could break if the features supporting them are removed. It is also not very easy to determine which feature is being used by which customers.”

In highly competitive fields, balancing design costs, manufacturing costs and flexibility can be a make-or-break decision for product success. The days of super-flexible chips are gone, replaced in some cases by demands for a new kind of flexibility—field upgradability.

Emerging markets are shifting the focus from functionality supplied at purchase time to functionality that evolves over the lifetime of the product. In addition, greater attention has to be paid to ensuring that any available flexibility is secure. A significant reduction in the cost of design may be needed for more optimized products to be brought to market economically.
