Shift Left Is The Tip Of The Iceberg

A transformative change is underway for semiconductor design and EDA. New languages, models, and abstractions will need to be created.


Shift left is evolving from a buzzword into a much broader change in design methodology and EDA tooling. While it is still early innings, there is widespread agreement that the shift will be transformative.

The semiconductor industry has gone through many changes over the past few decades. Some are obvious, but others happen because of a convergence of multiple factors that requires systemic change in the process itself, which is the case today. On the surface, shift left may look like a minor change involving optimization strategies, or the introduction of new metrics, but it is a fundamental move from an insular semiconductor perspective to the start of the systems age of design. What is perhaps even more surprising is that the push for change is coming from all sides of the existing flow.

This is not the first time the industry has faced such challenges. “Shame on us if the institutional fog of war has caused us to forget those lessons,” says Rob Knoth, group director for strategy and new ventures at Cadence. “We have experienced this multiple times. We have been faced with a scope that is broader than the tools we have, based on physics or phenomena that aren’t perfectly captured by the tools you have. We have faced deadlines that are too aggressive to be able to do a perfect job. The people who have succeeded in those epochs are the ones who are honest about what they can simulate, what they can estimate, and how it will impact the people upstream and downstream of them.”

The change encompasses many areas, some of which are itemized below:

Change in optimization focus: Optimization strategies have shifted from simple power, performance, and area (PPA) metrics to system-level metrics, such as performance per watt. “If you go back into the 1990s, 2000s, the road map was very clear,” says Chris Auth, director of advanced technology programs at Intel Foundry. “Just make things smaller, make things faster, make things cheaper. We are seeing that spread out into a variety of different avenues. PPA optimization is just one thing you do, but there are lots of other things including thermal or frequency per watt that you now optimize for. I don’t think the story is fully written on how to do that.”

3D-ICs: Chiplets are more than just fabricated pieces of IP. They are changing many aspects of the development flow. “3D-IC is changing the market and industry,” says HeeSoo Lee, segment lead for high-speed design at Keysight. “PPA now involves multiple disciplines. You are not vertically structured any longer. You are incorporating a lot of different blocks into a system design. How do you optimize that from a system perspective? What are the languages you are going to use to talk to those different die or different functional blocks? It places interesting market demands for chiplets, or heterogeneous integration, for the system level. We need to optimize system performance by utilizing different die, and then optimize the entire package performance.”

How those die are arranged, assembled, validated, and tested remains a significant challenge, however, and some of that work can be done ahead of time through pre-assembled, pre-integrated chiplets. “The workload and software is where differentiation is happening,” said Christopher Rumpf, senior director of automotive at Arm. “So we will still sell IP products, but we are now assembling them into larger subsystems.”

Increased interdependence: All metrics become connected to each other as we get closer to the limits imposed by physics. “The goal in design closure is to optimize across multiple variables, based on certain conditions or inputs provided by the user,” says Manoj Chacko, senior director of product management at Synopsys. “In addition to PPA, there is now R, for reliability or robustness. This started when we had to consider IR voltage drop, which was impacting performance. Techniques to mitigate that were developed. Then we see variability — of the devices, and device behavior changing based on the neighbors and its context — and that is impacting the performance of a design and impacting power.”

Domain-specific design: Gone are the days of general-purpose design. In many industries, bespoke silicon is the only way to achieve the necessary design characteristics. “You’ve gone from general compute to more workload-specific applications,” says Intel’s Auth. “Everybody has a different way that they need to push the technology. PPA is a component, but there’s a lot of things that PPA doesn’t cover.”

This is especially evident in automotive design. “If you think about a car, it’s a real system modeling challenge,” said Jean-Marie Brunet, vice president and general manager for hardware-assisted verification at Siemens EDA. “They have different sampling rates, different clocks, different accuracy of the models. How do those things talk to each other? It’s an industry integration challenge, which is why we have digital twins. They can give you a visual representation of the end device or system. This is an incredible synchronization and integration challenge, and it’s a very heavy compute platform environment. You have to make a decision about which software will work with specific hardware.”

System design: The system is no longer just the hardware. Ravi Poddar, senior leader and advisor for the semiconductor industry at AWS, noted that Volvo reportedly has about 120 ECUs, 100 million lines of source code, and 3 million functions in 30 million places in that source code.[1]

“We’re seeing a shift left using software and virtual twins,” Poddar said. “This includes virtual hardware in the loop, and it requires you to test early and test often.”

Software is now an integral part of hardware design, but the development schedules for hardware and software often differ. “The software is always missing when doing analysis of the system and the hardware,” says Ahmed Hamza, solution architect for the cybertronic system engineering initiative at Siemens EDA. “You build a system, but if you don’t know what software will run on the system, the whole equation will change. Hardware, system, and software are the three pieces that we need to put together early enough in the analysis that it will impact how future system engineering is done.”

What hasn’t changed is the way we have to approach the problem. “The key thing about optimization is that as you go through the flow, your degrees of freedom decrease,” says Marc Swinnen, director of product marketing at Ansys. “In the beginning, you have many degrees of freedom, so your potential optimization is much higher. As you go further down the flow, your degrees of freedom reduce. The potential goes down.”

Engineers like to divide and conquer. “We cannot do everything at the same time,” says Keysight’s Lee. “We know the order in which things need to be worked out. Of course, not everything is equally critical for every single application or use case. Some designs are more focused on the system performance while others are more focused on power. Every engineer or system architect has their own charter to go in and optimize. We are entering a new era where it’s not completely stabilized or mature.”

Holistic Optimization
What is achievable at any given step in the development flow is limited by predictive uncertainty. “When you make a change in your design, let’s say power, there is the signal, but there is also noise, which means there is uncertainty,” says Ansys’ Swinnen. “The actual power could be a bit higher or lower, because you don’t have the wires, you don’t perform the exact synthesis. There’s a noise level in your information, and given the amount of uncertainty you have at a given stage you can only optimize down to the noise level. Beyond that, any information could be spurious. You are wasting your time; you’re just randomly generating numbers and have no idea if it’s really lower or higher. You’re limited by that noise band, and once you hit the noise band, it’s time to move on to the next step, because there’s no more optimization that makes sense at this step, even in relative terms.”
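To make Swinnen’s point concrete, the following is a minimal sketch in Python (hypothetical code, not any vendor’s algorithm) of an optimization loop that stops once the predicted gain falls inside the noise band of the current stage’s estimates.

```python
# Illustrative sketch (hypothetical, not any EDA tool's algorithm): stop
# optimizing a metric once the predicted gain falls inside the estimation
# noise band of the current flow stage.

def optimize_until_noise_floor(estimate, improve, noise_band, max_iters=100):
    """Iterate optimization moves until gains are indistinguishable from noise.

    estimate   -- callable returning the current metric estimate (e.g. power in mW)
    improve    -- callable applying one optimization move and returning the new estimate
    noise_band -- +/- uncertainty of the estimate at this stage (same units)
    """
    current = estimate()
    for _ in range(max_iters):
        candidate = improve()
        if current - candidate <= noise_band:
            # Any further "improvement" is smaller than the model's uncertainty
            # and may be spurious -- time to hand off to the next flow stage.
            break
        current = candidate
    return current

# Toy usage: each move shaves 5% off a 120 mW estimate; the early-stage
# estimator is assumed to be accurate only to +/- 2 mW.
state = {"power_mw": 120.0}
def est(): return state["power_mw"]
def step():
    state["power_mw"] *= 0.95
    return state["power_mw"]

print(optimize_until_noise_floor(est, step, noise_band=2.0))
```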

Where to start an analysis is highly dependent on optimization goals. “What you need is a good software solution that can simulate the entire system that you’re going to have,” says Auth. “Then you can optimize each of the components. You end up with a lot of very customized software that simulates a variety of these things. It simulates thermals, it simulates frequency or something like that. There’s a big opportunity for software that can give the designer a better answer as to the best way to optimize the technology.”

But it’s essential to understand the accuracy of the models or information being used. “A simple way to describe it is ‘garbage in, garbage out,’” says Lee. “How good is the model that represents each of the die in a heterogeneous integration? If your model is not accurate enough, and you’re building optimization on top of it, then you are optimizing based on garbage. While simulation technologies are super powerful, they need to address all the ingredients in the right way so that accuracy is preserved. If you don’t have those technologies and models, then you’re going to develop something based on the wrong input data.”

In the past, the industry has developed abstractions that help preserve necessary accuracy while reducing computation time. “At the front end, you don’t have metrics,” says Swinnen. “You use proxies. For example, power density is a good proxy for thermal. For timing at the RT level, you use a wire-load model based on fan-out. As you go into placement, you use Manhattan distance. Then as you go to routing, it’s net length. The router is not actually looking at timing. The placer is not looking at timing. Each step has its proxy, which gets better and better, and it’s only when the routing is done that you finally have an RC model. That’s not a proxy anymore. Now you have the actual simulatable delay of the wire.”
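The progression of proxies Swinnen describes can be sketched as follows. The functions and coefficients below are invented for illustration and would need calibration to a real process, but they show how each stage estimates the same wire delay with whatever information it actually has.

```python
# Illustrative sketch of stage-dependent delay proxies (all coefficients are
# made up for illustration; a real flow calibrates them to the process).

def delay_from_fanout(fanout, ps_per_fanout=12.0):
    """RTL stage: wire-load model -- delay proxied by fan-out alone."""
    return fanout * ps_per_fanout

def delay_from_manhattan(src, dst, ps_per_um=0.18):
    """Placement stage: Manhattan distance between placed pins."""
    dist_um = abs(src[0] - dst[0]) + abs(src[1] - dst[1])
    return dist_um * ps_per_um

def delay_from_net_length(length_um, ps_per_um=0.20):
    """Global routing stage: actual routed net length."""
    return length_um * ps_per_um

def delay_from_rc(r_ohm_per_um, c_ff_per_um, length_um):
    """Post-route: Elmore-style RC delay of the extracted wire -- no longer a proxy."""
    r = r_ohm_per_um * length_um
    c = c_ff_per_um * length_um * 1e-15          # fF -> F
    return 0.69 * r * c * 1e12                    # seconds -> ps

# The same net, estimated with progressively better information:
print(delay_from_fanout(4))                       # early, coarse
print(delay_from_manhattan((0, 0), (150, 80)))    # after placement
print(delay_from_net_length(260))                 # after global route
print(delay_from_rc(0.8, 0.2, 260))               # after extraction
```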

This means you must constantly assess what is possible both in terms of accuracy and affordable compute power. “What help do you need to improve the accuracy of your process? You may need some creativity to try strategies that didn’t work in the past,” says Cadence’s Knoth. “Maybe you didn’t have the compute horsepower to do all the simulations you needed, and today you might. When we’re talking about PPA optimization strategies, and most recently we saw this with power, you didn’t have enough horsepower to integrate vectors into the design optimization phase. Well now the algorithms are better, the estimation techniques are more accurate, the simulation horsepower is better. You can start integrating some data that in the past you just estimated, and now you can actually measure it.”

Better estimates allow for more optimization and automation. “Designing a network on chip (NoC) requires analysis based on physical design,” says Andy Nightingale, vice president of product management and marketing at Arteris. “When you control aspects of the physical placement, coupled with many analysis tools, we can build an explorer tool that lets you run traffic profiles. You can push tons of transactions to any part of the network, and you can see how it performs in a worst-case scenario in terms of the quality of service scheme, the width of the interfaces, and other parameters. You’ve got a good idea of wire length in the design, you’ve got a good idea of what the placement is going to look like, you’ve got a good idea of what the latency and bandwidths are going to look like across the network. All those things combine to give a really good starting point to the next stage in the process.”
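A toy version of that kind of what-if traffic exploration might look like the sketch below. It is not Arteris’ tool, and the traffic classes, link parameters, and crude blocking model are all assumptions, but it shows how link utilization and a worst-case service latency per QoS class fall out of a traffic profile.

```python
# Toy NoC what-if sketch (hypothetical, not a real explorer tool): estimate
# utilization of one shared link and a crude worst-case service latency per
# traffic class, using invented traffic profiles.

LINK_WIDTH_BYTES = 32          # hypothetical link width
FREQ_GHZ = 1.0                 # hypothetical NoC clock

traffic = {
    # class: (injected GB/s, burst length in flits, QoS priority: 0 = highest)
    "display":  (8.0,  16, 0),
    "cpu":      (6.0,   4, 1),
    "ml_accel": (12.0, 64, 2),
}

link_bw = LINK_WIDTH_BYTES * FREQ_GHZ            # GB/s through the shared link
total = sum(gb for gb, _, _ in traffic.values())
print(f"link utilization: {100 * total / link_bw:.0f}%")

# Crude worst case for each class: wait behind one maximal burst from every
# other class of equal or higher priority before being granted the link.
for name, (gb, burst, prio) in sorted(traffic.items(), key=lambda kv: kv[1][2]):
    blocking_flits = sum(b for n, (g, b, p) in traffic.items()
                         if n != name and p <= prio)
    worst_cycles = blocking_flits + burst
    print(f"{name:9s} worst-case service latency ~ {worst_cycles} cycles")
```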

The system level is where you have the greatest degrees of freedom. “The majority of the power and performance optimization — something like 70% to 80% — is decided during the earliest part of the design phase with architectural decisions,” says Ninad Huilgol, founder and CEO of Innergy Systems. “Architects typically have some way of performing what-if simulations to decide the optimum architecture. Increasingly, this area is being helped by advanced power modeling that can perform virtual simulations to estimate power and performance.”
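As a rough illustration of that kind of architectural what-if exploration, the sketch below ranks a few hypothetical configurations by performance per watt using coarse, early-stage numbers (all values are invented). At this stage the goal is ranking options, not absolute accuracy.

```python
# Hypothetical what-if sketch: rank candidate architectures by performance per
# watt using coarse, early-stage estimates (all numbers invented).

candidates = {
    # name: (throughput in GOPS, dynamic power in W, leakage in W)
    "4x big cores":        (250.0, 6.5, 0.8),
    "2x big + 4x little":  (210.0, 4.2, 0.6),
    "big cores + NPU":     (480.0, 7.8, 1.0),
}

def perf_per_watt(gops, dyn_w, leak_w):
    return gops / (dyn_w + leak_w)

ranked = sorted(candidates.items(),
                key=lambda kv: perf_per_watt(*kv[1]),
                reverse=True)

for name, params in ranked:
    print(f"{name:20s} {perf_per_watt(*params):6.1f} GOPS/W")
```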

Without the necessary estimation tools, there are limits to what can be achieved. “The thing about predictive uncertainty is that there are two ways to go about it,” says Swinnen. “You guard-band, or you invest more in measuring it. Guard-banding is the easy way. You just make very big margins, but that increases the noise band and limits your ability to optimize. In some cases, this is not immediately obvious. Guard-banding limits the amount of early optimization you can do and forces you to optimize further down the pike.”
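The effect Swinnen describes can be shown with a little arithmetic (the numbers are invented): a guard-band acts like additional noise, shrinking the slack an early-stage optimizer is allowed to trade away.

```python
# Illustrative arithmetic (invented numbers): a guard-band behaves like extra
# noise, shrinking the range an early-stage optimizer can exploit with confidence.

def usable_optimization_window(estimated_slack_ps, noise_ps, guardband_ps):
    """Slack an early-stage optimizer can actually trade away with confidence."""
    return max(0.0, estimated_slack_ps - noise_ps - guardband_ps)

slack = 120.0   # ps of estimated timing slack on a path
noise = 25.0    # ps of estimation uncertainty at this stage

print(usable_optimization_window(slack, noise, guardband_ps=10.0))   # 85 ps to work with
print(usable_optimization_window(slack, noise, guardband_ps=70.0))   # only 25 ps left
```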

Chiplets are one area where this may be happening. “If you talk about backside power, or stacking chips, it’s not sufficient just to think about one chip’s thermals,” says Auth. “You also have to consider its impact on the chips next to it. You have to know how the system is going to exercise it. Previously, we focused on optimizing one die or one chiplet, but when it’s in a bigger package with multiple chiplets, the optimization of one die is a little bit less important than the optimization for the overall package that you’re going to end up with.”

This can create difficulties if the development is not completely top down. “Optimizing system and module metrics is a complementary and evolutionary process,” says Kinjal Dave, senior director of product management in Arm’s Client Line of Business. “While we are increasingly seeing more time and resources being spent on the system level, module metrics are crucial during development phases when a complete system has not yet been established. In the early stages of development, you often can’t construct a full system because not all functionalities are finalized. Hence, you continue analyzing those module metrics and gradually incorporate more system-level metrics later in the development process.”

The industry has work to do before some of this becomes possible. “Hardware teams and EDA tools spend a lot of time performing analysis, but unfortunately this information does not have a path back to the system engineer,” says Siemens’ Hamza. “At the beginning of the analysis, they need some metrics. They need to understand what’s going on. There is immediate information that the system engineer can benefit from. What if we could make it available in a language and a form that they can read and understand? That would change the whole concept of system engineering.”

The lack of information means you have no choice but to use guard-bands. “As you shift left, you’re not going to have complete knowledge,” says Auth. “You’re going to put some guard rails up. And you are planning that when you get more knowledge in the future, you’ll be able to alleviate some of that guard-banding and get more performance.”

Shift left is attempting to move an increasing amount of information forward, but it needs to move out from the existing hardware focus into software and systems as well. “You need to ask the what-if questions at the top level versus the bottom level,” says Arteris’ Nightingale. “You make an untimed model of your design and drop the software onto it, so you get an idea of what the software is going to be demanding of the system. Then you refine that down from that top-level view to say what hardware components are required. You’ll never converge if you say, I’m going to start with X type of processor, and Y type of system fabric and try and end up with a result. You have to start with the end application in mind, and then work down from that.”
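One way to picture that top-down refinement is the sketch below: run a workload trace against an untimed functional model and simply tally what the software demands of the platform before any processor or fabric choice is made. The trace contents and operation names are hypothetical.

```python
# Hedged sketch of "untimed model first": replay a workload trace against an
# untimed functional model and count what it demands of the platform
# (all names and numbers are hypothetical).

from collections import Counter

# A hypothetical workload trace: (operation, bytes) pairs the software issues.
trace = [("dram_read", 64)] * 12000 + [("dram_write", 64)] * 3000 + \
        [("mac_op", 0)] * 500_000 + [("dma_burst", 4096)] * 50

demand = Counter()
traffic_bytes = Counter()
for op, nbytes in trace:
    demand[op] += 1
    traffic_bytes[op] += nbytes

# These totals, not a processor choice, drive the first hardware questions:
# how much memory bandwidth, how many MACs per cycle, what DMA throughput?
print(demand)
print({op: f"{b / 1e6:.1f} MB" for op, b in traffic_bytes.items()})
```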

To solve the problem, everyone has to come to the same table. “The semiconductor industry has achieved the scale that we have because we’ve accepted a certain number of rules,” says Knoth. “To do that, we haven’t always done everything that the process is capable of. If you go outside the silicon and you go to the package world, you go to the PCB world, you go to the mechanical world, there is much less rigidity. We’re going to see some flow outwards of more standardization. And we’re already seeing this at the chiplet level, with more standards getting put in place.”

Conclusion
While shift left is gaining a lot of attention, a broader transformation is underway. It involves players that were never included in the past, who often speak different languages and have their own methodologies in place. In many cases, the changes that need to be made are not just technical. They also affect how organizations are structured. New tools and methodologies have to deliver results that justify the investment they require, both from EDA vendors and from design houses, and they need to keep the long-term direction and goals in mind.

This is not going to be solved in a year or two. It requires rebuilding the entire flow, which ultimately will bring systems engineering and semiconductor development under a single umbrella.

—Ed Sperling contributed to this report.

Reference

  1. Vard Antinyan, technology and strategy leader, software engineering, at Volvo Group.

Related Stories
Shift Left, Extend Right, Stretch Sideways
Development flows are evolving as an increasing number of optimization factors become interlinked. Shift left is just one piece of it.
Is PPA Relevant Today?
Power, performance, and area/cost have been the three optimization targets for decades, but are they pertinent for today’s complex systems?


