Shift Left, Extend Right, Stretch Sideways

Development flows are evolving as an increasing number of optimization factors become interlinked. Shift left is just one piece of it.

The EDA industry has been talking about shift left for a few years, but development flows are now being stretched in two additional ways: extending right to include silicon lifecycle management, and stretching sideways to include safety and security. In addition, safety and security join verification and power as vertical concerns, and those concerns are increasingly interlinked.

All of these are putting pressure on traditional development methodologies, which were based on a waterfall model. Increasingly, members of a development team need to be aware of what surrounds them in the design flow, including:

  • Making early estimations and decisions about things that traditionally came later in the flow;
  • Mitigating problems that may happen later in the flow to improve design time;
  • Working with those upstream in the flow to help validate earlier decisions or debug issues;
  • Dealing with specialists that may be concentrating on issues such as safety or security;
  • Developing models or abstractions that enable concurrent activity to be performed, such as hardware models on which software can be executed; and
  • Developing vector sets that are used for design optimization, such as peak power or thermal scenarios.

“Shift left is a fancy marketing euphemism for ‘shortening design cycles’,” says Steve Roddy, chief marketing officer at Quadric. “It creates several interesting changes, both in design team behaviors as well as engineer skills. At its core, shift left encourages overlaps of design processes that previously were sequential. Designers at all stages of product development thus need to pay even greater attention to what their coworkers wrestle with, both upstream and downstream of each individual.”

Its impact is broad. “Everybody needs to shift left,” says Matt Graham, product engineering director at Cadence. “Even when you’re not shifting left or pulling in timelines, you are packing more into those timelines. There’s a lot more parallelism happening. What used to be a waterfall is now becoming concurrent loops. This is pervasive and is going to continue. It will not get to a place and plateau. It’s going to keep going.”

Still, shift left means different things to different people. “Shift left is about avoiding surprises,” says Frank Schirrmeister, vice president solutions and business development at Arteris IP. “Teams are trying to do a larger scope of integration than their individual field, with neighboring fields as early as possible. Traditionally, it has been used the most in the context of hardware/software, and now happens in other areas. To me, shift left always means that you do something at a higher level of abstraction, but in reality there are many shifts left where you don’t really do something that much earlier, staying within the scope of fidelity. This is basically continuous integration.”

Other areas have also been impacted in the past. “Shift left is not new, but most of it was in a narrow space,” says Neil Hand, director of strategy for design verification technology at Siemens Digital Industries Software. “Verification started earlier, but it stayed within its domain. What has happened over the last few years is shift left has spread to more domains, and more importantly, it’s now happening between domains. When you look at model-based engineering and shift left, you look at the way data is managed and its role. They all start to become intermingled. System design is becoming more critical, and the feed-forward of information through the design cycle, and the feedback of information — either for the same generation of design for concurrent engineering, or for future revisions of a design — is going to become critical.”

That feedback and feed-forward of information now extends well beyond the bounds of traditional chip design. It covers increasingly large aspects of the design cycle, manufacturing, and usage in the field.

Traditional shift left
One of the first areas to encompass shift left was physical synthesis, which pushed physical verification to the designers’ world. “The aim is to reduce downstream problems, but the person operating on the data is not an expert,” says Siemens’ Hand. “They want to focus on the things they know are wrong. They will be annoyed if they’re chasing false positives. This will reduce the effort for the downstream guys, and the downstream guys are then working on a subset of the data, the really interesting bugs they are going to have to dive deep on.”

Not all shift left requires a change of abstraction. Sometimes a shared data model will suffice. “Shift left is really the fusion of ‘extend right’ technologies (such as place-and-route, extraction, power analysis, physical verification, and sign-off) into earlier phases of the design flow, such as RTL synthesis,” says Mary Ann White, product management director for design implementation solutions at Synopsys. “It helps to have all the design tools in an RTL-to-GDSII flow working on a single data model to help designers achieve sign-off closure faster. An RTL-to-GDSII flow should seamlessly blend synthesis, P&R, and signoff techniques such that traditional back-end techniques, such as placement and congestion, can be used during synthesis, while also providing typically front-end techniques, such as restructuring, during the back-end phase of the design to achieve the ultimate PPA.”
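To make the single-data-model idea concrete, here is a minimal Python sketch, not any vendor's API: one shared design database that both a front-end synthesis step and a back-end placement estimate annotate, so a traditionally back-end metric (congestion) is already visible during synthesis. The names DesignDB, synthesize, and estimate_placement, and all of the numbers, are invented for illustration.

```python
# Minimal sketch (not any vendor's API) of the "single data model" idea:
# every flow stage reads and annotates one shared design database, so
# back-end estimates such as congestion are visible during synthesis.

from dataclasses import dataclass, field

@dataclass
class DesignDB:
    """Hypothetical shared data model annotated by every stage."""
    rtl: str
    netlist: list[str] = field(default_factory=list)
    placement: dict[str, tuple[float, float]] = field(default_factory=dict)
    metrics: dict[str, float] = field(default_factory=dict)

def estimate_placement(db: DesignDB, coarse: bool = False) -> None:
    # Crude stand-in for a placement/congestion estimator.
    spacing = 10.0 if coarse else 1.0
    db.placement = {c: (i * spacing, 0.0) for i, c in enumerate(db.netlist)}
    db.metrics["congestion"] = min(1.0, len(db.netlist) / 5.0)

def synthesize(db: DesignDB) -> None:
    # Elaborate RTL into cells, then consult a coarse congestion estimate
    # (a traditionally back-end metric) before committing the structure.
    db.netlist = [f"cell_{i}" for i in range(4)]
    estimate_placement(db, coarse=True)
    if db.metrics["congestion"] > 0.8:
        db.metrics["restructured"] = 1.0   # re-map logic to ease routing

db = DesignDB(rtl="module top(...);")
synthesize(db)          # front-end step that already sees back-end metrics
estimate_placement(db)  # back-end step refining the same shared database
print(db.metrics)
```

The point is not the toy heuristics but the shape of the flow: every stage reads and writes the same object instead of handing results across a file boundary.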

While not as advanced, shift left is also happening in the analog world. “Analog IC design is still largely done manually,” says Benjamin Prautsch, group manager mixed-signal automation at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “Shift left comes with the ability to both re-use and adapt IP and IP building blocks rapidly. There are two key considerations: interfacing and automation. For example, automating analog layout enables rapid parasitic extraction, which improves decision-making at the schematic level. In addition, simulated transistor-level performance can be passed to the system model (interfacing) so the overall system performance is updated rapidly, again improving decision-making.”
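The analog feedback loop Prautsch describes can be illustrated with a small, purely hypothetical calculation: automated layout yields extracted parasitics, which update an RC bandwidth estimate that is then passed back to the system-level model. All of the resistances and capacitances below are invented.

```python
# Illustrative sketch of the analog feedback described above: extracted
# layout parasitics update an RC pole estimate, which is passed back to a
# system-level model so architecture decisions reflect layout reality.

import math

def bandwidth_hz(r_ohm: float, c_farad: float) -> float:
    """-3 dB bandwidth of a single-pole RC stage."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

schematic_cap = 50e-15                  # designer's pre-layout estimate
extracted_cap = 50e-15 + 18e-15         # after automated layout + extraction
r_out = 20e3                            # output resistance of the stage

pre = bandwidth_hz(r_out, schematic_cap)
post = bandwidth_hz(r_out, extracted_cap)
print(f"bandwidth drops from {pre/1e6:.1f} MHz to {post/1e6:.1f} MHz")
# The system model consumes `post` and flags whether the spec still holds.
```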

These concepts also are pushing shift left to higher levels of abstraction with high-level synthesis (HLS), where accurate estimates of timing or power are required in order to be able to select appropriate architectural alternatives.
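A toy example shows why those early estimates matter for HLS: without per-alternative latency and power numbers there is nothing to rank architectures by. The candidate names and figures below are invented for illustration.

```python
# Toy illustration of architectural selection in HLS: rank alternatives by
# early latency estimates, subject to an early power budget.

candidates = {
    # name: (parallel_units, est_latency_cycles, est_power_mw)
    "fully_serial": (1, 128, 12.0),
    "2x_unrolled":  (2,  64, 21.0),
    "4x_unrolled":  (4,  32, 45.0),
}

power_budget_mw = 30.0

# Pick the fastest alternative that still fits the early power budget.
feasible = {k: v for k, v in candidates.items() if v[2] <= power_budget_mw}
best = min(feasible, key=lambda k: feasible[k][1])
print(f"selected architecture: {best} -> {candidates[best]}")
```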

New directions
Today, extend right is removing the clean border that used to separate chip design from fabrication and deployment. “While enabling dramatic benefits, pre-silicon development environments cannot fully model the device, and hence surprises occur with silicon, such as lower-than-expected performance, leading to painful debug and diagnosis sessions,” says Klaus-Dieter Hilliges, platform extension manager for Advantest. “A new opportunity with shift left is that post-silicon validation and test environments are developed pre-silicon. Specifically, pre-silicon and post-silicon efforts can be consciously assigned to the most suitable environment based on a common test plan. The Portable Test and Stimulus Standard (PSS) is specifically designed for this re-use of test content across insertions.”
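The re-use Hilliges describes can be sketched conceptually in plain Python (this is not PSS syntax): one piece of abstract test intent is retargeted to different insertions by supplying environment-specific implementations of its primitive operations. The test, addresses, and data values are all hypothetical.

```python
# Conceptual sketch (plain Python, not PSS syntax) of one piece of test
# intent being retargeted to different insertions, the reuse PSS enables.

def dma_loopback_test(write, read) -> bool:
    """Abstract test intent: write a pattern through DMA and read it back."""
    write(0x1000, 0xA5A5A5A5)
    return read(0x1000) == 0xA5A5A5A5

# Each insertion supplies its own realization of the primitive operations.
def run_in_simulation() -> bool:
    mem = {}   # pre-silicon: a behavioral memory model
    return dma_loopback_test(lambda a, d: mem.__setitem__(a, d), mem.get)

def run_on_tester() -> bool:
    # Post-silicon this would drive the DUT through test hardware; stubbed here.
    mem = {}
    return dma_loopback_test(lambda a, d: mem.__setitem__(a, d), mem.get)

print(run_in_simulation(), run_on_tester())
```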

Extend right goes further than this. “Then there is silicon lifecycle management,” says Arteris’ Schirrmeister. “That data needs to reside somewhere. It needs coordination because it interacts with other data. It’s a data format nightmare, and it needs alignment across companies. I don’t know who the orderly hand will be.”

SLM adds a whole new layer of complexity. “Silicon lifecycle management involves the insertion of monitor IP, the gathering of data during test, assembly and in-field operations,” says Randy Fish, director of product management for SLM products at Synopsys. “Purpose-built analytics engines can be run on the data depending on the use case. Data can be fed forward from the in-design phase to later stages or fed back from in-ramp, in-production, or in-field phases into the design environment (see figure 1). This is all very use-case dependent.”

Fig. 1: Silicon lifecycle management. Source: Synopsys
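A hypothetical sketch of the data loop Fish describes: monitor readings tagged by lifecycle phase are compared against a design-time expectation, and phases that drift beyond tolerance are flagged for feedback into the design environment. Every field name and number below is invented.

```python
# Hypothetical sketch of the SLM data loop: monitor readings tagged by
# lifecycle phase are compared against design-time expectations so outliers
# can be fed back to the design or production teams.

from statistics import mean

readings = [
    # (phase, monitor_id, value_mV), e.g. from on-die voltage-droop monitors
    ("in-ramp",       "vmon_0", 842), ("in-ramp",       "vmon_0", 851),
    ("in-production", "vmon_0", 848), ("in-field",      "vmon_0", 815),
    ("in-field",      "vmon_0", 809),
]

design_expectation_mV = 850   # fed forward from the in-design phase
tolerance_mV = 25

by_phase: dict[str, list[int]] = {}
for phase, _, value in readings:
    by_phase.setdefault(phase, []).append(value)

for phase, values in by_phase.items():
    drift = mean(values) - design_expectation_mV
    flag = "feed back to design" if abs(drift) > tolerance_mV else "ok"
    print(f"{phase}: mean drift {drift:+.1f} mV -> {flag}")
```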

Some aspects of stretching sideways also are being incorporated into tools. “Adding automated safety measures/mechanisms, often in the form of redundant elements such as triple modular redundant (TMR) registers or dual-core lock step (DCLS) to the synthesis and P&R process makes designs more resilient and functionally safe,” says Synopsys’ White. “These often-redundant safety measures should automatically be inserted in the context of the design constraints so that PPA targets can still be met. In addition, the RTL-to-GDSII tools should be aware of which redundant safety elements should not be optimized out by downstream tools without having to create scripts or manually identify them. Formal equivalence tools should also identify that these redundant elements have been inserted into the design since formal tools tend to reduce redundant functionality.”
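The TMR idea itself is simple enough to show in a few lines of Python. The sketch below is purely conceptual, not a tool flow: a value is stored in three copies, majority-voted on read, and tagged so a downstream optimization pass would know not to collapse the redundancy.

```python
# Toy model of the TMR idea: a value is stored in three copies and majority-
# voted on read, and the copies are tagged so a (simulated) optimizer does
# not remove them as redundant. Purely illustrative.

from collections import Counter

class TMRRegister:
    dont_touch = True          # hint that downstream passes must preserve copies

    def __init__(self, value: int = 0):
        self.copies = [value, value, value]

    def write(self, value: int) -> None:
        self.copies = [value, value, value]

    def read(self) -> int:
        # Majority vote masks a single upset in any one copy.
        return Counter(self.copies).most_common(1)[0][0]

r = TMRRegister(7)
r.copies[1] = 99               # simulate a single-event upset in one copy
assert r.read() == 7           # the vote still returns the correct value
```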

Intermingling is expanding. “Because of things like power and functional safety, the loops are expanding, or merging in some cases,” says Cadence’s Graham. “For example, to do power analysis you really need the output of the back-end tools that are generating what it is actually going to look like, but you still need feedback from the front-end in terms of things like waveforms and activity files. While the loops are concurrently spinning, they’re also feeding into each other and back on each other.”
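A back-of-the-envelope version of that power loop shows why both sides are needed: switching activity comes from front-end simulation waveforms, while net capacitance comes from back-end extraction, and dynamic power is roughly the sum of alpha * C * V^2 * f over nets. The net names and values below are illustrative only.

```python
# Back-of-the-envelope dynamic power, showing why the loop needs both sides:
# switching activity (alpha) comes from front-end activity files, while net
# capacitance comes from back-end extraction. Numbers are illustrative.

V = 0.8          # supply voltage, volts
f = 1.0e9        # clock frequency, Hz

nets = [
    # (name, alpha_from_activity_file, cap_from_extraction_F)
    ("clk_tree", 1.00, 2.0e-12),
    ("bus_data", 0.15, 5.0e-12),
    ("ctrl_fsm", 0.05, 0.8e-12),
]

# P_dyn = sum(alpha * C * V^2 * f) over nets
p_dyn = sum(a * c * V**2 * f for _, a, c in nets)
print(f"estimated dynamic power: {p_dyn * 1e3:.2f} mW")
```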

Some feedback loops are manual today. “We don’t yet gather live data, analyze that data, and feed it back automatically. It requires the architects to manually look through it and understand the implications,” says Hand. “There is a lot we can do, and we can look toward things like preventive maintenance in industrial settings, where perhaps you take live data of a braking system on a car, looking at wear patterns, look at acceleration patterns, tie that back into software — and then have over-the-air updates to change braking profiles, which may improve service life of the product. There are all sorts of amazing things that are possible. We are just scratching the surface on the semiconductor side.”

Until some of this becomes more systematized in standard flows, it creates an opportunity for highly bespoke tools and flows. “Shift left means more opportunities for bespoke EDA,” says Michiel Ligthart, president and COO for Verific Design Automation. “Companies see a need for differentiated and customized design flows. Semiconductor companies may incorporate proprietary steps into their RTL design flows. We have seen companies develop tools when they need to keep power down in a chip, or for IP customization, design-for-test circuitry, and debug functionality.”

It can become even more complicated when some of these loops extend across company boundaries. “Shift left affects external sourcing of critical design IP,” says Sergio Marchese, vice president of application engineering for SmartDV. “We see a clear trend toward IP customization needs across project lifecycles, for both ASIC and FPGA targets. Hitting functional and PPA goals in advanced SoCs often requires removing unnecessary IP functions, adding pipeline stages, implementing custom interfaces, or incorporating features from a new version of a standard interface still in draft stage. The additional challenge is that SoC requirements and goals may change multiple times, in many cases, to accommodate new technical and business findings over the course of a project. Design IP suppliers must be willing and able to adapt to SoC developer needs and integrate their support team into tighter iteration loops, all while controlling IP quality and cost effectiveness.”

Impact on design teams
Moving visibility of one domain into a second domain changes the way that people interact with data. “You have to be able to represent that data in a way that feels natural to them,” says Hand. “That may involve new abstractions, but it also requires you to adapt the data. The designers who are working with the left-shifted data are working with a subset they can understand, that they know how to work with, and that is relevant for their role.”

New tools and languages can help to bridge the divide. “Traditionally, software engineers try to fit their code to the constraints of the chosen processor hardware,” says Mike Eftimakis, vice president for strategy and ecosystem at Codasip. “The alternative is to co-optimize the hardware and software together to create a custom compute solution. For example, embedded software developers are familiar with profiling and analyzing computational bottlenecks. Such developers can devise new instructions and re-profile their software workload. Hardware designers would be able to add incremental architectural features to an existing core design. Both use an architectural language like CodAL to describe the processor, and tools automatically generate both the hardware design and the software toolchain from the same description. This approach allows architectural tradeoffs to be made rapidly, and ensures that the hardware and software worlds stay consistent.”
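The co-design loop Eftimakis describes can be approximated with a simple profile-and-estimate calculation (plain Python, not CodAL): find the hot kernel in a profile, assume a cycle reduction if it becomes a custom instruction, and estimate the overall speedup with Amdahl's law. The profile fractions and the 8x acceleration factor are assumptions for illustration.

```python
# Illustrative sketch (plain Python, not CodAL) of the co-design loop: profile
# a workload, find the hot kernel, and estimate the speedup if that kernel
# were collapsed into a custom instruction.

profile = {
    # function: fraction of total cycles (from a hypothetical profiler run)
    "mac_kernel":   0.62,
    "load_store":   0.23,
    "control_misc": 0.15,
}

hotspot = max(profile, key=profile.get)
accel = 8.0   # assumed cycle reduction if mac_kernel maps to one instruction

# Amdahl's law: overall speedup when only the hotspot fraction is accelerated.
speedup = 1.0 / ((1.0 - profile[hotspot]) + profile[hotspot] / accel)
print(f"accelerate {hotspot}: estimated {speedup:.2f}x overall speedup")
```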

Still, the volume of data can become overwhelming. “How do we take those concurrent loops and automate as much as possible, or catalyze those engineers to be more productive in the loops that are concurrently spinning?” asks Graham. “There is no way that one engineer, or even a team of engineers, is going to be able to consume all the information that’s being spun out of all those loops. We need some way to filter that down to some amount of data that is consumable by an engineer, or team of engineers. At the very least, help with the signal-to-noise ratio coming out of that.”

There has to be a balance. “It’s not practical to teach you everything,” says Hand. “In an ideal world, you could shift left everything. Now a designer would need to know every aspect of semiconductor design down to the point that they can debug and address it. That’s not practical. It has to be described in a way that is actionable to them, or packaged up in a way that the person who is going to action it can do so.”

Better communication channels may be one result. “Pre-silicon development environments enable cross-team optimization that would not have been possible before. For example, software developers can effectively communicate with hardware developers and demonstrate their observations,” says Advantest’s Hilliges. “Historically, functional tests in manufacturing test have been manually created black-box tests, not transparent to test engineers. With portable, EDA-created test content, it makes sense for post-silicon teams to learn how to work with pre-silicon teams, their content, and their tools, for example to effectively debug a PSS-based test case.”

Skill sets do change over time, for a number of reasons. A group discussion at SmartDV raised some interesting points about how staffing has evolved. In the early days there were only hardware designers, and they did their own verification. The required skill sets have changed over time, allowing some people to become more specialized, and it has become commonplace to have teams that combine generalists and specialists. Today, teams are becoming more distributed, both globally and locally. That raises the question: “How does shift left mesh with what the industry has learned from its globalization efforts over the last 25 years, and from the remote working recently forced by COVID?”

Future implications
Just as no engineer can be expected to know everything, no single tool can do it all, either. That strategy has been tried before, and it failed.

It calls for a more open environment in the future. “We’re going to have to standardize, at least as a company, on how we bring all this data together,” says Graham. “There needs to be a common and understood set of APIs across all of our tools that we can use to produce and consume that data.”

Being able to do it across companies will require open standards. “Tools are interacting together, but they’re also interacting in an open ecosystem with each other,” says Hand. “You’re starting to have well-defined APIs and well-defined data infrastructures. That is going to fundamentally change how these things come together. We are committed to an open exchange of information within an EDA flow. You shouldn’t be forcing people into a single holistic flow where you’ve got to be in one company, one flow. Well-defined APIs coupled with well-defined data exchanges will allow you to build a unique application for the individual customer.”
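What such a well-defined API might look like is still an open question. The sketch below is purely hypothetical, with every name invented: tools publish typed records onto a shared bus, and other tools subscribe by record kind, which is one possible shape for the open data exchange Hand describes.

```python
# Hypothetical sketch of a "well-defined API" between tools: producers publish
# typed records, consumers subscribe by kind. No such standard exists today;
# every name here is invented for illustration.

from typing import Protocol, Callable
from dataclasses import dataclass

@dataclass
class FlowRecord:
    kind: str            # e.g. "coverage", "timing", "power"
    source_tool: str
    payload: dict

class FlowBus(Protocol):
    def publish(self, record: FlowRecord) -> None: ...
    def subscribe(self, kind: str, handler: Callable[[FlowRecord], None]) -> None: ...

class InMemoryBus:
    """Trivial in-process realization of the hypothetical FlowBus protocol."""
    def __init__(self):
        self._handlers: dict[str, list] = {}

    def publish(self, record: FlowRecord) -> None:
        for handler in self._handlers.get(record.kind, []):
            handler(record)

    def subscribe(self, kind: str, handler) -> None:
        self._handlers.setdefault(kind, []).append(handler)

bus = InMemoryBus()
bus.subscribe("coverage", lambda r: print("dashboard got", r.payload))
bus.publish(FlowRecord("coverage", "simulator", {"block": "dma", "pct": 87.5}))
```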


