Silo Busting In The Design Flow

Waterfall development flows no longer work for chip design, but unified tool flows may not be the answer.

An increasing number of dependencies in system design are forcing companies, people, tools, and flows to become more collaborative.

Design and EDA companies must adapt to this new reality, because it has become impossible for any one of them to do it all alone. What happens in manufacturing and packaging needs to be considered up front, because what gets designed may not yield sufficiently in manufacturing or test. And use cases and applications may have unique effects on everything from performance to thermal behavior and reliability in the field. Everything is shifting both left and right, and communication needs to happen in both directions.

“If you think about the proverbial wall, you no longer can throw things over it and say it has become somebody else’s problem,” says Shekhar Kapoor, director for product marketing in the Digital Design Group of Synopsys. “You really need to address all these problems while you’re doing the design. That starts from RTL all the way down.”

Some see the problem from RTL up as being just as important, encompassing systems design, mechanical, software, optical, and many other disciplines that used to be kept separate.

“The history of the industry used to be a lot of point-tool companies and startups creating point tools, and that was the best way to solve the problem,” says Michal Siwinski, corporate vice president for market and business development at Cadence. “That started to shift to more of an end-to-end flow. Just as our customers started becoming more vertically integrated, the notion of integration across multiple products and domains became absolutely essential in our business.”

Interdependencies are forcing much greater levels of collaboration. “Our organizations have dramatically changed,” says Ravi Subramanian, vice president and general manager for the IC Verification Solutions Division of Mentor, a Siemens Business. “We see this happening not only in the ‘what’ we do in our work, but also the ‘how’ we do it and ‘who’ needs to do it. What I see in all of those is the increasing introduction of domains, which either merge or require some cooperation in defining ‘what’ needs to get done and ‘how’ it needs to get done. That ultimately is requiring companies, and groups within companies, to work and collaborate in ways they haven’t before.”

Defining the flow is becoming increasingly difficult. “In almost every scenario, because there is such a wide variety in the way customers are using a flow, there is no one flow. It’s many different flavors,” says Cadence’s Siwinski. “Solving a problem for a mobile handset is quite different than solving it for a data center rack. The tools involved are the same, and a good portion of the technology is similar, but the optimizations, and therefore the requirements they drive, will vary greatly.”

Dependencies
The industry long has been aware of the dependencies between hardware and software, and these have tightened over time. “We finally have all the tools in place to allow that collaboration to happen,” says Johannes Stahl, senior director of product marketing at Synopsys. “But it requires management attention to make it happen and to impact design. I still see companies where the software team waits for the silicon. They work with some previous-generation silicon, and they don’t want to get involved in the pre-silicon work. So in some companies the evolution has happened successfully, and in others it is still going on.”

Software and hardware teams need to be brought closer together before they are likely to want to work together. “Does the programmer realize that what they do affects your power envelope?” asks Kevin McDermott, vice president of marketing for Imperas Software. “They often have this ivory tower view that they can do things without regard for how the hardware is going to behave underneath the hood. But if we can give them the feedback and the insights when they make certain choices, they will realize when they have gone down the wrong path. Compare that to going to a hardware board or the effort associated with building an FPGA prototype. You can’t be that granular or that separated. If you provide immediate feedback, it is going to be a hardware/software tradeoff.”
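To make that concrete, here is a minimal Python sketch of the kind of immediate feedback loop McDermott describes: estimating where the energy goes in an instruction-level trace so a programmer can see the cost of a coding choice. The instruction classes, per-instruction energy numbers, and the `energy_report` helper are all invented for illustration; they are not from Imperas or any real simulator.

```python
# Toy illustration (not a real simulator API): estimating software energy
# impact from an instruction-level trace, so developers get immediate
# feedback on how coding choices affect the power envelope.

from collections import Counter

# Hypothetical energy cost per instruction class, in picojoules.
ENERGY_PJ = {"alu": 1.0, "load": 3.5, "store": 3.0, "branch": 1.5, "fpu": 6.0}

def energy_report(trace):
    """Summarize estimated energy for a trace of instruction classes."""
    counts = Counter(trace)
    total_pj = sum(ENERGY_PJ[op] * n for op, n in counts.items())
    for op, n in counts.most_common():
        share = 100.0 * ENERGY_PJ[op] * n / total_pj
        print(f"{op:>6}: {n:>8} instrs, {share:5.1f}% of estimated energy")
    print(f" total: {total_pj / 1e6:.3f} microjoules (estimated)")

# Example: a memory-heavy loop shows up immediately as an energy hot spot.
energy_report(["load", "fpu", "store", "load", "alu", "branch"] * 100_000)
```

Even a crude model like this makes the tradeoff visible at coding time, rather than after the design has been committed to a board or an FPGA prototype.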

As the touch points increase, allowing teams to remain separate will end up costing them in terms of the optimizations they can make. “The shift left, and the parallelization of the software development cycle with the hardware development cycle, is changing,” says Pat Sheridan, product marketing senior staff for virtual prototyping at Synopsys. “There are increasing touch points between those teams. Consider power analysis, where you identify the critical windows of software execution where you want to analyze power. That’s a very visible example. A lot of the other touch points will be in the verification flow.”

Others have similar observations. “Today’s SoCs are becoming software workload driven,” says Mentor’s Subramanian. “So instead of computer architects working in isolation, looking at a set of things, you actually have to look at the workloads, which are going to be the benchmarks for power and performance and drive the SoC architecture.”

Power itself is heavily dependent on many other factors. “Power is a great example of a multi-disciplinary task,” says Synopsys’ Sheridan. “There are many people involved in the understanding of power, or energy efficiency, in an electronic system, or SoC. You have the architects, you have the software team, you have the hardware people looking from the verification point of view, and then you have the implementation team. All these different levels can have their own impact on improving the power and making the system more energy efficient.”

Verification is inching closer to everything. “We see implementation guys coming closer to verification guys,” says Stahl. “But there’s a different focus. The verification team is primarily interested in quick exploration of changes to the RTL. The implementation team wants to know what they can squeeze out of the back-end flow. So, they have some interaction, things have to work together, and teams have to talk together. That topic is probably one of the most interesting ones where you have to get many different people in one room to discuss the power flow.”

Over time, a greater number of teams are being pulled closer together. “The co-working problem can be seen when it comes to the chip-package interface,” says Andy Heinig, group leader for advanced system integration and department head for efficient electronics in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “Often there is a big gap based on organizational structures within companies having separate silos for design and production topics. That builds a huge barrier between both topics. Often system engineers sketch a possible solution, but there is no chance to build the system because the basic technologies can’t be combined as drawn. The goal must be to provide all necessary information.”

That includes other areas such as test, which can be considered to be another use case for the SoC. “It is important not to burn up the chip at test time,” says Robert Ruiz, senior director of product marketing at Synopsys. “Test creates more activity that increases power dissipation or, more likely, creates an IR drop, because generally the chip will be tested at a higher power draw compared to its standard mission mode.”

Dependencies also are increasing in the back end. “Design for manufacturing (DFM) was, and still is, a problem,” says Kapoor. “There are an increasing number of manufacturing rules that you need to take into account earlier in the flow. Design for yield (DFY) is another. We have to consider all of the DFx issues up front, from a convergence point of view and a correlation point of view. You need something that ties them all together, and hence the whole concept of a common data model, which is a very foundational technology.”

More recently, packaging has become tightly integrated, as well. “There are many tools that have been put in place for hierarchical design, and for large-scale hierarchical analysis,” says Kapoor. “Now, when you abstract them up to the 3D-IC level, it creates more complexity. You need an understanding of the implications for IP pieces. The high-speed interfaces impact how you partition your design. We need the technologies that were used at the chip level, such as hierarchical floor planning, and now they have to take these interfaces into account. And that has its own implications on the analysis side, such as thermal.”

It continues all the way up to the system. “We have people who are building solutions coming back and saying, ‘I need to define the enclosure in this particular piece of equipment,’” says Subramanian. “It requires going all the way back, knowing things earlier, but also bringing multiple disciplines together. The two new drivers for this are in the IoT edge area, as well as in the automotive area.”


Fig. 1: System-level considerations across the design flow. Source: Semiconductor Engineering

Verification impact
Verification has expanded over time from RTL functional verification into a multi-disciplinary activity. “This is particularly important for dynamic power analysis,” says Sheridan. “The software that runs on the system is defining the critical windows of activity that will be the most important to understand. Getting the simulation and emulation teams to work together with the people doing power analysis really helps you focus on the right window of software activity, and then you can make decisions based on the right information.”
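As a rough sketch of what finding the critical window can mean in practice, the Python fragment below scans a long per-cycle toggle-count trace for its busiest stretch using a running sum, the kind of slice a team might then hand to detailed power analysis. The trace, window size, and `peak_activity_window` function are hypothetical stand-ins, not part of any vendor’s power flow.

```python
# Illustrative sketch (not a specific vendor flow): locating the critical
# window of software activity in a long per-cycle toggle-count trace, so
# detailed power analysis can focus on the right slice of execution.

import random

def peak_activity_window(toggles, window):
    """Return (start_cycle, mean_activity) of the busiest window."""
    # Maintain a running sum over a sliding window instead of
    # re-summing the trace at every position.
    current = sum(toggles[:window])
    best_start, best_sum = 0, current
    for start in range(1, len(toggles) - window + 1):
        current += toggles[start + window - 1] - toggles[start - 1]
        if current > best_sum:
            best_start, best_sum = start, current
    return best_start, best_sum / window

# Synthetic trace: mostly idle, with one burst of switching activity.
random.seed(0)
trace = [random.randint(0, 40) for _ in range(100_000)]
trace[62_000:63_000] = [random.randint(300, 400) for _ in range(1_000)]

start, mean = peak_activity_window(trace, window=1_000)
print(f"Critical window starts at cycle {start}, mean toggles/cycle {mean:.1f}")
```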

Many of the back-end implementation tasks also are becoming vector-driven. “Implementation tends to be more detail-oriented,” says Stahl. “They will work with smaller vector sets, but they need to be the right ones. The verification guys look more at the overall picture. They will take a million cycles when looking at power. They are looking for big changes and the big system effects. They need quantity, while the implementation guys have a quality aspect.”

Common engines
Over time, the engines have moved closer together so that iteration cycles are minimized. “Having more information up front may allow decisions to be made earlier, lowering the risk that problems will be found later in the flow,” says Ruiz. “Accuracy comes from the engines that you are using. Design decisions apply constraints. When you make those decisions depends on where you get the biggest bang for the buck. You can decide what decisions to make upfront and what may be better deferred to the downstream part of the flow.”

That also may be influenced by the types of designs being worked on. “You want the system design to remain open and flexible for as long as possible in the system design cycle,” says Anna Fontanelli, CEO of MZ Technologies. “You need to avoid local optimizations and instead look at system-aware optimization.”

As well as optimizing a design, companies have to look at how to optimize the flow. “It depends if a company is trying to push the envelope or not,” says Siwinski. “When somebody is trying to push the envelope and create the best product, they have to make some of these decisions upfront. Sometimes they’ll de-risk it by adding another re-spin, or making sure they get to an early prototype — or even just have a specific project that might not see the light of day. They will factor in that this implementation might not be production worthy. But I see them still going really fast. In other cases where you don’t need to push the power/performance curve, you have a lot more flexibility, and people can delay some of the decisions for later.”

The key question is when to make those decisions. “It really comes down to what questions you want to answer at which stage of the product development process,” says Subramanian. “Then you must look at what information is needed to make those decisions. The notion of shifting left is about doing something earlier in time. We need to find practical ways to answer those questions earlier, which often requires having the right models and the right analysis capabilities. This is driving the creation of new capabilities and tools, enabled by the types of models becoming available earlier in the cycle.”

Transforming EDA
EDA historically has been a pendulum swinging between point tools and fully integrated flows. Today, that pendulum is swinging away from the fully integrated flows, which may sound counter-intuitive given the increasing levels of interdependence.

“It is almost never going to be possible to have all of the engines in-house for all the analysis and optimizations that you will need,” says Kapoor. “If you extrapolate from the chip level, to SoC level, to system level, which is expected to grow a lot with heterogeneous systems, it encompasses so much. It encompasses experience in chip design, board design, multi-physics, computational algorithms, and much more. Companies will have strengths in certain areas, such as implementation or analysis, but there will need to be a lot of partnerships to make sure you can have the whole system working. We need the exchange of information, standard interfaces, sharing of all these interfaces, and the technologies. That’s all going to be very fundamental and required.”

Subramanian agrees. “You look at each domain or problem you want to solve, and then you look at the interdependencies. These solutions need to be available to the customer to be able to solve their problem, and that may only be possible by bringing two domains together. Customers are expecting to have a solution that works, whether it’s from one EDA company or multiple. They don’t care. They expect to have a solution working, and they expect the tools to be interoperable and to be able to create flows. Now the reality is that’s not always the case, but the larger customers have been very effective in driving this.”

Conclusion
More companies are working together, and so are more groups within the same companies. This is transforming how many of them operate.

But how do EDA companies choose which areas to invest in, and where to partner, assuming no one can be the expert in all areas? “That is the question of the hour for all companies,” says Subramanian. And at this point, there is no clear answer.


