
Optimizing IC Designs For Real-World Use Cases

Vector-driven approach comes at expense of secondary tasks.


Semiconductor systems are becoming more focused on power, performance, and area for the primary scenarios they are likely to see in real-world applications, but increasingly at the expense of secondary tasks.

This is happening at all levels of abstraction and all stages of the design flow. At the highest level, processors are being optimized to run a given set of software. RISC-V is one of the most visible in this space, where the software represents the scenarios that are driving hardware architecture and implementation. Further down in the development process, floorplans are being optimized for performance or to help mitigate thermal issues.

In the past, much of the power optimization associated with power and clock gating was done in a static manner, but today vectors are being used to determine if the benefit justifies the cost. That raises several big questions. First, why is this becoming important now? Second, how and when are these scenarios being captured? And third, who is responsible for all of this? As with many aspects of the development flow, the answer depends primarily on where you are in the flow and what is an appropriate level of abstraction.

At least part of this stems from the growing number of places in which electronics are used. “There are a whole new set of scenarios that people have to evaluate,” says Isadore Katz, senior director for marketing and business development for Siemens Digital Industries Software. “They don’t necessarily have cooked or baked-in recipes for the different blocks or components in terms of their power consumption or footprint. If they say, ‘Let’s go off and use the same old architecture,’ then all of a sudden they hit thermal problems or other issues because you just can’t do it that way. They can’t correct them in the physical design, so it suddenly opens up a whole new series of vector-driven scenarios.”

Power optimization is a concern that crosses all phases of design. “I see things in the realm of analysis that are extremely valuable because they can lead to architectural changes — things that are going to move the needle,” says Guillaume Boillet, senior director of product management and strategic marketing at Arteris. “And there are things with a very local aspect that can be automated, and there have been a lot of attempts at doing this. But they all require extremely good vectors. You have to decide what scenario you need to feed into a tool and realize that there are tradeoffs, because it makes the flow more complex. It’s still an art, and it’s being done manually by people. The scale is so big, and the scenarios are so complex, that we are nowhere near just running something to make sure that it’s being optimized – it’s complex.”

That complexity involves time scales. “Your chip is going to fluctuate in temperature depending on what it’s doing, and you need to have the right activity to capture those situations,” says Marc Swinnen, director of product marketing at Ansys. “These chips are so complex that it is not easy to define when that will happen. Secondly, the time constants for electrical parameters and for thermal parameters are very different — more than two orders of magnitude. When you’re looking at timing errors, you’re looking at nanoseconds or a few microseconds at most, but thermal is slow. When you get some heat that blossoms in a part of the chip, it slowly dissipates through the chip and a block will see increasing temperature because of what happened two seconds ago in the block next door. That can affect the timing. You need seconds at least, but that translates to billions of clock cycles.”
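
To make that scale concrete, consider a minimal sketch contrasting the two time constants. The 2GHz clock, the 2-second thermal time constant, and the first-order heating model below are assumptions chosen for illustration, not numbers from any particular chip.

```python
import math

# Illustrative constants (assumptions, not measured values).
CLOCK_HZ = 2e9        # a 2 GHz clock: one cycle every 0.5 ns
TAU_THERMAL_S = 2.0   # first-order thermal time constant of ~2 s

def temp_rise(t_s, delta_t_max=20.0, tau=TAU_THERMAL_S):
    """First-order (RC-style) step response: block heats toward +delta_t_max C."""
    return delta_t_max * (1.0 - math.exp(-t_s / tau))

# Electrical events resolve in nanoseconds; thermal ones need seconds,
# which corresponds to billions of clock cycles of activity.
for t in (1e-9, 1e-6, 1e-3, 1.0, 5.0):
    print(f"t={t:8.0e} s  ~{t * CLOCK_HZ:.1e} cycles  dT={temp_rise(t):5.2f} C")
```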

Architectural level

There are many reasons to use workloads to drive architectural decisions. “There’s a growing realization and acceptance that real workloads need to be analyzed early in the design flow to guide some of the architecture decisions,” says Preeti Gupta, director of product management for semiconductor products at Ansys. “Consider a large processor that has multiple thermal sensors. You can’t place a thermal sensor at every location within the chip. These have to be optimized. You’re measuring the temperature through those thermal sensors, and every time it hits a particular threshold of temperature you throttle back the frequency of the processor so it does not heat up and there isn’t a thermal runaway situation. You are possibly using it to drive your DVFS algorithms. When do you scale down voltage, and when do you scale down frequencies? Those kinds of scenarios can be concocted by the designer or the verification team.”
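
A minimal sketch of that sensor-driven throttling loop might look like the following. The thresholds, frequency steps, and hysteresis band are invented for illustration and do not reflect any vendor's actual DVFS policy.

```python
# Hypothetical DVFS operating points and thermal thresholds (assumptions).
FREQ_STEPS_GHZ = [3.0, 2.4, 1.8, 1.2]
T_THROTTLE_C = 95.0   # throttle when the hottest sensor crosses this
T_RECOVER_C = 85.0    # only speed back up once well below the limit

def next_step(temp_c: float, step: int) -> int:
    """Return the new operating-point index given the hottest sensor reading."""
    if temp_c >= T_THROTTLE_C and step < len(FREQ_STEPS_GHZ) - 1:
        return step + 1   # too hot: scale frequency (and voltage) down
    if temp_c <= T_RECOVER_C and step > 0:
        return step - 1   # cooled off: scale back up
    return step           # inside the hysteresis band: hold

step = 0
for temp in [70, 88, 96, 97, 93, 90, 84, 80]:   # sampled sensor readings, in C
    step = next_step(temp, step)
    print(f"sensor={temp:3d} C -> {FREQ_STEPS_GHZ[step]} GHz")
```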

Power and thermal are key drivers for this methodology change. “Heat is driven by power consumption, which in turn is driven by activity,” says William Ruby, director of product management for the Synopsys EDA Group. “All of these gradients on a chip are workload- or activity-dependent. In order to do a thorough analysis, you really want to be able to analyze this with a realistic workload, an application that is running as opposed to synthetic vectors. Once you have the application workload, you can essentially drive the analysis flow with that level of activity. With cell placement, we are looking at how to do thermal-aware cell placement. The thermal aspect needs to come in as part of the cost function of timing, power, area, and so on. A realistic activity scenario is absolutely key to driving this whole flow downstream.”
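
The cost-function idea can be sketched as a simple weighted objective with a thermal penalty added alongside timing, power, and area. The weights and the linear penalty form are assumptions for illustration; production placers use far more elaborate formulations.

```python
def placement_cost(slack_ns, power_mw, area_um2, local_temp_c,
                   w_timing=10.0, w_power=1.0, w_area=0.01, w_thermal=2.0,
                   t_budget_c=90.0):
    """Toy placement objective: lower is better (all weights are assumptions)."""
    cost = w_timing * max(0.0, -slack_ns)   # penalize negative slack only
    cost += w_power * power_mw
    cost += w_area * area_um2
    cost += w_thermal * max(0.0, local_temp_c - t_budget_c)  # hotspot penalty
    return cost

# Same cell, same timing/power/area, two candidate locations:
print(placement_cost(0.05, 1.2, 40.0, 80.0))   # cool spot: no thermal penalty
print(placement_cost(0.05, 1.2, 40.0, 102.0))  # hotspot: thermal term dominates
```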

But sometimes, more abstract workloads are required. “A network-on-chip (NoC) is a very flexible IP that can be configured in a lot of different ways,” says Arteris’ Boillet. “We cannot feed vectors into it because it would be unrealistic to expect someone to come up with a representative vector at the SoC level. It takes way too many things into consideration. So instead, we have developed a pseudo language to describe the type of traffic to be expected between the initiators and targets. And from that, the tool automatically generates performance reports. In the future, we are going to automate that, so we will not only consider physical aspects, but also scenarios or vectors, to automatically guide the architect.”
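
Arteris’ pseudo language is not published here, but a statistical traffic specification of the kind Boillet describes could be sketched as follows: flows between initiators and targets expressed as rates rather than cycle-accurate vectors. The schema, names, and numbers are all hypothetical.

```python
from collections import defaultdict

# Hypothetical traffic spec: (initiator, target, reads/s, writes/s, bytes/txn).
flows = [
    ("cpu0", "ddr0", 4.0e6, 1.0e6, 64),
    ("gpu",  "ddr0", 9.0e6, 6.0e6, 256),
    ("isp",  "sram", 2.0e6, 2.0e6, 128),
]

# Reduce the spec to offered load per route, the kind of number a
# performance report would check against link capacity.
load_bps = defaultdict(float)
for init, tgt, rd, wr, size in flows:
    load_bps[(init, tgt)] += (rd + wr) * size * 8   # bits per second

for (init, tgt), bps in sorted(load_bps.items()):
    print(f"{init:>4} -> {tgt:<4} {bps / 1e9:6.2f} Gb/s")
```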

It’s important to consider the environment in which a system is intended to operate. “Smart mirrors are a good example,” says Siemens’ Katz. “You want to put as much of the computation as you can into the mirror. You need object recognition, object detection, and then push that back up into the main processor so it can make decisions and react. You can’t clog up the LAN inside the vehicle, so you have to look at the data capacity and the data processing capacity. How much power does it consume? What is the latency? What is the cost? You have to spend a lot of time evaluating the throughput, looking at the architecture, investigating several different algorithms or implementations. You’ve now got an algorithmic problem combined with a performance problem, and a latency problem, and a power problem.”
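
Rough arithmetic shows why the data-capacity question dominates the smart-mirror example. The resolution, frame rate, and metadata sizes below are assumptions, but the orders of magnitude are representative of raw video versus object metadata.

```python
# Option 1: ship raw video to the main processor (assumed 1080p30, 24-bit RGB).
raw_bps = 1920 * 1080 * 30 * 24                  # ~1.49 Gb/s per camera

# Option 2: detect objects in the mirror, send only metadata
# (assumed 30 fps, ~20 objects per frame, 64 bytes per object).
meta_bps = 30 * 20 * 64 * 8                      # ~0.31 Mb/s

print(f"raw video:       {raw_bps / 1e9:5.2f} Gb/s")
print(f"object metadata: {meta_bps / 1e6:5.2f} Mb/s")
print(f"network load reduction: {raw_bps / meta_bps:,.0f}x")
```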

It all forms a flow. “It becomes a matter of discussion and effort,” says Frank Schirrmeister, vice president of marketing at Arteris. “Where do I stop? Verifying the NoC itself with synthetic corner cases is one thing. But now you’re bringing in software, bringing in a variety of processors, bringing in new effects around how all of this interacts. The question becomes how deep you go, in terms of how many cycles you spend on it. You really need some level of methodology. What are the items you want to target? Can I use something like Portable Stimulus to drive into the regions I wasn’t aware of, by having a constraint solver give me more tests that the architect wasn’t really thinking of?”

Register transfer level

Vectors have been used at the RT level for some time, and in a variety of different ways. “Design teams are very concerned about power and thermal footprints,” says Ansys’ Gupta. “There’s a great amount of mindfulness in terms of the vectors being used to understand the power consumption of the design, and they can be used in several ways. In a mobile handheld application, there are multiple idle modes of operation and multiple active modes of operation. These are predetermined by the power methodology teams working with the verification engineers. They are saying, ‘This is where we are going to target both our power measurement and power reduction efforts.’ Many also deploy a regression use case. This is not to say that vectors are the be-all and the end-all of the design process. There are certainly things that can be done without vectors — some things that are structural in nature, that do not depend on the activity flowing through the design.”
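
The idle/active modes Gupta mentions lend themselves to a simple mode-weighted power estimate, which is one way such predetermined scenarios feed a power budget. A minimal sketch, with invented mode powers and residencies:

```python
# Hypothetical operating modes: mode -> (average power in mW, time fraction).
modes = {
    "deep_idle":   (1.5,   0.60),
    "idle":        (12.0,  0.25),
    "active_low":  (180.0, 0.10),
    "active_peak": (450.0, 0.05),
}

# Residencies must cover the whole use case.
assert abs(sum(frac for _, frac in modes.values()) - 1.0) < 1e-9

avg_mw = sum(p * frac for p, frac in modes.values())
print(f"use-case average power: {avg_mw:.1f} mW")   # 44.4 mW for these numbers
```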

Those vectors can be used in several ways. “If you take a video playback application for a device, you may have a specific vector simulating that scenario and nothing else,” says Suhail Saif, principal product manager for power analysis and reduction products at Ansys. “This allows the design houses to focus on the specific vectors that simulate the particular application. They know the pitfalls of their designs in those application scenarios, as well as the opportunity to optimize their design for those applications. The designers also have an opportunity to optimize other parts of the chip that shouldn’t toggle, that shouldn’t be active, and shouldn’t be consuming unnecessary idle power while the focus is on this particular application. For example, AI or data center chips have a very narrow set of application scenarios, but in a massively distributed manner. For them it becomes even more critical to optimize the power for the given application, because if I can remove 1nW in this application, it’s going to be multiplied by thousands and yield that much power saving for me. That also will translate to thermal optimization.”
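
Saif’s 1nW point is back-of-the-envelope arithmetic: a tiny per-instance saving scales with replication and deployment. The replication and fleet sizes below are assumptions for illustration.

```python
saving_per_block_w = 1e-9      # 1 nW removed in one block, per the quote
blocks_per_chip = 10_000       # assumed replication of that block on one chip
chips_deployed = 1_000_000     # assumed number of chips across data centers

total_w = saving_per_block_w * blocks_per_chip * chips_deployed
print(f"fleet-level saving: {total_w:.1f} W")   # 10 W from a 1 nW fix
```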

At some point, the actual vectors become less useful. “In the world of implementation, we do not take in vectors directly,” says MaryAnn White, director of product management at Synopsys. “We take vectors in FSDB format and convert them to SAIF (Switching Activity Interchange Format) files. SAIF is part of UPF, and most of our flows are driven specifically from a power perspective. The SAIF file allows you to look at the switching activity rather than the whole vector simulation set. If you are looking for better power, it’s better to have a more accurate expectation of how your device operates in the world, and that’s what the SAIF files should provide. You could run multiple SAIF files for different scenarios.”
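
Conceptually, that conversion collapses a full waveform into per-net statistics: time spent low, time spent high, and toggle count. A minimal sketch of the reduction, using an invented event-list input rather than real FSDB parsing:

```python
def saif_stats(events, t_end):
    """events: sorted [(time, value)] for one net; returns (T0, T1, TC),
    i.e. time at 0, time at 1, and toggle count, SAIF-style."""
    t0 = t1 = tc = 0
    prev_t, prev_v = events[0]
    for t, v in events[1:]:
        if prev_v == 0:
            t0 += t - prev_t
        else:
            t1 += t - prev_t
        if v != prev_v:
            tc += 1
        prev_t, prev_v = t, v
    if prev_v == 0:            # account for the tail of the trace
        t0 += t_end - prev_t
    else:
        t1 += t_end - prev_t
    return t0, t1, tc

# Toy waveform: a net that toggles three times over 1,000 time units.
print(saif_stats([(0, 0), (200, 1), (500, 0), (900, 1)], 1000))  # (600, 400, 3)
```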

Who creates the vectors

In many cases, the vectors required for architectural exploration or power reduction are different from those that would be created by the functional verification team. “I see power methodology teams hiring verification engineers, just focusing on vector development,” says Gupta. “It’s not your regular functional verification engineers who are writing these test vectors for power. They are spending the effort and making the investment to hire verification engineers to write power vectors.”

They also are using different skill sets. “It’s not going to be your traditional DVT set,” says Katz. “These have to be environmentally driven vector sets. It’s not that people are unfamiliar with the challenge of building those things. Siemens has invested a lot into building digital twins that let you exercise environments. You have to be more deliberate about making sure you’ve covered all of the possible situations and scenarios. It may not come from the traditional architect in that sense, but it is going to have to come from someone who has responsibility for the system. And they have to lay out the scenarios, or at least the whiteboard version of all the scenarios. Then someone else has to make sure they’ve been filled in to cover sufficient depth.”

Some scenarios need detailed knowledge of the implementation. “The verification team is involved, but they also need some aspects of the characteristics of the traffic beyond what the architecture knows,” says Boillet. “They may be measuring what a piece of IP generates, translating that into statistical information in terms of packet size, how often it happens, and reads versus writes, and feeding that to the person who drives the tool sets. It is done in combination with the customer, who is also helping with the knowledge.”

Transfer of vectors from IP supplier to integrator can be difficult, however. “Comprehensive verification, taking into consideration all possible system use cases, is the IP vendor’s responsibility,” says Dhanendra Jani, vice president of engineering at Quadric. “The integrator only needs to validate the proper interconnection of the IP within the system. The IP provider should also provide a reference testbench that demonstrates typical use models in RTL simulations. Supporting gate and power simulations in this testbench can allow the customers to quickly take the IP through physical implementation with their choice of tool flows, third-party libraries, and operating conditions, thereby enabling quick productization.”

Vectors through the flow

Today, there is little continuity in vector sets through the flow. “It’s a mishmash,” says Gupta. “At the architecture level, there’s certainly a good appreciation of the fact that we need to target the use cases. Once you are locked into that architecture, you’re making smaller changes at the power grid level. Normally, the vectors are defined early in the process, and then windows are carried forward for the rest of the process.”

Selecting those windows is important. “After you have done the high-level simulations, we can capture ones that demonstrate certain traits, such as where the power changes the most,” says Swinnen. “This one shows the power peaks. We can collect the greatest hits of your activity vectors. And then we can feed that to the thermal tools and the power tools to say, ‘Okay, this is what’s going to happen. This is the worst that you’re going to see happen.’ And that’s part of the key for failure analysis.”
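
Picking those windows amounts to scanning a long activity or power trace for its worst intervals. A minimal sketch, assuming a per-cycle power trace and a fixed window length:

```python
def peak_window(power, win):
    """Return (start_cycle, avg_power) of the highest-power window of length win."""
    cur = sum(power[:win])
    best_sum, best_start = cur, 0
    for i in range(win, len(power)):
        cur += power[i] - power[i - win]   # slide the window one cycle forward
        if cur > best_sum:
            best_sum, best_start = cur, i - win + 1
    return best_start, best_sum / win

trace = [0.2, 0.3, 0.2, 1.5, 1.8, 1.6, 0.4, 0.3, 0.2, 0.1]  # watts per cycle
print(peak_window(trace, 3))   # -> (3, 1.63...): cycles 3-5 are the peak
```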

A combination of techniques takes coordination. “If we look at it from an implementation perspective, SAIF is static,” says White. “It doesn’t do a dynamic analysis. The power analysis engine should be able to correlate what’s happening between a static and dynamic version. Typically, when they’re doing the implementation, they’ll look at peak power analysis. But in the end, you still have to do sign-off power analysis, and that’s not SAIF-driven, but more vector-driven.”

Reality check

Vectors are important for power reduction, but they are not in widespread use throughout the flow. “It is being done, but most of them are doing it in their head,” says Boillet. “The architects may take into consideration heat dissipation, especially if chiplets are involved. Is it automated? Not at all. It’s way too complex. We don’t need to build something extremely complex in order to figure out where the hotspot is. It’s straightforward, at least at the high level of granularity. The architect knows they shouldn’t be putting all the CPUs in one corner, and you should put them close to the supply, etc. There are some common-sense things that don’t require crazy math.”

Schirrmeister agrees. “We have seen these beautiful charts that show how we can influence placement, and by virtue of us knowing how much data runs through a switch, we can visualize the switching activity and where the data is going. That translates into a thermal view. Have I seen anything beyond beautiful charts? Have I seen anyone actively modifying the architecture of the NoC, or moving things around to balance thermal more optimally? I really haven’t seen that. The technology is there, yet there’s still so much focus on actually getting it done — meaning just getting it functionally correct — that these other things, while also important, are second-order things to deal with.”

Related Reading
AI Adoption Slow For Design Tools
While ML adoption is robust, full AI is slow to catch fire. But that could change in the future.
True 3D Is Much Tougher Than 2.5D
While terms often are used interchangeably, they are very different technologies with different challenges.


