Efficiency Metrics Get Fuzzy

With the old measurements no longer useful, companies are struggling to come up with benchmarks that make sense.

Not too long ago, chipmakers measured transistors per hour and software developers measured lines of code written per day or per week. Those metrics have fallen by the wayside—and chipmakers are still lamenting that loss.

The problem is that nothing has come along to replace the old metrics, and complexity has left many chipmakers scratching their heads about how to build efficiency into their design flows. The big question now is, ‘Efficiency compared to what?’ There are no useful measurements to determine whether companies should buy new tools, change their methodologies, or insource or outsource work, nor where they should look for bugs and begin verification. And as chips become more complex, the definition of efficiency follows suit, becoming harder and harder to quantify.

“We definitely need some metric to measure efficiency, because once you have that metric you can do more,” said Taher Madraswala, president of Open-Silicon. “In our business, the rate at which you close the design based on power and performance is a bigger concern than how many transistors you add. The tools are there for us to do a thousand transistors without errors. But at the lower geometries, there are so many combinations that you need to experiment with that the number of possibilities is factorial.”
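
His factorial point is easy to make concrete. The sketch below is purely illustrative; the counts are invented and stand in for whatever parameters a real flow would sweep:

```python
from math import factorial

# Illustrative arithmetic only: if n design choices interact and every
# ordering of experiments must be considered, the search space grows as n!.
for n in (5, 10, 15, 20):
    print(f"{n} interacting choices -> {factorial(n):,} possible orderings")
```

Twenty interacting choices already yield more orderings than any team could ever enumerate, which is why convergence speed, rather than raw capacity, becomes the concern.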

In that case, efficiency may be a measurement of how quickly engineering teams can converge on a solution. “We’ve had this debate internally,” said Madraswala. “Are we efficient and are we competitive? There is no formula for this, although I wish we had one.”

He’s not alone. Metrics are missing everywhere, from architecture through physical design and on to verification, and in embedded software and IP.

“Quality is still a measure of efficiency,” said Mike Gianfagna, vice president of marketing at eSilicon. “But the reality is that most people say verification is done when they run out of time. A 20nm SoC requires something like 100,000 hours of engineering time. You’re talking about a large team for a long period of time. You can’t say productivity isn’t important, but the dimensions by which you measure efficiency are very different than in the past.” At roughly 2,000 working hours per engineer per year, 100,000 hours works out to about 50 engineers occupied for a full year.

Why metrics matter
SoC design isn’t the only industry to grasp for metrics. In fact, for decades the computer industry searched for a way to compare manual tasks with their automated equivalents in spreadsheets, data entry and word processing, particularly in the days of mainframes and minicomputers. The lack of comparative data frequently slowed purchases of multimillion-dollar computer systems, impacted the upgrade cycle, and raised questions about whether computers—especially centralized computer systems—really did provide a competitive advantage.

In the SoC industry, the lack of metrics can delay the adoption of new tools and raise questions about whether IP needs to be bought or repurposed, and how extensively IP should be analyzed for a design. With time-to-market pressures increasing, that leaves many companies at a loss to figure out which way to go next, and crossing their fingers at tapeout.

“One way we look at it is schedule delays,” Gianfagna said. “If you don’t do a good job of selecting IP to manage power, when you put it together it may be two times over the power budget. So you can determine the efficiency of the design team by whether they hit the power budget, or whether they achieve timing closure, or by how many extra iterations or chip bring up cycles they need.”
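
Those yardsticks are easy to turn into a scorecard. The sketch below is hypothetical; its field names, budgets and numbers are invented for illustration, not eSilicon’s actual practice:

```python
from dataclasses import dataclass

@dataclass
class TapeoutRecord:
    # All fields hypothetical; stand-ins for whatever a team actually tracks.
    power_budget_mw: float
    measured_power_mw: float
    planned_iterations: int
    actual_iterations: int
    timing_closed_on_schedule: bool

def efficiency_report(r: TapeoutRecord) -> dict:
    """Score a design team against its own targets."""
    return {
        "power_ratio": r.measured_power_mw / r.power_budget_mw,  # >1.0 means over budget
        "extra_iterations": r.actual_iterations - r.planned_iterations,
        "timing_closed": r.timing_closed_on_schedule,
    }

# A chip that came in at more than double its power budget, three spins late:
print(efficiency_report(TapeoutRecord(500.0, 1100.0, 2, 5, False)))
# {'power_ratio': 2.2, 'extra_iterations': 3, 'timing_closed': False}
```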

Time is money
Good metrics allow companies to figure out where their problems are, and to deal with them in the most effective way. This is simple enough on an assembly line, where productivity can be measured in numbers of products manufactured per hour or per day. For a complex SoC, it’s much more difficult to make sense of the data.

“Productivity measurements are critical for big chips, but they’re going to be the enablers for the Internet of Things,” said Drew Wingard, chief technology officer at Sonics. “You’re going to have smaller design teams working on those chips. They can’t afford to be late.”

But the complexity of SoCs also is reflected in the complexity of metrics. The reality is that everything has to be done faster, but it also has to be done better—and the only measure of quality is a chip that doesn’t cause problems, said Wingard. “You can measure bandwidth and latency when you shut down for power reasons, but how do you figure out when is the right time to lower power? That has to come from system data. There are two main users of that kind of data. One is in the lab, where the software guy has got the spec. The second is when you leave it turned on and available so it’s a participant in the system and you can debug the power management control loop.”
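
One common way to frame the ‘right time to lower power’ question is as a break-even calculation: power-gating a block pays off only when its idle interval, measured from exactly the kind of system data Wingard describes, exceeds the time needed to recoup the energy cost of entering and leaving the low-power state. A minimal sketch, with all numbers invented:

```python
# Hedged sketch: power-gate a block only when the observed idle time exceeds
# the break-even point. All numbers below are invented for illustration.
ENTRY_EXIT_ENERGY_UJ = 50.0  # energy cost of one gate/ungate cycle
POWER_SAVED_MW = 20.0        # power eliminated while the block is gated

# Microjoules divided by milliwatts yields milliseconds.
break_even_ms = ENTRY_EXIT_ENERGY_UJ / POWER_SAVED_MW

# Idle intervals observed in the lab or reported by on-chip instrumentation:
observed_idle_ms = [0.5, 8.0, 1.2, 30.0, 2.4]

for idle in observed_idle_ms:
    decision = "gate" if idle > break_even_ms else "stay on"
    print(f"idle {idle:5.1f} ms -> {decision} (break-even {break_even_ms:.1f} ms)")
```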

Being able to do that is part of the efficiency equation in designing chips. Another strategy is to raise the level of abstraction, then work downward to find out what needs to be measured.

“You need to start big and go small,” said Kurt Shuler, vice president of marketing at Arteris. “When you kick it out the door, then you go back and look at a lot of the different phases in the design. This is why you’re seeing companies standardizing on tools instead of using what might be the best tool for a particular job, and on pieces of IP. And they’re using a consistent methodology so that people on all sides of the operation are at least speaking the same language and communicating with each other.”

Homegrown metrics
So what exactly is it that design teams are trying to measure? The answer increasingly is less about time to market than about quality and coverage, but those can vary greatly from one market to another, and from one company to another even within the same industry.

“The big question is how many coverage points you have,” said Michael Sanie, senior director of verification marketing at Synopsys. “But it’s even more than that. It’s also which coverage points you add. In the automotive and mil/aero markets, they insert bugs and see if they can catch them and when. In SoC design, it’s time spent to find a bug. If you use technique A versus B for the same bug, can you do it earlier and cheaper? Efficiency in this case is beyond the metrics, though. It’s the methodology of how you found the bugs.”
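
In code, the seeded-bug comparison he describes is straightforward bookkeeping. A hypothetical sketch follows; the technique names and detection times are invented for illustration:

```python
# Hypothetical data: hours to detect each of five seeded bugs under two
# verification techniques. None means the bug was never caught.
hours_to_find = {
    "technique_A": [12, 40, None, 95, 7],
    "technique_B": [30, 25, 60, None, None],
}

for name, times in hours_to_find.items():
    caught = [t for t in times if t is not None]
    coverage = len(caught) / len(times)
    mean_hours = sum(caught) / len(caught)
    print(f"{name}: caught {coverage:.0%} of seeded bugs, "
          f"mean {mean_hours:.1f} hours to detection")
```

Here the two techniques take nearly the same average time per bug found, but one catches far more of them—a reminder of Sanie’s point that the methodology behind the numbers matters as much as the numbers.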

He said Synopsys customers are asking for a minimum set of standard metrics and a system that is open enough for them to insert their own database metrics.

The view from Cadence is similar. “What you’re really trying to do is measure the inherent complexity in the design,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “What we find is that the function points in hardware correlate well with the effort for creation of the design. But from a tools perspective, the question is how you benchmark the tools. Is it throughput, cycles per second, number of regressions, or how many tasks can you execute?”
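
Even a rough scoreboard makes the ambiguity visible. In the hypothetical sketch below the tool names and benchmark numbers are invented; the point is that each metric can crown a different winner:

```python
# Hypothetical benchmark numbers for two unnamed tools.
tools = {
    "tool_X": {"cycles_per_sec": 2.0e6, "regressions_per_day": 40, "parallel_jobs": 4},
    "tool_Y": {"cycles_per_sec": 1.2e6, "regressions_per_day": 55, "parallel_jobs": 16},
}

for metric in ("cycles_per_sec", "regressions_per_day", "parallel_jobs"):
    winner = max(tools, key=lambda t: tools[t][metric])
    print(f"best by {metric}: {winner}")
```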

Thinking way outside the box
Finally, there is a completely different way of thinking about design efficiency, which involves the overall packaging in fan-outs, 2.5D and 3D stacks, as well as re-usability of IP blocks, platforms, software and even subsystems and chips.

“As the amount of software and hardware increases, there is more capability for adding efficiency everywhere,” said Wally Rhines, chairman and CEO of Mentor Graphics. “What you look for is where the breaks between disciplines are. So you’ve got functional verification, which is emulation and simulation. And you’ve got physical effects, which can be handled by new tools or capabilities—finFETs, new parasitics, thermal and strain analysis and TSVs. And at the front end you’ve got enhancements in the tools and changes in physics.”

That adds a whole new set of metrics that are sure to confuse design teams for years to come. At least for now, though, efficiency will be much harder to measure by standard means, and companies will have to implement metrics wherever possible—even if they are just metrics comparing a design to their own previous design—to continue improving the power, performance and area equation.
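
Even that fallback of self-comparison can be made systematic. A minimal sketch of normalizing one generation’s power, performance and area against the previous one, with all numbers invented:

```python
# A minimal sketch of 'compare against your own previous design.' All numbers
# are invented; higher frequency is better, lower power and area are better.
previous = {"power_mw": 900.0, "freq_mhz": 800.0, "area_mm2": 60.0}
current  = {"power_mw": 750.0, "freq_mhz": 950.0, "area_mm2": 55.0}

for key, prev in previous.items():
    change = (current[key] - prev) / prev
    improved = change > 0 if key == "freq_mhz" else change < 0
    print(f"{key}: {change:+.1%} ({'better' if improved else 'worse'})")
```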


