The Cost Of Accuracy

Accuracy is a relative term that complicates design and verification. Machine learning makes the industry face some of those realities head on.

How accurate does a system need to be, and what are you willing to pay for that accuracy?

There are many sources of inaccuracy throughout the development flow of electronic systems, most of which involve complex tradeoffs. Inaccuracy affects your design in ways you may not even be aware of, hidden behind best practices or guard-banding. EDA tools inject some inaccuracy of their own.

As the industry moves toward greater adoption of machine learning (ML), accuracy is becoming a primary design consideration. Training systems use a level of accuracy that is not possible when doing inferencing at the edge. Teams knowingly introduce inaccuracy to reduce costs. A better understanding of accuracy’s implications is required, particularly when used within safety-critical applications such as autonomous driving.

Accuracy often is confused with predictability and repeatability, especially when it comes to processes. For example, in the development flow, lowered accuracy is an acceptable tradeoff that comes through abstraction and enables evaluation of a system in a larger context. This can lead to better design with more predictable behavior. However, if that reduced accuracy leads to bad design decisions, it is no longer considered desirable. Fidelity is more important than accuracy.

“Abstraction is something that is necessary to deal with complexity and the desire to be able to handle bigger simulations,” says Magdy Abadir, VP of marketing for Helic. “The price of abstraction is that you have to remove detail and still be able to do what you need to do. If you are able to do bigger things without losing too much accuracy, then your results are usable. Otherwise, you can run fast simulations or make quick decisions, but if you miss a key piece of information and have to come back to fix things, then it has no value.”

Building on a shaky foundation
Before looking at specific inaccuracies inherent in either EDA tools or machine learning, it’s important to expose an inaccuracy contained in the mathematics of computers. Floating-point calculations are inherently approximations and not completely predictable. Analog design, digital design by implication, and machine learning all rely on these computations.

Google has done a lot of analysis recently while defining a new floating-point representation for machine learning. It says the decimal number 0.1 has no exact double or float representation and that you actually get 0.1000000000000000055511151231257827021181583404541015625. In addition, it says that normal mathematical rules do not apply to floating-point, and thus (a+b)+c does not necessarily provide the same result as a+(b+c). The company says, “It follows that there is rarely one exact correct result for any method doing floating point arithmetic.”
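
A few lines of Python make both points concrete, assuming nothing beyond the standard library: printing the value that is actually stored for 0.1, and showing that regrouping a floating-point sum changes the answer.

```python
from decimal import Decimal

# The closest double to 0.1 is not exactly 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Floating-point addition is not associative: regrouping changes the result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is lost when added to -1e16 first
```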

Google is hardly alone here. “EDA tools have never provided 100% correct results,” says Benjamin Prautsch, group manager for Advanced Mixed-Signal Automation at Fraunhofer EAS. “Maybe there is no such thing.”

Analog circuits are the foundation of most electronics, either directly or through the characterization of libraries. “Performance is usually a complicated function of many different interrelated parameters,” says Christoph Sohrmann, member of the advanced physical verification group at Fraunhofer EAS. “Loosening the grip on accuracy might lead to reduced reliability of the entire system. While there is high potential in this technique for some less critical applications, the trends in automotive are in fact in the opposite direction, toward increased accuracy and reliability.”

As designs become more complex, the number of corners that have to be considered has increased. “Running billions of brute-force simulations within production timelines is simply not feasible,” points out Wei Lii Tan, product manager for AMS Verification at Mentor, a Siemens Business. “Design teams have to compensate by adding margin to design flows, which degrades overall power, performance, and area (PPA) metrics.”
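
A back-of-the-envelope count shows why brute force does not scale. The corner and sample counts below are illustrative placeholders, not figures from Mentor; the point is the multiplication, which, repeated across thousands of cells and testbenches, quickly reaches the billions quoted above.

```python
# Illustrative corner count for characterizing one cell or testbench; the
# numbers are placeholders, not figures from the article.
process_corners = 5           # e.g., TT, FF, SS, FS, SF
voltages = 4
temperatures = 4
monte_carlo_samples = 10_000  # local-mismatch samples per PVT corner

runs = process_corners * voltages * temperatures * monte_carlo_samples
print(f"SPICE runs for one cell: {runs:,}")  # 800,000
# Multiplied across thousands of library cells and testbenches, the total
# lands in the billions, which is why margin is added instead.
```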

Inaccuracies and their countermeasures run through the rest of the development flow. “The digital design, implementation, and signoff flow relies on static timing analysis,” says Tan. “There is an upper limit on how much accuracy can be achieved because these flows rely on Liberty models. Liberty models themselves are an abstraction of SPICE models and are typically within 1% to 3% of SPICE, depending on process node and the absolute value measured.”
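
As a rough sketch of how a small model error turns into margin, consider a path of identical stages where signoff has to assume every cell sits at the pessimistic edge of the Liberty-vs-SPICE error band. The stage count and delay below are hypothetical.

```python
# Hypothetical path: 20 identical stages, 50 ps nominal delay per stage,
# and a +/-3% cell-model error (the Liberty-vs-SPICE gap quoted above).
stages = 20
stage_delay_ps = 50.0
model_error = 0.03

nominal_path = stages * stage_delay_ps
# Signoff assumes every stage sits at the pessimistic end of the error band.
guard_banded_path = nominal_path * (1.0 + model_error)

print(f"nominal path delay:      {nominal_path:.1f} ps")
print(f"guard-banded path delay: {guard_banded_path:.1f} ps")
print(f"margin paid for model inaccuracy: {guard_banded_path - nominal_path:.1f} ps")
```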

Adding machine learning, either in the EDA flow or in the final application, inserts additional forms of inaccuracy. “Heuristics have always been an important part of algorithms,” says Prautsch. “Machine learning is just another way of finding close-to-optimal solutions. As long as this solution can be measured properly, the value of ML will be clear. However, ML has strengths and weaknesses and should be used together with previous implementations, depending on the application.”

Machine learning is basically curve fitting, and EDA tools always have used this approach to find optimal solutions. “Before the curve-fitting trend that we see today, people would try and do image recognition by looking for features,” explains Raymond Nijssen, vice president and chief technologist for Achronix. “If those features could be found, then an identification was made. These were patterns that were prescribed up front, based on domain knowledge by humans and not the result of training.”
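
A minimal curve-fitting sketch, using an assumed quadratic response and NumPy's polyfit, shows the basic mechanic: the "learning" is nothing more than choosing coefficients that minimize the error against the samples, and the fit is only trustworthy inside the range the data covered.

```python
import numpy as np

# Noisy samples of an unknown response (here secretly quadratic).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 3.0 * x**2 - 0.5 * x + rng.normal(scale=0.05, size=x.size)

# "Learning" is just choosing coefficients that minimize the fitting error.
coeffs = np.polyfit(x, y, deg=2)
print("fitted coefficients:", coeffs)  # close to [3.0, -0.5, ~0.0]

# The model is only as trustworthy as the data it saw; extrapolating far
# outside the sampled range is where fidelity, not just accuracy, breaks down.
print("prediction at x = 5:", np.polyval(coeffs, 5.0))
```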

Similar concepts apply to many application areas, including EDA tools.

Conceptually, some people have more trouble accepting errors from machine learning than errors from feature-based approaches. “The inexactness of the training process makes it universal curve fitting,” says Nijssen. “It is true that we don’t really know which set of weights is the best set. If you try to find the weights that are responsible for recognizing a particular object, you will not be able to identify them. It could be different with different training or with slightly perturbed input data. Nobody knows why the weights are what they are.”

Nijssen points out that this is not unusual for curve-fitting functions. “There are many simpler cases in linear algebra where nobody quite knows why the weights are what they are.”
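
A small linear-algebra sketch illustrates the point: in an underdetermined system, entirely different weight vectors reproduce the observations equally well, so there is no single "correct" set of weights to explain.

```python
import numpy as np

# An underdetermined system: 2 observations, 3 unknown weights.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
y = np.array([1.0, 2.0])

# One valid set of weights (the minimum-norm least-squares solution)...
w1, *_ = np.linalg.lstsq(A, y, rcond=None)

# ...and another, shifted along the null space, that fits just as well.
null_dir = np.array([1.0, -2.0, 1.0])  # A @ null_dir == [0, 0]
w2 = w1 + 0.7 * null_dir

print("A @ w1 =", A @ w1)  # reproduces y
print("A @ w2 =", A @ w2)  # also reproduces y
print("w1 =", w1)
print("w2 =", w2)          # different weights, identical fit
```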

Inaccurate data or processes can lead to a loss of fidelity. That is where problems start, because the result is not just inaccurate—it is wrong. The problem is how to identify when that happens. What happens when an autonomous car finds itself following a horse in a parade? Driving in the city is very different from driving in rural areas, and driving in California is very different from driving in India. The limitations of training always must be considered, and when presented with unfamiliar situations, systems need to know when to ask for help.

“I once looked out onto a railroad crossing that often experienced technical problems,” says Nijssen. “Cars would drive up to a closed gate when no train was coming. Some people would sit there for hours, others would turn around and others would slalom between the closed gates. What would a machine do? Curve fitting doesn’t help. There are routine tasks where there is a lot of previous information, and ML can do tremendous things. But when it comes to things that are non-routine and creativity is required, creativity is not an extrapolation of past events. It is doing something new.”

To understand what accuracy means, we have to be able to quantify it. “What we need first of all is consensus on metrics and, of course, standards should play a bigger role,” says Jörg Grosse, product manager for functional safety at OneSpin Solutions. “EDA companies must provide not just tools, but solutions that help users achieve all their targets with automation and adequate accuracy. Even formal tools, which are the antithesis of approximation, can provide rigorous results that match the pragmatic needs of IP and SoC developers. In safety applications, for example, the accuracy of formal results may depend on which design stage the analysis is applied to, the fault sample, or the type of analysis done.”

Not all of this is clear up front, either. “Systems are getting larger, complexity is growing, and so there is a strong desire to move to more abstraction,” said Helic’s Abadir. “However, because the technology that we are using underneath is being pushed into more complicated processes—and there are phenomena that are adding new physics to the issue—certain types of detail are becoming necessities that were not important before. In a chip, we have been using RC extraction to extract information about the design and be able to perform things like timing analysis and power estimation. The desire is to continue to do this on bigger chips, and as long as the underlying assumptions hold, I can abstract away details, still do this accurately, and be happy. The problem is that there are additional phenomena, such as the emergence of inductance, that make it important to extract inductance and mutual inductance. That adds an extra dimension to the problem that was not there before, because frequencies are going higher and the physics makes it necessary.”

Accuracy of machine learning
While machine learning in EDA can result in additional guard-banding or the failure to find an optimal solution, it becomes a different problem when designing systems for machine learning.

All machine learning today starts with floating point. “Cost and accuracy are major considerations for machine-learning performance,” says Francisco Socal, product manager for Vision & AI at Imagination Technologies. “Cost applies to any design, regardless of whether it is a GPU, CPU or FPGA. Accuracy is also very important because it is the quality element. Many embedded and mobile inferencing solutions require mapping and tuning the original network model from floating point to fixed point or even integers, introducing a tradeoff between cost, accuracy and performance that needs to be measured.”

This is particularly true when doing inferencing on the edge. “You have physical design issues that have to be taken into account,” says Marc Naddell, vice president of marketing for Gyrfalcon Technologies. “You have to consider the use case and the user experience, all of which impact the size of the technology that is being integrated into a design. Device-specific factors affect things such as battery life. In some cases, there could be security issues. There are other factors around the device, such as reliability and, in an industrial setting, conformance to industrial requirements and environmental factors.”

In many cases accuracy relates to many other features of a design, such as the amount of memory and the throughput required. “Insights into the end application are critical to understanding how much accuracy is enough for that situation, and how much throughput is required,” says Gordon Cooper, product marketing manager in the Solutions Group of Synopsys. “The goal is to build a network that is just powerful enough to solve the problem at hand.”

This is a change from just a short while ago. “All of the research until 9 or 12 months ago seemed to be focused on improving accuracy, and now it is about how to get the same accuracy with less computation,” says Cooper. “There has been a technology evolution. The horse race used to be about how many MACs you could shove in, and now it is about the compression techniques available, sparsity handling, or other techniques that can be applied. What are the tradeoffs? If you prune your coefficients, you can significantly save bandwidth or the number of memory stores, but the tradeoff may be a loss of accuracy. There are many tradeoffs that developers have to make, and we have to provide the tools that give them those choices.”
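
A toy sketch of magnitude pruning on a random weight matrix (not a trained network, and without the retraining step real flows use) shows the shape of the tradeoff: most coefficients can be zeroed, which slashes storage and bandwidth, at the cost of a measurable output error.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=256).astype(np.float32)
reference = weights @ x

# Magnitude pruning: zero out the 80% smallest-magnitude coefficients.
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

approx = pruned @ x
rel_error = np.linalg.norm(approx - reference) / np.linalg.norm(reference)

print(f"weights kept:          {np.count_nonzero(pruned) / pruned.size:.0%}")
print(f"relative output error: {rel_error:.1%}")
```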

There are times when accuracy is driven by architectural choices. “The bottlenecks are shifting,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “They’re now in data movement. The industry has done a great job of enabling better compute, but if you’re waiting for data then you need to look at different approaches.”

If you don’t have enough bandwidth, then you can reduce the accuracy or apply other techniques. “There are compression technologies that can be used to reduce memory bandwidth, latency and power,” says Jem Davies, an Arm fellow. “This is a balance. Compute is really cheap. Compression/decompression is cheap. Storing and loading into memory is not. Or looked at another way, picojoules per bit are not decreasing as fast as picojoules per flop.”
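
A worked example makes the balance visible. The per-operation energies below are assumed placeholders, not Arm or Rambus figures; the point is the ratio between data movement and compute, not the absolute numbers.

```python
# Illustrative energy budget for one N x N matrix-vector multiply. The
# per-operation energies are hypothetical placeholders, not vendor figures;
# substitute numbers from your own process and memory system.
N = 1024
mac_energy_pj = 0.2            # assumed energy per MAC (compute)
dram_energy_pj_per_bit = 20.0  # assumed energy per bit fetched from DRAM
bits_per_weight = 8

compute_uj = N * N * mac_energy_pj / 1e6
movement_uj = N * N * bits_per_weight * dram_energy_pj_per_bit / 1e6

print(f"compute:       {compute_uj:.2f} uJ")
print(f"data movement: {movement_uj:.2f} uJ")
# Under these assumptions the weight fetches dwarf the math, which is why
# compression and reduced precision pay off.
```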

There are open questions about how much precision is required to achieve a certain level of accuracy. “Most likely the training that you did was using floating point, so the coefficients will be in that format,” explains Pulin Desai, product marketing director for the Tensilica Vision DSP product line at Cadence. “But we need to do the inferencing in fixed point, so we go through the quantization process and make sure that the detection rate is within a certain percentage of what was achieved in floating point. That is normally under 1%.”
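
A simplified post-training quantization sketch, using random weights rather than a real network and ignoring activation quantization and per-layer calibration, shows how such a check is typically made: quantize the weights to 8 bits, run the same input, and compare against the floating-point result.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.1, size=(128, 128)).astype(np.float32)
x = rng.normal(size=128).astype(np.float32)
float_out = weights @ x

# Symmetric post-training quantization of the weights to 8 bits.
scale = np.max(np.abs(weights)) / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize and rerun the same input to measure the accuracy cost.
deq_weights = q_weights.astype(np.float32) * scale
quant_out = deq_weights @ x

rel_error = np.linalg.norm(quant_out - float_out) / np.linalg.norm(float_out)
print(f"relative output error after 8-bit quantization: {rel_error:.3%}")
```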

So is a 1% error rate acceptable? That may depend upon the end application. It may also be important to ask, “Which 1% will it get wrong?”

A lot of today’s research is looking at using in-memory processing for the MAC functions, and most of these deploy analog computation, meaning that the results will be imprecise and potentially change due to environmental conditions. Before such systems can be effectively deployed, it will be necessary to find ways to perform sensitivity analysis on these functions, and this, in turn, could lead to a better understanding of metrics for accuracy.
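
One plausible form such a sensitivity analysis could take is a Monte Carlo sweep that models analog imprecision as noise on the stored weights and measures how the output error grows. The noise levels below are assumed values, not measurements of any device.

```python
import numpy as np

rng = np.random.default_rng(3)
weights = rng.normal(size=(64, 64)).astype(np.float32)
x = rng.normal(size=64).astype(np.float32)
reference = weights @ x

# Model analog imprecision as multiplicative noise on each stored weight.
# The noise levels (sigma) are assumed values swept for the analysis.
for sigma in (0.01, 0.05, 0.10):
    errors = []
    for _ in range(200):  # Monte Carlo over device/environment variation
        noisy = weights * (1.0 + rng.normal(scale=sigma, size=weights.shape))
        out = noisy.astype(np.float32) @ x
        errors.append(np.linalg.norm(out - reference) / np.linalg.norm(reference))
    print(f"sigma = {sigma:.2f}: mean relative output error {np.mean(errors):.1%}")
```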

Conclusion
Tools are built using an inaccurate and often unpredictable numerical system. Those tools, and the underlying designs they are trying to build and verify, are so complex that development teams must accept inaccuracy in their flows. Most of this has been hidden by margining, and is thus an inherent, invisible cost. Machine learning is bringing some of these issues to the forefront, because development of systems for the edge must actively consider reducing accuracy in order to reach acceptable costs.

The bottom line: Accuracy is a big knob to turn, but better metrics are needed—particularly where safety is concerned.


