
AI Chips Must Get The Floating-Point Math Right

Formal verification of FPUs is no longer the prerogative of big companies spending big bucks.


Most AI chips and hardware accelerators that power machine learning (ML) and deep learning (DL) applications include floating-point units (FPUs). The algorithms used in today's neural networks are largely built on multiplication and addition of floating-point values, whose results must then be scaled to different sizes for different needs. Modern FPGAs, such as the Intel Arria 10 and Xilinx Everest, include floating-point units in their DSP slices that can be leveraged to optimize classification, detection, and image recognition tasks. Convolutional neural networks (CNNs), popular for computer vision applications, are particularly demanding of compute power: the workload of a convolution layer is typically expressed as a set of deeply nested loops, as Figure 1 and the sketch that follows illustrate.


Figure 1: This example highlights a loop-nest representing the computation in a convolution layer of a CNN. (Source: J. Cong and B. Xiao, Minimizing Computation in Convolutional Neural Networks.)
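
To make the loop-nest structure concrete, here is a minimal C sketch of a convolution layer. The dimension names and sizes (M output feature maps, N input feature maps, R×C output pixels, K×K kernels, stride S) are illustrative placeholders, not figures from the cited paper:

```c
/* Minimal sketch of a convolution-layer loop nest (cf. Figure 1).
 * All dimensions below are illustrative placeholders. */
#define M 64   /* output feature maps */
#define N 32   /* input feature maps  */
#define R 28   /* output rows          */
#define C 28   /* output columns       */
#define K 5    /* kernel size          */
#define S 1    /* stride               */

float in[N][(R - 1) * S + K][(C - 1) * S + K];
float weight[M][N][K][K];
float out[M][R][C];

void conv_layer(void) {
    for (int m = 0; m < M; m++)                  /* each output feature map */
        for (int r = 0; r < R; r++)              /* each output row */
            for (int c = 0; c < C; c++) {        /* each output column */
                float acc = 0.0f;
                for (int n = 0; n < N; n++)          /* each input feature map */
                    for (int i = 0; i < K; i++)      /* kernel rows */
                        for (int j = 0; j < K; j++)  /* kernel columns */
                            acc += weight[m][n][i][j] *
                                   in[n][r * S + i][c * S + j];
                out[m][r][c] = acc;
            }
}
```

Loop nests of exactly this shape are what FPGA and accelerator flows reorder and tile, and every iteration of the innermost loop is one floating-point multiply and one add.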

Reducing power consumption and area are crucial goals. In many cases, half precision (16 bits) is sufficient for AI platforms. Even lower precisions, 12 bits or 8 bits, have been shown to adequately support certain CNN applications. Implementing CNNs on embedded devices imposes even tougher constraints on storage area and power consumption. Low-precision fixed-point representations of CNN weights and activations may be an option but, as argued in this paper, using floating-point numbers to represent weights may result in significantly more efficient hardware implementations. Fused multiply-add (FMA) operations, where rounding is applied only once, to the final result, may provide additional performance improvements, as the sketch below illustrates.
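
The effect of that single rounding is easy to demonstrate in standard C99; this is a generic illustration, not any particular FPU's implementation. The fma function from <math.h> rounds a*b + c once, while the plain expression rounds the intermediate product first. Compile with contraction disabled (e.g., -std=c99 or -ffp-contract=off) so the compiler does not fuse the plain expression itself:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Operands chosen so that a*b is not exactly representable in double,
     * making the intermediate rounding in (a*b + c) visible. */
    double a = 1.0 + 0x1p-27;   /* 1 + 2^-27, exactly representable */
    double b = 1.0 + 0x1p-27;
    double c = -1.0 - 0x1p-26;  /* cancels the leading terms of a*b */

    double two_roundings = a * b + c;    /* rounds a*b, then rounds the sum */
    double one_rounding  = fma(a, b, c); /* single rounding on the final result */

    /* The exact result is 2^-54; the separately rounded version loses it. */
    printf("a*b + c    = %a\n", two_roundings);  /* prints 0x0p+0  */
    printf("fma(a,b,c) = %a\n", one_rounding);   /* prints 0x1p-54 */
    return 0;
}
```

Skipping the intermediate normalization and rounding step is also where a fused hardware datapath gets its efficiency.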

FPU challenges
Floating-point representations of real numbers have significant advantages over fixed-point ones. For example, given a fixed bit width for the binary encoding, floating-point formats cover a much wider range of values without losing precision. However, FPUs are much harder to implement in hardware than integer or fixed-point arithmetic units. The IEEE 754 standard defines many corner-case scenarios and special values, such as +0, -0, signed infinity, and NaN (not a number). Moreover, there are four possible rounding modes (roundTiesToEven, roundTowardZero, roundTowardPositive, and roundTowardNegative), as well as five exception flags (invalid operation, division by zero, inexact result, underflow, and overflow); the sketch below exercises both.
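
These modes and flags are directly observable from software through <fenv.h>, the standard C99 interface to the floating-point environment (link with -lm on most systems; not all compilers honor the FENV_ACCESS pragma). A minimal sketch:

```c
#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON  /* required by C99 when touching the FP environment */

int main(void) {
    volatile float tiny = 0x1p-126f;  /* smallest normal binary32 value */

    /* The four binary rounding modes map to C99 macros. */
    int modes[] = { FE_TONEAREST, FE_TOWARDZERO, FE_UPWARD, FE_DOWNWARD };
    const char *names[] = { "roundTiesToEven", "roundTowardZero",
                            "roundTowardPositive", "roundTowardNegative" };

    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);
        /* 1 + 2^-126 is not representable in binary32; the rounding mode
         * decides whether it becomes 1.0 or the next float above 1.0. */
        printf("%-20s 1 + tiny = %.8e\n", names[i], 1.0f + tiny);
    }

    fesetround(FE_TONEAREST);
    feclearexcept(FE_ALL_EXCEPT);
    volatile float z = tiny * tiny;   /* 2^-252 underflows binary32 to zero */
    (void)z;
    if (fetestexcept(FE_UNDERFLOW)) puts("underflow flag raised");
    if (fetestexcept(FE_INEXACT))   puts("inexact flag raised");
    return 0;
}
```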


Figure 2: The binary representation of IEEE 754 floating-point numbers has three components: sign bit, exponent, and mantissa. The total number of bits available determines the precision of the floating-point number.
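
A few lines of C make Figure 2 concrete for half precision: 1 sign bit, 5 exponent bits with a bias of 15, and 10 mantissa bits. The decode_half helper and its test values are mine, but the field layout is the standard binary16 encoding:

```c
#include <stdint.h>
#include <stdio.h>

/* Split an IEEE 754 binary16 encoding into its three fields. */
static void decode_half(uint16_t bits) {
    unsigned sign     = (bits >> 15) & 0x1;
    unsigned exponent = (bits >> 10) & 0x1F;   /* biased by 15 */
    unsigned mantissa =  bits        & 0x3FF;

    if (exponent == 0x1F)          /* all-ones exponent: infinity or NaN */
        printf("0x%04x: %s\n", (unsigned)bits,
               mantissa ? "NaN" : (sign ? "-inf" : "+inf"));
    else if (exponent == 0 && mantissa == 0)
        printf("0x%04x: %s0\n", (unsigned)bits, sign ? "-" : "+");
    else                           /* normal or subnormal number */
        printf("0x%04x: sign=%u exp=%d mantissa=0x%03x%s\n",
               (unsigned)bits, sign,
               (exponent ? (int)exponent : 1) - 15, mantissa,
               exponent ? "" : " (subnormal)");
}

int main(void) {
    decode_half(0x3C00); /* +1.0                */
    decode_half(0x8000); /* -0                  */
    decode_half(0x7C00); /* +infinity           */
    decode_half(0x7E00); /* a quiet NaN         */
    decode_half(0x0001); /* smallest subnormal  */
    return 0;
}
```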

Verification of FPU designs is challenging and critically important. Even an apparently minor implementation bug, for example one causing a small rounding mistake, can have a huge impact as errors accumulate over many operations. Simulation struggles to hit the many corner cases defined by the IEEE standard, and it cannot deliver complete verification. Even operations on half-precision floating-point numbers are infeasible to verify exhaustively in simulation: two 16-bit operands generate 2^32 input combinations per operation and rounding mode, and two single-precision operands push that to 2^64.
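
A back-of-the-envelope sketch shows the scale. Two 16-bit operands give 2^16 × 2^16 = 2^32 ≈ 4.3 billion combinations; at an optimistic 10,000 RTL-simulation operations per second, sweeping them would take about 430,000 seconds, roughly five days, for one operation in one rounding mode, and single precision's 2^64 combinations are out of reach entirely. A native software model, as in the hypothetical harness below (dut_fp16_mul and ref_fp16_mul are placeholder hooks, not real APIs), can sweep binary16 in reasonable time, but that proves nothing about the RTL unless the model is the RTL:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hooks, for illustration only:
 *   dut_fp16_mul - the design under test (e.g., an RTL model via co-simulation)
 *   ref_fp16_mul - a trusted software reference (e.g., Berkeley SoftFloat)   */
extern uint16_t dut_fp16_mul(uint16_t a, uint16_t b);
extern uint16_t ref_fp16_mul(uint16_t a, uint16_t b);

int main(void) {
    uint64_t mismatches = 0;

    /* All 2^16 x 2^16 = 2^32 operand pairs, for ONE operation in ONE
     * rounding mode. Tractable natively; hopeless at RTL-simulation
     * speeds, and single precision squares the cost. A real harness
     * would also compare NaN results by class, not by bit pattern. */
    for (uint32_t a = 0; a <= 0xFFFF; a++)
        for (uint32_t b = 0; b <= 0xFFFF; b++)
            if (dut_fp16_mul((uint16_t)a, (uint16_t)b) !=
                ref_fp16_mul((uint16_t)a, (uint16_t)b))
                mismatches++;

    printf("%llu mismatches\n", (unsigned long long)mismatches);
    return 0;
}
```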

Formal verification of FPUs
Since the 1994 Intel Pentium floating-point division bug, which cost an estimated $475 million, academic institutions and semiconductor heavyweights, including Intel, AMD, and IBM, have carried out considerable research on applying formal methods to FPU verification. Although these methods have been successful in verifying complex industrial designs, they suffer from drawbacks that have hindered widespread adoption. Some rely on non-commercial, proprietary tools; others rely on theorem provers and thus require highly specialized skills. Further, the resulting methodologies are largely not reusable across different FPU implementations.

FPU design and verification engineers need formal solutions that are easy to apply. Formal applications (apps) that automate recurrent verification tasks have been crucial in driving widespread adoption of formal tools. The good news is that there are also formal apps available for FPU verification. Some modern formal tools can automatically recognize and tackle complex floating-point arithmetic proofs, including multiplication, once considered a no-go zone for formal. Developing an FPU verification environment requires significant expertise and effort; FPU formal apps provide that environment out of the box, with the added advantage of having been proven on multiple IPs and projects.

At DVCon US 2018, Xilinx presented a paper on how its engineers verified their FPU using an automated formal solution, and how the formal flow compared with simulation-based verification. Within days, Xilinx engineers found corner-case bugs and achieved exhaustive verification of floating-point multiplication, conversion functions between floating-point formats of different precisions, and other operations.


Figure 3: Modern formal tools with engines that automatically solve complex arithmetic proof problems and apps capturing the specification of IEEE 754 operations make rigorous verification of FPUs efficient and widely accessible.

Conclusion
Chip developers are well aware of how hard it is to design and verify FPUs. For a long time, only big companies spending big bucks used formal verification, recognizing that it was the sole way to achieve rigorous, exhaustive results. With floating-point arithmetic at the core of many modern AI chips, and dozens of start-ups developing AI accelerators, it is crucial to provide easy-to-use formal verification solutions that enable rigorous, exhaustive verification while also reducing cost and development time. Formal apps encoding the specification of IEEE 754 operations, paired with innovative proof technology that overcomes the traditional complexity issues of formal verification for floating-point arithmetic, are the answer. For more information about formal verification of FPUs, visit onespin.com/fpu.


