Will Floating Point 8 Solve AI/ML Overhead?


While the media buzzes about the Turing Test-busting results of ChatGPT, engineers are focused on the hardware challenges of running large language models and other deep learning networks. High on the ML punch list is how to run models more efficiently using less power, especially in safety-critical applications like self-driving vehicles, where latency becomes a matter of life or death. AI already ... » read more

FP8: Cross-Industry Hardware Specification For AI Training And Inference (Arm, Intel, Nvidia)


Arm, Intel, and Nvidia have proposed a specification for an 8-bit floating-point (FP8) format intended to serve as a common interchange format for both AI training and inference, allowing AI models to operate and perform consistently across hardware platforms. Find the technical paper, titled "FP8 Formats For Deep Learning," here. Published September 2022. Abstract: "FP8 is a natural p... » read more
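
To make the proposal concrete: E4M3, one of the two encodings in the specification, uses 1 sign bit, 4 exponent bits, and 3 mantissa bits with an exponent bias of 7, giving a largest finite value of 448 and subnormals down to 2^-9. Below is a minimal NumPy sketch (not the paper's reference code) that rounds float32 values onto the E4M3 grid, assuming out-of-range magnitudes saturate to the maximum finite value rather than converting to NaN:

```python
import numpy as np

# E4M3 parameters from the FP8 proposal: 1 sign bit, 4 exponent bits,
# 3 mantissa bits, exponent bias 7. Largest finite value is 448,
# smallest normal is 2**-6, subnormals extend down to 2**-9.
E4M3_MAX = 448.0
E4M3_MIN_NORMAL = 2.0 ** -6

def quantize_e4m3(x):
    """Round float32 values to the nearest representable E4M3 value.

    Assumption: out-of-range magnitudes saturate to +/-448 here; a
    hardware converter may instead produce NaN, depending on mode.
    """
    x = np.asarray(x, dtype=np.float32)
    sign = np.sign(x)
    mag = np.minimum(np.abs(x), E4M3_MAX)
    # Exponent of each value, floored at -6 so tiny inputs land on
    # the fixed subnormal grid instead of an ever-finer one.
    exp = np.floor(np.log2(np.maximum(mag, E4M3_MIN_NORMAL)))
    # Grid spacing at this exponent: one unit in the last of 3 mantissa bits.
    ulp = 2.0 ** (exp - 3)
    # Round-half-to-even onto the grid, matching IEEE-style rounding.
    return (sign * np.round(mag / ulp) * ulp).astype(np.float32)

# Example: values snap to the nearest E4M3 point; 500.0 saturates to 448,
# and 1e-9 underflows to 0. Result equals [0.125, -3.75, 448.0, 0.0].
print(quantize_e4m3([0.1234, -3.7, 500.0, 1e-9]))
```

With only 3 mantissa bits, adjacent representable values are a full 6% apart, which is why a common, carefully specified rounding and range behavior across vendors matters for reproducible training and inference.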