Will Floating Point 8 Solve AI/ML Overhead?


While the media buzzes about the Turing Test-busting results of ChatGPT, engineers are focused on the hardware challenges of running large language models and other deep learning networks. High on the ML punch list is how to run models more efficiently using less power, especially in critical applications like self-driving vehicles where latency becomes a matter of life or death. AI already ...
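
The excerpt stops short of the format itself, but the trade-off FP8 makes is easy to sketch. Below is a simplified Python illustration of an E4M3-style FP8 value (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7, following the common OCP definition; that variant is an assumption here, since the excerpt does not name one). The quantize_fp8_e4m3 helper is our own illustrative name, and the NaN encoding and subnormals are deliberately glossed over:

    import math

    def quantize_fp8_e4m3(x: float) -> float:
        # Round x to a nearby value in a simplified E4M3-style FP8
        # format (1 sign, 4 exponent, 3 mantissa bits); subnormals
        # and the NaN encoding are ignored for clarity.
        if x == 0.0:
            return 0.0
        sign = math.copysign(1.0, x)
        m, e = math.frexp(abs(x))        # abs(x) = m * 2**e, 0.5 <= m < 1
        m = round(m * 16) / 16           # keep 3 stored mantissa bits
        y = sign * math.ldexp(m, e)
        if abs(y) > 448.0:               # E4M3 normals top out at 448
            y = math.copysign(448.0, y)  # saturate rather than overflow
        elif abs(y) < 2.0 ** -6:         # below the smallest normal, 2**-6
            y = 0.0                      # flush to zero
        return y

    for v in (3.14159, 0.001, 1000.0):
        print(v, "->", quantize_fp8_e4m3(v))
    # 3.14159 -> 3.25   (only 3 mantissa bits of precision)
    # 0.001   -> 0.0    (below the normal range, flushed)
    # 1000.0  -> 448.0  (saturated at the format's maximum)

An 8-bit value moves a quarter of the data of FP32, which is the bandwidth and power win at stake; the cost, visible above, is that both precision and range shrink drastically, so weights and activations must be scaled with care.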

Challenges For New AI Processor Architectures


Investment money is flooding into the development of new AI processors for the data center, but the problems here are unique, the results are unpredictable, and the competition has deep pockets and very sticky products. The biggest issue may be insufficient data about the end market. When designing a new AI processor, every design team has to answer one fundamental question: how much flex...

Formal Verification Of Floating-Point Hardware With Assertion-Based VIP


Hardware for integer or fixed-point arithmetic is relatively simple to design, at least at the register-transfer level. If the range of values and precision that can be represented with these formats is not sufficient for the target application, floating-point hardware might be required. Unfortunately, floating-point units are complex to design, and notoriously challenging to verify. Since the ...
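
The assertion-based VIP itself would be bound to the RTL, typically as SystemVerilog assertions, but the difficulty the article alludes to can be demonstrated in a few lines of plain Python on IEEE-754 doubles. This is a sketch of the kind of algebraic properties a checker must get right, not the VIP's actual code:

    import math

    a, b, c = 0.1, 0.2, 0.3

    # Commutativity survives rounding...
    assert a + b == b + a
    # ...but associativity does not: the two sums round differently.
    assert (a + b) + c != a + (b + c)
    print((a + b) + c)    # 0.6000000000000001
    print(a + (b + c))    # 0.6

    # Special values add more corner cases a formal tool must cover.
    nan = float("inf") - float("inf")
    assert math.isnan(nan)
    assert nan != nan     # NaN compares unequal even to itself

Every one of these behaviors, plus rounding modes, exception flags, and subnormal handling, multiplies the state space, which is why formal methods rather than simulation alone are attractive for floating-point units.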

Achieving Numerical Precision And Design Customization With Flexible Floating-Point IP


Floating-point operations in application-specific hardware have gained in popularity mostly because they are easier to use than fixed-point operations and they are a better match to numerical behavior in software algorithms. Fixed-point operations present design challenges in the definition of input/output ranges and internal precision for each operation. On the other hand, floating-point opera...
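
As a rough illustration of that range-and-precision burden, here is a Python sketch; the to_fixed helper and the Q4.12 split are example choices of ours, not something from the article. Mis-sizing either field silently saturates or underflows, which is exactly the per-operation analysis that floating point spares the designer:

    def to_fixed(x: float, int_bits: int, frac_bits: int) -> float:
        # Quantize x to a signed fixed-point value; the designer must
        # commit to the integer/fraction split for every operation.
        scale = 1 << frac_bits
        lo = -(1 << (int_bits + frac_bits - 1))
        hi = (1 << (int_bits + frac_bits - 1)) - 1
        q = max(lo, min(hi, round(x * scale)))   # saturate on overflow
        return q / scale

    # Q4.12: 4 integer bits (including sign) and 12 fractional bits.
    print(to_fixed(3.14159, 4, 12))   # 3.1416015625: fits nicely
    print(to_fixed(100.0, 4, 12))     # 7.999755859375: range mis-sized, saturates
    print(to_fixed(0.00001, 4, 12))   # 0.0: below the chosen precision

A floating-point representation spends some of its bits on an exponent instead, so the same datapath absorbs value ranges the designer never anticipated; that is the ease-of-use argument for flexible floating-point IP.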