Insights From The AI Hardware & Edge AI Summit

The good, bad, and unknowns of AI, and what’s missing for design and verification.


By Ashish Darbari, Fabiana Muto, and Nicky Khodadad

In today’s rapidly changing technology landscape, artificial intelligence (AI) is more than a buzzword. It is transforming businesses and societies. From advances in scalable AI methodology to urgent calls for sustainability, the AI Hardware & Edge AI Summit, recently held in London, sparked vibrant discussions that will shape the future of technology and our lives.

At the forefront of AI innovation stands Rich Sutton’s profound concept, “The Bitter Lesson.” According to Sutton (2019), the most significant advances in AI are made not through precisely crafted rules or human-like programming, but through scalable techniques that make use of rising processing capacity. This viewpoint, which was discussed during the summit, pushes for the creation of self-learning and adaptable AI systems that can scale dynamically with computational resources. Dan Wilkinson’s (Imagination Technologies) use of Sutton’s principles in “Foundation Models at the Edge” exemplifies the practical application of scalable approaches, extending AI capabilities to edge devices while optimizing energy efficiency and computational flexibility. Sutton’s paradigm, which emphasizes adaptability over rigidity, offers a path for realizing AI’s full potential across a wide range of environments and applications.

The Green Problem with AI

Manoj Saxena, founder and director of the Responsible AI Institute, emphasized the crucial need for sustainable AI, particularly in terms of its environmental impact and energy consumption. According to Saxena, everyday operations such as querying large language models like ChatGPT can require 1,000 times more energy than typical Google searches, resulting in significant carbon emissions. Preetipadma (2020) reports that Google’s AlphaGo Zero emitted 96 tonnes of CO2 over a 40-day training period, comparable to the carbon footprint of 1,000 hours of air travel or 23 American homes. Hao (2019) found that training a single GPU-based deep learning model for natural language processing (NLP) can emit more than 626,000 pounds of CO2, equivalent to the lifetime emissions of five cars. Furthermore, Saxena’s calculations show that AI data centers could consume up to 25% of total American electricity by 2030, a significant rise from their current share of less than 4%.
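
To put the scale of these numbers in perspective, here is a simple back-of-envelope sketch in Python. Only the 1,000x ratio comes from Saxena’s remarks; the per-search energy figure and the daily query volume are illustrative assumptions of ours, not figures cited at the summit.

```python
# Back-of-envelope sketch: rough annual energy for LLM queries.
# SEARCH_WH and QUERIES_PER_DAY are illustrative assumptions, not measured values;
# only the 1,000x multiplier reflects the ratio quoted above.

SEARCH_WH = 0.3               # assumed energy per conventional web search, in Wh
LLM_MULTIPLIER = 1_000        # LLM query ~1,000x the energy of a search (Saxena)
QUERIES_PER_DAY = 10_000_000  # assumed daily LLM query volume, for illustration

llm_wh = SEARCH_WH * LLM_MULTIPLIER           # energy per LLM query, in Wh
daily_kwh = llm_wh * QUERIES_PER_DAY / 1_000  # daily energy at the assumed volume
annual_gwh = daily_kwh * 365 / 1_000_000      # annual energy, in GWh

print(f"Energy per LLM query: {llm_wh:.0f} Wh")
print(f"Daily load at assumed volume: {daily_kwh:,.0f} kWh")
print(f"Annual load at assumed volume: {annual_gwh:,.0f} GWh")
```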

In response to these findings, Saxena calls for an “energy revolution.” Addressing the expected exponential growth in energy consumption by AI data centers presents a formidable challenge for future sustainability initiatives, requiring joint effort from researchers, developers, and legislators. The collaborative goal goes beyond technological innovation to include ethical development practices that reduce AI’s environmental impact and ensure it contributes positively to global sustainability efforts.

Edge AI for Health Care

Professor Kanjo discussed democratizing AI at the extreme edge using tinyML. She highlighted the potential of edge AI in health care. With stress levels rising, there is growing interest in designing effective stress management systems capable of detecting and reducing stress.

Deep neural networks (DNNs) have demonstrated the capacity to detect stress effectively; however, current systems frequently rely on cloud infrastructure or bulky sensors for processing. The introduction of tinyML provides a pathway for bridging this gap and enabling ubiquitous intelligent systems. Kanjo’s approach advocates context-aware stress detection, using a microcontroller to continuously monitor physical activity and thereby reduce the motion artefacts that affect heart rate and electrodermal activity measurements during stress assessments.

By deploying two DNNs on a single, resource-constrained microcontroller, Kanjo’s team reported high accuracies of 88% for stress identification and 98% for activity identification, improving both accuracy and privacy by eliminating the need to retain or transmit sensitive health data, which we believe is remarkable.
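
To make the control flow of that idea concrete, the Python sketch below mimics how an activity model might gate a stress model on a constrained device. The function names, thresholds, and signal shapes are hypothetical stand-ins of ours; the deployed system runs quantized networks on a microcontroller with an embedded inference runtime, which this sketch does not attempt to reproduce.

```python
# Minimal sketch of the context-aware, two-model idea described above.
# All models and thresholds here are hypothetical stand-ins for illustration.

from dataclasses import dataclass

@dataclass
class SensorWindow:
    accel: list[float]   # accelerometer samples (activity context)
    hr: list[float]      # heart-rate samples
    eda: list[float]     # electrodermal-activity samples

def classify_activity(window: SensorWindow) -> str:
    """Hypothetical stand-in for the on-device activity DNN."""
    motion = sum(abs(a) for a in window.accel) / max(len(window.accel), 1)
    return "active" if motion > 0.5 else "resting"

def classify_stress(window: SensorWindow) -> str:
    """Hypothetical stand-in for the on-device stress DNN."""
    mean_hr = sum(window.hr) / max(len(window.hr), 1)
    mean_eda = sum(window.eda) / max(len(window.eda), 1)
    return "stressed" if mean_hr > 90 and mean_eda > 4.0 else "calm"

def assess(window: SensorWindow) -> str:
    # Run the activity model first; its context gates the stress decision so
    # that movement-driven rises in heart rate and EDA are not mislabelled as stress.
    activity = classify_activity(window)
    if activity == "active":
        return "defer"   # physiological signals are confounded by movement
    return classify_stress(window)

print(assess(SensorWindow(accel=[0.1, 0.2], hr=[95, 97], eda=[4.5, 4.8])))    # "stressed"
print(assess(SensorWindow(accel=[0.9, 1.1], hr=[110, 112], eda=[5.0, 5.2])))  # "defer"
```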

Mirjo’s application, “Tag in the Park,” is an example of this paradigm, combining AI, Bluetooth Low Energy (BLE), and Near-Field Communication (NFC) technologies to provide interactive, gamified experiences in public settings. This method encourages not only user engagement but also physical movement and exploration, demonstrating AI’s potential to transform daily interactions by intelligently responding to environmental stimuli.

Test and Verification

We found the summit deeply insightful. However, we felt the conference focused heavily on design, power and energy, and applications of AI, and not as much on test and verification. As verification experts who value quality through the application of formal methods, we see plenty of verification challenges for hardware in our projects, including functional, security, safety, and low power.

As AI systems are now trained with 405 billion parameters on 16,000 GPUs, it goes without saying that testing the functionality of these systems is becoming a major challenge. High-bandwidth memory, multi-level cache-coherent memory subsystems, and high-performance NoCs all have to be verified, and arithmetic implementations spanning number formats such as FP8, FP16, and FP32 must be shown not to lose accuracy in ways that invalidate the results of learning.
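
As a toy illustration of the precision concern (and not a substitute for formal analysis), the NumPy sketch below accumulates the same dot product in FP32 and FP16 and compares both against an FP64 reference. NumPy has no native FP8 type, so FP8 is omitted here, and real accelerator datapaths with mixed-precision accumulators will behave differently.

```python
# Compare reduced-precision accumulation of a dot product against an FP64 reference.

import numpy as np

rng = np.random.default_rng(0)
a = rng.random(1 << 16, dtype=np.float32)   # 65,536 "activations" in [0, 1)
b = rng.random(1 << 16, dtype=np.float32)   # 65,536 "weights" in [0, 1)

ref = np.dot(a.astype(np.float64), b.astype(np.float64))  # FP64 reference
fp32 = np.dot(a, b)                                        # FP32 multiply-accumulate
prod16 = a.astype(np.float16) * b.astype(np.float16)       # FP16 products
fp16 = np.sum(prod16, dtype=np.float16)                    # FP16 accumulation

print(f"FP32 relative error: {abs(fp32 - ref) / ref:.2e}")
print(f"FP16 relative error: {abs(float(fp16) - ref) / ref:.2e}")
```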

One of the most formidable challenges, however, is dealing with hallucinations, something that goes well beyond the hardware layer.

Summary

Moving forward, integrating scalable, sustainable, and context-aware AI approaches will continue to change businesses and improve user experiences. Future research should prioritize ethical and environmental aspects to ensure AI’s responsible and meaningful integration into society. Last but not least, validation and verification with formal methods will play a crucial role in this.


