Standard Benchmarks For AI Innovation

Lack of a common understanding of performance is hampering AI growth.


There is no standard measurement for machine learning performance today, so there is no single answer to how companies should build a processor for ML across all use cases while balancing compute and memory constraints. For years, every group has picked a definition and a test to suit its own needs. This lack of a common understanding of performance hinders customers’ buying decisions, slows the growth of the industry, and limits the rate of AI innovation in the world today.
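To see how quickly ad-hoc definitions diverge, consider a minimal, deliberately naive Python sketch in which two groups measure the same workload but choose different batch sizes and warm-up policies. The workload and the methodology choices here are hypothetical stand-ins, not any vendor’s actual method:

```python
import time
import numpy as np

def run_inference(batch, weights):
    # Stand-in for a model: one dense layer with ReLU (hypothetical workload).
    return np.maximum(batch @ weights, 0.0)

def benchmark(batch_size, warmup, iters, features=1024, outputs=1024):
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((features, outputs)).astype(np.float32)
    batch = rng.standard_normal((batch_size, features)).astype(np.float32)
    for _ in range(warmup):            # warm-up runs hide one-time costs
        run_inference(batch, weights)
    start = time.perf_counter()
    for _ in range(iters):
        run_inference(batch, weights)
    elapsed = time.perf_counter() - start
    throughput = batch_size * iters / elapsed   # samples per second
    latency_ms = 1000 * elapsed / iters         # ms per batch
    return throughput, latency_ms

# Two groups, same kernel, different methodologies: the reported
# numbers are not comparable with each other.
print(benchmark(batch_size=1,   warmup=0,  iters=100))  # latency-focused, cold start
print(benchmark(batch_size=256, warmup=10, iters=100))  # throughput-focused, warmed up
```

Neither measurement is wrong, but without an agreed definition of what to measure and how, the two results cannot be used to compare processors, which is exactly the gap a standard benchmark closes.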

To solve these challenges and accelerate innovation in the industry, we need standard benchmarks, datasets, and best practices across all markets. Arm and MLCommons, a global engineering consortium, are working together to push the industry forward in all three areas. Combining the three creates the conditions for sustainable, healthy growth of breakthrough applications for the world.

  • Benchmarks: Benchmarks have a tremendous impact on end users and consumers, with results informing purchasing decisions worth one billion USD and growing rapidly. MLCommons has defined ML performance for the industry with MLPerf, which has received over 2000 formal submissions from 30 organizations.
  • Datasets: Before you can have a benchmark in AI, you have to start with good data. Innovation can only occur when there are open datasets that both commercial and academic entities can use. To help enable open datasets, MLCommons is creating the People’s Speech, with over 80,000 hours of diverse-language speech. It is the largest such dataset available for industry-wide use, the ImageNet of speech datasets. (See the sketch after this list for one way to read it.)
  • Best practices: To mature, the industry needs best practices. To support them, MLCommons runs an initiative called MLCube (https://github.com/mlperf/mlcube), which provides portable models for experimentation and benchmarking. MLCommons also hosts working groups focused on system design, logging, and power measurement.
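As an illustration of how such an open dataset can be consumed, the sketch below streams a few samples of the People’s Speech from its public Hugging Face mirror. The dataset ID, subset name, and field names are assumptions based on that mirror rather than anything specified by MLCommons, so treat this as a starting point:

```python
# Requires: pip install datasets
from datasets import load_dataset

# Stream the corpus instead of downloading ~80,000 hours of audio up front.
ds = load_dataset(
    "MLCommons/peoples_speech",  # assumed Hugging Face dataset ID
    "clean",                     # assumed subset name
    split="train",
    streaming=True,
)

# Peek at a few examples; field names ("id", "text") are assumptions
# based on the public mirror's schema.
for example in ds.take(3):
    print(example["id"], example["text"][:80])
```

Streaming keeps experimentation lightweight: a researcher can inspect or filter the corpus immediately, which is the kind of low barrier to entry that open datasets are meant to provide.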

Why MLCommons?

MLCommons is a global engineering nonprofit that takes a holistic approach to measuring performance and to creating datasets and best practices. Its benchmarking group enables open and transparent consensus among competing entities to create a level playing field, and it is supported by more than 30 founding members from the commercial and research communities. Its practices enforce replicability to ensure reliable results and are complementary to microbenchmark efforts. MLCommons keeps benchmarking affordable so that everyone can participate, growing the market and increasing innovation together. David Kanter elaborates on MLCommons below:

“We are at a unique inflection point in the development of ML and its ability to solve challenges in communication, access to information, health, safety, commerce, and education,” said David Kanter, Executive Director of MLCommons. “At MLCommons, the brightest minds from leading organizations across the globe will collaborate to accelerate machine learning innovation for the benefit of humanity as a whole.”

What is an IP provider’s place in MLCommons?

Arm and other AI pioneers are working with MLCommons to share and deliver industry insights and market trends in mobile, server, HPC, tiny embedded, and autonomous markets to ensure that the benchmarks are representative of real-world use cases. (See the MLCommons organization diagram below for more information.)


Fig. 1: MLCommons organization diagram.

Can companies act alone?

Companies often balance internal benchmarking with industry benchmarking. Internal efforts focus on improving processor IP for the needs of specific customers, while industry benchmarking improves processor IP for the broad needs of the industry. To strike this balance cost-efficiently, we need industry-wide support for creating the benchmarks, datasets, and best practices that empower the whole industry. Working collaboratively can be a powerful enabler of improved business performance, but successful collaboration rarely emerges out of the blue and should not be taken for granted. So if you are thinking about joining the effort, check out MLCommons for more information.


