Amdahl’s Law

The theoretical speedup when adding processors is always limited by the part of the task that cannot benefit from the improvement.

Description

Gene Amdahl (1922-2015) recognized that both software and hardware place limits on how fast an application can run, and he published a paper about it in 1967. The paper provided the theoretical speedup that could be expected for a defined task when additional hardware resources were added, and the result became known as Amdahl's Law. What it comes down to is that the theoretical speedup is always limited by the part of the task that cannot benefit from the improvement. The law is used in parallel computing to predict the effect that multiple processors will have on a workload.
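
Stated in its common modern form (the notation here is the conventional one, not Amdahl's original), if p is the fraction of a task's runtime that benefits from the improvement and N is the factor by which that fraction is sped up (for example, the number of processors), the overall speedup is:

\[
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
\]

As N grows without bound, S(N) approaches 1/(1 - p), so even a perfectly parallelized portion cannot push the overall speedup past the limit set by the serial remainder.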

Amdahl's Law is becoming particularly relevant when applied to machine learning, and especially to inferencing. Dedicated chips have large arrays of multiply/accumulate (MAC) units, which perform the most time-consuming operations in inference. But if the time spent there were theoretically reduced to zero, what would then consume all of the time?
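
A small sketch makes the point concrete. The 80% MAC fraction below is purely illustrative (the article gives no figures), and the function is simply Amdahl's Law from above:

def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the runtime
    is accelerated by a factor of n (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical inference workload: assume 80% of runtime is spent
# in MAC operations (an illustrative number, not from the article).
p_mac = 0.80

for n in (2, 10, 100, 1_000_000):
    print(f"MAC speedup {n:>9}x -> overall {amdahl_speedup(p_mac, n):.2f}x")

# Even with MAC time driven effectively to zero (n very large), the
# overall speedup is capped at 1 / (1 - p) = 5x here: the remaining
# 20% (data movement, activations, control logic, and so on) now
# consumes all of the time.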

There is no definitive answer to this question, and that uncertainty is driving the vast array of silicon being developed today. Companies must evaluate which end-user tasks they want to optimize and target those tasks. At the same time, because the field is evolving so rapidly, they must maintain a degree of flexibility so they can handle other tasks, or variants of the tasks originally targeted.