What Comes After Moore’s Law And Dennard Scaling?

FPGAs are helping shape the computing platforms of the future.

For decades, Moore’s Law has been a semiconductor industry mainstay, fueling a relentless progression in computing performance. Most industry experts now agree, however, that Moore’s Law is waning, with an end on the horizon due to a combination of physical limitations and economic factors. Dennard Scaling, the companion observation that power density stays roughly constant as transistors shrink, allowing clock speeds to rise without increasing power, was lost roughly 10 years ago. The industry is therefore at a critical juncture as it contemplates a future in which the two historical driving forces behind semiconductor development and design are no longer present.

As advances in process technology slow, attention is turning to emerging computing paradigms and alternative system architectures to drive future performance improvements. During Supercomputing 2016 (SC16) in Salt Lake City, for example, researchers discussed key trends and new approaches to enable the next wave of innovation in computer architecture. In this context, the first International Workshop on Post-Moore Era Supercomputing (PMES) convened at the conference to explore potential paths forward for semiconductor and system design in a post-transistor-scaling world.

As part of the workshop, Tom Conte detailed the IEEE’s Rebooting Computing Initiative and the International Roadmap for Devices and Systems. Meanwhile, Franck Cappello, Kazutomo Yoshii, Hal Finkel and Jason Cong presented a paper discussing an FPGA-powered true co-design flow for high-performance computing in the post-Moore’s Law era.

“Multicore scaling will end soon because of practical power limits. Dark silicon is becoming a major issue even more than the end of Moore’s Law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing the energy efficiency,” the researchers wrote. “FPGAs are gaining the spotlight as a computing resource; modern FPGAs include thousands of hard DSPs or floating-point units. In the preparatory stages, we addressed the technology gaps in adopting FPGA technology for HPC. Our goal is to design and implement ‘Re-form,’ an FPGA-powered true co-design flow that significantly improves the energy efficiency of the post-Moore era supercomputers.”

Field-programmable gate arrays were also the topic of a paper presented by Hiroka Ihara and Kenjiro Taura of the University of Tokyo, who explored how the devices could be used in future HPC scenarios.

“It is known that intermediate fabrics for FPGA accelerators can improve the end-user productivity through both program deployment free of logic synthesis and high portability,” Ihara and Taura explained. “[We discuss] one possible ecosystem for intermediate fab, where pipelined reconfigurable architecture is employed to enable scalable and parallel execution. Such [an] ecosystem can improve the utilization rate of FPGA accelerators in the field of supercomputing.”

Clearly, the continued evolution of HPC (alongside conventional computing) will require system designers to rethink traditional architectures and software while considering new devices and materials. As the papers above and Microsoft’s Project Catapult illustrate, FPGAs are already helping the semiconductor industry shape the computing platforms of the future. As we prepare for the post-Moore era, system architectures will need to evolve. Traditional processors coupled with FPGAs, together with technologies that minimize data movement, offer new approaches to improving performance and power efficiency, and a glimpse of things to come in next-generation systems.
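
To make that closing point concrete, below is a minimal sketch of what coupling a traditional processor with an FPGA can look like on the software side, using the standard OpenCL host API that several FPGA toolchains expose. It is an illustrative assumption, not code from any of the papers discussed above: the kernel name vadd, the bitstream file vadd_fpga.bin, and the vector-add workload are placeholders, and real deployments add error handling and vendor-specific setup.

```c
/*
 * Minimal sketch of the CPU + FPGA offload pattern using the standard
 * OpenCL host API (the programming model exposed by several FPGA
 * toolchains). The kernel name "vadd", the bitstream file
 * "vadd_fpga.bin", and the vector-add workload are illustrative
 * placeholders, not artifacts of the papers discussed above.
 *
 * Build (Linux, with an OpenCL SDK installed):  gcc host.c -lOpenCL
 */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

#define N 1024

int main(void) {
    cl_int err;

    /* 1. Discover a platform and an accelerator-class (FPGA) device. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* 2. Load a precompiled FPGA bitstream; logic synthesis happens
     *    offline, not at run time. */
    FILE *f = fopen("vadd_fpga.bin", "rb");
    if (!f) { fprintf(stderr, "missing bitstream\n"); return 1; }
    fseek(f, 0, SEEK_END);
    size_t size = (size_t)ftell(f);
    rewind(f);
    unsigned char *binary = malloc(size);
    if (fread(binary, 1, size, f) != size) { fclose(f); return 1; }
    fclose(f);

    cl_program prog = clCreateProgramWithBinary(
        ctx, 1, &device, &size, (const unsigned char **)&binary, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel vadd = clCreateKernel(prog, "vadd", &err);
    if (err != CL_SUCCESS) { fprintf(stderr, "kernel not found\n"); return 1; }

    /* 3. Copy the inputs to the device once, run the kernel there, and
     *    read back the result. Minimizing these transfers is where much
     *    of the performance and energy benefit lies. */
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_mem d_a = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof(a), a, &err);
    cl_mem d_b = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof(b), b, &err);
    cl_mem d_c = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, &err);

    clSetKernelArg(vadd, 0, sizeof(cl_mem), &d_a);
    clSetKernelArg(vadd, 1, sizeof(cl_mem), &d_b);
    clSetKernelArg(vadd, 2, sizeof(cl_mem), &d_c);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, vadd, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, d_c, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);
    clFinish(queue);

    printf("c[42] = %f (expected 126.0)\n", c[42]);

    free(binary);
    return 0;
}
```

The structural point of the sketch is that the expensive step, logic synthesis, happens offline; at run time the host simply loads a precompiled bitstream, moves the data once, and launches the kernel, which is where the data-movement and energy-efficiency arguments above come from.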


