
Communication Algorithm-Architecture Co-Design for Distributed Deep Learning

Abstract—Large-scale distributed deep learning training has enabled the development of more complex deep neural network models that learn from larger datasets for sophisticated tasks. In particular, distributed stochastic gradient descent intensively invokes all-reduce operations for gradient updates, which dominate communication time during iterative training epochs. In this work, we identify the inefficiency of widely used all-reduce algorithms and the opportunity for algorithm-architecture co-design. We propose the MULTITREE all-reduce algorithm, which uses topology and resource-utilization awareness for efficient and scalable all-reduce operations and is applicable to different interconnect topologies. Moreover, we co-design the network interface to schedule and coordinate all-reduce messages for contention-free communication, working in synergy with the algorithm. Flow control is also simplified to exploit the bulk data transfer of large gradient exchanges. We evaluate the co-design with different all-reduce data sizes in a synthetic study, demonstrating its effectiveness on various interconnection network topologies, and with state-of-the-art deep neural networks in real-workload experiments. The results show that MULTITREE achieves 2.3× and 1.56× communication speedups, as well as up to 81% and 30% training-time reductions, compared with ring all-reduce and state-of-the-art approaches, respectively.
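As context for the abstract, below is a minimal sketch, in plain Python with NumPy, of the baseline ring all-reduce that MULTITREE is compared against: each worker's gradient is split into n chunks that circulate around the ring in (n-1) reduce-scatter steps followed by (n-1) all-gather steps. The function name, toy gradients, and single-process simulation are illustrative assumptions only; this is not the paper's MULTITREE algorithm, its network-interface co-design, or its evaluation setup.

import numpy as np

def ring_all_reduce(grads):
    """Sum-reduce equal-length gradient vectors (one per simulated worker) with the
    ring algorithm: (n-1) reduce-scatter steps followed by (n-1) all-gather steps."""
    n = len(grads)
    # Each worker splits its gradient into n chunks; one chunk per worker moves
    # to the next ring neighbor in every step.
    chunks = [[c.copy() for c in np.array_split(np.asarray(g, dtype=float), n)]
              for g in grads]

    # Reduce-scatter: after n-1 steps, worker i holds the fully reduced chunk (i+1) % n.
    for step in range(n - 1):
        for i in range(n):
            idx = (i - step) % n                 # chunk worker i forwards this step
            chunks[(i + 1) % n][idx] += chunks[i][idx]

    # All-gather: circulate the reduced chunks until every worker has all of them.
    for step in range(n - 1):
        for i in range(n):
            idx = (i + 1 - step) % n             # reduced chunk worker i forwards
            chunks[(i + 1) % n][idx] = chunks[i][idx].copy()

    return [np.concatenate(c) for c in chunks]

# Hypothetical toy example: 4 workers, 16-element gradients (values 1.0 .. 4.0).
grads = [np.full(16, w + 1.0) for w in range(4)]
reduced = ring_all_reduce(grads)
assert all(np.allclose(r, reduced[0]) for r in reduced)  # every worker agrees
print(reduced[0][:4])                                    # each element sums to 10.0

The point of the sketch is the step count: every chunk traverses the ring hop by hop, which is the kind of serialized, topology-oblivious communication the paper's topology-aware, contention-free algorithm-architecture co-design targets.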

Jiayi Huang (UCSB), Pritam Majumder (Texas A&M), Sungkeun Kim (Texas A&M), Abdullah Muzahid (Texas A&M), Ki Hwan Yum (Texas A&M), and Eun Jung Kim (Texas A&M)

 

Find the technical paper here.

Published in the 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA).

 

 


