Abstract:
As a distributed machine learning framework, federated learning has received considerable attention in recent years and has been researched and applied in various scenarios. However, system heterogeneity arising from the differing physical characteristics of terminal devices leads to the straggler effect, making the practical deployment of federated learning challenging. We therefore propose a semi-asynchronous federated optimization method based on buffer pre-aggregation. This method allows every participant to engage in training through pre-aggregation and establishes a training time framework based on the pre-aggregation time. It updates the model adaptively using a semi-asynchronous communication scheme combined with lag factors, improving communication efficiency while maintaining stable accuracy. Experimental results on datasets demonstrate that our proposed method can effectively accelerate the training process of federated learning compared to existing federated optimization methods.
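The abstract's core mechanism can be illustrated with a minimal sketch: clients deposit updates into a server-side buffer, and each buffered update is discounted by a lag factor that shrinks with its staleness (how many server rounds behind the update is) before aggregation. The buffer size, the specific lag-factor form `1/(1+τ)^α`, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of semi-asynchronous buffered aggregation with a staleness
# (lag) discount. The lag-factor form and buffer policy are assumed
# for illustration; the paper's exact method may differ.

def lag_factor(staleness, alpha=0.5):
    """Down-weight an update by how many server rounds it lags behind."""
    return 1.0 / (1.0 + staleness) ** alpha

def aggregate(global_model, buffer, lr=1.0):
    """Apply a lag-weighted, normalized average of buffered client updates.

    `buffer` holds (update_vector, staleness) pairs; aggregation fires
    once the buffer is full, without waiting for the slowest client.
    """
    total = sum(lag_factor(s) for _, s in buffer)
    merged = [0.0] * len(global_model)
    for update, staleness in buffer:
        w = lag_factor(staleness) / total
        for i, u in enumerate(update):
            merged[i] += w * u
    return [g + lr * m for g, m in zip(global_model, merged)]

# Toy run: three clients report updates with different staleness values;
# the stalest update (staleness 5) contributes the least weight.
model = aggregate([0.0, 0.0],
                  [([1.0, 0.0], 0), ([0.0, 1.0], 2), ([1.0, 1.0], 5)])
```

Because stragglers' late updates are merely down-weighted rather than discarded (or waited for), every participant still contributes to training, which is the property the abstract attributes to pre-aggregation.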
Source:
2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024
Year: 2024
Page: 13-18