Abstract:
Federated learning (FL) has been extensively studied as a means of ensuring data privacy while cooperatively training a global model across decentralized devices. Among various FL approaches, asynchronous federated learning (AFL) has a distinct advantage in overcoming the straggler problem: the server aggregates each local model as soon as it is received. However, AFL still faces several challenges in large-scale real-world applications, such as the stale-model problem and modality heterogeneity across geographically distributed industrial devices with different functions. In this article, we propose a multimodal fusion framework for AFL to address these problems. Specifically, a novel multilinear block fusion model is designed to fuse multimodal information, serving as an enhancement for perceiving and transmitting the important modality and block during local training. An adaptive aggregation strategy is further developed to fully exploit heterogeneous data by allowing the global model to favor a received local model according to both its freshness and the importance of its local data. Extensive simulations with different data distributions demonstrate the superiority of the proposed framework in heterogeneity scenarios: it significantly improves modality-based generalization without sacrificing convergence speed or increasing communication cost.
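The adaptive aggregation idea in the abstract — weighting each incoming local model by its freshness (staleness) and a data-importance score — can be sketched as follows. This is a minimal illustrative example, not the paper's exact rule: the polynomial staleness decay, the `importance` score in [0, 1], and the constants `a` and `base_lr` are all assumptions introduced here.

```python
import numpy as np

def staleness_weight(tau, a=0.5):
    """Polynomial decay: the older the local model (larger tau),
    the smaller its weight. `a` is a hypothetical decay exponent."""
    return (tau + 1.0) ** (-a)

def aggregate(global_w, local_w, tau, importance, base_lr=0.6):
    """Mix the global model with one received local model.

    alpha combines freshness (staleness decay) and a data-importance
    score in [0, 1]; the functional form is an illustrative assumption.
    """
    alpha = base_lr * staleness_weight(tau) * importance
    return (1.0 - alpha) * global_w + alpha * local_w

# Toy usage: a fresh update (tau=0) moves the global model further
# toward the local model than a stale one (tau=8).
g = np.zeros(3)
fresh = aggregate(g, np.ones(3), tau=0, importance=1.0)  # alpha = 0.6
stale = aggregate(g, np.ones(3), tau=8, importance=1.0)  # alpha = 0.2
```

Because aggregation happens per received model rather than per synchronized round, this is the server-side step AFL performs immediately on arrival, which is what sidesteps stragglers.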
Source: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
ISSN: 1551-3203
Year: 2024
Issue: 12
Volume: 20
Page: 14083-14093
11.700 (JCR@2023)
ESI Highly Cited Papers on the List: 0