Federated learning has received extensive attention as a new distributed learning framework that enables joint modeling without sharing raw data. However, it still suffers from a communication bottleneck: only a fraction of clients can participate in each round of joint training, which slows convergence. To address this problem, we propose a federated learning aggregation algorithm that selects clients from a global perspective, taking the data distributions of the participating clients into account. The server builds a feature distribution table from these distributions, and each time it selects a set of clients for training it maximizes feature coverage so that the global data are learned more fully. Specifically, client selection is not random: from the currently visible clients, the server constructs the set with the largest mutual distribution difference, and after each round of training the selected clients are moved to the end of the selection chain, until every client has been selected. We demonstrate the effectiveness of our work through comprehensive experiments and comparisons with two of the most popular algorithms. Specifically, our algorithm achieves an average speedup of 40% over these traditional baselines.
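As a rough illustration of the selection rule described above, the sketch below greedily builds a client set with large pairwise distribution differences and rotates selected clients to the end of the chain. The per-client distribution vectors, the L1 distance, and the helper names (`select_diverse_clients`, `chain_tail`) are assumptions made for illustration; the abstract does not specify the paper's actual distance measure or implementation.

```python
import numpy as np

def select_diverse_clients(distributions, k, excluded=None):
    """Greedily pick k clients whose distributions differ most from
    one another (one plausible reading of the paper's 'largest mutual
    distribution difference' criterion; the exact rule is an assumption).
    distributions: array of shape (num_clients, num_features)."""
    excluded = excluded or set()
    candidates = [c for c in range(len(distributions)) if c not in excluded]
    # Seed with the client farthest (in L1 distance) from the mean
    # distribution of the remaining candidates.
    mean_dist = np.mean([distributions[c] for c in candidates], axis=0)
    selected = [max(candidates,
                    key=lambda c: np.abs(distributions[c] - mean_dist).sum())]
    while len(selected) < min(k, len(candidates)):
        remaining = [c for c in candidates if c not in selected]
        # Add the client maximizing total L1 distance to those already chosen,
        # so the selected set covers as many distinct features as possible.
        best = max(remaining,
                   key=lambda c: sum(np.abs(distributions[c] - distributions[s]).sum()
                                     for s in selected))
        selected.append(best)
    return selected

# Example: 8 clients with 4-class label distributions; select 3 per round.
rng = np.random.default_rng(0)
dists = rng.dirichlet(np.ones(4), size=8)
chain_tail = set()  # clients already trained in this pass over the chain
while len(chain_tail) < len(dists):
    round_clients = select_diverse_clients(dists, k=3, excluded=chain_tail)
    # ... run one federated training round on round_clients ...
    chain_tail.update(round_clients)  # move them to the end of the chain
    print("round clients:", round_clients)
```

The rotation via `chain_tail` mirrors the abstract's description that selected clients are placed at the end of the chain after each round, so every client eventually participates before any client is revisited.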