Multiple Parallel Federated Learning via Over-the-Air Computation

Gaoxin Shi, Shuaishuai Guo, Jia Ye, Nasir Saeed, Shuping Dang

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates multiple parallel federated learning (FL) in cellular networks, where a base station (BS) schedules several FL tasks in parallel, each involving its own group of devices. To reduce communication overhead, over-the-air computation is introduced: the superposition property of the multiple-access channel (MAC) is exploited to accomplish the aggregation step. Since all devices use the same radio resources to transmit their local updates to the BS, a zero-forcing receive combiner is employed to separate the received signals of the different tasks and mitigate mutual interference across groups. We analyze the impact of the receive combiner and of device selection on the convergence of the proposed multiple parallel FL framework, and we formulate an optimization problem that jointly designs the receive combiner vectors and the device selection to improve FL performance. We address the problem by decoupling it into two sub-problems and solving them alternately: successive convex approximation (SCA) is adopted to derive the receive combiner vectors, and the device-scheduling sub-problem is then solved with a greedy algorithm. Simulation results demonstrate that the proposed framework effectively mitigates the straggler issue in FL and achieves near-optimal performance on all tasks.
Original language: English (US)
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Open Journal of the Communications Society
State: Published - Jul 28 2022
