Online Federated Learning via Non-Stationary Detection and Adaptation Amidst Concept Drift

Bhargav Ganguly, Vaneet Aggarwal

Research output: Contribution to journal › Article › peer-review


Federated Learning (FL) is an emerging domain in the broader context of artificial intelligence research. FL methodologies assume distributed model training across a collection of clients and a server, with the main goal of learning an optimal global model under restrictions on data sharing due to privacy concerns. It is worth highlighting that the diverse existing literature in FL mostly assumes stationary data generation processes; such an assumption is unrealistic in real-world settings where concept drift occurs due to, for instance, seasonal or periodic observations or faults in sensor measurements. In this paper, we introduce a multiscale algorithmic framework which combines the theoretical guarantees of the FedAvg and FedOMD algorithms in near-stationary settings with a non-stationary detection and adaptation technique to improve FL generalization performance in the presence of concept drifts. The proposed framework achieves $\tilde{\mathcal{O}}\big(\min\{\sqrt{LT},\, \Delta^{1/3}T^{2/3} + \sqrt{T}\}\big)$ dynamic regret over $T$ rounds for a general convex loss function, where $L$ is the number of times non-stationary drifts occurred and $\Delta$ is the cumulative magnitude of drift experienced within the $T$ rounds.
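The detection-and-restart principle described in the abstract can be illustrated with a minimal single-machine sketch. This is hypothetical illustrative code, not the paper's algorithm: FedAvg runs on a toy one-dimensional quadratic objective whose data distribution shifts mid-stream, and a simple fixed-margin loss test restarts the base learner when drift is suspected. The paper's actual test is a multiscale procedure over geometric time blocks; the fixed margin below is a simplification.

```python
import random

def fedavg_round(global_w, client_data, lr=0.1):
    """One FedAvg round on a 1-D least-squares problem: each client takes
    a gradient step from the global weight, then the server averages."""
    updates = []
    for xs in client_data:
        grad = sum(2 * (global_w - x) for x in xs) / len(xs)
        updates.append(global_w - lr * grad)
    return sum(updates) / len(updates)

class DriftDetector:
    """Flag drift when the latest round's loss exceeds the best loss seen
    in the current (near-stationary) phase by a margin.  (Hypothetical
    threshold rule standing in for the paper's multiscale test.)"""
    def __init__(self, margin=0.5):
        self.margin = margin
        self.best = float('inf')

    def update(self, loss):
        drifted = loss > self.best + self.margin
        if drifted:
            self.best = float('inf')   # start a fresh phase after a restart
        else:
            self.best = min(self.best, loss)
        return drifted

random.seed(0)
w, detector, restarts = 0.0, DriftDetector(margin=0.5), 0
for t in range(100):
    mean = 0.0 if t < 50 else 3.0      # concept drift at round t = 50
    clients = [[random.gauss(mean, 0.1) for _ in range(20)] for _ in range(5)]
    w = fedavg_round(w, clients)
    loss = sum((w - x) ** 2 for xs in clients for x in xs) / 100
    if detector.update(loss):
        restarts += 1
        w = 0.0                        # restart the base learner
```

Running the loop, the detector fires once, at the drift point, and the restarted model then re-converges to the new optimum near 3.0; before the drift the loss stays flat, so no false alarms are raised.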
Original language: English (US)
Pages (from-to): 1-11
Number of pages: 11
State: Published - Aug 1 2023
Externally published: Yes

Bibliographical note

KAUST Repository Item: Exported on 2023-08-31
Acknowledgements: This work was supported in part by Cisco Inc., Defense Advanced Research Projects Agency (DARPA), under Grant W911NF2020003 and in part by the National Science Foundation (NSF) under Grant CCF-2149588.

