Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities. FL has seen successful deployment in production environments, and it has been adopted in services such as virtual keyboards, auto-completion, item recommendation, and several IoT applications. However, FL comes with the challenge of performing training over largely heterogeneous datasets, devices, and networks that are outside the control of the centralized FL server. Motivated by this inherent setting, we take a first step towards characterizing the impact of device and behavioral heterogeneity on the trained model. We conduct an extensive empirical study spanning close to 1.5K unique configurations on five popular FL benchmarks. Our analysis shows that these sources of heterogeneity have a major impact on both model performance and fairness, shedding light on the importance of considering heterogeneity in FL system design.
Original language: English (US)
Title of host publication: Proceedings of the 2nd European Workshop on Machine Learning and Systems
Number of pages: 9
State: Published - Apr 5 2022
Bibliographical note: KAUST Repository Item: Exported on 2022-12-09
Acknowledgements: We thank Muhammad Bilal for his help with the work.