Abstract
Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model's inputs can identify unwanted or anomalous behaviors, hinting at a potential path towards increasing trust in ML-based solutions.
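The core idea of the abstract, sweeping a model's inputs and checking its outputs for unwanted patterns, can be illustrated with a minimal sketch. The `select_bitrate` function below is a hypothetical stand-in, not the Pensieve model or any interface described in the paper; the input sweep and the monotonicity check are the point.

```python
import numpy as np

# Hypothetical stand-in for a trained rate-adaptation policy; in the paper's
# case study this would be the Pensieve model, whose real interface differs.
def select_bitrate(buffer_s, throughput_mbps):
    # Toy heuristic used here only so the sweep below is runnable.
    levels = np.array([0.3, 0.75, 1.2, 1.85, 2.85, 4.3])  # bitrate ladder (Mbps)
    safe = levels[levels <= throughput_mbps * (0.5 + 0.1 * buffer_s)]
    return safe.max() if safe.size else levels[0]

# Systematic sweep over the input space: vary one input at a time and flag
# non-monotonic decisions (e.g., the chosen bitrate dropping as the buffer
# grows under fixed throughput), one kind of anomalous behavior such a sweep
# can surface.
anomalies = []
for thr in np.linspace(0.5, 5.0, 10):          # throughput (Mbps)
    prev = None
    for buf in np.linspace(0.0, 20.0, 41):     # buffer occupancy (seconds)
        choice = select_bitrate(buf, thr)
        if prev is not None and choice < prev:
            anomalies.append((thr, buf, prev, choice))
        prev = choice

print(f"found {len(anomalies)} non-monotonic decisions")
```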
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings of the 2019 Workshop on Network Meets AI & ML - NetAI'19 |
| Publisher | ACM Press |
| Pages | 29-36 |
| Number of pages | 8 |
| ISBN (Print) | 9781450368728 |
| DOIs | |
| State | Published - Aug 14 2019 |
Bibliographical note
Acknowledgements: We thank the anonymous reviewers for their feedback. We are grateful to Nikolaj Bjørner, Bernard Ghanem, Hao Wang, and Xiaojin Zhu for their valuable comments and suggestions. We also thank the Pensieve authors, in particular Mohammad Alizadeh and Hongzi Mao, for their help and feedback.