Cracking open the black box: What observations can tell us about reinforcement learning agents

Arnaud Dethise, Marco Canini, Srikanth Kandula

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous behaviors of the model, hinting at a potential path towards increasing trust in ML-based solutions.
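The input-exploration idea the abstract describes can be illustrated with a minimal sketch (not code from the paper): sweep a grid of inputs through a black-box rate-adaptation policy and flag states that violate a simple sanity property. The policy function below is a hypothetical stand-in (the paper studies a trained agent such as Pensieve), and the input ranges and the monotonicity check are assumptions for illustration.

def policy(throughput_mbps, buffer_s):
    """Hypothetical stand-in for a trained ABR agent: maps a throughput
    estimate (Mbps) and a playback buffer level (s) to a bitrate index.
    Replace with the real model's inference call."""
    bitrates_mbps = [0.3, 0.75, 1.2, 1.85, 2.85, 4.3]
    # Pick the highest sustainable bitrate; step down when the buffer is low.
    idx = max(i for i, b in enumerate(bitrates_mbps)
              if b <= throughput_mbps or i == 0)
    return max(0, idx - 1) if buffer_s < 4.0 else idx

# Systematically explore the input grid (ranges assumed for illustration).
throughputs = [0.5 * k for k in range(1, 11)]   # 0.5 .. 5.0 Mbps
buffers = [2.0 * k for k in range(0, 11)]       # 0 .. 20 s

# Flag a simple anomaly: raising the throughput estimate, all else equal,
# should never lower the chosen bitrate.
anomalies = []
for buf in buffers:
    choices = [policy(tp, buf) for tp in throughputs]
    for j in range(1, len(throughputs)):
        if choices[j] < choices[j - 1]:
            anomalies.append((buf, throughputs[j - 1], choices[j - 1],
                              throughputs[j], choices[j]))

if not anomalies:
    print("no monotonicity violations found on this grid")
for buf, tp1, c1, tp2, c2 in anomalies:
    print(f"buffer={buf:4.1f}s: bitrate index drops {c1}->{c2} "
          f"as throughput rises {tp1:.1f}->{tp2:.1f} Mbps")

With a real agent substituted for the stub, the same sweep extends to other invariants (for instance, decisions should not oscillate as the buffer grows), which is roughly the kind of systematic probing of model inputs the abstract refers to.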
Original language: English (US)
Title of host publication: Proceedings of the 2019 Workshop on Network Meets AI & ML - NetAI'19
Publisher: ACM Press
Pages: 29-36
Number of pages: 8
ISBN (Print): 9781450368728
DOIs
State: Published - Aug 14 2019

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: We thank the anonymous reviewers for their feedback. We are grateful to Nikolaj Bjørner, Bernard Ghanem, Hao Wang and Xiaojin Zhu for their valuable comments and suggestions. We also thank the Pensieve authors, in particular Mohammad Alizadeh and Hongzi Mao, for their help and feedback.
