Analyzing Learning-Based Networked Systems with Formal Verification

Arnaud Dethise, Marco Canini, Nina Narodytska

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



As more applications of (deep) neural networks emerge in the computer networking domain, the correctness and predictability of a neural agent's behavior for corner-case inputs are becoming crucial. To enable the formal analysis of agents with nontrivial properties, we bridge the gap between specifying intended high-level behavior and expressing low-level statements that can be directly encoded into an efficient verification framework. Our results show that, within minutes, one can establish the resilience of a neural network to adversarial attacks on its inputs, as well as formally prove properties that previously rested on educated guesses. Finally, we show how formal verification can help create an accurate visual representation of an agent's behavior, enabling visual inspection and improving its trustworthiness.
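To illustrate the kind of robustness query the abstract refers to, the sketch below certifies local adversarial robustness for a tiny ReLU network using interval bound propagation, a sound but incomplete technique. The network weights, the epsilon values, and the method itself are illustrative assumptions for this example only; they are not the paper's actual verification framework or models.

```python
# Minimal sketch: certifying L-infinity local robustness of a tiny
# 2-layer ReLU network via interval bound propagation (IBP).
# All weights and inputs below are made up for illustration.

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound: pick the interval end that minimizes each term.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies to each bound directly."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def certified_robust(x, eps, W1, b1, W2, b2, target):
    """True if class `target` wins for every input within eps of x."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Certified iff the target's lower bound beats every rival's upper bound.
    return all(lo[target] > hi[k] for k in range(len(lo)) if k != target)

# Hypothetical 2-input, 2-class network.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
x = [2.0, 0.0]

print(certified_robust(x, 0.1, W1, b1, W2, b2, target=0))  # True: small ball certified
print(certified_robust(x, 2.0, W1, b1, W2, b2, target=0))  # False: bounds too loose
```

Because IBP only over-approximates, a `False` result means the property could not be certified, not that a concrete adversarial example exists; complete verifiers resolve that gap at higher cost.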
Original language: English (US)
Title of host publication: IEEE INFOCOM 2021 - IEEE Conference on Computer Communications
ISBN (Print): 978-1-6654-3131-6
State: Published - 2021

Bibliographical note

KAUST Repository Item: Exported on 2021-07-29

