Adversarial examples on power systems state estimation

Ali Sayghe, Olugbenga Moses Anubi, Charalambos Konstantinou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Scopus citations

Abstract

The number of cyber-attacks targeting power system infrastructures is increasing at an alarming rate. Among them, False Data Injection Attacks (FDIAs) can disturb the normal operation of state estimation routines in power systems and potentially lead to outages. Several studies utilize machine learning algorithms to detect FDIAs with high accuracy. However, such algorithms can be susceptible to adversarial examples, which lower the accuracy of the detection model. Adversarial examples are crafted inputs intentionally designed to mislead machine learning algorithms. In this paper, we examine the effect of adversarial examples on machine learning algorithms used to detect FDIAs in state estimation. Specifically, we demonstrate the impact of poisoning and evasion adversarial attacks on Support Vector Machines (SVM) and Multilayer Perceptrons (MLP). The algorithms are tested on the IEEE 14-bus system using load data collected from the New York Independent System Operator (NYISO).
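As a rough illustration of the evasion attack the abstract describes, the sketch below trains a minimal linear detector on synthetic "normal vs. injected" measurement data and then perturbs a flagged sample against the detector's weight direction so it evades detection. All data, dimensions, and parameters here are assumptions for illustration only; they are not the paper's actual models or the NYISO/IEEE 14-bus data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement features: class 0 = normal, class 1 = FDIA.
X = np.vstack([rng.normal(0.0, 0.5, (100, 4)), rng.normal(2.0, 0.5, (100, 4))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Train a minimal logistic-regression detector by gradient descent
# (a stand-in for the SVM/MLP detectors studied in the paper).
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted FDIA probability
    g = p - y                                 # logistic-loss gradient signal
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def detect(x):
    """True if the detector flags the sample as an FDIA."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

# Evasion attack (FGSM-style for a linear model): step the malicious
# sample against the sign of the weights to lower its detection score.
x_attack = np.full(4, 2.0)            # an injected sample the detector flags
eps = 1.5                             # attacker's perturbation budget
x_adv = x_attack - eps * np.sign(w)   # adversarial (evasive) version

print(detect(x_attack), detect(x_adv))
```

With a linear model the gradient sign reduces to `sign(w)`, so a single step suffices; against an MLP the same idea uses the gradient of the loss with respect to the input, as in standard FGSM.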
Original language: English (US)
Title of host publication: 2020 IEEE Power and Energy Society Innovative Smart Grid Technologies Conference, ISGT 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Print): 9781728131030
DOIs
State: Published - Feb 1 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-13
