TY - GEN
T1 - Adversarial examples on power systems state estimation
AU - Sayghe, Ali
AU - Anubi, Olugbenga Moses
AU - Konstantinou, Charalambos
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-13
PY - 2020/2/1
Y1 - 2020/2/1
N2 - The number of cyber-attacks targeting power system infrastructures is increasing at an alarming rate. Among them, False Data Injection Attacks (FDIAs) can disrupt the normal operation of state estimation routines in power systems and potentially lead to outages. Several studies utilize machine learning algorithms to detect FDIAs with high accuracy. However, such algorithms can be susceptible to adversarial examples: crafted inputs intentionally designed to mislead machine learning models and lower the accuracy of the detection model. In this paper, we examine the effect of adversarial examples on machine learning algorithms used to detect FDIAs in state estimation. Specifically, we demonstrate the impact of poisoning and evasion adversarial attacks on Support Vector Machines (SVM) and Multilayer Perceptrons (MLP). The algorithms are tested on the IEEE 14-bus system using load data collected from the New York Independent System Operator (NYISO).
UR - https://ieeexplore.ieee.org/document/9087789/
UR - http://www.scopus.com/inward/record.url?scp=85086222863&partnerID=8YFLogxK
U2 - 10.1109/ISGT45199.2020.9087789
DO - 10.1109/ISGT45199.2020.9087789
M3 - Conference contribution
SN - 9781728131030
BT - 2020 IEEE Power and Energy Society Innovative Smart Grid Technologies Conference, ISGT 2020
PB - Institute of Electrical and Electronics Engineers Inc.
ER -