Optimal Energy Scheduling of Flexible Industrial Prosumers via Reinforcement Learning

Nick van den Bovenkamp, Juan S. Giraldo, Edgar Mauricio Salazar Duque, Pedro P. Vergara, Charalambos Konstantinou, Peter Palensky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper introduces an energy management system (EMS) that aims to minimize electricity operating costs using reinforcement learning (RL) with linear function approximation. The proposed EMS uses a Q-learning with tile coding (QLTC) algorithm and is compared to a deterministic mixed-integer linear programming (MILP) model with perfect forecast information. The comparison is performed in a case study of an industrial manufacturing company in the Netherlands, considering measured electricity consumption, PV generation, and wholesale electricity prices over one week of operation. The results show that the proposed EMS can adjust the prosumer's power consumption to exploit favorable prices. The electricity costs obtained using the QLTC algorithm are within 1% of those obtained with the MILP model. Furthermore, the results demonstrate that the QLTC model can generalize from previously learned control policies even when data are missing and can deploy actions that attain 80% of the MILP's optimal solution.
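The abstract describes Q-learning with linear function approximation via tile coding. The snippet below is a minimal, self-contained sketch of that general technique, not the authors' implementation: the 2-D normalized state, the five discrete power setpoints, the synthetic price signal, and the placeholder transition dynamics are all assumptions made here for illustration.

```python
import numpy as np

# Illustrative sketch of Q-learning with tile coding (QLTC) for a
# cost-minimizing prosumer. All dimensions, dynamics, and the toy price
# series are hypothetical, not taken from the paper.

N_TILINGS = 8          # number of overlapping tilings
TILES_PER_DIM = 10     # tiles per state dimension in each tiling
N_ACTIONS = 5          # discrete power setpoints (assumed)
ALPHA = 0.1 / N_TILINGS
GAMMA = 0.95
EPSILON = 0.1

rng = np.random.default_rng(0)
# Each tiling is randomly offset so features generalize to nearby states.
offsets = rng.uniform(0, 1.0 / TILES_PER_DIM, size=(N_TILINGS, 2))
n_features = N_TILINGS * TILES_PER_DIM ** 2
weights = np.zeros((N_ACTIONS, n_features))  # linear Q approximation

def active_tiles(state):
    """Indices of the one active tile per tiling for a 2-D state in [0, 1]^2."""
    idx = []
    for t in range(N_TILINGS):
        coords = np.clip((state + offsets[t]) * TILES_PER_DIM,
                         0, TILES_PER_DIM - 1).astype(int)
        idx.append(t * TILES_PER_DIM ** 2
                   + coords[0] * TILES_PER_DIM + coords[1])
    return np.array(idx)

def q_values(state):
    # Q(s, a) is the sum of the weights of the active tiles.
    return weights[:, active_tiles(state)].sum(axis=1)

def step(state, action, price):
    """Toy transition: reward is the negative electricity cost of the setpoint."""
    power = action / (N_ACTIONS - 1)   # normalized consumption level
    reward = -price * power            # cost-minimizing objective
    next_state = rng.uniform(size=2)   # placeholder dynamics
    return next_state, reward

state = rng.uniform(size=2)
for t in range(1000):
    price = 0.5 + 0.4 * np.sin(t / 24 * 2 * np.pi)  # synthetic daily price cycle
    if rng.random() < EPSILON:                      # epsilon-greedy exploration
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q_values(state)))
    next_state, reward = step(state, action, price)
    # One-step Q-learning update on the weights of the active tiles.
    td_target = reward + GAMMA * q_values(next_state).max()
    tiles = active_tiles(state)
    td_error = td_target - weights[action, tiles].sum()
    weights[action, tiles] += ALPHA * td_error
    state = next_state
```

Tile coding keeps the update cheap: only the N_TILINGS active weights per action change each step, while the overlapping offset tilings let learned values generalize to neighboring states, which is consistent with the abstract's observation that the policy generalizes even with missing data.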
Original language: English (US)
Title of host publication: 2023 IEEE Belgrade PowerTech
Publisher: IEEE
State: Published - Jun 25 2023

