TY - JOUR
T1 - Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits
AU - Ramesh, Aditya
AU - Rauber, Paulo
AU - Conserva, Michelangelo
AU - Schmidhuber, Juergen
N1 - KAUST Repository Item: Exported on 2022-09-19
PY - 2022/9/9
Y1 - 2022/9/9
N2 - An agent in a nonstationary contextual bandit problem should balance between exploration and the exploitation of (periodic or structured) patterns present in its previous experiences. Handcrafting an appropriate historical context is an attractive alternative to transform a nonstationary problem into a stationary problem that can be solved efficiently. However, even a carefully designed historical context may introduce spurious relationships or lack a convenient representation of crucial information. In order to address these issues, we propose an approach that learns to represent the relevant context for a decision based solely on the raw history of interactions between the agent and the environment. This approach relies on a combination of features extracted by recurrent neural networks with a contextual linear bandit algorithm based on posterior sampling. Our experiments on a diverse selection of contextual and noncontextual nonstationary problems show that our recurrent approach consistently outperforms its feedforward counterpart, which requires handcrafted historical contexts, while being more widely applicable than conventional nonstationary bandit algorithms. Although it is very difficult to provide theoretical performance guarantees for our new approach, we also prove a novel regret bound for linear posterior sampling with measurement error that may serve as a foundation for future theoretical work.
UR - http://hdl.handle.net/10754/681559
UR - https://direct.mit.edu/neco/article/doi/10.1162/neco_a_01539/112951/Recurrent-Neural-Linear-Posterior-Sampling-for
DO - 10.1162/neco_a_01539
M3 - Article
C2 - 36112923
SN - 0899-7667
SP - 1
EP - 41
JO - Neural Computation
JF - Neural Computation
ER -