An anti-Hebbian learning rule to represent drive motivations for reinforcement learning

Varun Raj Kompella, Sohrob Kazerounian, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

We present a motivational system for an agent undergoing reinforcement learning (RL) that enables it to balance multiple drives, each satiated by a different type of stimulus. Inspired by drive reduction theory, the system uses Minor Component Analysis (MCA) to model the agent's internal drive state, and modulates each incoming stimulus according to how strongly it satiates the currently active drive. The agent's policy changes continually through least-squares temporal difference (LSTD) updates. The agent automatically seeks stimuli that satiate the most active internal drives first, then the next most active drives, and so on. We prove that our algorithm is stable under certain conditions. Experimental results illustrate its behavior. © 2014 Springer International Publishing Switzerland.
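To make the MCA step concrete, here is a minimal, hypothetical sketch of a generic anti-Hebbian minor-component update, not the authors' exact algorithm: a linear unit's weights are weakened in proportion to the co-activity of input and output and then renormalized, which (for a small learning rate) behaves like power iteration on I - eta*C and so converges to the eigenvector of the smallest eigenvalue of the input covariance C. The synthetic data, the learning rate eta, and the renormalization step are all assumptions for illustration; the drive modulation and LSTD policy updates described in the abstract are omitted.

```python
# Hypothetical sketch: anti-Hebbian learning of the minor component of the
# input covariance. Not the paper's algorithm; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with one clearly smallest covariance direction.
d = 4
A = rng.normal(size=(d, d))
cov = A @ A.T + np.diag([5.0, 3.0, 1.0, 0.05])  # assumed covariance
X = rng.multivariate_normal(np.zeros(d), cov, size=20000)

w = rng.normal(size=d)
w /= np.linalg.norm(w)
eta = 0.01  # assumed learning rate; must be small relative to 1/lambda_max

for x in X:
    y = w @ x               # scalar output of the linear unit
    w -= eta * y * x        # anti-Hebbian: weaken weights on co-activity
    w /= np.linalg.norm(w)  # renormalize to keep w on the unit sphere

# Compare against the true minor eigenvector of the sample covariance.
evals, evecs = np.linalg.eigh(np.cov(X.T))  # eigenvalues in ascending order
v_min = evecs[:, 0]
print("alignment |cos(w, v_min)|:", abs(w @ v_min))
```

With the covariance above, the printed alignment approaches 1, i.e. w converges to the minor component; flipping the update's sign would instead recover the principal component, which is the usual Hebbian/Oja behavior.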
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Pages: 176-187
Number of pages: 12
ISBN (Print): 9783319088631
DOIs
State: Published - Jan 1 2014
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
