Evolutionary computation versus reinforcement learning

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution



Many applications of reinforcement learning (RL) and evolutionary computation (EC) address the same problem, namely, to maximize some agent's fitness function in a potentially unknown environment. The most challenging open issues in such applications include partial observability of the agent's environment, hierarchical and other types of abstract credit assignment, and the learning of credit assignment algorithms. I summarize why EC provides a more natural framework for addressing these issues than RL based on value functions and dynamic programming. Then I point out fundamental drawbacks of traditional EC methods in the case of stochastic environments, stochastic policies, and unknown temporal delays between actions and observable effects. I discuss a remedy called the success-story algorithm which combines aspects of RL and EC.
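To make the shared problem setting concrete, here is a minimal sketch of direct fitness maximization in the EC style, using a simple (1+1) evolution strategy. The function names, the toy fitness landscape, and all parameters are illustrative assumptions, not from the paper; the point is only that EC selects directly on observed fitness, with no value function or dynamic programming.

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.1, iterations=500, seed=0):
    """Minimal (1+1) evolution strategy (illustrative sketch):
    mutate the current solution with Gaussian noise and keep the
    child only if its observed fitness is at least as good."""
    rng = random.Random(seed)
    x, fx = list(x0), fitness(x0)
    for _ in range(iterations):
        # Mutation: perturb every coordinate with Gaussian noise.
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = fitness(child)
        # Selection acts directly on fitness, not on value estimates.
        if fc >= fx:
            x, fx = child, fc
    return x, fx

# Hypothetical toy fitness function, maximized at (1, -2).
fit = lambda v: -((v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2)
best, best_fit = one_plus_one_es(fit, [0.0, 0.0])
```

In a stochastic environment the comparison `fc >= fx` becomes unreliable, since a single noisy fitness evaluation can favor a worse policy; this is one of the drawbacks of traditional EC that the abstract refers to.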
Original language: English (US)
Title of host publication: IECON Proceedings (Industrial Electronics Conference)
Publisher: IEEE Computer Society
Number of pages: 6
State: Published - Jan 1 2000
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

