Q(λ)-learning uses TD(λ) methods to accelerate Q-learning. The update complexity of previous online Q(λ) implementations based on lookup tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed.
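The postponement idea admits a compact tabular sketch: keep one global accumulator of discounted TD errors and, for each state/action pair, the accumulator value at its last synchronization; a pair's pending corrections are applied only when its Q-value is actually read. The Python sketch below is illustrative, not the paper's exact algorithm: the class name LazyQLambda and all parameter names are assumptions, replacing traces are used, the trace cut for exploratory actions in Watkins' Q(λ) is omitted, and the published algorithm's periodic renormalization (which avoids overflow of η^(−t) on long runs) is replaced by a simple per-episode flush.

```python
from collections import defaultdict


class LazyQLambda:
    """Sketch of tabular Q(lambda) with postponed (lazy) updates.

    Instead of decaying every eligibility trace after each step, a single
    global accumulator phi = sum_k delta_k * eta**k (eta = gamma * lam) is
    maintained.  Each state/action pair remembers phi at its last
    synchronization, so its pending TD(lambda) corrections can be applied
    in one shot whenever its Q-value is read.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, lam=0.9):
        assert 0.0 < gamma * lam < 1.0   # eta**(-t) below requires eta > 0
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eta = alpha, gamma, gamma * lam
        self.t = 0                        # global step counter
        self.phi = 0.0                    # sum of delta_k * eta**k so far
        self.q = defaultdict(float)       # lazily corrected Q-table
        self.trace = defaultdict(float)   # eta**(-t_visit) per pair (replacing traces)
        self.phi_at_sync = defaultdict(float)

    def value(self, state, action):
        """Return an up-to-date Q(s, a), applying postponed corrections first."""
        key = (state, action)
        # alpha * sum over postponed steps k of delta_k * e_k(s, a),
        # where e_k(s, a) = eta**(k - t_visit) for a replacing trace.
        self.q[key] += self.alpha * self.trace[key] * (self.phi - self.phi_at_sync[key])
        self.phi_at_sync[key] = self.phi
        return self.q[key]

    def step(self, state, action, reward, next_state, done=False):
        """One online learning step; touches only O(#actions) Q-values."""
        self.t += 1
        best_next = 0.0 if done else max(
            self.value(next_state, b) for b in range(self.n_actions)
        )
        delta = reward + self.gamma * best_next - self.value(state, action)
        # Fresh (replacing) trace for the visited pair, scaled so that
        # trace * eta**k equals its eligibility at any later step k.
        self.trace[(state, action)] = self.eta ** (-self.t)
        # Fold the new TD error into the global accumulator; every traced
        # pair's pending update grows implicitly through this one line.
        self.phi += delta * self.eta ** self.t

    def end_episode(self):
        """Flush pending corrections and reset counters.

        The published algorithm avoids this full sweep via periodic
        renormalization; a per-episode flush keeps the sketch simple and
        sidesteps eta**(-t) overflow on long runs.
        """
        for key in list(self.trace):
            self.value(*key)
        self.t, self.phi = 0, 0.0
        self.trace.clear()
        self.phi_at_sync.clear()
```

Under these assumptions, a learning step reads only the Q-values needed for the TD error (the current pair plus one max over actions), so per-step cost is O(|A|) rather than the O(|S||A|) trace sweep of a naive online Q(λ) table, which is the complexity bound the abstract states.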
Bibliographical note: Generated from Scopus record by KAUST IRTS on 2022-09-14
ASJC Scopus subject areas
- Artificial Intelligence