Point-based policy iteration

Shihao Ji, Ronald Parr, Hui Li, Xuejun Liao, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Scopus citations

Abstract

We describe a point-based policy iteration (PBPI) algorithm for infinite-horizon POMDPs. PBPI replaces the exact policy improvement step of Hansen's policy iteration with point-based value iteration (PBVI). Despite being an approximate algorithm, PBPI is monotonic: at each iteration before convergence, PBPI produces a policy for which the values increase for at least one of a finite set of initial belief states, and decrease for none of these states. In contrast, PBVI cannot guarantee monotonic improvement of the value function or the policy. In practice PBPI generally needs a lower density of point coverage in the simplex and tends to produce superior policies with less computation. Experiments on several benchmark problems (up to 12,545 states) demonstrate the scalability and robustness of the PBPI algorithm. Copyright ©2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
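Since the abstract only sketches the algorithm, the following is a minimal NumPy sketch of the point-based (PBVI-style) backup that PBPI substitutes for the exact policy-improvement step. The array layouts (T, O, R), the function names, and the "keep the old vectors" rule are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def point_based_backup(b, Gamma, T, O, R, gamma):
    """Back up belief b against the current alpha-vector set Gamma.

    Hypothetical array conventions used by this sketch:
      T[a, s, s'] : transition probabilities
      O[a, s', o] : observation probabilities
      R[a, s]     : immediate rewards
      Gamma       : (num_vectors, num_states) array of alpha-vectors
    """
    num_actions = T.shape[0]
    num_obs = O.shape[2]
    best_alpha, best_value = None, -np.inf
    for a in range(num_actions):
        alpha_a = R[a].astype(float)
        for o in range(num_obs):
            # Project every alpha-vector through (action a, observation o),
            # then keep the projection that scores highest at belief b.
            projected = gamma * Gamma @ (T[a] * O[a][:, o]).T
            alpha_a += projected[np.argmax(projected @ b)]
        value = alpha_a @ b
        if value > best_value:
            best_alpha, best_value = alpha_a, value
    return best_alpha

def point_based_improvement(B, Gamma, T, O, R, gamma):
    # One improvement sweep: back up every belief point in B and retain the
    # previous vectors, so the value at each point in B can only increase.
    new_vectors = np.array([point_based_backup(b, Gamma, T, O, R, gamma) for b in B])
    return np.vstack([Gamma, new_vectors])
```

In PBPI, a sweep of this kind plays the role of the policy-improvement step; per the abstract, the resulting policy is then evaluated as in Hansen's policy iteration before the next improvement, which is what yields monotone improvement at the belief points in B.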
Original language: English (US)
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Pages: 1243-1249
Number of pages: 7
State: Published - Nov 28 2007
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
