Communications in Mathematical Sciences
Volume 20 (2022)
Number 3
A note on optimization formulations of Markov decision processes
Pages: 727 – 745
DOI: https://dx.doi.org/10.4310/CMS.2022.v20.n3.a5
Abstract
This note summarizes the optimization formulations used in the study of Markov decision processes. We consider both discounted and undiscounted processes, in the standard and the entropy-regularized settings. For each setting, we first summarize the primal, dual, and primal-dual problems of the linear programming formulation. We then detail the connections between these problems and other formulations for Markov decision processes, such as the Bellman equation and the policy gradient formulation.
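For orientation, the following is a standard instance of the linear programming formulation in the discounted case, written in generic notation (finite state space $\mathcal{S}$, action space $\mathcal{A}$, transition kernel $P$, reward $r$, discount factor $\gamma \in (0,1)$, initial distribution $\mu$); this is a textbook sketch and need not match the paper's notation or level of generality.

% Primal LP over value functions V : S -> R
\begin{align*}
\min_{V}\ & \sum_{s \in \mathcal{S}} \mu(s)\, V(s) \\
\text{s.t.}\ & V(s) \ge r(s,a) + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s,a)\, V(s')
  \quad \forall\, (s,a) \in \mathcal{S} \times \mathcal{A}.
\end{align*}

% Dual LP over (discounted) occupancy measures lambda : S x A -> R_{>=0}
\begin{align*}
\max_{\lambda \ge 0}\ & \sum_{s,a} \lambda(s,a)\, r(s,a) \\
\text{s.t.}\ & \sum_{a \in \mathcal{A}} \lambda(s',a)
  = \mu(s') + \gamma \sum_{s,a} P(s' \mid s,a)\, \lambda(s,a)
  \quad \forall\, s' \in \mathcal{S}.
\end{align*}

% Bellman optimality equation, satisfied by the primal solution V^*
\[
V^*(s) = \max_{a \in \mathcal{A}} \Big\{ r(s,a)
  + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s,a)\, V^*(s') \Big\}.
\]

At optimality, an optimal stationary policy can be read off the dual variables via $\pi(a \mid s) = \lambda^*(s,a) / \sum_{a'} \lambda^*(s,a')$ wherever the denominator is positive; connections of this kind between the primal, dual, and Bellman formulations are the subject of the note.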
Keywords
Markov decision processes, reinforcement learning, optimization
2010 Mathematics Subject Classification
60J10, 60J22, 90C05
Received 17 December 2020
Accepted 29 August 2021
Published 21 March 2022