More Risk-Sensitive Markov Decision Processes
Document type:
Article
Authors:
Baeuerle, Nicole; Rieder, Ulrich
Affiliations:
Helmholtz Association; Karlsruhe Institute of Technology; Ulm University
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.2013.0601
Publication year:
2014
Pages:
105-120
Keywords:
Discrete-time
utility
optimization
optimality
criterion
policies
chains
Abstract:
We investigate the problem of minimizing a certainty equivalent of the total or discounted cost generated by a Markov decision process (MDP) over a finite and an infinite horizon. In contrast to the risk-neutral case, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by an ordinary MDP with an extended state space and give conditions under which an optimal policy exists. In the case of an infinite time horizon, we show that the minimal discounted cost can be obtained by value iteration and can be characterized as the unique solution of a fixed-point equation using a sandwich argument. Interestingly, it turns out that in the case of a power utility, the problem simplifies and is of similar complexity to the exponential utility case; however, it has not been treated in the literature so far. We also establish the validity (and convergence) of the policy improvement method. A simple numerical example, namely the classical repeated casino game, is considered to illustrate the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, for a convex power utility the minimal average cost does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility.
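To make the central notion concrete: for a utility function U, the certainty equivalent of a random cost X is U^{-1}(E[U(X)]), which for the classical exponential utility U(x) = exp(γx) becomes (1/γ) log E[exp(γX)]. The sketch below (not from the paper; the toy cost distribution and parameter γ = 0.2 are made up purely for illustration) computes this quantity and shows that for γ > 0 it penalizes variability, exceeding the risk-neutral expected cost:

```python
import math

def certainty_equivalent_exp(costs, probs, gamma):
    """Certainty equivalent of a discrete random cost X under the
    exponential utility U(x) = exp(gamma * x), gamma != 0:
        CE = (1/gamma) * log E[exp(gamma * X)].
    """
    moment = sum(p * math.exp(gamma * c) for c, p in zip(costs, probs))
    return math.log(moment) / gamma

# Hypothetical two-point cost distribution, chosen only to illustrate:
costs, probs = [0.0, 10.0], [0.5, 0.5]

mean_cost = sum(p * c for c, p in zip(costs, probs))   # risk-neutral value
ce = certainty_equivalent_exp(costs, probs, gamma=0.2)

# For gamma > 0 the decision maker is risk-averse toward cost
# variability, so the certainty equivalent exceeds the expected cost.
print(mean_cost, ce)
```

In the risk-sensitive MDP setting, the same transformation is applied to the (total or discounted) accumulated cost, which is why the problem can be recast as an ordinary MDP on an extended state space.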