Markov decision process

Markov decision processes (MDPs), named after Andrey Markov, provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying a wide range of optimization problems solved via dynamic programming and reinforcement learning. MDPs were known at least as early as the 1950s (cf. Bellman 1957). A great deal of research in the area was spawned by Ronald A. Howard's 1960 book, Dynamic Programming and Markov Processes. Today they are used in a variety of areas, including robotics, automated control, economics and manufacturing.

More precisely, a Markov decision process is a discrete-time stochastic control process. At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s', and giving the decision maker a corresponding reward R_a(s,s').

The probability that the process moves into its new state s' is influenced by the chosen action. Specifically, it is given by the state transition function P_a(s,s'). Thus, the next state s' depends on the current state s and the decision maker's action a. But given s and a, it is conditionally independent of all previous states and actions; in other words, the state transitions of an MDP possess the Markov property.

Markov decision processes are an extension of Markov chains; the difference is the addition of actions (allowing choice) and rewards (giving motivation). Conversely, if only one action exists for each state and all rewards are zero, a Markov decision process reduces to a Markov chain.

Definition

Example of a simple MDP with 3 states and 2 actions.

A Markov decision process is a 4-tuple (S,A,P_\cdot(\cdot,\cdot),R_\cdot(\cdot,\cdot)), where

  • S is a finite set of states,
  • A is a finite set of actions (alternatively, A_s is the finite set of actions available from state s),
  • P_a(s,s') = \Pr(s_{t+1}=s' \mid s_t = s, a_t=a) is the probability that action a in state s at time t will lead to state s' at time t + 1,
  • R_a(s,s') is the immediate reward (or expected immediate reward) received after transition to state s' from state s with transition probability P_a(s,s').

(The theory of Markov decision processes does not actually require S or A to be finite,[citation needed] but the basic algorithms below assume that they are finite.)
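
As a concrete illustration, a finite MDP can be stored as two arrays indexed by action, current state and next state. The following Python sketch uses made-up numbers for a two-state, two-action MDP; the names P, R and gamma are our own conventions (reused in the later sketches), not part of any standard library.

import numpy as np

# P[a, s, s2] = Pr(s_{t+1} = s2 | s_t = s, a_t = a); each row P[a, s, :] sums to 1.
P = np.array([[[0.7, 0.3],    # action 0, from state 0
               [0.4, 0.6]],   # action 0, from state 1
              [[0.9, 0.1],    # action 1, from state 0
               [0.2, 0.8]]])  # action 1, from state 1

# R[a, s, s2] = immediate reward for the transition s -> s2 under action a.
R = np.array([[[ 5.0,  1.0],
               [ 0.0,  2.0]],
              [[-1.0,  3.0],
               [ 4.0,  0.0]]])

gamma = 0.9   # discount factor, 0 <= gamma < 1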

Problem

The core problem of MDPs is to find a policy for the decision maker: a function π that specifies the action π(s) that the decision maker will choose when in state s. Note that once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain.

The goal is to choose a policy π that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

\sum^{\infty}_{t=0} \gamma^t R_{a_t} (s_t, s_{t+1})    (where we choose a_t = \pi(s_t))

where \gamma is the discount factor and satisfies 0 \le \gamma < 1. (For example, γ = 1 / (1 + r) when the discount rate is r.) γ is typically close to 1.

Because of the Markov property, the optimal policy for this particular problem can indeed be written as a function of s only, as assumed above.
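
For example, the expected discounted return of a fixed policy can be estimated by simulation. The sketch below is illustrative only: it reuses the P, R, gamma array convention from the snippet above, truncates the infinite horizon, and the function name is our own choice.

import numpy as np

def sample_return(P, R, gamma, policy, s0, horizon=1000, rng=None):
    """Sample one discounted return sum_t gamma^t R_{a_t}(s_t, s_{t+1})
    obtained by following a deterministic policy from state s0."""
    rng = np.random.default_rng() if rng is None else rng
    total, s = 0.0, s0
    for t in range(horizon):              # truncation of the infinite horizon
        a = policy[s]                     # a_t = pi(s_t)
        s_next = rng.choice(P.shape[2], p=P[a, s])
        total += gamma**t * R[a, s, s_next]
        s = s_next
    return total

# Averaging many samples approximates the expected discounted reward:
# estimate = np.mean([sample_return(P, R, gamma, policy, s0=0) for _ in range(10_000)])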

Algorithms

MDPs can be solved by linear programming or dynamic programming. In what follows we present the latter approach.

Suppose we know the state transition function P and the reward function R, and we wish to calculate the policy that maximizes the expected discounted reward.

The standard family of algorithms to calculate this optimal policy requires storage for two arrays indexed by state: value V, which contains real values, and policy π which contains actions. At the end of the algorithm, π will contain the solution and V(s) will contain the discounted sum of the rewards to be earned (on average) by following that solution from state s.

The algorithm has the following two kinds of steps, which are repeated in some order for all the states until no further changes take place:

 \pi(s) := \arg \max_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right) \right\}
 V(s) := \sum_{s'} P_{\pi(s)} (s,s') \left( R_{\pi(s)} (s,s') + \gamma V(s') \right)

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.
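
The two steps translate directly into code. This is a minimal sketch under the same P[a, s, s'], R[a, s, s'] array convention introduced earlier; the function names are illustrative.

import numpy as np

def improve_policy(P, R, gamma, V):
    """Step one: pi(s) := argmax_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V(s'))."""
    q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])   # q[a, s]
    return q.argmax(axis=0)

def backup_values(P, R, gamma, V, pi):
    """Step two: V(s) := sum_{s'} P_{pi(s)}(s,s') (R_{pi(s)}(s,s') + gamma V(s'))."""
    s_idx = np.arange(P.shape[1])
    return np.einsum('st,st->s', P[pi, s_idx],
                     R[pi, s_idx] + gamma * V[None, :])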

Notable variants

Value iteration

In value iteration (Bellman 1957), which is also called backward induction, the π array is not used; instead, the value of π(s) is calculated whenever it is needed. Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs, but this was recognized only later on.[1]

Substituting the calculation of π(s) into the calculation of V(s) gives the combined step:

 V(s) := \max_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right) \right\}.

This update rule is iterated for all states s until it converges with the left-hand side equal to the right-hand side (which is the Bellman equation for this problem).
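
A sketch of value iteration under the same array convention as above; the tolerance and iteration cap are arbitrary choices.

import numpy as np

def value_iteration(P, R, gamma, tol=1e-8, max_iter=100_000):
    """Iterate V(s) := max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V(s'))."""
    V = np.zeros(P.shape[1])
    for _ in range(max_iter):
        q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        V_new = q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:       # approximate convergence
            break
        V = V_new
    return V_new, q.argmax(axis=0)                # values and a greedy policy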

Policy iteration

In policy iteration (Howard 1960), step one is performed once, and then step two is repeated until it converges. Then step one is again performed once and so on.

Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations.

This variant has the advantage that there is a definite stopping condition: when the array π does not change in the course of applying step 1 to all states, the algorithm is completed.
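
Below is a sketch of policy iteration in which step two is solved exactly as the linear system (I - gamma P_pi) V = r_pi rather than iterated; the array convention is as above and the names are ours.

import numpy as np

def policy_iteration(P, R, gamma):
    n_states = P.shape[1]
    s_idx = np.arange(n_states)
    pi = np.zeros(n_states, dtype=int)               # arbitrary initial policy
    while True:
        # Policy evaluation: V = r_pi + gamma P_pi V, solved as linear equations.
        P_pi = P[pi, s_idx]                          # P_pi[s, s'] = P_{pi(s)}(s, s')
        r_pi = np.einsum('st,st->s', P_pi, R[pi, s_idx])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement (step one).
        pi_new = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :]).argmax(axis=0)
        if np.array_equal(pi_new, pi):               # definite stopping condition
            return V, pi
        pi = pi_new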

Modified policy iteration

In modified policy iteration (van Nunen, 1976; Puterman and Shin 1978), step one is performed once, and then step two is repeated several times. Then step one is again performed once and so on.
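
A sketch of the modified scheme, in which the evaluation step is swept a fixed number of times k (a hypothetical tuning parameter) rather than solved to convergence:

import numpy as np

def modified_policy_iteration(P, R, gamma, k=5, tol=1e-8, max_iter=10_000):
    n_states = P.shape[1]
    s_idx = np.arange(n_states)
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Step one, performed once.
        pi = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :]).argmax(axis=0)
        V_before = V.copy()
        # Step two, repeated k times instead of to convergence.
        for _ in range(k):
            V = np.einsum('st,st->s', P[pi, s_idx],
                          R[pi, s_idx] + gamma * V[None, :])
        if np.max(np.abs(V - V_before)) < tol:
            break
    return V, pi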

Prioritized sweeping

In this variant, the steps are preferentially applied to states which are in some way important, whether based on the algorithm (there were large changes in V or π around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
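
One model-based sketch of this idea uses a heap keyed by Bellman error; the error threshold and backup budget are arbitrary choices, and the array convention is as above.

import heapq
import numpy as np

def prioritized_sweeping(P, R, gamma, n_backups=10_000, theta=1e-6):
    """Back up the states with the largest Bellman error first; whenever V(s)
    changes, re-queue the predecessor states that can transition into s."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)

    def backup(s):
        return max(P[a, s] @ (R[a, s] + gamma * V) for a in range(n_actions))

    heap = [(-abs(backup(s) - V[s]), s) for s in range(n_states)]
    heapq.heapify(heap)

    for _ in range(n_backups):
        if not heap:
            break
        neg_err, s = heapq.heappop(heap)
        if -neg_err < theta:                       # remaining errors are tiny
            break
        V[s] = backup(s)
        for s_pred in range(n_states):             # predecessors of s
            if P[:, s_pred, s].max() > 0.0:
                err = abs(backup(s_pred) - V[s_pred])
                if err > theta:
                    heapq.heappush(heap, (-err, s_pred))
    return V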

Extensions and generalizations

A Markov decision process is a stochastic game with only one player.

Partial observability

The solution above assumes that the state s is known when an action is to be taken; otherwise π(s) cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP.

Reinforcement learning

If the probabilities or rewards are unknown, the problem is one of reinforcement learning (Sutton and Barto, 1998).

For this purpose it is useful to define a further function, which corresponds to taking the action a and then continuing optimally (or according to whatever policy one currently has):

\ Q(s,a) = \sum_{s'} P_a(s,s') (R_a(s,s') + \gamma V(s')).\

While this function is also unknown, experience during learning is based on (s,a) pairs (together with the outcome s'); that is, "I was in state s and I tried doing a, and s' happened". Thus, one has an array Q and uses experience to update it directly. This is known as Q-learning.
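
A minimal tabular Q-learning sketch is given below. The environment is assumed to be available only through a sampling function env_step(s, a) returning (s', r); that function, the learning rate alpha and the epsilon-greedy exploration are illustrative choices, not prescribed by the text above.

import numpy as np

def q_learning(env_step, n_states, n_actions, gamma,
               alpha=0.1, epsilon=0.1, n_steps=100_000, rng=None):
    """Tabular Q-learning: update Q(s,a) from sampled (s, a, r, s') experience."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.zeros((n_states, n_actions))
    s = int(rng.integers(n_states))
    for _ in range(n_steps):
        # Epsilon-greedy exploration.
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next, r = env_step(s, a)                 # "I was in s, tried a, s' happened"
        # Move Q(s,a) toward r + gamma * max_{a'} Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q                                       # greedy policy: argmax_a Q[s, a]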

The power of reinforcement learning lies in its ability to solve the Markov decision process without computing the transition probabilities; note that transition probabilities are needed in value and policy iteration. Also, reinforcement learning can be combined with function approximation, and thereby one can solve problems with a very large number of states. Reinforcement learning can also be handily performed within Monte Carlo simulators of systems.

Continuous-time Markov decision process

In a discrete-time Markov decision process, decisions are made at discrete time epochs. In a continuous-time Markov decision process, by contrast, decisions can be made at any time the decision maker chooses. Continuous-time Markov decision processes can therefore better model decision making for systems with continuous dynamics, i.e., systems whose dynamics are defined by differential equations.

Definition

In order to discuss continuous-time Markov decision processes, we introduce two sets of notation:

If the state space and action space are finite,

  • \mathcal{S}: State space;
  • \mathcal{A}: Action space;
  • q(j \mid i,a): \mathcal{S}\times \mathcal{A} \rightarrow \triangle \mathcal{S}, a transition rate function;
  • R(i,a): \mathcal{S}\times \mathcal{A} \rightarrow \mathbb{R}, a reward function.

If the state space and action space are continuous,

  • \mathcal{X}: State space;
  • \mathcal{U}: Space of possible controls;
  • f(x,u): \mathcal{X}\times \mathcal{U} \rightarrow \triangle \mathcal{X}, a transition rate function;
  • r(x,u): \mathcal{X}\times \mathcal{U} \rightarrow \mathbb{R}, a reward rate function, where dR(x(t),u(t)) = r(x(t),u(t))\,dt and R(x,u) is the reward function discussed in the previous case.

Problem

As in discrete-time Markov decision processes, in continuous-time Markov decision processes we want to find the optimal policy or control that gives us the optimal expected integrated reward:

\max_u \quad \mathbb{E}_u\left[\int_0^{\infty}\gamma^t r(x(t),u(t))\,dt \,\Big|\, x_0\right]

where 0\leq\gamma< 1.

Linear programming formulation

If the state space and action space are finite, we can use a linear programming formulation to find the optimal policy; this was one of the earliest solution approaches. Here we consider only the ergodic model, which means that the continuous-time MDP becomes an ergodic continuous-time Markov chain under any stationary policy. Under this assumption, although the decision maker can make a decision at any time, there is no benefit in taking more than one action while in the current state; it is better to act only at the moment the system transitions from the current state to another state. Under some conditions (for details see Corollary 3.14 of Continuous-Time Markov Decision Processes), if the optimal value function V^* is independent of the state i, we have the following inequality:

g\geq R(i,a)+\sum_{j\in S}q(j\mid i,a)h(j) \quad \forall i \in S \text{ and } a\in A(i)

If there exists a function h, then \bar V^* will be the smallest g that satisfies the above inequality. To find \bar V^*, we can use the following linear programming models:

  • Primal linear program (P-LP)

\begin{align}
\text{Minimize}\quad &g\\
\text{s.t.} \quad & g-\sum_{j \in S}q(j\mid i,a)h(j)\geq R(i,a)\,\,
\forall i\in S,\,a\in A(i)
\end{align}
  • Dual linear program (D-LP)

\begin{align}
\text{Maximize}\quad &\sum_{i\in S}\sum_{a\in A(i)}R(i,a)y(i,a)\\
\text{s.t.}\quad &\sum_{i\in S}\sum_{a\in A(i)} q(j\mid i,a)y(i,a)=0 \quad
\forall j\in S,\\
& \sum_{i\in S}\sum_{a\in A(i)}y(i,a)=1,\\
& y(i,a)\geq 0 \qquad \forall a\in A(i) \text{ and } \forall i\in S
\end{align}

y(i,a) is a feasible solution to the D-LP if y(i,a) is nonnegative and satisfies the constraints of the D-LP problem. A feasible solution y^*(i,a) to the D-LP is said to be an optimal solution if


\begin{align}
\sum_{i\in S}\sum_{a\in A(i)}R(i,a)y^*(i,a) \geq  \sum_{i\in
S}\sum_{a\in A(i)}R(i,a)y(i,a)
\end{align}

for all feasible solutions y(i,a) to the D-LP. Once we have found the optimal solution y^*(i,a), we can use it to establish the optimal policies.
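
For a finite model, the D-LP can be handed to an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog and flattens y(i,a) into a single vector; the q[i, a, j] and R[i, a] array layout is our own encoding of the notation above.

import numpy as np
from scipy.optimize import linprog

def solve_dlp(q, R):
    """Solve the D-LP:  maximize sum_{i,a} R(i,a) y(i,a)
    subject to sum_{i,a} q(j|i,a) y(i,a) = 0 for all j,
    sum_{i,a} y(i,a) = 1 and y >= 0.
    Here q[i, a, j] stores the rate q(j|i,a) and R[i, a] the reward rate."""
    n_states, n_actions, _ = q.shape
    n_vars = n_states * n_actions                  # y flattened as y[i * n_actions + a]

    c = -R.reshape(n_vars)                         # linprog minimizes, so negate R

    A_eq = np.zeros((n_states + 1, n_vars))
    for j in range(n_states):                      # one balance equation per state j
        A_eq[j] = q[:, :, j].reshape(n_vars)
    A_eq[n_states] = 1.0                           # normalisation row: sum y = 1
    b_eq = np.zeros(n_states + 1)
    b_eq[n_states] = 1.0

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_vars)
    return res.x.reshape(n_states, n_actions)      # optimal y*(i, a)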

Hamilton-Jacobi-Bellman equation

In a continuous-time MDP, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton-Jacobi-Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we reformulate our problem:

\begin{align} V(x(0),0)=&\max_u\int_0^T r(x(t),u(t))dt+D[x(T)]\\
\text{s.t.}\quad & \frac{dx(t)}{dt}=f[t,x(t),u(t)]
\end{align}

D(\cdot) is the terminal reward function, x(t) is the system state vector, and u(t) is the system control vector we try to find. f(\cdot) shows how the state vector changes over time. The Hamilton-Jacobi-Bellman equation is as follows:

0=\max_u \left\{ r(t,x,u)+\frac {\partial V(t,x)}{\partial t}+\frac{\partial V(t,x)}{\partial x}f(t,x,u) \right\}

We can solve the equation to find the optimal control u(t), which gives us the optimal value V^*.
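
As a brief sketch of where the HJB equation comes from (assuming V is smooth), apply the principle of optimality over a short interval [t, t+\delta t]:

V(t,x) = \max_u \left\{ r(t,x,u)\,\delta t + V\big(t+\delta t,\; x + f(t,x,u)\,\delta t\big) \right\}

Expanding the second term to first order as V(t,x) + \frac{\partial V(t,x)}{\partial t}\delta t + \frac{\partial V(t,x)}{\partial x} f(t,x,u)\,\delta t, subtracting V(t,x) from both sides and dividing by \delta t recovers the HJB equation, with boundary condition V(T,x) = D(x).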

Applications

Continuous-time Markov decision processes have applications in queueing systems, epidemic processes, and population processes.

Alternative notations

The terminology and notation for MDPs are not entirely settled. There are two main streams — one focuses on maximization problems from contexts like economics, using the terms action, reward, value and calling the discount factor β or γ, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, cost-to-go and calling the discount factor α. In addition, the notation for the transition probability varies.

in this article                      alternative                          comment
action a                             control u
reward R                             cost g                               g is the negative of R
value V                              cost-to-go J                         J is the negative of V
policy π                             policy μ
discount factor γ                    discount factor α
transition probability P_a(s,s')     transition probability p_{ss'}(a)

In addition, transition probability is sometimes written Pr(s,a,s'), Pr(s' | s,a) or, rarely, p_{s's}(a).

Notes

  1. ^ Lodewijk Kallenberg, Finite state and action MDPs, in Eugene A. Feinberg, Adam Shwartz (eds.) Handbook of Markov decision processes: methods and applications, Springer, 2002, ISBN 0792374592

References

  • R. Bellman. A Markovian Decision Process. Journal of Mathematics and Mechanics 6, 1957.
  • R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957. Dover paperback edition (2003), ISBN 0486428095.
  • Ronald A. Howard Dynamic Programming and Markov Processes, The M.I.T. Press, 1960.
  • D. Bertsekas. Dynamic Programming and Optimal Control. Volume 2, Athena Scientific, 1995.
  • M. L. Puterman. Markov Decision Processes. Wiley, 1994.
  • H.C. Tijms. A First Course in Stochastic Models. Wiley, 2003.
  • Sutton, R. S. and Barto A. G. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.
  • J. A. E. E. van Nunen. A set of successive approximation methods for discounted Markovian decision problems. Z. Operations Research, 20:203-208, 1976.
  • S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007. ISBN 9780521884419. Appendix contains abridged Meyn & Tweedie.
  • S. M. Ross. 1983. Introduction to stochastic dynamic programming. Academic press
  • X. Guo and O. Hernández-Lerma. Continuous-Time Markov Decision Processes, Springer, 2009.
  • M. L. Puterman and Shin M. C. Modified Policy Iteration Algorithms for Discounted Markov Decision Problems, Management Science 24, 1978.
