Faster Dynamic Programming for Markov Decision Processes
Peng Dai and Judy Goldsmith
Computer Science Dept., University of Kentucky
Lexington, KY 40506-0046
[email protected]

Abstract

Markov decision processes (MDPs) are a general framework used in artificial intelligence (AI) to model decision theoretic planning problems. Solving real-world MDPs has been a major and challenging research topic in the AI literature, since classical dynamic programming algorithms converge slowly. We discuss two approaches to expediting dynamic programming. The first combines heuristic search strategies with dynamic programming to speed up convergence. The second makes use of the graphical structure of MDPs to perform dynamic programming in a better order.

Introduction

The problem of decision theoretic planning has become a central research topic in AI, not only because it is an extension of classical planning, but also due to its close connection with solving real-world problems. Markov decision processes (MDPs) provide a graphical and mathematical framework that AI researchers use to model decision theoretic planning problems. Solving MDPs has remained an active research area for a long time, because MDP algorithms converge slowly on real-world domains. This paper concentrates on our advances in expediting the convergence of dynamic programming, a basic tool for solving MDPs.

Background

Markov decision processes

A Markov decision process (MDP) is a four-tuple ⟨S, A, T, C⟩, where S is a finite set of system states, A a finite set of actions, T the transition function or conditional probability function, and C the cost function. The system evolves over a sequence of discrete time steps called stages. At each stage t, the system is in one particular state s, which has an associated set of applicable actions A_s^t. Applying an action moves the system from the current state s to a next state s′ and advances it to stage t + 1. Unlike classical AI planning, state transitions in MDPs are not deterministic. The transition function for each action a, T_a : S × S → [0, 1], gives the probabilities of state transitions under action a. T_a(s′|s) stands for the probability

of the system changing from s to s′ by performing action a (∑_{s′∈S} T_a(s′|s) = 1). The cost function C : S × A → R gives the instantaneous cost of applying an action at a state. The horizon of an MDP is the total number of stages over which the system is evaluated. When the horizon is a finite number H, solving the MDP means finding the best action to take at each stage and state so as to minimize the total expected cost. More concretely, the chosen actions a_0, ..., a_{H−1} should minimize the expectation of the value f(s) = ∑_{i=0}^{H−1} C(s_i, a_i), where s_0 = s. For infinite-horizon or indefinite-horizon problems, i.e., problems where the horizon is infinite or unknown, the cost is accumulated over an infinitely long path. To emphasize the relative importance of immediate costs, a discount factor γ ∈ [0, 1] is applied to future costs. With discount factor γ, the goal is to minimize the expectation of f(s) = ∑_{i=0}^{∞} γ^i C(s_i, a_i).

In this paper we consider a special type of MDP called a goal-based MDP. A goal-based MDP has two additional components, s_0 and G, where s_0 ∈ S is the initial state and G ⊆ S is a set of goal states. A solution to the MDP guides the system from s_0 to some state in G with the smallest expected cost. A goal-based MDP is usually treated as an indefinite-horizon problem, where the horizon is finite but has no upper bound. The discount factor of a goal-based MDP is normally set to 1.

The solution of an MDP is usually represented in the form of a policy. Given a goal-based MDP, we define a policy π : S → A to be a mapping from the state space to the action space. The value function V^π of a policy π, V^π : S → R, gives for each state s the total expected cost of starting from s and following π. A policy π_1 dominates another policy π_2 if V^{π_1}(s) ≤ V^{π_2}(s) for all s ∈ S. An optimal policy π* is a policy that is not dominated by any other policy. For goal-based MDPs, the optimal policy and value function are stationary, i.e., they do not depend on the stage (Puterman 1994). We denote by V*(s) the expected cost accumulated by starting at state s and following the optimal policy. Solving a goal-based MDP means finding an optimal value function and an optimal policy.
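To make the preceding definitions concrete, the following is a minimal sketch of how a goal-based MDP ⟨S, A, T, C⟩ with initial state s_0 and goal set G might be represented in code; the class and field names (GoalMDP, transitions, and so on) are illustrative assumptions, not part of the paper. Later sketches in this paper reuse this structure.

from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

# A goal-based MDP <S, A, T, C> with an initial state s0 and a goal set G.
# States and actions are identified by strings; all names here are
# illustrative assumptions, not notation from the paper.
@dataclass
class GoalMDP:
    states: Set[str]
    actions: Dict[str, List[str]]                  # applicable actions A_s for each state
    # transitions[(s, a)] -> list of (s', T_a(s'|s)); probabilities sum to 1
    transitions: Dict[Tuple[str, str], List[Tuple[str, float]]]
    cost: Dict[Tuple[str, str], float]             # C(s, a), instantaneous cost
    s0: str                                        # initial state
    goals: Set[str]                                # goal set G
    gamma: float = 1.0                             # goal-based MDPs typically use gamma = 1

    def successors(self, s: str, a: str) -> List[Tuple[str, float]]:
        """Return the possible next states and their probabilities under action a."""
        return self.transitions[(s, a)]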

Bellman (1957) showed that the expected cost of a policy π can be computed using its value function V^π. The value function of a policy π is defined as:

V^π(s) = C(s, π(s)) + γ ∑_{s′∈S} T_{π(s)}(s′|s) V^π(s′),  γ ∈ [0, 1],   (1)

and the optimal value function is defined as:

V*(s) = min_{a∈A(s)} [ C(s, a) + γ ∑_{s′∈S} T_a(s′|s) V*(s′) ],  γ ∈ [0, 1].   (2)

The Bellman equation is satisfied by a system of value functions in the form of Equation 1 or 2. Updating the value function of a particular state by applying the Bellman equation at that state is called a Bellman backup. Based on the Bellman equations, we can use dynamic programming techniques to compute the exact values of the value functions. An optimal policy is then easily extracted by choosing, for each state, an action that attains the optimal value function.

Dynamic programming (Bellman 1957) is widely used to solve MDPs. Dynamic programming approaches explicitly store the value functions of the state space and repeatedly back up states until the potential changes of the value functions become very small, at which point we say the value functions converge (i.e., they are sufficiently close to the optimal value functions). Value iteration (Bellman 1957), for example, iteratively updates the value functions by performing Bellman backups on the existing value functions. The algorithm halts when the maximum change of the value functions in the most recent iteration is smaller than a threshold. Although value iteration converges in polynomial time (Littman, Dean, & Kaelbling 1995), its convergence is usually slow on big problems. First, it does not use initial state information to eliminate unreachable states from dynamic programming; second, backups are performed in an arbitrary order and over every state in every iteration.
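The following is a minimal sketch of value iteration over the GoalMDP structure sketched above: it repeats Bellman backups (Equation 2) over every state, in arbitrary order, and stops when the maximum change falls below a threshold. The helper names are assumptions for illustration.

def bellman_backup(mdp, V, s):
    """One Bellman backup (Equation 2): return the new value and a greedy action for s."""
    if s in mdp.goals:
        return 0.0, None                   # goal states accumulate no further cost
    best_value, best_action = float("inf"), None
    for a in mdp.actions[s]:
        q = mdp.cost[(s, a)] + mdp.gamma * sum(
            p * V[t] for t, p in mdp.successors(s, a))
        if q < best_value:
            best_value, best_action = q, a
    return best_value, best_action

def value_iteration(mdp, epsilon=1e-6):
    """Plain value iteration: back up every state in every iteration, in arbitrary order."""
    V = {s: 0.0 for s in mdp.states}
    while True:
        max_change = 0.0
        for s in mdp.states:
            new_v, _ = bellman_backup(mdp, V, s)
            max_change = max(max_change, abs(new_v - V[s]))
            V[s] = new_v
        if max_change < epsilon:           # converged: values are sufficiently close
            break
    policy = {s: bellman_backup(mdp, V, s)[1] for s in mdp.states}
    return V, policy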
To overcome these problems, two types of approaches have been proposed.

Previous work

The first type combines dynamic programming with heuristic search, so as to minimize the number of relevant states and the number of expansions during search. Hansen and Zilberstein (2001) proposed the first heuristic search algorithm for MDPs, named LAO*. The basic idea of LAO* is to consider only part of the state space by constructing a partial solution graph and searching implicitly from the initial state toward the goal states. The algorithm expands only the most promising branch of an MDP according to heuristic functions. LAO* converges much faster than value iteration since it considers only part of the state space. Bhuma and Goldsmith extended LAO* into BLAO* (Bhuma & Goldsmith 2003), the first bidirectional heuristic search algorithm. BLAO* searches not only in the forward direction, but also from the goal state toward the initial state in parallel. It outperformed LAO* because the heuristic values can be improved by the backward search and its backups before the forward search frontier has reached a goal state. The algorithm works best when m_a, the maximum number of actions per state, is large (Dai & Goldsmith 2006). LRTDP (Bonet & Geffner 2003b) and HDP (Bonet & Geffner 2003a) are two other heuristic search algorithms; they use a clever labeling technique to mark converged states so that those states can be exempted from future search and backups.

The second type prioritizes the backups over states to decrease the number of backups, the most time-consuming part of dynamic programming; we call these priority-based algorithms. The prioritized sweeping (PS) algorithm (Moore & Atkeson 1993) was first introduced in the reinforcement learning literature, but it is a general technique that has also been used in dynamic programming (Andre, Friedman, & Parr 1998; McMahan & Gordon 2005). The main idea of PS is to order future backups more wisely by maintaining a priority queue, where the priority of each element (state) in the queue represents the potential improvement to other state values that would result from backing up that state. The priority queue is updated as the algorithm sweeps the state space. Focussed dynamic programming (FDP) (Ferguson & Stentz 2004) is another priority-based dynamic programming algorithm, in which the priorities are calculated differently from PS. Improved prioritized sweeping (IPS) (McMahan & Gordon 2005) is an improved version of prioritized-sweeping-based dynamic programming that uses a different priority metric; it converges faster than PS and FDP.
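As a rough illustration of the priority-queue idea behind these algorithms (the exact priority metrics of PS, FDP, and IPS differ and are not reproduced here), the sketch below pops the highest-priority state, backs it up with the bellman_backup helper from the previous sketch, and pushes its predecessors with priority proportional to the change in its value. The predecessor bookkeeping and the priority used here are assumptions for illustration, not the precise rule of any of the cited papers.

import heapq
from collections import defaultdict

def prioritized_sweeping(mdp, epsilon=1e-6, max_backups=100000):
    """Priority-queue-driven Bellman backups, in the spirit of prioritized sweeping."""
    V = {s: 0.0 for s in mdp.states}
    # Map each state to its predecessors so value changes can be propagated backward.
    predecessors = defaultdict(set)
    for (s, a), succ in mdp.transitions.items():
        for t, p in succ:
            if p > 0.0:
                predecessors[t].add(s)
    # Seed the queue with every state (heapq is a min-heap, so priorities are negated).
    queue = [(-float("inf"), s) for s in mdp.states]
    heapq.heapify(queue)
    in_queue = set(mdp.states)
    backups = 0
    while queue and backups < max_backups:
        _, s = heapq.heappop(queue)
        in_queue.discard(s)
        new_v, _ = bellman_backup(mdp, V, s)
        change = abs(new_v - V[s])
        V[s] = new_v
        backups += 1
        if change > epsilon:
            # The value of s changed, so backing up its predecessors may now pay off.
            for pred in predecessors[s]:
                if pred not in in_queue:
                    heapq.heappush(queue, (-change, pred))
                    in_queue.add(pred)
    return V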
Algorithms and results

We briefly describe our algorithms and summarize our experimental results here.

Multi-threaded BLAO*

We extended BLAO* into multi-threaded BLAO* (MBLAO*) (Dai & Goldsmith 2007a). The idea is to start several threads concurrently. One of them is the same as the forward search in BLAO*, and the rest are backward searches with different starting points. In that way we extend single-source backward search trials into multiple-source backward search trials. The reason for this change is the following. On the one hand, a single backward search from the goal can help propagate more accurate values from the goal, but not from other sources. On the other hand, the value of a state depends on the values of all its successors, so the effect of a single-source backward search is limited. This can be complemented by backward searches from other places.

Topological value iteration

Topological value iteration (TVI) (Dai & Goldsmith 2007b) is based on the observation that state values depend on each other. In an MDP M, if state s′ is a successor of s after applying an action a, then V(s) depends on V(s′). For this reason, we want to back up s′ before s. We can regard value dependency as a causal relation over the states. Since MDPs are cyclic, the causal relation can be cyclic and therefore quite complicated. The idea of TVI is to group states that are mutually causally related into a metastate, and let these metastates form a new MDP M′. Then M′ is no longer cyclic. In this case, we can back up the metastates in M′ in reverse topological order. In other words, we can back up these big states in only one virtual iteration.
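The following sketch captures the TVI idea under the same assumptions as the earlier sketches: build the value-dependency graph, group mutually dependent states into strongly connected components (the metastates), and sweep the components to convergence one at a time in reverse topological order. It uses networkx for the component decomposition, which is an implementation choice for the sketch rather than the paper's own procedure, and it reuses the bellman_backup helper defined above.

import networkx as nx

def topological_value_iteration(mdp, epsilon=1e-6):
    """TVI sketch: solve each strongly connected component in reverse topological order."""
    # Dependency graph: an edge s -> s' means V(s) depends on V(s').
    G = nx.DiGraph()
    G.add_nodes_from(mdp.states)
    for (s, a), succ in mdp.transitions.items():
        for t, p in succ:
            if p > 0.0:
                G.add_edge(s, t)
    condensation = nx.condensation(G)       # acyclic graph whose nodes are the metastates
    V = {s: 0.0 for s in mdp.states}
    # Reverse topological order: successor components are fully solved first,
    # so each component only needs to be swept until it converges locally.
    for comp in reversed(list(nx.topological_sort(condensation))):
        component = condensation.nodes[comp]["members"]
        while True:
            max_change = 0.0
            for s in component:
                new_v, _ = bellman_backup(mdp, V, s)   # assumes every non-goal state has an action
                max_change = max(max_change, abs(new_v - V[s]))
                V[s] = new_v
            if max_change < epsilon:
                break
    return V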

Results summary

We have conducted extensive experiments on the performance of MBLAO* and TVI; we summarize the results here. We found that MBLAO* outperformed BLAO*, its single-source backward search version, and several other state-of-the-art forward heuristic search algorithms, such as LAO*, LRTDP, and HDP. The reason is that MBLAO* required the fewest backups before convergence. This result is consistent with our original intuition that backward search helps propagate more accurate heuristics from various sources. Better heuristic values not only improve the value functions, but also lead to a more focused forward search. We also found that MBLAO* worked best when the initial heuristic values are not very accurate (Dai 2007). In the investigation of TVI, we found that TVI achieved the highest speedup over value iteration when the state space is evenly distributed among a number of strongly connected components. Experimental results showed that TVI converged faster, sometimes by a factor of 10, than algorithms that do not make use of the topological order of the strongly connected components.

Ongoing and future work

We believe that heuristic search and priority-based approaches are very promising research topics in AI planning. We recently proposed a simple priority-based algorithm (Dai & Hansen 2007) that does not use a priority queue. Experimental results showed that it is faster than algorithms that use a priority queue; the reason is that the overhead of maintaining a priority queue can exceed its computational savings. One of our ongoing research projects is on using graphical structure to speed up convergence in reinforcement learning algorithms for MDPs (Sutton & Barto 1998). Apart from studying the two topics individually, an integration of heuristic search and prioritization is also very interesting. For example, focussed dynamic programming (Ferguson & Stentz 2004) can be regarded as a combination of both. In the future, we plan to dig deeper along this path. We also think these two strategies can be combined with other common techniques such as factored MDPs, value approximation, and linear programming.
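Purely as a hypothetical illustration of ordering backups without maintaining a priority queue, and not necessarily the method of Dai & Hansen (2007), one simple alternative is to compute a static sweep order once, e.g., by breadth-first distance from the goal states over reversed transition edges, and then run ordinary sweeps in that fixed order. The sketch below reuses the GoalMDP structure and bellman_backup helper from the earlier sketches.

from collections import defaultdict, deque

def goal_distance_order(mdp):
    """Hypothetical static ordering: breadth-first distance from the goals over
    reversed transition edges (an illustrative heuristic, not Dai & Hansen's rule)."""
    predecessors = defaultdict(set)
    for (s, a), succ in mdp.transitions.items():
        for t, p in succ:
            if p > 0.0:
                predecessors[t].add(s)
    order, seen = [], set(mdp.goals)
    frontier = deque(mdp.goals)
    while frontier:
        s = frontier.popleft()
        order.append(s)
        for pred in predecessors[s]:
            if pred not in seen:
                seen.add(pred)
                frontier.append(pred)
    # States that cannot reach any goal go last.
    order.extend(s for s in mdp.states if s not in seen)
    return order

def static_order_sweeps(mdp, epsilon=1e-6):
    """Repeated sweeps in the fixed goal-distance order; no priority queue is maintained."""
    order = goal_distance_order(mdp)
    V = {s: 0.0 for s in mdp.states}
    while True:
        max_change = 0.0
        for s in order:
            new_v, _ = bellman_backup(mdp, V, s)
            max_change = max(max_change, abs(new_v - V[s]))
            V[s] = new_v
        if max_change < epsilon:
            return V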

References

Andre, D.; Friedman, N.; and Parr, R. 1998. Generalized prioritized sweeping. In Proc. of the 10th Conference on Advances in Neural Information Processing Systems (NIPS-97), 1001–1007.
Bellman, R. 1957. Dynamic Programming. Princeton, NJ: Princeton University Press.
Bhuma, K., and Goldsmith, J. 2003. Bidirectional LAO* algorithm. In Proc. of the Indian International Conference on Artificial Intelligence (IICAI), 980–992.
Bonet, B., and Geffner, H. 2003a. Faster heuristic search algorithms for planning with uncertainty and full feedback. In Proc. of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), 1233–1238. Morgan Kaufmann.
Bonet, B., and Geffner, H. 2003b. Labeled RTDP: Improving the convergence of real-time dynamic programming. In Proc. of the 13th International Conference on Automated Planning and Scheduling (ICAPS-03), 12–21.
Dai, P., and Goldsmith, J. 2006. LAO*, RLAO*, or BLAO*? In AAAI Workshop on Heuristic Search, 59–64.
Dai, P., and Goldsmith, J. 2007a. Multi-threaded BLAO* algorithm. In Proc. of the 20th International FLAIRS Conference, 56–62.
Dai, P., and Goldsmith, J. 2007b. Topological value iteration algorithm for Markov decision processes. In Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), 1860–1865.
Dai, P., and Hansen, E. A. 2007. Prioritizing Bellman backups without a priority queue. In Proc. of the 17th International Conference on Automated Planning and Scheduling (ICAPS-07), this volume.
Dai, P. 2007. Faster dynamic programming for Markov decision processes. Master's thesis, University of Kentucky, Lexington.
Ferguson, D., and Stentz, A. 2004. Focussed dynamic programming: Extensive comparative results. Technical Report CMU-RI-TR-04-13, Carnegie Mellon University, Pittsburgh, PA.
Hansen, E., and Zilberstein, S. 2001. LAO*: A heuristic search algorithm that finds solutions with loops. Artificial Intelligence J. 129:35–62.
Littman, M. L.; Dean, T.; and Kaelbling, L. P. 1995. On the complexity of solving Markov decision problems. In Proc. of the 11th Annual Conference on Uncertainty in Artificial Intelligence (UAI-95), 394–402.
McMahan, H. B., and Gordon, G. J. 2005. Fast exact planning in Markov decision processes. In Proc. of the 15th International Conference on Automated Planning and Scheduling (ICAPS-05).
Moore, A., and Atkeson, C. 1993. Prioritized sweeping: Reinforcement learning with less data and less real time. Machine Learning 13:103–130.
Puterman, M. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley, New York.
Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. The MIT Press.
