Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories

R. Fonteneau(1),(2), joint work with Susan A. Murphy(3), Louis Wehenkel(2) and Damien Ernst(2)

(1) Inria Lille – Nord Europe, France; (2) University of Liège, Belgium; (3) University of Michigan, USA

December 10th, 2012 CMS Winter Meeting – Montreal, Canada

Outline
● Batch Mode Reinforcement Learning
  – Reinforcement Learning
  – Batch Mode Reinforcement Learning
  – Objectives
  – Main Difficulties & Usual Approach
  – Remaining Challenges
● A New Approach: Synthesizing Artificial Trajectories
  – Formalization
  – Artificial Trajectories: What For?
● Estimating the Performances of Policies
  – Model-free Monte Carlo Estimation
  – The MFMC Algorithm
  – Theoretical Analysis
  – Experimental Illustration
● Conclusions

Batch Mode Reinforcement Learning

Reinforcement Learning

[Diagram: an agent interacts with its environment by taking actions and receiving observations and rewards; the examples of rewards shown on the slide are not reproduced here.]

● Reinforcement Learning (RL) aims at finding a policy maximizing the rewards received by interacting with the environment

Batch Mode Reinforcement Learning
● All the available information is contained in a batch collection of data
● Batch mode RL aims at computing a (near-)optimal policy from this collection of data

[Diagram: instead of interacting with the environment through actions, observations and rewards, the agent is given a finite collection of its trajectories, from which batch mode RL derives a near-optimal decision strategy.]

● Examples of batch mode RL problems: dynamic treatment regimes (inferred from clinical data), marketing optimization (based on customer histories), finance, etc.

Batch Mode Reinforcement Learning

[Illustration: p patients followed over time steps 0, 1, …, T; given a batch collection of trajectories of patients, what is the 'optimal' treatment?]

Objectives
● Main goal: finding a "good" policy
● Many associated subgoals:
  – Evaluating the performance of a given policy
  – Computing performance guarantees
  – Computing safe policies
  – Choosing how to generate additional transitions
  – ...

Main Difficulties & Usual Approach
● Main difficulties of the batch mode setting:
  – Dynamics and reward functions are unknown (and not accessible to simulation)
  – The state space and/or the action space are large or continuous
  – The environment may be highly stochastic
● Usual approach:
  – Combine dynamic programming with function approximators (neural networks, regression trees, SVMs, linear regression over basis functions, etc.)
  – Function approximators have two main roles:
    ● To offer a concise representation of the state-action value function for deriving value / policy iteration algorithms
    ● To generalize the information contained in the finite sample

Remaining Challenges
● The black-box nature of function approximators may have some unwanted effects:
  – hazardous generalization
  – difficulties in computing performance guarantees
  – inefficient use of optimal trajectories

A New Approach: Synthesizing Artificial Trajectories

Formalization – Reinforcement Learning
● System dynamics
● Reward function
● Performance of a policy
(The corresponding equations are images on the slide; a reconstruction follows.)
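The equations are not reproduced in this text version. As a hedged reconstruction of the standard finite-horizon formulation used in the referenced papers (notation may differ slightly from the original slides):

\[
x_{t+1} = f(x_t, u_t, w_t), \qquad w_t \sim p_W(\cdot), \qquad t = 0, 1, \ldots, T-1,
\]
\[
r_t = \rho(x_t, u_t, w_t),
\]
\[
J^h(x_0) = \mathbb{E}_{w_0, \ldots, w_{T-1}}\!\left[ \sum_{t=0}^{T-1} \rho\big(x_t, h(t, x_t), w_t\big) \right],
\qquad \text{where } u_t = h(t, x_t).
\]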

Formalization – Batch Mode Reinforcement Learning
● The system dynamics, reward function and disturbance probability distribution are unknown
● Instead, we have access to a sample of one-step system transitions (see below)
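The sample appears as an equation on the slide; a hedged reconstruction consistent with the referenced papers:

\[
\mathcal{F}_n = \big\{ (x^l, u^l, r^l, y^l) \big\}_{l=1}^{n},
\qquad r^l = \rho(x^l, u^l, w^l), \qquad y^l = f(x^l, u^l, w^l),
\]

where each disturbance $w^l$ is drawn independently from $p_W(\cdot)$.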

Formalization – Artificial Trajectories
● Artificial trajectories are (ordered) sequences of elementary pieces of trajectories:
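The definition is given as an equation on the slide; a hedged reconstruction:

\[
\big[ (x^{l_0}, u^{l_0}, r^{l_0}, y^{l_0}), \; (x^{l_1}, u^{l_1}, r^{l_1}, y^{l_1}), \; \ldots, \; (x^{l_{T-1}}, u^{l_{T-1}}, r^{l_{T-1}}, y^{l_{T-1}}) \big],
\]

i.e. an ordered sequence of T one-step transitions taken from the sample $\mathcal{F}_n$, which need not be consecutive pieces of a single real trajectory.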

Artificial Trajectories: What For?
● Artificial trajectories can help for:
  – Estimating the performances of policies
  – Computing performance guarantees
  – Computing safe policies
  – Choosing how to generate additional transitions

Estimating the Performances of Policies

Model-free Monte Carlo Estimation
● If the system dynamics and the reward function were accessible to simulation, then Monte Carlo (MC) estimation would allow estimating the performance of h (a model or simulator would be required)
● We propose an approach that mimics MC estimation by rebuilding p artificial trajectories from one-step system transitions
● These artificial trajectories are built so as to minimize the discrepancy (using a distance metric ∆) with a classical MC sample that could be obtained by simulating the system with the policy h; each one-step transition is used at most once
● We average the cumulated returns over the p artificial trajectories to obtain the Model-free Monte Carlo (MFMC) estimator of the expected return of h

The MFMC Algorithm
● Step-by-step example with T = 3, p = 2, n = 8 (graphical illustration not reproduced; a Python sketch of the estimator follows)
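As a complement to the graphical example, here is a minimal Python sketch of the MFMC estimator described above. The transition format, the policy callable h, and the distance function dist are choices made for this illustration, not the notation of the slides.

import numpy as np

def mfmc_estimate(transitions, h, x0, T, p, dist):
    """Model-free Monte Carlo (MFMC) estimator (sketch).

    transitions: list of one-step transitions (x, u, r, y)
    h:           policy, h(t, x) -> action
    x0:          initial state
    T:           horizon
    p:           number of artificial trajectories to rebuild
    dist:        distance Delta((x, u), (x', u')) on the state-action space
    """
    available = list(range(len(transitions)))  # each transition used at most once
    returns = []
    for _ in range(p):
        state, ret = x0, 0.0
        for t in range(T):
            u = h(t, state)
            # pick the still-available transition whose (x^l, u^l) is closest to (state, u)
            best = min(available,
                       key=lambda l: dist((state, u),
                                          (transitions[l][0], transitions[l][1])))
            available.remove(best)
            _, _, r, y = transitions[best]
            ret += r           # accumulate the observed reward
            state = y          # jump to the successor state of that transition
        returns.append(ret)
    # average cumulated return over the p artificial trajectories
    return float(np.mean(returns))

Since each one-step transition is used at most once, rebuilding p disjoint trajectories of length T requires p·T ≤ n.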

Theoretical Analysis – Assumptions
● Lipschitz continuity assumptions on the dynamics, the reward function and the policy (equations not reproduced)
● Distance metric ∆ on the state-action space
● k-sparsity: based on the distance of (x,u) to its k-th nearest neighbor (using the distance ∆) in the sample, the k-sparsity of the sample can be seen as the smallest radius such that all ∆-balls in X×U contain at least k elements from the sample
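The corresponding formula is not reproduced here. A hedged reconstruction of the k-sparsity, with notation chosen for this text version ($\alpha_k$ for the k-sparsity, $d_k$ for the distance to the k-th nearest neighbour):

\[
\alpha_k(\mathcal{F}_n) \;=\; \sup_{(x,u) \in X \times U} d_k\big((x,u), \mathcal{F}_n\big),
\]

where $d_k\big((x,u), \mathcal{F}_n\big)$ denotes the distance (according to $\Delta$) from $(x,u)$ to its k-th nearest neighbour among the pairs $(x^l, u^l)$ of the sample.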

Theoretical Analysis – Theoretical Results
● Expected value of the MFMC estimator: Theorem (statement and constants not reproduced)
● Variance of the MFMC estimator: Theorem (statement and constants not reproduced)

Experimental Illustration – Benchmark
● Dynamics, reward function and policy to evaluate: equations not reproduced
● Other information: p_W(.) is uniform
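The benchmark equations are not reproduced above. Purely for illustration, here is how the mfmc_estimate sketch from earlier could be driven on a hypothetical one-dimensional toy problem; the dynamics, reward, policy and distance below are stand-ins, not the benchmark from the slides (only the parameters n, T, p and x0 are taken from the slides).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (stand-in, not the benchmark from the slides).
def f(x, u, w):            # dynamics
    return np.sin(x + u) + w

def rho(x, u, w):          # reward
    return np.exp(-x**2 - u**2) + w

def h(t, x):               # policy to evaluate
    return -0.5 * x

def dist(xu_a, xu_b):      # distance Delta on the state-action space
    (xa, ua), (xb, ub) = xu_a, xu_b
    return abs(xa - xb) + abs(ua - ub)

# Generate n one-step transitions (x, u, r, y) with uniform disturbances.
n, T, p, x0 = 10_000, 15, 10, -0.5
transitions = []
for _ in range(n):
    x = rng.uniform(-1.0, 1.0)
    u = rng.uniform(-1.0, 1.0)
    w = rng.uniform(-0.1, 0.1)
    transitions.append((x, u, rho(x, u, w), f(x, u, w)))

print(mfmc_estimate(transitions, h, x0, T, p, dist))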

Experimental Illustration – Influence of n
● Simulations for p = 10, n = 100 … 10,000, uniform grid, T = 15, x0 = -0.5
[Plots comparing the Model-free Monte Carlo estimator (n = 100 … 10,000, p = 10) with the Monte Carlo estimator (p = 10) are not reproduced.]

Experimental Illustration – Influence of p
● Simulations for p = 1 … 100, n = 10,000, uniform grid, T = 15, x0 = -0.5
[Plots comparing the Model-free Monte Carlo estimator (p = 1 … 100, n = 10,000) with the Monte Carlo estimator (p = 1 … 100) are not reproduced.]

Experimental Illustration – MFMC vs FQI-PE
● Comparison with the FQI-PE algorithm using k-NN, n = 100, T = 5 (results not reproduced)

Conclusions

● Stochastic setting:
  – MFMC: estimator of the expected return
    ● Bias / variance analysis
    ● Illustration
  – Estimator of the Value-at-Risk (VaR)
● Deterministic setting:
  – Continuous action space:
    ● Bounds on the return
    ● CGRL: convergence + additional properties
    ● Illustration
  – Finite action space:
    ● Sampling strategy
    ● Convergence
    ● Illustration


References

"Batch mode reinforcement learning based on the synthesis of artificial trajectories". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. To appear in Annals of Operations Research, 2012. "Generating informative trajectories by using bounds on the return of control policies". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. Proceedings of the Workshop on Active Learning and Experimental Design 2010 (in conjunction with AISTATS 2010), 2-page highlight paper, Chia Laguna, Sardinia, Italy, May 16, 2010. "Model-free Monte Carlo-like policy evaluation". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), JMLR W&CP 9, pp 217-224, Chia Laguna, Sardinia, Italy, May 13-15, 2010. "A cautious approach to generalization in reinforcement learning". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. Proceedings of The International Conference on Agents and Artificial Intelligence (ICAART 2010), 10 pages, Valencia, Spain, January 22-24, 2010. "Inferring bounds on the performance of a control policy from a sample of trajectories". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In Proceedings of The IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2009), 7 pages, Nashville, Tennessee, USA, 30 March-2 April, 2009. Acknowledgements to F.R.S – FNRS for its financial support.

Appendix

Estimating the Performances of Policies – Risk-sensitive Criterion
● Consider again the p artificial trajectories that were rebuilt by the MFMC estimator
● The Value-at-Risk of the policy h can be straightforwardly estimated from their cumulated returns (formula shown on the slide; see the reconstruction below)
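The estimator itself appears as an equation on the slide. A hedged reconstruction, reading it as an empirical quantile of the rebuilt returns (notation chosen here): denoting by $\hat{R}^{(1)} \le \hat{R}^{(2)} \le \ldots \le \hat{R}^{(p)}$ the sorted cumulated returns of the p artificial trajectories, the Value-at-Risk at level $b \in (0,1)$ can be estimated by

\[
\widehat{\mathrm{VaR}}_b(h) \;=\; \hat{R}^{(\lceil b\,p \rceil)}.
\]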

Deterministic Case: Computing Bounds – Bounds from a Single Trajectory
● Given an artificial trajectory, a lower bound and an upper bound on the return of the policy can be derived
● Proposition: let an artificial trajectory be given; then the associated lower and upper bounds hold (statement not reproduced)

Deterministic Case: Computing Bounds – Maximal Bounds
● Maximal lower and upper bounds (definitions not reproduced)

Deterministic Case: Computing Bounds – Tightness of Maximal Bounds
● Proposition: tightness of the maximal bounds (statement not reproduced)

Inferring Safe Policies – From Lower Bounds to Cautious Policies
● Consider the set of open-loop policies (sequences of actions)
● For such policies, bounds can be computed in a similar way
● We can then search for a specific policy for which the associated lower bound is maximized
● An O(Tn²) algorithm for doing this: the CGRL algorithm (Cautious approach to Generalization in RL); a hedged sketch follows
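The bound and the CGRL algorithm are only named on the slide. Purely as an illustration of how an O(Tn²) dynamic-programming search over one-step transitions can maximize a lower bound of this shape, here is a sketch; the penalty function is a hypothetical stand-in for the Lipschitz discrepancy terms of the actual bound, and the whole function is an assumption, not the exact CGRL algorithm.

import numpy as np

def cgrl_like_plan(transitions, T, x0, penalty):
    """Viterbi-style O(T * n^2) search (sketch) for a sequence of
    transitions maximizing a lower bound of the form
        sum_t [ r^{l_t} - penalty(t, previous_end_state, l_t) ],
    where `penalty` is a stand-in for the Lipschitz discrepancy terms.
    Returns the open-loop action sequence read off the selected transitions.
    """
    n = len(transitions)
    best = np.full((T, n), -np.inf)   # best[t, l]: best partial score ending at transition l
    back = np.zeros((T, n), dtype=int)

    for l, (x, u, r, y) in enumerate(transitions):
        best[0][l] = r - penalty(0, x0, l)

    for t in range(1, T):
        for l, (x, u, r, y) in enumerate(transitions):
            for lp in range(n):
                prev_end = transitions[lp][3]
                score = best[t - 1][lp] + r - penalty(t, prev_end, l)
                if score > best[t][l]:
                    best[t][l], back[t][l] = score, lp

    # backtrack the maximizing sequence of transitions
    l = int(np.argmax(best[T - 1]))
    seq = [l]
    for t in range(T - 1, 0, -1):
        l = int(back[t][l])
        seq.append(l)
    seq.reverse()
    return [transitions[l][1] for l in seq]  # open-loop actions u^{l_0}, ..., u^{l_{T-1}}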

Inferring Safe Policies – Convergence
● Theorem (statement not reproduced)

Inferring Safe Policies – Experimental Results
● The puddle world benchmark
[Figures comparing CGRL and FQI (Fitted Q Iteration), (i) when the state space is uniformly covered by the sample and (ii) when information about the puddle area is removed, are not reproduced.]

Inferring Safe Policies – Bonus
● Theorem (statement not reproduced)

Sampling Strategies – An Artificial Trajectories Viewpoint
● Given a sample of system transitions, how can we determine where to sample additional transitions?
● We define the set of candidate optimal policies
● A transition is said to be compatible with this set if it satisfies a compatibility condition (given on the slide, not reproduced); we denote the set of all such compatible transitions accordingly

Sampling Strategies – An Artificial Trajectories Viewpoint
● Iterative scheme (equations not reproduced)
● Conjecture (statement not reproduced)

Sampling Strategies – Illustration
● Action space, dynamics and reward function, horizon, initial state (details not reproduced)
● Total number of policies
● Number of transitions needed for discriminating

Connexion to Classic Batch Mode RL – Towards a New Paradigm for Batch Mode RL
● FQI (evaluation mode) with k-NN
[Diagram (not reproduced): a tree of transition indices l_1, …, l_k, then l_{1,1}, …, l_{k,k}, down to l_{1,1,…,1}, …, l_{k,k,…,k}, illustrating how the k-NN FQI-PE recursion implicitly combines one-step transitions over the horizon.]

Connexion to Classic Batch Mode RL – Towards a New Paradigm for Batch Mode RL
● The k-NN FQI-PE algorithm (recursion not reproduced)
● The k-NN FQI-PE estimator (formula not reproduced)
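The recursion is not reproduced above. The following is a minimal Python sketch of a k-NN fitted-Q-iteration policy-evaluation scheme of this kind, written to mirror the MFMC sketch earlier; the transition format, the policy h, and the distance dist are the same illustrative assumptions as before, so this is a sketch of the general technique rather than the exact algorithm from the slides.

import numpy as np

def knn_fqi_pe(transitions, h, x0, T, k, dist):
    """k-NN fitted Q-iteration for policy evaluation (sketch).

    Builds Q_1, ..., Q_T recursively by averaging, over the k nearest
    one-step transitions of a state-action pair, the observed reward plus
    the previous iterate evaluated at the successor state.
    """
    xs = [tr[0] for tr in transitions]
    us = [tr[1] for tr in transitions]

    def knn(x, u):
        # indices of the k transitions whose (x^l, u^l) is closest to (x, u)
        order = sorted(range(len(transitions)),
                       key=lambda l: dist((x, u), (xs[l], us[l])))
        return order[:k]

    def q(N, x, u):
        # Q_N(x, u): recursive k-NN estimate with N steps to go
        total = 0.0
        for l in knn(x, u):
            _, _, r, y = transitions[l]
            if N > 1:
                r += q(N - 1, y, h(T - N + 1, y))  # continue with policy h
            total += r
        return total / k

    # estimated return of policy h from x0 over horizon T
    return q(T, x0, h(0, x0))

Note that the recursion expands k branches per step, i.e. on the order of k^T nearest-neighbour combinations over the horizon, which matches the tree picture sketched in the diagram above.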
