Recent Advances in Batch Mode Reinforcement Learning: Synthesizing Artificial Trajectories

R. Fonteneau (1), S.A. Murphy (2), L. Wehenkel (1), D. Ernst (1)

(1) University of Liège, Belgium – (2) University of Michigan, USA
GRASCOMP's Day, November 3rd, 2011

Outline
● Batch Mode Reinforcement Learning
– Reinforcement Learning & Batch Mode Reinforcement Learning
– Formalization, Objectives, Main Difficulties & Usual Approach
● A New Approach: Synthesizing Artificial Trajectories
– Artificial Trajectories
– Estimating the Performances of Policies
– Computing Bounds & Inferring Safe Policies
– Sampling Strategies
– Connection to Classic Batch Mode Reinforcement Learning
● Conclusions

Batch Mode Reinforcement Learning

Reinforcement Learning
[Figure: the agent-environment loop – the agent applies actions to the environment and receives observations and rewards; the slide also shows examples of rewards.]
● Reinforcement Learning (RL) aims at finding a policy maximizing received rewards by interacting with the environment

Batch Mode Reinforcement Learning
● All the available information is contained in a batch collection of data
● Batch mode RL aims at computing a (near-)optimal policy from this collection of data
[Figure: batch mode RL maps a finite collection of trajectories of the agent (actions, observations, rewards exchanged with the environment) to a (near-)optimal policy.]

Formalization
● System dynamics:
● Reward function:
● Performance of a policy:
– Expected T-stage return:
– Value-at-risk:
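These equations did not survive extraction; as a hedged reconstruction based on the authors' companion papers (notation assumed, including the exact value-at-risk convention), the quantities above are typically written as:

\begin{align*}
  &x_{t+1} = f(x_t, u_t, w_t), \qquad w_t \sim p_W(\cdot)
    && \text{(system dynamics)} \\
  &r_t = \rho(x_t, u_t, w_t)
    && \text{(reward function)} \\
  &J^h(x_0) = \mathbb{E}_{w_0,\dots,w_{T-1}}\!\Big[\sum_{t=0}^{T-1}
      \rho\big(x_t, h(t, x_t), w_t\big)\Big]
    && \text{(expected $T$-stage return)} \\
  &\mathrm{VaR}^h_b(x_0) = \sup\Big\{c \in \mathbb{R} \;:\;
      P\Big(\textstyle\sum_{t=0}^{T-1} r_t < c\Big) \le b\Big\}
    && \text{(value-at-risk at level $b$)}
\end{align*}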

Formalization
● The system dynamics, reward function and disturbance probability distribution are unknown
● Instead, we have access to a sample of one-step system transitions:
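In the papers accompanying this talk, this sample is denoted (a reconstruction, notation assumed):

\[
  \mathcal{F}_n = \big\{ (x^l, u^l, r^l, y^l) \big\}_{l=1}^{n},
  \qquad y^l = f(x^l, u^l, w^l), \quad r^l = \rho(x^l, u^l, w^l).
\]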

Objectives
● Main goal: finding a "good" policy
● Many associated subproblems:
– Evaluating the performance of a given policy
– Computing performance guarantees and safe policies
– Generating additional sample transitions
– ...

Main Difficulties & Usual Approach

Main Difficulties
● The functions are unknown (and not accessible to simulation)
● The state space and/or the action space are large or continuous
● Highly stochastic environments

Usual Approach
● Combine dynamic programming with function approximators (neural networks, regression trees, SVMs, linear regression over basis functions, etc.); a sketch follows this list
● Function approximators have two main roles:
– To offer a concise representation of the state-action value function for deriving value / policy iteration algorithms
– To generalize the information contained in the finite sample
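To make this usual approach concrete, here is a minimal fitted Q iteration sketch using a tree-based regressor; the regressor choice, the data layout and the function names are assumptions of this illustration, not the authors' implementation.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, T):
    # transitions: list of one-step transitions (x, u, r, y) with x, u, y arrays
    # actions: finite set of candidate actions; T: optimization horizon
    X = np.array([np.concatenate([x, u]) for (x, u, _, _) in transitions])
    rewards = np.array([r for (_, _, r, _) in transitions])
    Q = None
    for _ in range(T):
        if Q is None:
            targets = rewards                     # Q_1(x, u) = observed reward
        else:                                     # Q_N = r + max_u' Q_{N-1}(y, u')
            targets = rewards + np.array([
                max(Q.predict(np.concatenate([y, up])[None, :])[0]
                    for up in actions)
                for (_, _, _, y) in transitions])
        Q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return Q                                      # greedy policy: argmax_u Q(x, u)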

Remaining Challenges
● The black-box nature of function approximators may have some unwanted effects: hazardous generalization, difficulty in computing performance guarantees, inefficient use of optimal trajectories, no straightforward sampling strategies, ...

A New Approach: Synthesizing Artificial Trajectories

Artificial Trajectories
● Artificial trajectories are (ordered) sequences of elementary pieces of trajectories:
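Concretely (a reconstruction from the companion papers, notation assumed), an artificial trajectory is a sequence of T one-step transitions taken from the sample,

\[
  \big[ (x^{l_0}, u^{l_0}, r^{l_0}, y^{l_0}), \dots,
        (x^{l_{T-1}}, u^{l_{T-1}}, r^{l_{T-1}}, y^{l_{T-1}}) \big],
  \qquad l_t \in \{1, \dots, n\},
\]

where consecutive pieces need not be dynamically consistent: in general \(y^{l_t} \neq x^{l_{t+1}}\).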

Estimating the Performances of Policies – Expected Return
● If the system dynamics and the reward function were accessible to simulation, Monte Carlo estimation would allow estimating the performance of h
● We propose an approach that mimics Monte Carlo (MC) estimation by rebuilding p artificial trajectories from one-step system transitions
● These artificial trajectories are built so as to minimize the discrepancy (measured with a distance metric ∆) with a classical MC sample that could be obtained by simulating the system with the policy h; each one-step transition is used at most once
● We average the cumulated returns over the p artificial trajectories to obtain the Model-free Monte Carlo (MFMC) estimator of the expected return of h, sketched below:
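A minimal sketch of the MFMC estimator as described above (greedy nearest-neighbour rebuilding; the function signatures are assumptions of this illustration, and the sample must contain at least p*T transitions):

def mfmc_estimator(transitions, h, x0, p, T, dist):
    # transitions: list of one-step transitions (x, u, r, y)
    # h: policy, h(t, x) -> action; x0: initial state
    # p: number of artificial trajectories; T: horizon
    # dist: distance metric Delta over state-action pairs
    available = set(range(len(transitions)))   # each transition used at most once
    returns = []
    for _ in range(p):
        x, ret = x0, 0.0
        for t in range(T):
            u = h(t, x)                         # action the policy would take
            # not-yet-used transition closest to (x, u) w.r.t. Delta
            l = min(available, key=lambda i: dist((x, u), transitions[i][:2]))
            available.remove(l)
            _, _, r, y = transitions[l]
            ret += r                            # accumulate the observed reward
            x = y                               # continue from the successor state
        returns.append(ret)
    return sum(returns) / p                     # MFMC estimate of the expected return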

Estimating the Performances of Policies – Monte Carlo Estimator
● Illustration with p = 3, T = 4 – MODEL OR SIMULATOR REQUIRED!

Estimating the Performances of Policies – Model-free Monte Carlo Estimator
● Illustration with p = 3, T = 4

Estimating the Performances of Policies – Additional Assumptions

Estimating the Performances of Policies – Theoretical Results

Estimating the Performances of Policies – Experimental Results

Estimating the Performances of Policies – Value-at-Risk
● Consider again the p artificial trajectories that were rebuilt by the MFMC estimator
● The Value-at-Risk of the policy h can then be estimated straightforwardly, as sketched below:
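A hedged sketch of that estimate (the exact quantile convention is an assumption), reusing the p cumulated returns computed by the MFMC procedure above:

import math

def mfmc_value_at_risk(returns, b):
    # returns: the p cumulated returns of the rebuilt artificial trajectories
    # b: risk level in (0, 1)
    k = max(math.ceil(b * len(returns)) - 1, 0)  # index of the empirical b-quantile
    return sorted(returns)[k]                    # estimated Value-at-Risk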

Deterministic Case: Computing Bounds – Lower Bound from a Single Trajectory
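The bound itself did not survive extraction; as a hedged reconstruction in the spirit of the authors' ADPRL 2009 paper, a lower bound on the T-stage return of an open-loop policy \((u_0, \dots, u_{T-1})\), computed from a single artificial trajectory, has the form

\[
  \sum_{t=0}^{T-1} r^{l_t}
  \;-\;
  \sum_{t=0}^{T-1} L_{Q_{T-t}} \,
    \Delta\big( (y^{l_{t-1}}, u_t), (x^{l_t}, u^{l_t}) \big),
  \qquad y^{l_{-1}} := x_0,
\]

where the \(L_{Q_{T-t}}\) are Lipschitz constants of the state-action value functions and each penalty term accounts for the gap between the end of one transition and the start of the next.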

Deterministic Case: Computing Bounds – Maximal Bounds

Deterministic Case: Computing Bounds – Tightness of Maximal Bounds

Inferring Safe Policies – From Lower Bounds to Cautious Policies
● Consider the set of open-loop policies:
● For such policies, bounds can be computed in a similar way
● We can then search for a specific policy for which the associated lower bound is maximized:
● An O(Tn²) algorithm for doing this: the CGRL algorithm (Cautious approach to Generalization in RL), sketched below
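A hedged sketch of an O(Tn²) dynamic program in the spirit of CGRL (deterministic case). The Lipschitz-like penalty constants L[t], the state metric dist and the exact penalization are assumptions of this illustration: it chains T one-step transitions so that the sum of their rewards, minus penalties on the gaps between consecutive transitions, is maximal; the actions of the chained transitions then give a cautious open-loop policy.

def cgrl_lower_bound(transitions, x0, L, T, dist):
    # transitions: list of one-step transitions (x, u, r, y)
    # x0: initial state; L: list of T penalty constants; dist: metric over states
    n = len(transitions)
    # V[l]: best penalized return of a chain of remaining stages starting at l
    V = [r for (_, _, r, _) in transitions]      # last stage: reward only
    for t in range(T - 2, -1, -1):               # backward over stages
        newV = []
        for (x, u, r, y) in transitions:
            # best successor l', penalizing the jump from y to x^{l'}
            best = max(V[lp] - L[t + 1] * dist(y, transitions[lp][0])
                       for lp in range(n))
            newV.append(r + best)
        V = newV                                 # each stage costs O(n^2)
    # penalize the gap between x0 and the first transition of the chain
    return max(V[l] - L[0] * dist(x0, transitions[l][0]) for l in range(n))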

Inferring Safe Policies – Convergence

Inferring Safe Policies – Experimental Results
● The puddle world benchmark

Inferring Safe Policies – Experimental Results
[Figure: trajectories of CGRL and FQI (Fitted Q Iteration) on the puddle world – first when the state space is uniformly covered by the sample, then when the information about the puddle area is removed.]

Inferring Safe Policies – Bonus

Sampling Strategies – An Artificial Trajectories Viewpoint
● Given a sample of system transitions, how can we determine where to sample additional transitions?
● We define the set of candidate optimal policies:
● A transition is said to be compatible with this set if ..., and we denote by ... the set of all such compatible transitions

Sampling Strategies – An Artificial Trajectories Viewpoint
● Iterative scheme: ... with ...
● Conjecture:

Connection to Classic Batch Mode RL – Towards a New Paradigm for Batch Mode RL
● FQI (evaluation mode) with k-NN:
[Figure: the recursive k-NN calls form a k-ary tree of transition indices l_1, ..., l_k; l_{1,1}, ..., l_{k,k}; ...; l_{1,1,...,1}, ..., l_{k,k,...,k} – each root-to-leaf path is an artificial trajectory.]
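A hedged sketch of FQI in evaluation mode with a k-nearest-neighbour approximator (signatures are assumptions, and a stationary policy h(x) is assumed for simplicity). The exhaustive recursion, with its k^T leaves, is kept deliberately: expanding it makes the k-ary tree of transition indices in the figure explicit.

def fqi_knn_evaluation(transitions, h, x0, T, k, dist):
    # transitions: list of one-step transitions (x, u, r, y)
    # h: stationary policy, h(x) -> action; x0: initial state
    # T: horizon; k: number of neighbours; dist: metric over (x, u) pairs
    def knn(x, u):
        # indices of the k transitions whose (x, u) pair is closest
        return sorted(range(len(transitions)),
                      key=lambda l: dist((x, u), transitions[l][:2]))[:k]

    def Q(t, x, u):                  # t = number of remaining stages
        if t == 0:
            return 0.0
        vals = []
        for l in knn(x, u):          # average over the k nearest transitions
            _, _, r, y = transitions[l]
            vals.append(r + Q(t - 1, y, h(y)))   # follow policy h at y
        return sum(vals) / k

    return Q(T, x0, h(x0))           # estimated T-stage return of h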

Connection to Classic Batch Mode RL – Towards a New Paradigm for Batch Mode RL

Conclusions
● Rebuilding artificial trajectories: a new approach for batch mode RL
● Several types of problems can be addressed
● Towards a new paradigm for developing new algorithms?

"Batch mode reinforcement learning based on the synthesis of artificial trajectories". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. Submitted. "Generating informative trajectories by using bounds on the return of control policies". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. Proceedings of the Workshop on Active Learning and Experimental Design 2010 (in conjunction with AISTATS 2010), 2page highlight paper, Chia Laguna, Sardinia, Italy, May 16, 2010. "Model-free Monte Carlo-like policy evaluation". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), JMLR W&CP 9, pp 217-224, Chia Laguna, Sardinia, Italy, May 13-15, 2010. "A cautious approach to generalization in reinforcement learning". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. Proceedings of The International Conference on Agents and Artificial Intelligence (ICAART 2010), 10 pages, Valencia, Spain, January 22-24, 2010. "Inferring bounds on the performance of a control policy from a sample of trajectories". R. Fonteneau, S.A. Murphy, L. Wehenkel and D. Ernst. In Proceedings of The IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2009), 7 pages, Nashville, Tennessee, USA, 30 March-2 April, 2009.
