Proceedings of the 25th Chinese Control Conference 7–11 August, 2006, Harbin, Heilongjiang

A New Algorithm for Finding Numerical Solutions of Optimal Feedback Control Law

Bao-Zhu Guo (1), Bing Sun (2)

1. Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, P. R. China. E-mail: [email protected]
2. School of Computational and Applied Mathematics, University of the Witwatersrand, Private Bag 3, Wits 2050, Johannesburg, South Africa. E-mail: [email protected]

Abstract: A new algorithm is developed for finding numerical solutions of the optimal feedback control of a class of general finite-dimensional multi-input systems. The algorithm is based on the fact that the value function of the optimal control problem is the viscosity solution of its associated Hamilton-Jacobi-Bellman equation. The algorithm is validated on an example for which closed-form optimal feedback control-trajectory pairs are available. It is shown that the numerical solutions completely tally with the analytical solutions.

Key Words: Optimal Feedback Control, Viscosity Solution, Dynamic Programming, Numerical Solution

1 INTRODUCTION

The determination of optimal or near-optimal feedback control laws is the most important task in optimal control theory. This is understandable given the huge success of linear quadratic Gaussian (LQG) theory and its applications over the past half-century ([8]). However, for most nonlinear control systems, and even for linear systems with nonlinear cost functionals, determining the optimal feedback control law analytically, as in LQG problems, is almost impossible. For many practical optimal control problems there is a unique optimal control law, and the powerful Pontryagin maximum principle serves not only as a necessary but also as a sufficient condition. However, two major difficulties usually arise in applying Pontryagin's maximum principle. The first is that one has to solve a two-point boundary value problem for a system of nonlinear differential equations, composed of the original state equation and its adjoint equation; the second is that the solution obtained from Pontryagin's maximum principle is usually in open-loop form. Consequently, numerical methods are almost the only feasible way of seeking the optimal feedback control law.

Roughly speaking, there are by now two different numerical approaches to optimal control problems ([10]). The first is usually referred to as the indirect method. The indirect method, mainly based on the powerful multiple shooting method, needs to solve a two-point boundary value problem. In applying the multiple shooting method, an initial guess is required to start the iterative numerical process, which demands that users understand the essential physical features of the problem well; this is usually not a trivial task. The second kind, the so-called direct method, involves a partial or full discretization process. Both variants convert the original problem into a large-scale nonlinear programming problem with constraints. Due to the simplification involved, partial discretization leads to a loss of reliability and accuracy, while for full discretization, when the degree of discretization and parameterization is very high, the computational complexity stands out and solving the problem becomes highly difficult ([2]). Moreover, the fatal drawback of both numerical methods is the same: the solutions are only in open-loop form.

Since Pontryagin's time, it has been realized that the dynamic programming method, which leads to the Hamilton-Jacobi-Bellman (HJB) equation satisfied by the value function of the optimal control problem, may yield the feedback control law provided that the HJB equation is solvable. Unfortunately, even for some simple optimal control problems with sufficiently smooth coefficients, the existence and uniqueness of solutions to the HJB equation are questionable. This state of the art remained unchanged until the introduction of viscosity solutions in the 1980s ([3]). The remarkable advantage of this new notion is that, in most situations, the value function of the optimal control problem is the unique viscosity solution of the associated HJB equation. Even so, seeking an analytical viscosity solution of the associated HJB equation for an optimal control problem is also formidable.

In this note, we give a new algorithm for seeking numerical solutions of the optimal feedback control law for very general problems. This algorithm, based on the HJB equation and viscosity solution theory, is constructive and applicable to very general optimal control systems. The paper is organized as follows. Section 2 is devoted to the theoretical foundation of the new algorithm. In Section 3, the new algorithm is constructed step by step for general finite-dimensional optimal control problems. To verify the validity of the method, an example for which closed-form solutions of the control-trajectory pairs are available is tested in Section 4. It is shown that both the numerical optimal feedback control law and the optimal trajectory completely tally with the analytical solutions. It should be pointed out that, for the sake of clarity and simplicity, the convergence of the algorithm is not addressed in this note but is left for a forthcoming work.

(This work was supported by the National Natural Science Foundation of China and the National Research Foundation of South Africa. IEEE Catalog Number: 06EX1310.)

2 PRELIMINARY

Consider the following multi-input (MI) control system:

$$y'(t) = f(y(t), u(t)), \quad t \in [0,T], \qquad y(0) = z, \tag{1}$$

where y(·) ∈ R^n is the state, u(·) ∈ U[0,T] = L^∞([0,T]; U) is the admissible control with compact set U ⊂ R^m, and z is the initial datum. Assume that f : R^n × U → R^n is continuous in its variables. Given a running cost L(t, y, u) and a terminal cost ψ(y), the optimal control problem for the system (1) is to seek an optimal control u*(·) ∈ U[0,T] such that

$$J(u^*(\cdot)) = \inf_{u(\cdot)\in\,\mathcal{U}[0,T]} J(u(\cdot)), \tag{2}$$

where J is the cost functional given by

$$J(u(\cdot)) = \int_0^T L(\tau, y(\tau), u(\tau))\,d\tau + \psi(y(T)). \tag{3}$$

Instead of considering the optimal control problem (1)-(2) directly, we utilize Bellman's dynamic programming method, one of the three recognized milestones in modern optimal control theory, to deal with a family of optimal control problems initiated from (1) and (2). That is, for any (t,x) ∈ [0,T) × R^n, consider the following optimal control problem:

$$y'_{t,x}(s) = f(y_{t,x}(s), u(s)), \quad s \in [t,T], \qquad y_{t,x}(t) = x, \tag{4}$$

with the cost functional

$$J_{t,x}(u(\cdot)) = \int_t^T L(\tau, y_{t,x}(\tau), u(\tau))\,d\tau + \psi(y_{t,x}(T)). \tag{5}$$

Define the value function

$$V(t,x) = \inf_{u(\cdot)\in\,\mathcal{U}[t,T]} J_{t,x}(u(\cdot)), \qquad \forall\,(t,x)\in[0,T)\times\mathbb{R}^n, \tag{6}$$

with the terminal value

$$V(T,x) = \psi(x) \quad \text{for all } x\in\mathbb{R}^n. \tag{7}$$

It is well known that if V is smooth enough, say V ∈ C^1([0,T] × R^n), then it satisfies the following HJB equation ([1]):

$$\begin{cases} V_t(t,x) + \inf_{u\in U}\big\{ f(x,u)\cdot\nabla_x V(t,x) + L(t,x,u) \big\} = 0, & (t,x)\in[0,T)\times\mathbb{R}^n,\\[1mm] V(T,x) = \psi(x), & x\in\mathbb{R}^n, \end{cases} \tag{8}$$

where ∇_x V(t,x) stands for the gradient of V in x. The following Theorem 1 and Theorem 2 show the important role played by the value function in seeking the optimal feedback control law ([1]). We omit the proofs because they are standard in dynamic programming theory.

Theorem 1. Let V ∈ C^1([0,T] × R^n) be the value function. If there exists a control u*(·) ∈ U[0,T] such that

$$f(y^*(t), u^*(t))\cdot\nabla_x V(t, y^*(t)) + L(t, y^*(t), u^*(t)) = \inf_{u\in U}\big\{ f(y^*(t), u)\cdot\nabla_x V(t, y^*(t)) + L(t, y^*(t), u) \big\} \tag{9}$$

for almost all t ∈ [0,T], where y* is the state corresponding to u*, then u*(·) is an optimal control.

Theorem 2. Let V(t,x) ∈ C^1([0,T] × R^n) be the value function. Then (u*(·), y*(·)) is an optimal control-trajectory pair in feedback form if and only if

$$V_t(t, y^*(t)) + f(y^*(t), u^*(t))\cdot\nabla_x V(t, y^*(t)) + L(t, y^*(t), u^*(t)) = 0 \tag{10}$$

for almost all t ∈ [0,T), which we write in the usual way as

$$u^*(t) \in \arg\inf_{u\in U}\big\{ f(y^*(t), u)\cdot\nabla_x V(t, y^*(t)) + L(t, y^*(t), u) \big\}. \tag{11}$$

Finally, we state the following important Theorem 3, from which we see that the feedback control law can be synthesized through the value function.

Theorem 3. Let V(t,z) ∈ C^1([0,T] × R^n) be the value function. Suppose u(t,z) satisfies

$$f(z, u(t,z))\cdot\nabla_z V(t,z) + L(t,z,u(t,z)) = \inf_{u\in U}\big\{ f(z,u)\cdot\nabla_z V(t,z) + L(t,z,u) \big\}. \tag{12}$$

Substitute u(t,·) above into (1) to obtain

$$y'_z(t) = f(y_z(t), u(t, y_z(t))), \qquad y_z(0) = z, \quad t\in[0,T].$$

Then u*_z(t) = u(t, y_z(t)) is the feedback law of the optimal control problem (1)-(2).

Proof. The proof is simple. The details are omitted here, also due to the limitation of space. □

By Theorem 3, we see that in order to find the feedback control law, we need to know not only the value function V itself but also its gradient ∇_x V. Unfortunately, as we shall see from the example in the final section, equation (8) usually has no classical solution no matter how smooth the function f is. However, viscosity solution theory tells us that the value function V is the unique viscosity solution of (8) under some assumptions on f ([1]). It is obvious that for general nonlinear functions f and L, finding an analytic solution of (8) is almost impossible. This is the motivation of this note: to seek the solution of (8) numerically. Actually, several difference schemes for finding the viscosity solution of (8) are already available in the literature ([6]). The major purpose of this note is not to discuss the numerical viscosity solution of (8) itself but the numerical solution of the optimal feedback control law through (8). Nevertheless, viscosity solution theory for (8) is the foundation stone of our numerical algorithm. Having solved (8) numerically, we are able to follow the procedure of Theorem 3 to find the numerical solutions of the feedback control law.
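For illustration, the synthesis step of Theorem 3 can be sketched in a few lines of Python: given a (possibly numerically computed) value function V, the control at (t, z) is obtained by brute-force minimisation of f·∇_z V + L over a finite sample of the control set. This is our own illustration rather than anything from the paper; the names feedback_control, V, f, L and the sampling of U are assumptions.

```python
# Minimal sketch (not from the paper) of the synthesis step in Theorem 3:
# choose the control that minimises f(z,u) . grad_z V(t,z) + L(t,z,u)
# over a finite sample of U, with grad_z V obtained by finite differences.
import numpy as np

def feedback_control(t, z, V, f, L, U_samples, h=1e-6):
    """Approximate u(t,z) from (12) by brute-force minimisation over U_samples."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    n = z.size
    # central-difference approximation of grad_z V(t,z)
    grad = np.empty(n)
    for k in range(n):
        e = np.zeros(n); e[k] = h
        grad[k] = (V(t, z + e) - V(t, z - e)) / (2.0 * h)
    # evaluate the Hamiltonian integrand for each candidate control
    costs = [float(np.dot(f(z, u), grad)) + L(t, z, u) for u in U_samples]
    return U_samples[int(np.argmin(costs))]

# toy usage: the value function of the example in Section 4, evaluated at a
# point where it is differentiable (x = 0.5 > 0); the expected answer is +1
if __name__ == "__main__":
    V = lambda t, x: -(abs(x[0]) + (1.0 - t)) ** 2     # closed-form value function
    f = lambda x, u: np.array([u])                     # scalar integrator y' = u
    L = lambda t, x, u: 0.0                            # no running cost
    U = np.linspace(-1.0, 1.0, 21)                     # sampled control set
    print(feedback_control(0.0, np.array([0.5]), V, f, L, U))
```

This is, of course, only usable once V is known; the algorithm of the next section is precisely a way of producing the required values of V (more exactly, of ∇_x V · f) numerically.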

3 ALGORITHM FOR FINDING OPTIMAL FEEDBACK LAWS

In this section, we follow the procedure of Theorem 3 to construct the algorithm for finding numerical solutions of the optimal feedback control law. It consists of two coupled discretization steps. The first is to discretize the HJB equation (8) to find the feedback control law, and the second is to discretize the state equation (1) to find the optimal trajectory. In the last two decades, many different approximation schemes have been developed for the numerical solution of (8), such as the finite difference scheme ([6]), the method of vanishing viscosity ([4]) and the parallel algorithm based on the domain decomposition technique ([5]), to name just a few. As for the state equation (1), there are numerous classical numerical methods available, such as the Euler method, the Runge-Kutta method and the Hamming algorithm ([9]).

Notice that when we use Theorem 3 to find the optimal feedback control law, we actually need only ∇_x V · f, not ∇_x V itself. This motivates the design of our scheme. A key step is to approximate ∇_x V · f, by definition, as follows:

$$\nabla_x V(t,x)\cdot f(x,u) \;=\; \nabla_x V(t,x)\cdot\frac{\eta\, f(x,u)}{1+\|f(x,u)\|}\cdot\frac{1+\|f(x,u)\|}{\eta} \;\approx\; \frac{V\big(t,\; x+\eta f(x,u)/(1+\|f(x,u)\|)\big)-V(t,x)}{\eta}\,\big(1+\|f(x,u)\|\big), \tag{13}$$

where ‖·‖ denotes the Euclidean norm of R^n and η > 0 is a small number. Based on the above idea, we can now construct the algorithm for the numerical solutions of the optimal feedback control law.

Step 1: initial partition in time and space. Let t_j = T + jΔt, j = 0, 1, ..., N, be a backward partition of [0, T], where Δt = −T/N for some positive integer N. For any initially given ũ ∈ U, let the initial state be x_0 = z and x_i = x_{i−1} + η f(x_{i−1}, ũ)/(1 + ‖f(x_{i−1}, ũ)‖), i = 1, 2, ..., M, for some positive integer M.

Step 2: approximate equation (8) by a finite difference scheme. This is the approximation of (8) based on (13):

$$\begin{cases} \dfrac{V_i^{j+1}-V_i^{j}}{\Delta t}+\dfrac{V_{i+1}^{j}-V_i^{j}}{\eta}\big(1+\|f_i^{j}\|\big)+L_i^{j}=0,\\[2mm] u_i^{j+1}\in\arg\inf_{u\in U}\Big\{\dfrac{V_{i+1}^{j+1}-V_i^{j+1}}{\eta}\big(1+\|f(x_i,u)\|\big)+L(t_{j+1},x_i,u)\Big\}, \end{cases} \tag{14}$$

for i = 0, 1, ..., M and j = 0, 1, ..., N − 1, where V_i^j = V(t_j, x_i), f_i^j = f(x_i, u_i^j) and L_i^j = L(t_j, x_i, u_i^j). The stability condition for this finite difference scheme,

$$\frac{|\Delta t|}{\eta}\Big(1+\max_{i,j}\|f_i^{j}\|\Big)\le 1, \tag{15}$$

is assumed ([9]).

Step 3: initialization of the value function and control.

$$V_i^{0}=\psi(x_i),\qquad u_i^{0}\in\arg\inf_{u\in U}\Big\{\frac{V_{i+1}^{0}-V_i^{0}}{\eta}\big(1+\|f(x_i,u)\|\big)+L(T,x_i,u)\Big\},\qquad i=0,1,\dots,M.$$

Step 4: iteration for the HJB equation. By (14) and Step 3, we obtain all {{V_i^j}_{i=0}^{M}}_{j=0}^{N} and {{u_i^j}_{i=0}^{M}}_{j=0}^{N}:

$$\begin{cases} V_i^{j+1}=\Big(1+\dfrac{\Delta t}{\eta}\big(1+\|f_i^{j}\|\big)\Big)V_i^{j}-\dfrac{\Delta t}{\eta}\big(1+\|f_i^{j}\|\big)V_{i+1}^{j}-\Delta t\,L_i^{j},\\[2mm] u_i^{j+1}\in\arg\inf_{u\in U}\Big\{\dfrac{V_{i+1}^{j+1}-V_i^{j+1}}{\eta}\big(1+\|f(x_i,u)\|\big)+L(t_{j+1},x_i,u)\Big\}. \end{cases} \tag{16}$$
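The following Python sketch is our own transcription of Steps 1-4 as printed above, not the authors' Visual C++ implementation: it builds the space grid along the field direction (Step 1), initialises V and u at t = T (Step 3), and marches the recursion (16), with the arg-inf of (14), backward in time (Step 4). The function name hjb_first_control and the finite sampling U_samples of the control set are assumptions made for illustration.

```python
# Sketch of Steps 1-4: backward finite-difference sweep (16) for the HJB equation.
import numpy as np

def hjb_first_control(z, T, f, L, psi, U_samples, N=100, M=100, eta=0.02,
                      steps=None, u_guess=None):
    """Return u_0^steps, the control at the grid origin x_0 = z after `steps`
    backward iterations of (16); `steps` defaults to N."""
    steps = N if steps is None else steps
    dt = -T / N                                   # Step 1: t_j = T + j*dt, dt = -T/N
    u_tilde = U_samples[0] if u_guess is None else u_guess
    x = [np.atleast_1d(np.asarray(z, dtype=float))]
    for _ in range(M):                            # Step 1: x_i along f(x_{i-1}, u~)
        fx = f(x[-1], u_tilde)
        x.append(x[-1] + eta * fx / (1.0 + np.linalg.norm(fx)))

    def argmin_u(i, V_level, t):                  # the arg-inf of (14) and of Step 3
        values = [(V_level[i + 1] - V_level[i]) / eta
                  * (1.0 + np.linalg.norm(f(x[i], u))) + L(t, x[i], u)
                  for u in U_samples]
        return U_samples[int(np.argmin(values))]

    V = np.array([psi(xi) for xi in x], dtype=float)        # Step 3: V_i^0 = psi(x_i)
    u = [argmin_u(i, V, T) for i in range(M)] + [u_tilde]   # Step 3: u_i^0

    for j in range(steps):                        # Step 4: recursion (16)
        t_j, t_next = T + j * dt, T + (j + 1) * dt
        V_new = V.copy()
        for i in range(M):                        # only indices reachable from i = 0 matter
            c = dt / eta * (1.0 + np.linalg.norm(f(x[i], u[i])))
            V_new[i] = (1.0 + c) * V[i] - c * V[i + 1] - dt * L(t_j, x[i], u[i])
        V = V_new
        u = [argmin_u(i, V, t_next) for i in range(M)] + [u[-1]]

    return u[0]                                   # the feedback control at x_0 = z
```

With the parameters used later in Section 4 (T = 1, N = M = 100, η = 0.02), the stability condition (15) holds whenever ‖f‖ ≤ 1, since |Δt|/η (1 + max ‖f‖) = 0.5 · 2 = 1.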

Then (u_0^N, y^0) = (u_0^N, y(0)) = (u(0), z) is considered to be the first optimal feedback control-trajectory pair.

Step 5: iteration for the state equation. Solve the state equation

$$\frac{y^{1}-y^{0}}{-\Delta t}=f(y^{0},u(0))$$

to obtain y^1 = y(t_{N−1}). Replace (ũ, z) in Step 1 by (u(0), y^1) and go to Step 3 and Step 4 to obtain u_0^{N−1}. Then (u_0^{N−1}, y^1) = (u(t_{N−1}), y(t_{N−1})) is considered to be the second optimal feedback control-trajectory pair.

Step 6: continuation of the iteration. Once (u(t_{N−j}), y^j) = (u(t_{N−j}), y(t_{N−j})) is known, solve the state equation

$$\frac{y^{j+1}-y^{j}}{-\Delta t}=f(y^{j},u(t_{N-j}))$$

to obtain y^{j+1} = y(t_{N−j−1}). Replace (ũ, z) in Step 1 by (u(t_{N−j}), y^{j+1}) and go to Step 3 and Step 4 to obtain u_0^{N−j−1}. Then (u_0^{N−j−1}, y^{j+1}) = (u(t_{N−j−1}), y(t_{N−j−1})) is considered to be the (j+2)-th optimal feedback control-trajectory pair. The computation continues until all (u(t_{N−j}), y(t_{N−j})), j = 1, ..., N, are obtained.

After Steps 1-6, we finally obtain all the desired optimal feedback control-trajectory pairs:

$$(u(t_{N-j}),\, y(t_{N-j})), \qquad j = 0, 1, \dots, N.$$
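A sketch of this outer iteration, again ours rather than the authors' code, is given below. It reuses the hjb_first_control sketch from Step 4; the name feedback_pairs is an illustrative assumption, and the interpretation that the j-th pair uses the control after the remaining N − j backward iterations follows our reading of u_0^{N−j} in Steps 5-6.

```python
# Sketch of Steps 5-6: couple the HJB sweep with the state update
# (y^{j+1} - y^j)/(-dt) = f(y^j, u(t_{N-j})), restarting Steps 1-4 from each new state.
import numpy as np

def feedback_pairs(z, T, f, L, psi, U_samples, N=100, M=100, eta=0.02):
    """Return the pairs (u(t_{N-j}), y(t_{N-j})), j = 0, 1, ..., N, i.e. the
    feedback control and state at times 0, T/N, 2T/N, ..., T."""
    dt = -T / N
    y = np.atleast_1d(np.asarray(z, dtype=float))
    pairs, u_prev = [], None
    for j in range(N + 1):
        # Steps 1-4 restarted at the current state y^j, giving u_0^{N-j}
        u = hjb_first_control(y, T, f, L, psi, U_samples, N=N, M=M, eta=eta,
                              steps=N - j, u_guess=u_prev)
        pairs.append((u, y.copy()))
        if j < N:
            # Step 5 / Step 6: advance the state by one step of length -dt = T/N
            y = y + (-dt) * f(y, u)
        u_prev = u
    return pairs
```

Note that the state moves forward in physical time even though the sweep is indexed backward, because Δt = −T/N is negative.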

4 NUMERICAL EXAMPLE

In this section, we use an example to show the effectiveness of our new algorithm. For this example, the value function is not a classical solution but the viscosity solution of its associated HJB equation ([11]). The main advantage of this example is that both the value function and the feedback control law have closed-form expressions. Owing to this fact, we can compare the numerical solutions with the analytical solutions and so assess the power of the new algorithm.

The state equation is

$$y'(t) = u(t), \quad t\in[0,1], \qquad y(0) = z, \tag{17}$$

with R as the state space. The control constraint is U = [−1, 1] and hence U[0,1] = L^∞([0,1]; U). The cost functional is

$$J(u(\cdot)) = -y^{2}(1). \tag{18}$$

Now we introduce the value function of the problem. For any (t,x) ∈ [0,1) × R, consider the following system:

$$y'_{t,x}(s) = u(s), \quad s\in[t,1], \qquad y_{t,x}(t) = x. \tag{19}$$

Correspondingly, the cost functional is

$$J_{t,x}(u(\cdot)) = -y_{t,x}^{2}(1). \tag{20}$$

Define the value function

$$V(t,x) = \inf_{u(\cdot)\in\,\mathcal{U}[t,1]} J_{t,x}(u(\cdot)). \tag{21}$$

A simple calculation shows that

$$V(t,x) = \begin{cases} -[x+(1-t)]^{2}, & x\ge 0,\\ -[x-(1-t)]^{2}, & x\le 0, \end{cases} \;=\; -\big[\,|x|+(1-t)\,\big]^{2}, \qquad x\in\mathbb{R},\ t\in[0,1].$$

It is seen that V is continuous in its variables but not in C^1([0,1] × R). The associated HJB equation is

$$\begin{cases} V_t(t,x) + \inf_{|u|\le 1}\big\{ V_x(t,x)\cdot u \big\} = 0, & (t,x)\in[0,1)\times\mathbb{R},\\[1mm] V(1,x) = -x^{2}, & x\in\mathbb{R}. \end{cases} \tag{22}$$

From [11], we know that the value function V is the unique viscosity solution of the HJB equation (22), and the optimal feedback control-trajectory pairs are given analytically by

$$u^{*}_{t,x}(s) = \operatorname{sign}(y^{*}_{t,x}(s)), \tag{23}$$

where

$$\operatorname{sign}(y) = \begin{cases} 1, & y > 0,\\ \{\pm 1\}, & y = 0,\\ -1, & y < 0, \end{cases}$$

and

$$y^{*}_{t,x}(s) = \begin{cases} x + (s-t), & x\ge 0,\\ x - (s-t), & x\le 0. \end{cases} \tag{24}$$

Now, we follow the steps of Section 3 to find the numerical solutions of the optimal feedback control-trajectory pairs for different initial values from −1 to 1 with step size 0.2. All computations are performed in Visual C++ 6.0 and the numerical results are plotted with MATLAB 6.0. The parameters are taken as N = M = 100, η = 0.02. Figure 1 shows the computed numerical solutions of the optimal trajectories, where the solid lines denote the analytical solutions (24). It is seen that the analytical solutions and the numerical solutions completely coincide with each other. Figure 2 displays the computed numerical solutions of the optimal feedback laws, where the solid lines denote the analytical solutions (23). The numerical feedback control laws also completely tally with the analytical solutions (23).

Figure 1: Numerical Solutions of the Optimal Trajectories

Figure 2: Numerical Solutions of the Optimal Feedback Laws
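For reference, the closed-form benchmark represented by the solid lines in Figures 1 and 2 can be evaluated with the short sketch below (ours, not from the paper): it tabulates the analytical feedback (23) and trajectory (24) for the tested initial values z = −1, −0.8, ..., 1, against which the output of the feedback_pairs sketch above can be compared.

```python
# Closed-form benchmark for the example (17)-(18): equations (23) and (24) with t = 0,
# together with the value function V(t,x) = -(|x| + (1 - t))^2.
import numpy as np

def analytic_feedback(y):
    """Equation (23): u* = sign(y); at y = 0 either +1 or -1 is optimal."""
    return 1.0 if y > 0 else (-1.0 if y < 0 else 1.0)

def analytic_trajectory(z, s):
    """Equation (24) with t = 0: y*(s) = z + s for z >= 0, z - s for z <= 0."""
    return z + s if z >= 0 else z - s

def value_function(t, x):
    """Closed-form value function of the example."""
    return -(abs(x) + (1.0 - t)) ** 2

if __name__ == "__main__":
    times = np.linspace(0.0, 1.0, 6)
    for z in np.arange(-1.0, 1.0 + 1e-9, 0.2):
        traj = [analytic_trajectory(z, s) for s in times]
        print(f"z={z:+.1f}  u*={analytic_feedback(z):+.0f}  "
              f"y*(s) at s=0,0.2,...,1: {np.round(traj, 2)}")
```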

5 CONCLUSIONS

We have given a new algorithm for finding numerical solutions of the optimal feedback law for very general optimal control problems. Moreover, the effectiveness and validity of the algorithm have been demonstrated on a simple example. A possible generalization to infinite-dimensional systems has already been carried out for a population system described by a partial differential equation with total fertility rate (TFR) control ([7]).

REFERENCES

[1] M. Bardi and I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhäuser, Boston, 1997.
[2] A. E. Bryson, Jr., Optimal control - 1950 to 1985, IEEE Control Syst. Mag., Vol. 16, 26-33, 1996.
[3] M. G. Crandall, Viscosity solutions: a primer, Viscosity Solutions and Applications, Lecture Notes in Math., 1660, Springer-Verlag, Berlin, 1-43, 1997.
[4] M. G. Crandall and P. L. Lions, Two approximations of solutions of Hamilton-Jacobi equations, Math. Comp., Vol. 43, No. 167, 1-19, 1984.
[5] M. Falcone, P. Lanucara, and A. Seghini, A splitting algorithm for Hamilton-Jacobi-Bellman equations, Appl. Numer. Math., Vol. 15, No. 2, 207-218, 1994.
[6] W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1993.
[7] B. Z. Guo and B. Sun, Numerical solution to the optimal birth feedback control of a population dynamics: viscosity solution approach, Optimal Control Appl. Methods, Vol. 26, No. 5, 229-254, 2005.
[8] Y. C. Ho, On centralized optimal control, IEEE Trans. Automat. Control, Vol. 50, No. 4, 537-538, 2005.
[9] R. H. Li and G. C. Feng, Numerical Solutions of Differential Equations, Higher Education Press, Beijing, 1996 (in Chinese).
[10] H. J. Pesch, Optimal control applications - offline and online methods - for ODE, DAE and PDE problems, Technical Report, German-Israeli Minerva School, Thurnau, 22-26 September 2003, available at http://www.uni-bayreuth.de/departments/ingenieurmathematik/aktuelles.html
[11] J. M. Yong, The Method of Dynamic Programming and Hamilton-Jacobi-Bellman Equations, Shanghai Scientific and Technical Publishers, Shanghai, 1992 (in Chinese).

