
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 42, NO. 5, MAY 1997

Approximate Set-Valued Observers for Nonlinear Systems

Jeff S. Shamma, Member, IEEE, and Kuang-Yang Tu, Student Member, IEEE

Abstract— A set-valued observer (SVO) produces a set of possible states based on output measurements and a priori models of exogenous disturbances and noises. Previous work considered linear time-varying systems and unknown-but-bounded exogenous signals. In this case, the sets of possible state vectors take the form of polytopes whose centers are optimal state estimates. These polytopic sets can be computed by solving several small linear programs. An SVO can be constructed conceptually for nonlinear systems; however, the set of possible state vectors no longer takes the form of polytopes, which in turn inhibits their explicit computation. This paper considers an “extended SVO.” As in the extended Kalman filter, the state equations are linearized about the state estimate, and a linear SVO is designed along the linearization trajectory. Under appropriate observability assumptions, it is shown that the extended SVO provides an exponentially convergent state estimate in the case of sufficiently small initial condition uncertainty and provides a nondivergent state estimate in the case of sufficiently small exogenous signals.

I. INTRODUCTION

CONSTRUCTIONS of observers for nonlinear systems often rely on some form of underlying linear dynamics. The extended Kalman filter (EKF) [6] linearizes the state trajectory about the current state estimate, resulting in approximate linear time-varying error dynamics. Output injection methods [8] employ a state transformation to obtain exact linear time-invariant error dynamics. Similarly, [5] employs a state transformation to obtain exact linear time-varying error dynamics, where the "time-variations" actually are due to a measured endogenous signal. See [11], [12], and [16] and the references therein for a further overview of nonlinear observers. Reference [12] takes a more direct approach by viewing state observation as finding the unique solution to simultaneous nonlinear equations through successive iterations. In the case of no process and measurement noise, the authors construct an observer for which the estimation error exponentially converges to zero for sufficiently small initial condition uncertainty. They also show that the extended Kalman filter resembles a single iteration of their approach. In the case of nonzero exogenous signals, the simultaneous nonlinear equations formed in [12] no longer have a unique solution, but rather a set of possible solutions.

This situation resembles guaranteed state estimation for linear systems. A guaranteed state estimator, alternatively called a set-valued observer (SVO), assumes a priori bounds on exogenous disturbances and noises and constructs sets of possible states which are consistent with the a priori bounds and current measurements. The survey article [10] presents a historical account of such methods; see also the text [2] and the collection [9]. References [14] and [15] also consider the construction of SVO's for linear time-varying systems. The authors present a recursive method to construct these sets of possible states and show that the centers of these sets represent optimal state estimates in an induced-norm sense.

In the linear case, sets of possible states generally take the form of (convex) polytopes. While it is possible to define conceptually an SVO for nonlinear systems [2], an explicit construction of the set of possible states is essentially prevented by the generality of the (possibly disconnected) shapes involved. In this paper, we mimic the EKF and construct an extended SVO for nonlinear systems. Following the construction in [14] and [15], we construct an observer which bounds the actual set of possible states. We show that the extended SVO provides an exponentially convergent state estimate for sufficiently small initial condition uncertainty. (A similar convergence property for the EKF was established in [4] and [16].) Furthermore, the extended SVO provides a nondivergent state estimate for sufficiently small unknown exogenous signals. In the special case of linearizable error dynamics, as in [5] and [8], the extended SVO produces the exact set of possible states.

As in the EKF, the extended SVO linearizes the state equations about the current state estimate. Unlike the EKF, the extended SVO does not neglect the linearization errors. Rather, the linearization errors are treated as exogenous disturbances and are used to bound the set of possible states. An attractive feature of this approach is that the linear SVO optimally minimizes the effect of exogenous disturbances, and hence possibly the effect of linearization errors, on the estimation error. The main shortcoming of the extended SVO is the real-time computational burden of solving several small linear programs.

Manuscript received November 16, 1995; revised November 30, 1996. Recommended by Associate Editor, A. Vicino. This work was supported by the NSF under Grant ECS–9258005, EPRI under Grant 8030–23, and Ford Motor Co. The authors are with the Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]). Publisher Item Identifier S 0018-9286(97)03434-X.

The remainder of this paper is organized as follows. Section II reviews some basic notation and terminology. Section III discusses the general nonlinear observer problem and reviews the linear SVO. Section IV defines the extended SVO and derives its convergence and nondivergence properties while assuming observability along the estimated trajectory. Section V relates the observability of a nonlinear system to

0018–9286/97$10.00 © 1997 IEEE

SHAMMA AND TU: APPROXIMATE SET-VALUED OBSERVERS


that of a trajectory linearization. Finally, Section VI contains a simulation example, and Section VII presents some concluding remarks.
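Before the formal development, the basic SVO recursion is easy to preview. The following minimal Python sketch (our illustration, not from the paper) specializes to a scalar linear system, where the polytopes of Section III reduce to intervals; the dynamics, gains, and noise bounds below are assumed values.

```python
# A minimal one-dimensional sketch of the set-valued observer (SVO) recursion,
# assuming a scalar linear system x(k+1) = a*x(k) + w(k), y(k) = c*x(k) + v(k)
# with |w| <= W and |v| <= V.  In one dimension the polytopes of possible
# states reduce to intervals, so no linear programs are needed.

def svo_step(interval, y, a=0.9, c=1.0, W=0.05, V=0.1):
    lo, hi = interval
    # Predict: propagate the interval through the dynamics and inflate by the
    # process-noise bound (a > 0 assumed so that ordering is preserved).
    lo, hi = a * lo - W, a * hi + W
    # Correct: intersect with the states consistent with y under |v| <= V.
    lo, hi = max(lo, (y - V) / c), min(hi, (y + V) / c)
    return lo, hi

def central(interval):
    # The central estimate is the interval midpoint.
    return 0.5 * (interval[0] + interval[1])
```

Starting from the a priori interval [-1, 1] and observing y = 0, one SVO step shrinks the set to the measurement-consistent interval [-0.1, 0.1], whose midpoint is the central estimate.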

Assumption 3.1: There exist bounding functions $W(\cdot)$ and $V(\cdot)$ such that the signals $w$ and $v$ in (1) satisfy, for all $k$, $w(k) \in W(k)\mathcal{B}$ and $v(k) \in V(k)\mathcal{B}$.
II. NOTATION

For $x \in \mathbb{R}^n$, let $x_i$ denote the $i$th component of $x$, and define $|x| = \max_i |x_i|$. The closed unit box in $\mathbb{R}^n$ centered at $x$ is denoted $\mathcal{B}(x)$, with $\mathcal{B} = \mathcal{B}(0)$. Define $\mathrm{comp}(\mathbb{R}^n)$ to be the set of all (nonempty) compact subsets of $\mathbb{R}^n$; then $\mathrm{comp}(\mathbb{R}^n)$ is a metric space when equipped with the Hausdorff metric [13, p. 279]. For a matrix $M$, $|M|$ denotes the corresponding induced matrix norm. In case $M$ has full column rank, $M^{\dagger} = (M^{T}M)^{-1}M^{T}$ denotes the left pseudo-inverse. Let $\mathbb{Z}_{+}$ denote the set of nonnegative integers. For a sequence $x = \{x(k)\}$, define $\|x\| = \sup_k |x(k)|$.
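The Hausdorff metric on compact sets can be illustrated for finite point sets; the sup-norm ground distance below is an assumption for illustration, matching the box-shaped sets used later.

```python
# Hausdorff distance between two finite point sets in R^n under the sup-norm.
# This is the metric in which the SVO's set-valued state evolves.

def sup_dist(p, q):
    return max(abs(pi - qi) for pi, qi in zip(p, q))

def hausdorff(A, B):
    # Largest distance from a point of one set to the nearest point of the other.
    forward = max(min(sup_dist(a, b) for b in B) for a in A)
    backward = max(min(sup_dist(a, b) for a in A) for b in B)
    return max(forward, backward)
```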

For a continuously differentiable function $f$, $\nabla f$ denotes the Jacobian matrix; for notational simplicity, $\nabla f$ also denotes the Jacobian of $f(\cdot, w)$ when $w$ is fixed, and $\nabla_1 f$ denotes the Jacobian with respect to the first variable.

The following definitions describe various desirable observer properties (see also [12]). We will separate the cases of noise-free dynamics and noisy dynamics.

Definition 3.1 (Observer Properties):

Case I) Unbiased: For every admissible initial condition there exists an observer initialization such that the state estimate equals the true state for all times.

Asymptotically Convergent: For every admissible initial condition there exist constants such that the estimation error converges to zero exponentially whenever the initial condition uncertainty is sufficiently small.

Case II) Nondivergent: There exist a startup time and bounds such that, whenever the initial condition uncertainty and the exogenous-signal bounds are sufficiently small, the estimation error after the startup time remains bounded. Furthermore, this bound tends to zero as the uncertainty and signal bounds collectively tend to zero.

III. OBSERVERS FOR NONLINEAR SYSTEMS

A. General Definitions

Consider a nonlinear system of the form

$$x(k+1) = f(x(k), w(k)), \qquad y(k) = h(x(k), v(k)) \tag{1}$$

where $w$ is unknown process noise, $y$ is the measured output, and $v$ is measurement noise. The effects of a known input can be incorporated as time variations. Initial conditions are restricted to a set $X_0$, which is introduced in case the dynamics evolve over a (not necessarily small) compact set (cf. Section V). Unless otherwise specified, we will take $X_0 = \mathbb{R}^n$. We assume for now that $f$ and $h$ are continuous; additional differentiability assumptions will be made as necessary. An observer is a dynamical system

Similar properties are called "quasilocal" in [12] and [16], since only the initial condition uncertainty must be small, while the initial condition itself can be arbitrary in $X_0$. Global properties can also be defined but are not needed here. Nondivergence implies that the estimation error tends to zero as the initial uncertainty and noise bounds collectively tend to zero. The startup time allows the observer to accumulate sufficiently many measurements.

B. The SVO for Nonlinear Systems

We are interested in constructing the set of possible states which are consistent with the current measurement trajectory and the a priori Assumption 3.1. First define the set of states consistent with the single measurement at time $k$ and the noise bound.

where $\hat{x}(k)$ is the state estimate, which takes its values in some metric space. Typically this space is $\mathbb{R}^n$; in the case of SVO's, however, it will be the compact subsets of $\mathbb{R}^n$ equipped with the Hausdorff metric. We make the following a priori assumptions on (1).

In words, this set represents the possible states at time $k$ based on the single measurement $y(k)$ only. Similarly, define the set of states reachable in one step from the current set of possible states under the dynamics and the process-noise bound.


In words, this denotes the anticipated set of possible states at time $k$ based on measurements up to time $k-1$. Note that these sets all depend on the current measurement trajectory; this dependence is not explicitly expressed for the sake of notational simplicity.

Algorithm 3.1 (SVO): Let $\{y(k)\}$ be a measurement trajectory of (1) under Assumption 3.1, and suppose the initial set of possible states contains the a priori set $X_0$.

Initialization: initialize with the a priori set of possible states.

Let (1) now take the linear form

$$x(k+1) = A(k)x(k) + B(k)u(k) + G(k)w(k), \qquad y(k) = C(k)x(k) + v(k) \tag{2}$$

where $u$ is a known input.

Remark: For the sake of notational simplicity, we assume without loss of generality for the remainder of this section the absence of a "known input" in the state dynamics; the inclusion of a known input follows from minor modifications.

Propagation: propagate the current set of possible states through the dynamics, inflate by the process-noise bound, and intersect with the set of states consistent with the new measurement.

Note that the SVO algorithm depends causally on the measurement trajectory. Furthermore, the SVO can be written in the observer state-space form of Section III-A as follows: let one mapping denote the set propagation of Algorithm 3.1 and let another denote the extraction of a state estimate from the current set; the observer state is then the current set of possible states.

Assumption 3.2: The linear time-varying system (2) satisfies the following.
1) The matrices $A(k)$ and $C(k)$ are uniformly bounded.
2) There exist $N$ and $\gamma > 0$ such that for all $k$ the $N$-step observability gramian is bounded below by $\gamma I$.

Continuity properties of the dynamics assure that nearby sets map to nearby sets. Associated with the set of possible states is the central estimate, defined as follows: for each component, compute the minimum and maximum values of that coordinate over the set.

Assumption 3.2-2) is an observability assumption which assures the existence of an asymptotically convergent and nondivergent observer. In particular, we will need the following gramian-based observer.

Proposition 3.1: Under Assumption 3.2, consider the gramian-based observer which stacks the measurements over the last $N$ steps, applies the left pseudo-inverse of the associated stacked observability matrix to recover the state at the start of the window, and propagates it forward through the dynamics.
Then, the central estimate is defined as the componentwise midpoint of these extremes.

Optimality properties of central estimates are considered in [10], [14], and [15] and are reviewed briefly in the next section.
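As an illustrative sketch (not the paper's implementation), the central estimate of a polytope given its vertices is the midpoint of the componentwise extremes; in the linear SVO these extremes would come from small linear programs, so vertex enumeration here is only a stand-in.

```python
# Central estimate of a polytope described by its vertices: the midpoint of
# the componentwise minimum and maximum coordinates.

def central_estimate(vertices):
    n = len(vertices[0])
    lows = [min(v[i] for v in vertices) for i in range(n)]
    highs = [max(v[i] for v in vertices) for i in range(n)]
    return [0.5 * (lo + hi) for lo, hi in zip(lows, highs)]
```

For the triangle with vertices (0, 0), (2, 0), (0, 2), the central estimate is (1, 1): the center of the smallest axis-aligned box containing the set.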

where the stacked observability matrix has full column rank by Assumption 3.2. This observer is asymptotically convergent whenever the exogenous signals vanish and nondivergent whenever they satisfy the bounds of Assumption 3.1.
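A minimal sketch of the gramian-based idea, specialized (as an illustrative assumption) to a time-invariant pair with two states, one output, and a two-step window: stack the last two noise-free measurements, invert the observability matrix to recover the past state, then propagate forward.

```python
# Gramian-based deadbeat observer sketch for a time-invariant pair (A, C),
# two states, one output, window N = 2.  The matrices below are assumed
# example values, with (A, C) observable.

A = [[1.0, 0.1],
     [0.0, 1.0]]
C = [1.0, 0.0]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def observe(y_prev, y_curr):
    # Stacked observability matrix O = [C; C*A] and its inverse (2x2).
    O = [C, [sum(C[i] * A[i][j] for i in range(2)) for j in range(2)]]
    det = O[0][0] * O[1][1] - O[0][1] * O[1][0]
    Oinv = [[ O[1][1] / det, -O[0][1] / det],
            [-O[1][0] / det,  O[0][0] / det]]
    x_prev = matvec(Oinv, [y_prev, y_curr])   # recover x(k-1) from outputs
    return matvec(A, x_prev)                  # propagate forward to x(k)
```

With noise-free data generated from x(k-1) = (1, 2), the observer recovers x(k) = (1.2, 2) exactly after two measurements, regardless of any initial guess, which is the horizon-based error bound exploited in Proposition 3.2.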

C. The SVO for Linear Systems

In this section, we review the SVO for linear time-varying systems. In the case of linear dynamics, the resulting sets of possible states take the form of polytopes, and computational procedures exist which implement Algorithm 3.1. These procedures propagate the set of possible states, based on the current measurement trajectory, through the solution of several small linear programs (cf. the survey paper [10], the text [2], the papers [14] and [15], and the references therein). In the following discussion, we consider convergence and nondivergence properties of the central estimate.

Proof: Over any interval of length $N$, the stacked measurements determine the state up to the effect of the exogenous signals over that interval, with the estimation error expressible as


for appropriately defined matrices. Since the observer state simply stores the previous measurements, the observer can be written in the state-space form of Section III-A.

IV. AN EXTENDED SVO FOR NONLINEAR SYSTEMS

In the nonlinear case, the SVO algorithm must propagate general sets in $\mathbb{R}^n$. This essentially prevents any computational implementation of the algorithm. In this section, we mimic the EKF and construct an extended SVO for nonlinear systems. We will consider the simplified nonlinear system

$$x(k+1) = f(x(k)) + G w(k), \qquad y(k) = C x(k) + v(k) \tag{4}$$

The expression (3) above makes the convergence and nondivergence properties of the linear SVO clear. The parameter in the definition of nondivergence may be set to the observation horizon. An important feature of the gramian-based observer is that the estimation error at time $k$ can be bounded by the exogenous inputs over the preceding horizon, regardless of initial conditions.

Proposition 3.2: Consider Algorithm 3.1 applied to the linear time-varying system (2). Under Assumption 3.2, the linear SVO with the central state estimate is unbiased and asymptotically convergent whenever the exogenous signals vanish, and is nondivergent whenever they satisfy the a priori bounds.

Proof: First, consider the case of vanishing exogenous signals. The unbiased property follows by initializing the SVO with the true initial condition. As for quasilocal asymptotic convergence, the initial condition is uniquely determined after $N$ time steps; in terms of Definition 3.1, the initial condition can be arbitrary. Now let the exogenous signals be nonzero. To show nondivergence, the proof of Proposition 3.1 implies the set of possible states can be bounded using (3) as long as the SVO initial condition set contains the true initial condition.

We emphasize that the propagation of sets in Algorithm 3.1 relies on the noise bounds in an a priori manner. In particular, we have not shown that the estimation error tends to zero if the linear SVO believes that the bounds are nonzero but the noises are in actuality zero; such an asymptotic convergence property is conjectured.

Optimality properties of central estimates are discussed in [10] and more recently in [14] and [15]. Clearly, central estimates are optimal in an absolute error sense. In fact, central estimates are optimal in an induced norm sense as follows.

Having the disturbances enter linearly can always be arranged, at the cost of higher order dynamics, by augmenting the system with a delay. The linear output assumption is made with some loss of generality; in some cases, the output can be made part of the state vector after an appropriate transformation.

Assumption 4.1: The function $f$ in (4) is continuously differentiable, and for all $x$ and $\hat{x}$

$$|f(x) - f(\hat{x}) - \nabla f(\hat{x})(x - \hat{x})| \le \tfrac{1}{2} L |x - \hat{x}|^2$$

where $L$ bounds the curvature of $f$.

Assumption 4.1, as stated, requires that the linearization residuals are uniformly quadratically bounded. In fact, the forthcoming extended SVO only requires that these residuals are uniformly bounded over all state/estimate trajectories. This essentially reflects that the system evolves over a (not necessarily small) compact set (cf. Section V). The forthcoming extended SVO will produce sets of states which bound the actual sets of possible states; that is, the computed set contains the true set of possible states at every time.

Let $\hat{x}(k)$ denote the central estimate based on measurements up to time $k$. Linearizing (4) about $\hat{x}(k)$ leads to

$$x(k+1) = f(\hat{x}(k)) + \nabla f(\hat{x}(k))(x(k) - \hat{x}(k)) + \delta(k) + G w(k) \tag{5}$$

where $\delta(k)$ denotes the linearization residual. Based on this linearization, the bounding sets can be computed as follows. As before, define the set of states consistent with the single measurement at time $k$.

Let $\{y(k)\}$ be a measurement trajectory, and define an induced-norm performance index at time $k$: the worst-case ratio of the estimation error to the sizes of the initial uncertainty and the exogenous signals consistent with the measurements up to time $k$.

It will be convenient to express the sets of possible states as deviations from their centers; toward this end, define the deviation sets obtained by translating each set by its central estimate.

References [14] and [15] show that central estimates minimize the above induced-norm performance index.
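A toy numerical check of this optimality property for the one-dimensional case: among candidate estimates drawn from an interval of possible states, the midpoint minimizes the worst-case estimation error. The grid search below is purely illustrative.

```python
# Check that the interval midpoint minimizes worst-case estimation error,
# the scalar instance of the central estimate's optimality.

def worst_case_error(estimate, lo, hi):
    # Largest error over all states in the interval [lo, hi]; the extremes
    # are attained at the endpoints.
    return max(abs(estimate - lo), abs(estimate - hi))

def best_estimate(lo, hi, grid=101):
    candidates = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    return min(candidates, key=lambda c: worst_case_error(c, lo, hi))
```

For the interval [0, 2] the minimizing estimate is 1, with worst-case error 1; any other choice does strictly worse on one endpoint.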

Algorithm 4.1 (Extended SVO): Let $\{y(k)\}$ be a measurement trajectory of (4) under Assumptions 3.1 and 4.1, and suppose the initial set of possible states contains the a priori uncertainty set.


Initialization and Propagation: As in Algorithm 3.1, applied to the linearized dynamics (5), with the linearization residual bound of Assumption 4.1 treated as an additional process disturbance.

A.

That the extended SVO is unbiased follows from initializing it with the true initial condition; the estimate then remains exact for all times. We now establish quasilocal asymptotic convergence.

Proposition 4.1: Along a particular estimate trajectory, consider the "false" linear time-varying system in which one term represents a known input and another represents an unknown input satisfying the bounds implied by Assumptions 3.1 and 4.1.

As with the nonlinear SVO, the initial condition of the extended SVO, when expressed in state-space form, is the a priori set of possible states. We see that the extended SVO bounds the sets of possible states by treating the linearization residuals as exogenous disturbances. This is unlike the traditional EKF, which simply ignores the linearization residuals (although it is possible to include "expected" residuals in a "second-order" EKF). If available, tighter residual bounds may be used in place of the quadratic bound of Assumption 4.1. In fact, the residual bounds can be a function of the current set-valued state estimate (at the cost of increased computational burden). As with the linear SVO, the sets are polytopes and can be computed by solving several linear programs. We now establish convergence and nondivergence properties of the extended SVO estimate along a particular estimate trajectory.
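The residual-as-disturbance mechanism just described can be sketched in one dimension. Everything below (the map f, the curvature bound L, and all numerical values) is an illustrative assumption, not taken from the paper.

```python
# Propagate an interval of possible states through the linearization of a
# nonlinear map about the current estimate, inflating by the quadratic
# residual bound of Assumption 4.1 instead of dropping it as the EKF does.

def f(x):
    return x - 0.1 * x ** 3        # example nonlinear map (assumed)

def df(x):
    return 1.0 - 0.3 * x ** 2      # its derivative (Jacobian)

L = 0.6  # bound on |f''| over the operating set: |f''(x)| = 0.6|x| <= 0.6 on [-1, 1]

def linearized_step(x_hat, radius):
    """Propagate [x_hat - radius, x_hat + radius] through the linearization
    f(x) ~ f(x_hat) + df(x_hat)*(x - x_hat), inflated by the residual bound
    0.5 * L * radius**2."""
    slope = abs(df(x_hat))
    residual = 0.5 * L * radius ** 2
    center = f(x_hat)
    half_width = slope * radius + residual
    return center - half_width, center + half_width
```

Because the residual is bounded rather than ignored, the propagated interval is guaranteed to contain the true image f(x) of every state x in the original interval, which is the containment property the extended SVO relies on.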

Let a linear SVO be applied to the above false system, initialized with the same a priori set. Then, whenever the true initial condition lies in this set and the exogenous signals satisfy their bounds, the sets produced for the false system contain the extended SVO's sets of possible states.

Proof: According to (5), the true dynamics can be rewritten as the linearization about the estimate trajectory driven by the known input, the exogenous disturbances, and the linearization residual.

Assumption 4.2: Consider Algorithm 4.1 applied to (4). There exist constants such that, along any estimate trajectory: 1) the linearized system matrices are uniformly bounded, and 2) the associated observability gramian is uniformly positive definite.

Assumption 4.2 states that the linear time-varying system associated with linearizing (4) about any estimate trajectory satisfies the observability conditions of Assumption 3.2. An important issue is whether Assumption 4.2 can be guaranteed a priori; this topic is considered in Section V.

Theorem 4.1: Consider Algorithm 4.1 applied to (4). Under Assumption 4.2, the extended SVO with the central state estimate is unbiased and asymptotically convergent whenever the exogenous signals vanish, and is nondivergent whenever they are sufficiently small.

Subtracting the known terms from both sides and identifying the remaining terms with the inputs of the false system leads to the desired result. The resulting vector reflects the error of the extended SVO. The term "false" is used since a term on the right-hand side of the equation depends on the estimate trajectory itself, and hence the dynamics are noncausal.

Proposition 4.2: Given any bound on the exogenous signals, let

Then the stated bounds hold whenever the exogenous signals are sufficiently small. The remainder of this section is devoted to the proof of Theorem 4.1.

Proof: Recall the bound on the propagated deviation sets established above.


A component-by-component analysis shows that this bound also applies to the central estimate, even though the central estimate need not belong to the set of possible states (for dimensions greater than one). Therefore


Assumption 4.2 assures that this quantity is uniformly bounded over all estimate trajectories. These parameters assure a contraction, which then implies exponential decay of the estimation error. Note that the decay rate is actually independent of the initial uncertainty.

B.

We now prove nondivergence.

Proposition 4.3: Given any bounds on the exogenous signals, let

Proceeding by induction leads to the desired result. Note that the proof of Proposition 4.2 does not rely on Assumption 4.2-2).

We are now in a position to establish quasilocal asymptotic convergence. Consider a gramian-based observer (modified to accommodate a known input) applied to the false system of Proposition 4.1. According to the proof of Proposition 3.1, Assumption 4.2 assures that the estimation error over any observation window is bounded by the inputs over that window; as before, this bound also applies to the central estimate. One possible observer trajectory is the one corresponding to the false system. Applying Proposition 4.2 then yields

even when the exogenous signals are nonzero, with constants as in Proposition 4.2.

Proof: We first bound the estimation error in terms of the exogenous-signal bounds and the linearization residual bounds. Therefore, whenever these bounds are sufficiently small, the corresponding step-by-step bounds hold, provided that the accumulated error remains small. Similarly,

The uniform boundedness in Assumption 4.2 allows this analysis to be repeated. Therefore, the same bound holds over the next window, again provided that the accumulated error remains sufficiently small. Proceeding recursively establishes the desired exponential decay for sufficiently small initial uncertainty. In terms of Definition 3.1, given any tolerance, we can select the constants accordingly,

Collecting all implied bounds leads to the desired result (cf. the proof of Proposition 4.3). Note that Proposition 4.3 also does not rely on Assumption 4.2-2). Similarly to the noise-free case, the false system which produces the same sets of possible states is now


Furthermore, a gramian-based observer applied to this system assures that there exist constants such that the estimation error satisfies

(6). Picking the constants as in Proposition 4.3, it follows that whenever the exogenous signals are sufficiently small, (7) holds.

The first system, considered in [3] and [8], represents a nonlinear system which is state equivalent to a linear system with output injection. The second system, considered in [5], represents a special structure which resembles a linear timevarying system whose “time-variations” are actually due to the measured output. For either structure, it is possible to generate error dynamics which are either linear time-invariant or resemble linear timevarying dynamics via the observer

Rearranging this inequality leads to (8), for appropriately defined constants. Then

The underlying linear error dynamics then greatly simplify observer design. It turns out that for these classes of systems, the extended SVO actually generates the exact set of possible states. In particular, consider

the partitioned system (13). Equations (9)–(11) together imply (12). We may proceed inductively to establish that the estimation error is uniformly bounded. As before, it is bounded by the initial condition uncertainty. In terms of Definition 3.1, given any tolerance, we can select

where the state vector is suitably partitioned and there is no measurement noise. Note that (13) includes both previous structures, after appropriate state transformations.

Proposition 4.4: Consider Algorithm 4.1 applied to (13) with no measurement noise. The sets produced exactly represent those of the nonlinear SVO of Algorithm 3.1.

Proof: The proof follows by observing that, for all times,

Noting that the dynamics of the unmeasured partition in (13) are linear completes the proof.

Example 4.1: Consider the dynamics of a freely rotating rigid body

These parameters assure that, whenever the exogenous signals are sufficiently small, the estimation error remains bounded. Furthermore, the total estimation error after the startup time approaches zero as the signal bounds collectively approach zero.

C. Special Cases

Two special classes of systems previously considered for nonlinear observers are nonlinear systems whose dynamics after state transformations take the form

These dynamics were considered in [7] as well as [5]. Both references derived state transformations which linearize the error dynamics; however, the transformation considered in [7] relies on a priori knowledge of maximal and minimal values of the angular rates. A discretized version of these dynamics yields equations

or, more generally, equations which are already in the form of (13) without any state transformations. The noise quantities can be used to reflect discretization errors.
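An Euler discretization of the rigid-body dynamics of Example 4.1 can be sketched as follows; the inertia ratios and step size below are illustrative assumptions, not values from the paper. The resulting difference equations are bilinear in the state, matching the structure of (13) when one angular rate is measured.

```python
# Forward-Euler discretization of Euler's equations for a freely rotating
# rigid body: w1' = A1*w2*w3, w2' = A2*w3*w1, w3' = A3*w1*w2, where
# A1 = (I2 - I3)/I1, etc.  The values below are assumed for illustration.

A1, A2, A3 = 0.5, -0.4, 0.1   # inertia ratios (assumed)
T = 0.01                      # discretization step size (assumed)

def rigid_body_step(w, noise=(0.0, 0.0, 0.0)):
    w1, w2, w3 = w
    n1, n2, n3 = noise  # noise terms can absorb discretization error
    return (w1 + T * A1 * w2 * w3 + n1,
            w2 + T * A2 * w3 * w1 + n2,
            w3 + T * A3 * w1 * w2 + n3)
```

Pure spin about a single principal axis, e.g. (1, 0, 0), is a fixed point of these difference equations, as in the continuous-time dynamics.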


V. OBSERVABILITY AND TRAJECTORY LINEARIZATION Assumption 4.2 was critical in establishing the convergence and nondivergence of the extended SVO. Reference [16] considered the case of no exogenous disturbances and noises and showed that an assumption analogous to Assumption 4.2 for the EKF can be bypassed whenever (4) evolves on a compact set and the initial condition uncertainty is sufficiently small. We now prove a similar result tailored to the extended SVO which accommodates nonzero process and measurement noise. An appealing feature of the present approach (cf., Theorem 5.1) is that the proof follows from only slight modifications of the proof of Theorem 4.1. We begin with the following assumptions on (4). Assumption 5.1: The state equations (4) are replaced by the time-invariant dynamics

Assumption 5.2:
1) The exogenous signals take values in given compact sets.
2) There exist compact sets such that any trajectory of (4) starting in them remains in a compact set for all times.
3) For some finite interval length, the mapping relating a past state and the exogenous signals to the outputs over that interval admits a continuously differentiable inverse whenever the trajectory remains in the compact sets above.

Assumptions 5.2-1) and 5.2-2) state that the system evolves within a (not necessarily small) compact set. Assumption 5.2-3) is called uniform observability in [12]. In words, the mapping relates the output of (4) to a previous state and to exogenous signals over a prescribed interval; its implication is that there exists an observer which uniquely determines the state at a given time from knowledge of the observations, disturbances, and noises. Along any trajectory of (4), define the trajectory linearization.

Proposition 5.1: Under Assumptions 5.1 and 5.2, there exist uniform observability constants, as in Assumption 4.2, valid for all trajectories of (4) within the compact sets.

Proof: Assumption 5.2 implies that the composition of the output mapping with its continuously differentiable inverse is the identity. Therefore, the associated stacked observability matrix has full rank. The uniformity of the bounds follows from the compactness of the operating set and from continuous differentiability.

We now show that Theorem 4.1 holds without relying on Assumption 4.2.

Theorem 5.1: Theorem 4.1 holds with Assumption 4.2 replaced by Assumptions 5.1 and 5.2.

Proof: We will focus on the proof of nondivergence; the proof of asymptotic convergence is similar. Let constants be such that, for all times: 1) the linearized matrices are bounded, and 2) small estimation errors imply

The existence of such constants is assured by the time-invariance and compactness assumptions. If we can assure that, for all times,

then (14) holds. Toward this end, let us assume the stated bounds hold and define the corresponding quantities.

Then an argument similar to that for Proposition 4.3 shows that these bounds together imply

which in turn implies (14). The remainder of the proof follows (7)–(12), with the compact-set constants replacing those of Assumption 4.2 and with the same observation horizon.


Proceeding by induction shows that the corresponding constants in (6) are uniformly bounded for all times. In terms of Definition 3.1, there exists a startup time such that, for any tolerance, the initial uncertainty and noise bounds

together imply the required error bound.

In words, we have shown that the extended SVO is asymptotically convergent and nondivergent whenever 1) the system evolves within a (not necessarily small) compact set, and 2) the initial condition is uniquely determined (in a continuously differentiable manner) in a finite number of steps given knowledge of all exogenous signals.

Finally, the following example demonstrates that in the time-varying case, compactness need not assure that uniformity constants exist.

Example 5.1: Consider the second-order system for which uniform bounds do not exist, even though Assumption 5.2 is satisfied.

VI. A SIMULATION EXAMPLE

We will consider state estimation for a discretized Van der Pol equation

This example satisfies Assumption 5.2 as follows: 1) set the signal bounds; 2) choose compact sets within which all trajectories remain; 3) the resulting output mapping (shown at the bottom of the page) is bijective, and its inverse is continuously differentiable. Nevertheless, uniform observability constants do not exist.

This equation was also considered in [1]. Performing a discretization with a fixed step size leads to

where the additional terms denote discrete-time noises. Note that these equations are in the form of (13) in the case of noise-free measurements. Linearizing the right-hand side about the current estimate leads to

where

The simulated extended SVO followed Algorithm 4.1, except that it exploited maximal values of the states to bound the linearization residuals. An EKF as described in [16] was also included in the simulations. The following simulation parameters were used for Simulations 1–4:
• System parameters
• Noise bounds


Fig. 1. State trajectory x2 (k ) and extended SVO error bounds (Simulation 1).

Fig. 2. State trajectory x2 (k ) and extended SVO error bounds (Simulation 2).

• SVO initial condition:


Fig. 3. State trajectory x1 (k ) with extended SVO and EKF estimates (Simulation 2).

Fig. 4. State trajectory x2(k) with extended SVO and EKF estimates (Simulation 2).

TABLE I. MEAN-SQUARE ESTIMATION ERROR AFTER TIME k = 0

• Initial a priori covariance matrix
• Initial a priori state estimate

The particular simulations are described as follows.
1) Constant Noise
2) Constant Noise
3) Uniform Random Noise
4) Bang–Bang Random Noise

Both the EKF and the extended SVO generally follow the state trajectory; however, the SVO state bounds are very conservative. Fig. 1 illustrates these bounds for Simulation 1.

Fig. 2 of Simulation 2 shows that the true state initially follows the SVO lower bound. Figs. 3 and 4 show the various time responses in Simulation 2. Table I summarizes the mean-square estimation errors. The extended SVO seems to outperform the EKF whenever the simulation significantly departs from the stochastic structure for which the linear Kalman filter is optimal. This includes Simulation 4, in which the extended Kalman filter was provided with the correct variances.
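The discretized Van der Pol model driving these simulations can be reproduced with a sketch like the following; the damping parameter and step size are illustrative assumptions, not the values used in the paper.

```python
# Forward-Euler discretization of the Van der Pol oscillator
# x'' - mu*(1 - x^2)*x' + x = 0 with state x1 = x, x2 = x', process noise w
# entering the second state, and a linear measured output (the form of (13)).

MU, T = 1.0, 0.05  # damping and step size (assumed values)

def vdp_step(x1, x2, w=0.0):
    return (x1 + T * x2,
            x2 + T * (MU * (1.0 - x1 ** 2) * x2 - x1) + w)

def measure(x1, v=0.0):
    return x1 + v   # linear output with bounded measurement noise
```

The origin is an unstable equilibrium of the noise-free model; a single step from (1, 0) illustrates the restoring term pulling the velocity negative.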


The following simulation parameters, exhibiting large initial condition uncertainty, led to divergence of the EKF, while the extended SVO locked onto the true state within two time steps:
• Noise bounds
• SVO initial condition
• Initial a priori covariance matrix
• Initial a priori state estimate
• Uniform random noise

Despite these results, it is unclear whether either observer generally exhibits superior convergence and performance. The extended SVO does have a significantly larger computational burden.

VII. CONCLUDING REMARKS

In this paper, we have considered an extended SVO for nonlinear systems and derived guaranteed convergence and nondivergence properties. The main shortcoming of the extended SVO is the significant computational burden of solving several linear programs. The number of variables for these linear programs approximately equals the number of state variables and exogenous disturbances. The number of constraints depends on the complexity of the resulting sets of possible states. Theoretically, this number could increase with the number of measurements. In the simulation example, however, the number of constraints was rarely larger than 20 and usually less than 10. Given this computational burden, real-time application seems unlikely for systems with fast dynamics.

REFERENCES

[1] M. S. Ahmed, "An innovation representation for nonlinear systems with application to parameter and state estimation," Automatica, vol. 30, no. 12, pp. 1967–1974, 1994.
[2] F. L. Chernousko, State Estimation for Dynamic Systems. Boca Raton, FL: CRC, 1994.
[3] S.-T. Chung and J. W. Grizzle, "Sampled-data observer error linearization," Automatica, vol. 26, no. 6, pp. 997–1008, 1990.
[4] D. B. Grunberg and M. Athans, "Guaranteed properties of the extended Kalman filter," MIT Laboratory for Information and Decision Systems, Tech. Rep. LIDS-P-1724, Dec. 1987.
[5] M. Jankovic, "A new observer for a class of nonlinear systems," J. Mathematics, Estimation, Contr., vol. 3, no. 2, pp. 225–246, 1993.
[6] A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York: Academic, 1970.

[7] W. Kang and A. J. Krener, "Observation of a rigid body from measurements of a principal axis," J. Mathematical Syst., Estimation, Contr., no. 1, pp. 197–208, 1991.
[8] A. J. Krener and A. Isidori, "Linearization by output injection and nonlinear observers," Syst. Control Lett., no. 3, pp. 47–52, 1983.
[9] A. B. Kurzhanski and V. M. Veliov, Eds., Modeling Techniques for Uncertain Systems. Boston: Birkhäuser, 1993.
[10] M. Milanese and A. Vicino, "Optimal estimation theory for dynamic systems with set membership uncertainty: An overview," Automatica, vol. 27, pp. 997–1009, 1991.
[11] E. A. Misawa and J. K. Hedrick, "Nonlinear observers—A state-of-the-art survey," ASME J. Dynamic Syst., Measurement, Contr., vol. 111, pp. 344–352, 1989.
[12] P. E. Moraal and J. W. Grizzle, "Observer design for nonlinear systems with discrete-time measurements," IEEE Trans. Automat. Contr., vol. 40, no. 3, pp. 395–404, 1995.
[13] J. R. Munkres, Topology: A First Course. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[14] J. S. Shamma and K.-Y. Tu, "Optimality of set-valued observers for linear systems," in Proc. 34th IEEE Conf. Decision Contr., New Orleans, LA, Dec. 1995.
[15] J. S. Shamma and K.-Y. Tu, "Set-valued observers and optimal disturbance rejection," IEEE Trans. Automat. Contr., submitted.
[16] Y. Song and J. W. Grizzle, "The extended Kalman filter as a local asymptotic observer for discrete-time nonlinear systems," J. Mathematics, Estimation, Contr., vol. 5, no. 1, pp. 59–78, 1995.

Jeff S. Shamma (S’85–M’88) was born in New York, NY, November 1963 and raised in Pensacola, FL. He received the Ph.D. degree in 1988 from the Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA. After one year of postdoctoral research, he joined the University of Minnesota, Minneapolis, where he was an Assistant Professor of Electrical Engineering from 1989 to 1992. He then joined the University of Texas, Austin, where he is currently an Associate Professor of Aerospace Engineering. His research interests include robust control for linear parameter varying and nonlinear systems. Dr. Shamma is a recipient of a 1992 NSF Young Investigator Award and the 1996 Donald P. Eckman Award of the American Automatic Control Council. He is on the editorial boards of the IEEE TRANSACTIONS ON AUTOMATIC CONTROL and Systems & Control Letters.

Kuang-Yang Tu (S’95) was born in Taipei, Taiwan, in December 1967. He received the B.S. degree in industrial engineering from National Chiao Tung University, Hsingchu, Taiwan, in 1989 and the M.S. degree in aerospace engineering from the University of Texas, Austin, in 1993, where he is currently a Ph.D. candidate. His research interests include robust control, estimation, and nonlinear optimization.
