Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference Shanghai, P.R. China, December 16-18, 2009

ThBIn4.3

Optimal Measurement Selection For Any-time Kalman Filtering With Processing Constraints†

Nima Moshtagh, Lingji Chen, Raman Mehra∗

† This work is supported in part by NASA, Jet Propulsion Laboratory, contract # NNC08CA34C.
∗ The authors are with Scientific Systems Company, Inc., 500 W. Cummings Park, Suite 3000, Woburn, MA 01801. nmoshtagh, chen, [email protected]

Abstract— In an embedded system with limited processing resources, as the number of tasks grows, they interfere with each other through preemption and blocking while waiting for shared resources such as CPU time and memory. The main task of an Any-time Kalman Filter (AKF) is real-time state estimation from measurements using the available processing resources. Due to limited computational resources, the AKF may have to select only a subset of all the available measurements, or use out-of-sequence measurements, for processing. This paper addresses the problem of measurement selection needed to implement an AKF on systems that can be modeled as double integrators, such as mobile robots, aircraft, and satellites. It is shown that a greedy sequential selection algorithm provides the optimal selection of measurements for such systems given the processing constraints.

I. INTRODUCTION

This paper addresses the problem of measurement selection as part of an Any-time Kalman Filter (AKF) for obtaining the best state estimate of systems that can be modeled as double integrators, such as mobile robots, aircraft, and satellites [9]. State estimators are typically designed with fixed measurement and propagation update step sizes, nominally equal to the real-time interval. However, this may become increasingly difficult to achieve in practice as the complexity of the estimator, in terms of the number of states and measurements, grows, and as the number of other resource-requesting tasks on the processor grows. For instance, TPF-I [19] (NASA's first space-based mission to directly observe planets outside our own solar system) uses multiple smaller telescopes that need to collaborate with each other and maintain extremely precise formations.

Within a processor, tasks interfere with each other through preemption and blocking when waiting for shared resources such as CPU time and memory. The execution times of the tasks themselves may be data dependent or may vary due to hardware features such as caches. To achieve good performance in embedded applications with limited resources, the constraints of the implementation platform must be taken into account at design time. Therefore, the achievable estimation accuracy depends not only on the algorithms, but also on their actual implementation and communication-related delays. Typically, the algorithms are implemented on a real-time multi-tasking processor that allocates on-board computational resources to multiple tasks and functions according to some scheduling policy.




The processor's task scheduler may induce delays that were unaccounted for at design time, and may sometimes preempt measurement processing and estimation tasks in favor of other tasks. Hence, estimation accuracy, and in general the performance of any embedded algorithm, can be significantly lower than expected during execution. A direct consequence of the preemption of measurement processing is that the estimator may have to select only a subset of all the available measurements for processing, or update the state using out-of-sequence measurements. Thus, an AKF is needed to select and process such measurements stored in the buffer.

The focus of this paper is on the measurement selection problem. There are two issues to be addressed: one is the minimum subset that is necessary to maintain observability of the state, and the second is the selection of the best set in terms of information extraction for estimation if more than the minimum is available. Our goal is to provide algorithms that help optimize the measurement selection process. One can implement a computationally expensive but optimal measurement selection search, and use it as a benchmark for evaluation purposes.

In Section III the measurement selection problem is formulated as an optimization problem. In the most general case the size of the problem grows exponentially with the desired number of measurements. However, we show that in the absence of process noise the optimization problem can be converted into a convex problem [1] and easily solved to find the global solution. In the presence of process noise the problem is no longer convex, and a greedy sequential selection (GSS) algorithm is presented in Section IV that finds a selection in time polynomial in the number of desired measurements. Based on extensive simulations and comparisons with optimal solutions, we conjecture that the GSS algorithm actually produces the optimal set of measurements for a double-integrator model in the presence of process noise. In Section V we study the problem

Fig. 1. The structure of the Any-time Kalman Filter (AKF) with measurement selection.


of selecting measurements that come from different sensors, and show that our GSS algorithm can also be used in that scenario.

A. Related Work

Different variations of measurement selection have been studied in the literature. In [12], [15], [16] the optimal timing of the measurements is considered, whereas in [11], [15], [17] the optimal locations of the measurements are designed. The measurement scheduling problem is studied in [13], [15], [16], where the timings of the measurements are determined a priori. The effects of system controllability and observability on the measurement selection process are studied in [17] and [15]. The relationship between the optimal timing of the measurements and the control strategy is studied in [21]. Sensor scheduling and selection are studied in [4], [6]–[8], [13], [15]. The problem of causally selecting measurements is studied in [18] and [20], where an adaptive scheme is used to sample measurement instances based on past measurements. In most cases the initial conditions of the system are ignored [12], [15]; however, [11] briefly describes the effect of initial estimates on the measurement selection. More recently, the problems of measurement and sensor selection have been studied in the context of mobile robots, with applications to target tracking [7] and the area coverage problem [6].

II. PROBLEM STATEMENT

Consider the following linear stochastic system whose dynamics, on the interval [0, t̄), are described by the differential equation

    ẋ(t) = A(t)x(t) + B w(t),
    z(t) = H(t)x(t) + v(t),                                  (1)

where the state x ∈ R^n, the process noise w ∈ R^q, the measurement z ∈ R^p, and the measurement noise v ∈ R^p. The process noise w(t) is a zero-mean white Gaussian process with covariance E[w(t)w(τ)^T] = Q(t)δ(t − τ), where Q(t) is a symmetric positive definite matrix. The initial state conditions for the system are

    E[x(0)] = x_0,    E[(x(0) − x_0)(x(0) − x_0)^T] = P_0.

The measurement noise v(t) is a zero-mean white Gaussian process with covariance E[v(t)v(τ)^T] = R(t)δ(t − τ). The estimation error x̃(t) = x(t) − x̂(t) has covariance P(t) = E[x̃(t)x̃(t)^T], which is symmetric and positive definite. The Kalman-Bucy filter provides a minimum-variance unbiased estimate of the state conditional on all past data. The minimal covariance matrix P(t) of the filtered state estimate obeys the Riccati differential equation

    Ṗ = AP + PA^T + BQB^T − PH^T R^{-1} H P,

with initial condition P(0) = P_0. Since the measurement selection affects R(t), P(t) depends on the measurement schedule.

Suppose {z(t_1), z(t_2), ..., z(t_n)} is a given set of measurements. The set of measurement times τ_n = {t_1, t_2, ..., t_n} is an ordered discrete set with t_i ∈ R_+. If there is not enough time to process all the measurements, a particular state estimator for the interval [0, t̄) can be obtained by first selecting the measurements (both the number of measurements and the measurement times), and then performing the estimation. With the estimator's structure fixed, the only difference between state estimators is the timing of their associated sets of measurements. Thus, from now on we represent an estimator by its corresponding set τ ⊆ τ_n. The set of all such estimators is denoted by E.

To choose the best estimator, we need to specify a cost function J : E → [0, ∞). Possible functions are

    J_1(τ) = det P(t̄)    or    J_2(τ) = tr P(t̄),

with τ ∈ E, where J_1 is proportional to the volume of the confidence ellipsoid of the estimate, and J_2 is the mean squared error of the estimate. The appropriate metric is chosen in Section III.

Let φ(τ) denote the amount of CPU time required to complete the calculations involved in estimation using the set of measurements τ ∈ E for system (1). The function φ(τ) increases monotonically with the number of measurements. Let θ > 0 be the maximum amount of CPU time allowed for state estimation. The state estimation problem under CPU constraints becomes

    minimize J(τ)    subject to    φ(τ) ≤ θ,                 (2)

i.e., find the best state estimator with CPU time less than the maximum CPU time allowed. Assuming all measurements take an equal amount of time to process, the time constraint in (2) can be translated into the requirement that "at most m measurements can be processed". Thus, our design problem becomes that of choosing the subset τ_m = {t_{i_1}, ..., t_{i_m}} such that the cost J(τ_m) is minimized. More formally:

    minimize J(τ_m)    subject to    τ_m ⊆ τ_n,               (3)

with optimal solution τ_m^*.
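To make the constrained selection concrete, the following minimal Python sketch (illustrative only; the helper names cov_at_tbar and exhaustive_select are ours, not the paper's) evaluates the cost J_1(τ_m) = det P(t̄) for a candidate subset by propagating the Kalman-filter covariance through the selected measurement times (using the recursions given in Section III), and solves problem (3) by brute force — the expensive optimal search mentioned in the Introduction as a benchmark.

    # Exhaustive-search baseline for problem (3): a sketch, assuming a
    # double-integrator model and equal per-measurement processing cost.
    # All numerical values and helper names are illustrative.
    from itertools import combinations
    import numpy as np

    def cov_at_tbar(times, tbar, P0, H, Rbar, q=0.0):
        """Propagate the covariance P through the selected (ascending)
        measurement times and return P(tbar). Rbar is the measurement
        precision, i.e. R^{-1}."""
        P, t = P0.copy(), 0.0
        for ti in list(times) + [tbar]:
            dt = ti - t
            Phi = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator transition
            Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                              [dt**2 / 2, dt]])       # discretized process noise
            P = Phi @ P @ Phi.T + Q                   # time update
            if ti < tbar:                             # measurement update
                P = np.linalg.inv(np.linalg.inv(P) + H.T @ Rbar @ H)
            t = ti
        return P

    def exhaustive_select(tau_n, m, tbar, P0, H, Rbar, q=0.0):
        """Return the size-m subset minimizing J1 = det P(tbar)."""
        return min(combinations(sorted(tau_n), m),
                   key=lambda s: np.linalg.det(
                       cov_at_tbar(s, tbar, P0, H, Rbar, q)))

With n = 10 and m = 5 this already evaluates 252 subsets; the greedy algorithm of Section IV reduces this to 40 evaluations.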

III. MEASUREMENT SELECTION

The level of measurement precision, R^{-1}(t), can be controlled by the designer by selecting the number of measurements and the measurement times t_i. Thus, if m measurements at times {t_1, ..., t_m} are selected for processing, then the level of precision is

    R^{-1}(t) = Σ_{i=1}^{m} δ(t − t_i) R̄_i,                   (4)

where R̄_i = R^{-1}(t_i) is the given measurement precision at time t_i. As a result, we consider the discrete-time system

    x(t_{i+1}) = Φ(t_{i+1}, t_i) x(t_i) + w(t_{i+1}, t_i),
    z(t_i) = H x(t_i) + v(t_i),                               (5)

where Φ(t_{i+1}, t_i) is the state transition matrix.

For instance, for a double-integrator model the state vector consists of position and velocity, x(t) = [x(t) v(t)]^T, and we have

    Φ(t_{i+1}, t_i) = [[1, t_{i+1} − t_i], [0, 1]].            (6)

In discrete time, the recursion relations for the covariance matrix in the Kalman filter can be written as [14]:

    P(t_i^+) = [P^{-1}(t_i^-) + H^T R^{-1}(t_i) H]^{-1},
    P(t_{i+1}^-) = Φ(t_{i+1}, t_i) P(t_i^+) Φ^T(t_{i+1}, t_i) + Q(t_{i+1}, t_i).

A. Without Process Noise

To gain insight into the measurement selection problem we consider a simplified case. We assume that there is no process noise, i.e., Q(t_i) ≡ 0. We would like to express the optimal estimation algorithm (the Kalman filter) in terms of the inverse of the covariance matrix, instead of the covariance itself. The inverse covariance matrix is directly related to the Fisher information matrix, allowing an interpretation of filter performance in terms of information-theoretic concepts. Without process noise, the recursions for the inverse of the error covariance matrix are given by:

    P^{-1}(t_i^+) = P^{-1}(t_i^-) + H^T R̄_i H,                 (7)
    P^{-1}(t_{i+1}^-) = Φ^T(t_i, t_{i+1}) P^{-1}(t_i^+) Φ(t_i, t_{i+1}),   (8)

where Φ(t_i, t_{i+1}) is the transition matrix for propagating the system state backward in time. The inverse of the error covariance of the estimate at the final time t̄, using the set of measurements τ_n, can be computed from equations (7) and (8) [14]:

    P^{-1}(t̄) = Φ^T(0, t̄) P^{-1}(0) Φ(0, t̄)
               + Σ_{k=1}^{n} Φ^T(t_k, t̄) H^T R̄_k H Φ(t_k, t̄).  (9)

The Fisher information matrix FIM(t̄, τ_n), defined as

    FIM(t̄, τ_n) = Σ_{k=1}^{n} Φ^T(t_k, t̄) H^T R̄_k H Φ(t_k, t̄),  (10)

is a measure of the certainty of the state estimate due to measurement data alone, i.e., without considering the a priori information of x̂(0) and covariance P_0. The eigenvalues of (10) represent the amount of information along different directions of the state space. If any eigenvalue of (10) is zero, there are directions in state space along which our measurements give us no information.
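As an illustration, the following sketch (the helper name fim and all values are ours, not the paper's) assembles the Fisher information matrix (10) for a double-integrator model and inspects its eigenvalues; a (near-)zero eigenvalue flags a direction in state space about which the selected measurements carry no information.

    # Fisher information matrix (10) for a double integrator: a sketch.
    import numpy as np

    def fim(times, tbar, H, Rbar):
        """Sum Phi^T H^T Rbar H Phi over the measurement times, eq. (10)."""
        F = np.zeros((2, 2))
        for tk in times:
            dt = tk - tbar                              # backward: Phi(tk, tbar)
            Phi = np.array([[1.0, dt], [0.0, 1.0]])
            F += Phi.T @ H.T @ Rbar @ H @ Phi
        return F

    H = np.array([[1.0, 0.0]])                          # position-only measurements
    Rbar = np.array([[2.0]])
    print(np.linalg.eigvalsh(fim([3.0], 10.0, H, Rbar)))        # rank 1: zero eigenvalue
    print(np.linalg.eigvalsh(fim([3.0, 7.0], 10.0, H, Rbar)))   # two epochs: full rank

A single position measurement leaves the velocity direction uninformed (zero eigenvalue); two position measurements at distinct times make the matrix full rank.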

B. Convex Optimization Formulation

When the process noise is ignored, the measurement selection problem (3) can be written as a convex optimization problem. To see this, let

    Y_0 = Φ^T(0, t̄) P^{-1}(0) Φ(0, t̄),
    Y_i = Φ^T(t_i, t̄) H^T R̄_i H Φ(t_i, t̄).

Let us define Y(λ) = Y_0 + Σ_{i=1}^{n} λ_i Y_i, where λ_i ∈ {0, 1}, and the objective function J(λ) = log det(Y(λ)), which is a concave function (also known as the logarithmic barrier for the linear matrix inequality Y(λ) > 0). Then the optimization problem (3) can be reformulated as

    maximize    log det( Y_0 + Σ_{i=1}^{n} λ_i Y_i )
    subject to  Σ_{i=1}^{n} λ_i = m,    λ_i ∈ {0, 1}.          (11)

Remark 3.1: Note that tr(Y(λ)) is not an appropriate objective function in our scenario, because the initial condition Y_0 (a constant term) would have no effect on the selection.

Problem (11) is a binary integer programming (combinatorial) problem, and of course it can be solved exactly by exhaustive search, i.e., by computing the determinant for all possible measurement sets of size m. However, this is not practical for large n and m. In other words, problem (11) is NP-hard, and there is no generic algorithm that works well for large problem instances. The following relaxation of the problem is, however, a convex optimization problem:

    maximize    log det( Y_0 + Σ_{i=1}^{n} λ_i Y_i )
    subject to  1^T λ = m,    0 ≤ λ_i ≤ 1,                     (12)

because the objective is concave and the constraints are linear functions of the variable λ. Since this problem has a larger feasible set than (11), its optimal value, J(λ̄), is an upper bound on J(λ*), the optimal value of (11). In general λ̄, the solution of (12), is not a 0-1 vector, i.e., it is not a feasible solution for (11), and one needs to use a heuristic to obtain a Boolean vector from the optimal solution of (12). One simple heuristic is to set the m largest entries of λ̄ to 1 and the remaining entries to 0. Let λ̂ be such a 0-1 vector. Then J(λ̂) is a lower bound on J(λ*). If the gap δ = J(λ̄) − J(λ̂) = 0, then the solution of (12) is optimal for the original problem (11).

C. Example

A standard SDP solver can be used to solve (12) for moderate problem sizes, where m is up to 1000 or so. For a double-integrator model (5) with transition matrix (6), we can solve problem (12) and find the optimal solution λ̄ using the CVX toolbox, a package for specifying and solving convex programs [5]. Table I shows the results of solving optimization (12) for n = 10 measurements, τ_10 = {0.3, 1.7, 3.9, 6.5, 6.5, 6.8, 7.4, 7.6, 8.5, 9.4}, where t̄ = 10 sec, and

    P_0 = [[1, 2], [2, 5]],    R̄ = [[2, 0], [0, 1/2]],    H = [[1, 0], [0, 1]].

Therefore, the best single measurement is the last one, {z(t_10)}, the best pair is {z(t_9), z(t_10)}, and so on. In this example, it turned out that the solution of (12) was Boolean, and hence optimal for problem (11) as well. Under what circumstances the gap δ (between the upper and lower bounds of J(λ*)) vanishes is an open problem.
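The paper's example uses the MATLAB CVX toolbox [5]; purely as an illustration (this is our sketch, not the authors' code), the relaxation (12) and the rounding heuristic can be expressed with CVXPY as follows.

    # Convex relaxation (12) with CVXPY, plus the top-m rounding heuristic.
    # A sketch, not the authors' implementation; names are illustrative.
    import cvxpy as cp
    import numpy as np

    def relaxed_selection(Y0, Ys, m):
        n = len(Ys)
        lam = cp.Variable(n)
        Y = Y0 + sum(lam[i] * Ys[i] for i in range(n))
        prob = cp.Problem(cp.Maximize(cp.log_det(Y)),
                          [cp.sum(lam) == m, lam >= 0, lam <= 1])
        prob.solve()                        # prob.value is the upper bound J(lam_bar)
        lam_bar = lam.value
        # Rounding heuristic: set the m largest entries to 1, the rest to 0.
        lam_hat = np.zeros(n)
        lam_hat[np.argsort(lam_bar)[-m:]] = 1.0
        return lam_bar, lam_hat, prob.value

If J(λ̄) − J(λ̂) = 0, the rounded vector λ̂ is certified optimal for (11), as happened in the example of Table I.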

TABLE I
MEASUREMENT SELECTION USING THE CVX TOOLBOX

m (# meas.)   det P^{-1}(t̄)   indices of τ_m^*
1             237.3            10
2             434.7            9,10
3             598.0            8,9,10
4             732.4            7,8,9,10
5             843.4            6,7,8,9,10
6             937.0            5,6,7,8,9,10
7             1020.5           1,5,6,7,8,9,10
8             1106.9           1,4,5,6,7,8,9,10
9             1188.4           1,2,4,5,6,7,8,9,10
10            1271.9           1,2,3,4,5,6,7,8,9,10

Fig. 2. Double-integrator system. [block diagram omitted]

Of course, in the presence of process noise the cost function is no longer an affine function of the variables λ_i, and thus the objective is not concave. In that case, problem (3) cannot be converted into a convex optimization problem. Therefore, to solve the more general problem we apply a greedy algorithm.


IV. GREEDY SEQUENTIAL SELECTION

By a simple combinatorial argument, the total number of subsets of size m that must be considered when selecting from n possible measurements, as in problem (3), is

    C(n, m) = n! / ((n − m)! m!).

It is easily seen that exhaustive search is intractable even for modest measurement selection problems. One approach to reducing the number of subsets evaluated is to use a greedy search procedure that adds the best measurement at each round [2], [3], [22]. Such a greedy search selects the best single measurement first, then selects the best pair that includes the best single measurement already selected. The process continues by selecting, at each round, the single measurement that appears best when combined with the previously selected subset of measurements. Hence, the number of subsets searched to find a subset of m measurements out of n possible measurements is

    Σ_{i=0}^{m−1} (n − i) = m [n − (m − 1)/2],

which is polynomial in the number of desired measurements. This greedy sequential selection procedure is presented in Algorithm 1.

Algorithm 1 Greedy Sequential Selection (GSS)
Given τ_n = {t_1, ..., t_n}, τ_0^* = ∅, and m:
for k = 1 to m do
    find t_{i_k}^* = arg min_{t_i} J(τ_{k−1}^* ∪ {t_i})
    set τ_k^* = τ_{k−1}^* ∪ {t_{i_k}^*}
end for
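A direct Python rendering of Algorithm 1 might look as follows (a sketch; cost stands for any implementation of J, such as the det P(t̄) evaluator sketched in Section II).

    # Greedy Sequential Selection (Algorithm 1): a minimal sketch.
    # `cost` is any callable implementing J(tau), lower is better.
    def gss(tau_n, m, cost):
        selected = []
        remaining = list(tau_n)
        for _ in range(m):
            # Add the single measurement that most improves the cost
            # when combined with the measurements chosen so far.
            best = min(remaining, key=lambda t: cost(sorted(selected + [t])))
            selected.append(best)
            remaining.remove(best)
        return sorted(selected)

Each round scans the remaining candidates once, which yields the m[n − (m − 1)/2] subset evaluations counted above.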

First we show that, without process noise, the greedy algorithm gives the optimal set of measurements; we present this result as Theorem 4.1. In the presence of process noise, we compared the solution of the GSS algorithm with that of exhaustive search for a large number of scenarios and observed that the GSS algorithm again provides the optimal subset of measurements. This led us to Conjecture 4.2.

A. Special Case: No Process Noise

Consider the double-integrator model

    x(t_{i+1}) = [[1, dt], [0, 1]] x(t_i),
    z(t_i) = x(t_i) + v(t_i),                                  (13)

where dt = t_{i+1} − t_i. Suppose measurements of both position and velocity are available at each time instant t_i (i.e., H = I_{2×2}, the identity matrix), and the measurement precisions are the same for all measurements (i.e., R̄_i = R̄ for all i = 1, ..., n). The following theorem holds for this special case of no process noise, with both position and velocity measurements available.

Theorem 4.1: Consider the double-integrator system (13). Given n measurements of position and velocity at times t_i ∈ {t_1, ..., t_n} = τ_n, and only m measurements to process, the optimal solution τ_m^* to problem (3) is given by Algorithm 1.

Proof: The proof is omitted due to space limitations.

B. The General Case

Consider the double-integrator model with process noise, as shown in Figure 2, with dynamics

    x(t_{i+1}) = [[1, dt], [0, 1]] x(t_i) + w(t_{i+1}, t_i),
    z(t_i) = x(t_i) + v(t_i).                                  (14)

The covariance of the discretized white, zero-mean continuous-time process noise is

    Q = E[w w^T] = q · [[dt^3/3, dt^2/2], [dt^2/2, dt]],

where q is the power spectral density of the continuous-time process noise.

Simulation studies show that the greedy Algorithm 1 finds the optimal set of measurements for (14), even when q ≠ 0. We therefore present the following conjecture for measurement selection for a single double-integrator system:

Conjecture 4.2: Consider the stochastic system (14). Given a set of measurements τ_n over the interval [0, t̄), and m measurements to process, the optimal solution τ_m^* to problem (3) is given by the greedy Algorithm 1.


The proof of this general case is ongoing work.
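The conjecture can be probed numerically by comparing the greedy and exhaustive selections over randomized scenarios. A sketch (reusing the hypothetical cov_at_tbar, gss, and exhaustive_select helpers from the earlier sketches; all values illustrative):

    # Numerical probe of Conjecture 4.2: does GSS match exhaustive
    # search when process noise is present? Per the conjecture, the
    # assert below is expected to hold; this is a check, not a proof.
    import numpy as np

    rng = np.random.default_rng(0)
    P0 = np.array([[1.0, 2.0], [2.0, 5.0]])
    H = np.eye(2)
    Rbar = np.array([[2.0, 0.0], [0.0, 0.5]])   # measurement precision R^{-1}
    tbar, q = 10.0, 0.1

    for trial in range(100):
        tau_n = np.sort(rng.uniform(0.0, tbar, size=8))
        cost = lambda s: np.linalg.det(cov_at_tbar(s, tbar, P0, H, Rbar, q))
        greedy = gss(list(tau_n), 4, cost)
        brute = sorted(exhaustive_select(list(tau_n), 4, tbar, P0, H, Rbar, q))
        assert np.allclose(greedy, brute), (greedy, brute)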

C. Example

A radar measures position via the time delay of returning pulses and velocity via the Doppler shift of the pulses. For an accurate position measurement a short pulse is desirable, whereas accurate velocity information calls for a longer pulse. Therefore, we assume

    R = E[v v^T] = [[r, 0], [0, 1/r]] = [[2 m^2, 0], [0, 0.5 m^2/s^2]].

Suppose that in the interval (0, 10 sec) n = 10 measurements are taken and stored in the buffer, and that the process noise density is q = 0.1 m^2/s^3. Figure 3 shows that the GSS algorithm provides the optimal solution in the presence of process noise. It can also be seen in Figure 3 that the value of the objective function (estimation precision) does not improve much beyond approximately k = 7 measurements. This suggests that even if one has the processing resources for m measurements, one may not need to process all of them. This is a consequence of the fact that the objective function is submodular, which intuitively means that as more measurements are processed, the marginal gain decreases [10].

V. SENSOR SELECTION PROBLEM

We can apply the above results to the sensor selection problem. Suppose the position and velocity measurements are taken separately and stored in the buffer. The measurement equation then becomes

    z_*(t_i) = H_* x(t_i) + v_*(t_i),                          (15)

where H_* is either the position measurement matrix H_p = [1, 0], with i.i.d. Gaussian noise v_p ~ N(0, r_p), or the velocity measurement matrix H_v = [0, 1], with i.i.d. Gaussian noise v_v ~ N(0, r_v). After position and velocity measurements have been queued up (with time stamps t_1 < t_2 < ... < t_n), at time t̄ > t_n the CPU chooses m out of the n measurements to perform the estimation task. The selection depends on the

latest state estimate and the accuracy of each sensor. We would like to find out, for instance, how one should select the measurements when a good estimate of the velocity is available while the position sensor is very noisy, or vice versa.

Suppose k position measurements and (m − k) velocity measurements are to be selected, with 1 ≤ k ≤ m. The inverse error covariance (9) at the final time t̄, as a function of k, after incorporating all m measurements, is given by

    P^{-1}(k, t̄) = Φ^T(0, t̄) Y(0) Φ(0, t̄)
                  + (1/r_p) Σ_{s=1}^{k} Φ^T(t_s, t̄) H_p^T H_p Φ(t_s, t̄)
                  + (1/r_v) Σ_{s=1}^{m−k} Φ^T(t_s, t̄) H_v^T H_v Φ(t_s, t̄),

where t_s are the measurement time stamps. The objective is to maximize J(k, τ_k) = det(P^{-1}(k, t̄)) over k ∈ {1, ..., m}. Therefore, one can apply the greedy Algorithm 1 for each k and find the τ_k corresponding to the maximum value. The output of the GSS algorithm is the optimal set of position measurements. Once we have the set of k^* best position measurements, the other (m − k^*) velocity measurements can be selected arbitrarily, because the velocity is constant (a consequence of the no-process-noise assumption).

A. Example

Consider the double-integrator system (14). Suppose our system is equipped with two sensors that measure position and velocity separately. The individual measurements of position and velocity, collected using measurement equation (15), are stored in the buffer. Depending on the initial estimate of the state, P(0), and the measurement covariances r_p and r_v, we would like to select m = 4 out of n = 10 measurements. Figure 4 shows the effect of three different sets of measurement errors on the selection of the measurements. The velocity measurement covariance increases (precision decreases) from (a) r_v = 0.15 to (b) r_v = 1.5 to (c) r_v = 15. As the precision of the velocity measurements decreases, fewer and fewer velocity measurements are selected, until in case (c) no velocity measurement is selected at all.
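The outer search over the split k can be a thin loop around the greedy selector, as in the following sketch (reusing the hypothetical gss helper from Section IV; the cost wiring is illustrative).

    # Sensor selection: for each split of k position / (m - k) velocity
    # measurements, run GSS and keep the split maximizing det P^{-1}(k, tbar).
    # A sketch; cost_for_split is any evaluator of the expression above.
    import numpy as np

    def select_sensors(pos_times, m, cost_for_split):
        """cost_for_split(k, tau) -> det P^{-1}(k, tbar); higher is better."""
        best_k, best_tau, best_val = None, None, -np.inf
        for k in range(1, m + 1):
            # Greedily pick k position measurement times; the (m - k)
            # velocity measurements may then be chosen arbitrarily.
            tau_k = gss(pos_times, k, lambda tau: -cost_for_split(k, tau))
            val = cost_for_split(k, tau_k)
            if val > best_val:
                best_k, best_tau, best_val = k, tau_k, val
        return best_k, best_tau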

Fig. 3. The greedy sequential selection algorithm (Algorithm 1) provides the optimal measurement selection in the presence of process noise, q = 0.1 m^2/s^3.

VI. SUMMARY AND CONCLUSIONS

We studied the problem of finding the subset of measurements that provides the best state estimate given processing constraints. We formulated it as an optimization problem and showed that, in the absence of process noise, the measurement selection problem for a single system is a convex optimization problem whose global solution can easily be found. In the presence of process noise, however, the problem is no longer convex, and finding the optimal solution is in general computationally intractable. Surprisingly, for a double-integrator system, we showed that a greedy sequential selection algorithm can find the best subset of measurements.


Fig. 4. Measurement selection for three sensor-noise settings: (a) k^* = 2, (b) k^* = 3, (c) k^* = 4. The initial covariance of the estimate is P_0 = I_{2×2}. The covariance of the velocity measurement increases from σ_v^2 = 0.15 in (a) to σ_v^2 = 1.5 in (b) to σ_v^2 = 15 in (c). As the velocity measurement error increases (the confidence ellipse stretches along the y axis), more position measurements are selected.

The GSS algorithm is computationally tractable because its cost grows polynomially with the size of the measurement set.

The GSS algorithm can also be applied to a multi-agent system in which each agent is modeled as a double integrator and is capable of taking relative position and velocity measurements. The problem formulation for a multi-agent system is beyond the scope of this paper and is part of ongoing work. However, it suffices to say that the convex optimization formulation presented in Section III extends to the multi-agent scenario when there is no process noise. In future work we will study the multi-agent scenario in more detail.

VII. ACKNOWLEDGMENTS

The authors would like to thank Jovan Boskovic, Jayesh Amin and Danial Scharf for their technical support and valuable comments.

REFERENCES

[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] T. M. Cover. The best two independent measurements are not the two best. IEEE Transactions on Systems, Man, and Cybernetics, 4:116-117, 1974.
[3] T. M. Cover and J. M. Van Campenhout. On the possible orderings in the measurement selection problem. IEEE Transactions on Systems, Man, and Cybernetics, 7:657-661, Sept. 1977.
[4] A. Das and D. Kempe. Sensor selection for minimizing worst-case prediction error. In Proceedings of the 7th International Conference on Information Processing in Sensor Networks, pages 97-108, 2008.
[5] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming (web page and software), February 2009.
[6] V. Gupta, T. H. Chung, B. Hassibi, and R. M. Murray. On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage. Automatica, 42(2):251-260, 2006.
[7] V. Isler and R. Bajcsy. The sensor selection problem for bounded uncertainty sensing models. IEEE Transactions on Automation Science and Engineering, 3:372-381, Oct. 2006.
[8] S. Joshi and S. Boyd. Sensor selection via convex optimization. IEEE Transactions on Signal Processing, 2009.
[9] B. H. Kang, F. Y. Hadaegh, and D. P. Scharf. On the validity of the double-integrator approximation in deep space formation flying. In International Symposium on Formation Flying Missions and Technologies, Oct. 2002.

[10] A. Krause and C. Guestrin. Near-optimal observation selection using submodular functions. In AAAI, 2007.
[11] S. Kumar and J. Seinfeld. Optimal location of measurements for distributed parameter estimation. IEEE Transactions on Automatic Control, 23:690-698, Aug. 1978.
[12] H. Kushner. On the optimum timing of observations for linear control systems with unknown initial state. IEEE Transactions on Automatic Control, 9:144-150, Apr. 1964.
[13] A. Logothetis and A. Isaksson. On sensor scheduling via information theoretic criteria. In Proceedings of the 1999 American Control Conference, 4:2402-2406, 1999.
[14] P. S. Maybeck. Stochastic Models, Estimation, and Control, volume 141 of Mathematics in Science and Engineering. Academic Press, 1979.
[15] R. Mehra. Optimization of measurement schedules and sensor designs for linear dynamic systems. IEEE Transactions on Automatic Control, 21:55-64, Feb. 1976.
[16] L. Meier III, J. Peschon, and R. M. Dressler. Optimal control of measurement subsystems. IEEE Transactions on Automatic Control, 12:528-536, Oct. 1967.
[17] P. C. Muller and H. I. Weber. Analysis and optimization of certain qualities of controllability and observability for linear dynamical systems. Automatica, 8:237-246, 1972.
[18] M. Rabi and J. S. Baras. Sampling of diffusion processes for real-time estimation. In 43rd IEEE Conference on Decision and Control, 4:4163-4168, Dec. 2004.
[19] D. Scharf, F. Hadaegh, Z. Rahman, J. Shields, and G. Singh. An overview of the formation and attitude control system for the Terrestrial Planet Finder formation flying interferometer. September 2004.
[20] D. Sinno and D. Cochran. Dynamic estimation with selectable linear measurements. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, 4:2193-2196, May 1998.
[21] E. Skafidas and A. Nerode. Optimal measurement scheduling in linear quadratic Gaussian control problems. In Proceedings of the IEEE International Conference on Control Applications, 2:1225-1229, Sep. 1998.
[22] A. W. Whitney. A direct method of nonparametric measurement selection. IEEE Transactions on Computers, C-20:1100-1103, Sept. 1971.
