Outline: Filtering Methods · The UKF · The Generative Process and Learning · Experiments and Results · Conclusions and Future Work

Model Based Learning of Sigma Points in Unscented Kalman Filtering

Ryan Turner (joint work with Carl Rasmussen)
Kittilä, Finland, September 1, 2010

Turner (Engineering, Cambridge), Learning of Sigma Points in Unscented Kalman Filtering

Motivation

(a) State estimation for control: a system (position, velocity) is observed through a noisy measurement device (sensor), z = g(position, noise); a filter produces p(position, velocity), which a controller uses to set the throttle.
(b) Time series prediction.

Goal: estimating (latent) states and predicting future observations from noisy measurements.

Setup

Graphical model: the latent states evolve as a Markov chain x_{t−1} → x_t → x_{t+1} through f, and each state x_t emits a measurement z_t through g.

x_t = f(x_{t−1}) + w,   w ∼ N(0, Q)
z_t = g(x_t) + v,   v ∼ N(0, R)

x ∈ R^D: latent state, z ∈ R^M: measurement
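As a minimal sketch, the state-space model above can be simulated for any concrete f, g, Q, R; the function and parameter names below are illustrative, not from the talk.

```python
import numpy as np

def simulate_ssm(f, g, Q, R, x0, T, rng=None):
    """Sample a trajectory from x_t = f(x_{t-1}) + w, z_t = g(x_t) + v."""
    rng = np.random.default_rng(rng)
    D, M = Q.shape[0], R.shape[0]
    xs, zs = np.empty((T, D)), np.empty((T, M))
    x = x0
    for t in range(T):
        # transition with process noise w ~ N(0, Q)
        x = f(x) + rng.multivariate_normal(np.zeros(D), Q)
        xs[t] = x
        # emission with measurement noise v ~ N(0, R)
        zs[t] = g(x) + rng.multivariate_normal(np.zeros(M), R)
    return xs, zs
```

For example, `simulate_ssm(lambda x: 3*np.sin(x), lambda x: x, 0.01*np.eye(1), 0.01*np.eye(1), np.zeros(1), 500)` generates data from a sinusoidal system with a linear observation.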

Filtering

1) Time update: predict the next hidden state, i.e. map p(x_{t−1} | z_{1:t−1}) to p(x_t | z_{1:t−1}) by propagating through f.
2) Predict the measurement p(z_t | z_{1:t−1}) by propagating through g.
3) Measure z_t and compute the hidden state posterior p(x_t | z_{1:t}).
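The three steps can be sketched for the linear special case (x_t = A x_{t−1} + w, z_t = C x_t + v), where they are exact; in the nonlinear case the EKF/UKF replace the moment propagation with approximations. A and C are hypothetical system matrices, not quantities from the talk.

```python
import numpy as np

def kf_step(mu, S, z, A, C, Q, R):
    """One filtering cycle for the linear case x_t = A x_{t-1} + w, z_t = C x_t + v."""
    # 1) time update: predict next hidden state p(x_t | z_{1:t-1})
    mu_p = A @ mu
    S_p = A @ S @ A.T + Q
    # 2) predict measurement p(z_t | z_{1:t-1})
    z_p = C @ mu_p
    S_z = C @ S_p @ C.T + R
    # 3) measurement update: hidden state posterior p(x_t | z_{1:t})
    K = S_p @ C.T @ np.linalg.inv(S_z)   # Kalman gain
    mu_f = mu_p + K @ (z - z_p)
    S_f = S_p - K @ C @ S_p
    return mu_f, S_f
```

Running `kf_step` over a sequence of measurements implements the full filter; the UKF swaps steps 1 and 2 for unscented-transform approximations.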


Approximate predictions

Extended Kalman filter (EKF): local linearization of the function; Gaussians are propagated exactly through the linearized function [5].

Unscented Kalman filter (UKF): approximate the density by a set of sigma points X and propagate them through h to h(X) [3].



The UKF

1. We will focus on the UKF.
2. The UKF uses the whole distribution on x_t, not just the mean (like the EKF).
3. Filtering and prediction use the unscented transform (UT).
4. In 1D, the UT approximates the distribution by the mean and the α-standard-deviation points.
5. Loosely, it approximates the distribution by 2D + 1 point masses X, propagated through h to h(X).
6. β affects the weight of the center point; α and κ affect the spread of the points.
7. The sample mean and covariance of the sigma points match the original distribution.
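As a sketch of one common UT parameterization (the scaled form; the slides' exact convention may differ slightly), the 2D + 1 points and their weights can be generated and pushed through h as follows:

```python
import numpy as np

def sigma_points(mu, S, alpha=1.0, beta=0.0, kappa=2.0):
    """Scaled unscented-transform sigma points and weights for N(mu, S)."""
    D = len(mu)
    lam = alpha**2 * (D + kappa) - D          # alpha, kappa set the spread
    L = np.linalg.cholesky((D + lam) * S)     # scaled matrix square root
    X = np.vstack([mu, mu + L.T, mu - L.T])   # 2D + 1 points
    wm = np.full(2 * D + 1, 1.0 / (2 * (D + lam)))
    wc = wm.copy()
    wm[0] = lam / (D + lam)                   # center-point mean weight
    wc[0] = wm[0] + 1 - alpha**2 + beta       # beta only affects the center covariance weight
    return X, wm, wc

def unscented_transform(h, mu, S, **theta):
    """Approximate the mean and covariance of h(x), x ~ N(mu, S)."""
    X, wm, wc = sigma_points(mu, S, **theta)
    Y = np.array([h(x) for x in X])
    m = wm @ Y
    C = (wc[:, None] * (Y - m)).T @ (Y - m)
    return m, C
```

With h the identity and the default θ, the transform reproduces the original mean and covariance exactly, matching point 7 above.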


Remarks

1. The UT reconstructs the mean and covariance on x_{t+1} that would have resulted had the dynamics been linear.
2. There is no guarantee of matching the true moments of the non-Gaussian distribution.
3. The parameters θ := {α, β, κ} must be fixed before seeing any data.
4. Some heuristics exist for setting them, e.g. β = 2 is optimal for a Gaussian state distribution [9, 2].
5. A common default is α = 1, β = 0, and κ = 2.


The Achilles' Heel of the UKF

[Figure: a nonlinear function over x ∈ [−10, 10]; the sigma points collapse onto a single location, so the predicted distribution degenerates.]

Typical UKF failure mode: sigma point collapse. α = 1 gives a delta spike, while α = 0.68 gives the optimal moment-matched solution. Can we find empirically which θ are most likely to give good solutions?

Alternative View of the UKF

1. The EKF and UKF approximate a nonlinear system as a nonstationary linear system.
2. The UKF defines its own generative process for the time series.
3. We can sample from the UKF via predict-sample-correct.
4. {α, β, κ} are generative parameters.
5. We can learn the parameters θ in a principled way!

Model Based Learning

1. Learn the parameters θ in a principled way.
2. In the model based view, maximize the marginal likelihood:

   ℓ(θ) := log p(z_{1:T} | θ) = Σ_{t=1}^{T} log p(z_t | z_{1:t−1}, θ).   (1)

3. With learning: UKF-L. Using the default θ: UKF-D.
4. Learning is based on the one-step-ahead predictions p(z_t | z_{1:t−1}).
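Eq. (1) can be accumulated from any filter that produces Gaussian one-step-ahead predictions; `predict_step` below is a hypothetical interface standing in for a UKF run with parameters θ, not an API from the talk.

```python
import numpy as np

def log_marginal_likelihood(z, predict_step):
    """Accumulate l(theta) = sum_t log p(z_t | z_{1:t-1}, theta), as in eq. (1).

    predict_step(t) must return the one-step-ahead predictive mean and
    covariance of z_t given z_{1:t-1} (e.g. from a filter run with fixed theta).
    """
    ll = 0.0
    for t in range(len(z)):
        m, S = predict_step(t)
        r = z[t] - m
        # Gaussian log density: -0.5 * (log det(2 pi S) + r^T S^{-1} r)
        _, logdet = np.linalg.slogdet(2 * np.pi * S)
        ll += -0.5 * (logdet + r @ np.linalg.solve(S, r))
    return ll
```

Maximizing this quantity over θ = {α, β, κ} is exactly the UKF-L learning objective.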

Likelihood Illustrations

[Figure: cross-sections of the log likelihood (nats/obs) as a function of (c) α, (d) β, and (e) κ, for several latent dimensionalities D; the cross-sections are jagged and multimodal.]

Not much hope of gradient based optimization based on these cross-sections :(


Gaussian Process Optimizers

1. Do derivative free optimization [6].
2. Treat optimization as a sequential decision problem: the reward r for choosing input θ is the resulting ℓ(θ), which we want to be large.
3. We must place a model on ℓ(θ) to compute E[ℓ(θ)] and Var[ℓ(θ)].
4. Gaussian processes (GPs) are priors on functions.
5. GPs were used analogously for integration in [7].


Gaussian Process Optimizers

[Figure: a GP posterior on f(x) over x ∈ [−5, 5], with posterior mean and uncertainty bands.]

A greedy strategy will go where E[ℓ(θ)] is maximized; an explorative strategy will go where Var[ℓ(θ)] is maximized. J(θ) trades off exploration against exploitation using K:

   J(θ) := E[ℓ(θ)] + K √Var[ℓ(θ)].   (2)
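A minimal sketch of such an optimizer over a 1-D θ grid, assuming a squared-exponential kernel with hypothetical fixed hyperparameters (`ell`, `sf2`); the method of [6] uses a more sophisticated setup, so treat this only as an illustration of eq. (2).

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-6):
    """GP regression posterior mean/variance with a squared-exponential kernel."""
    k = lambda a, b: sf2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    # diag of Ks K^{-1} Ks^T, subtracted from the prior variance
    var = sf2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)

def next_query(X, y, candidates, K_explore=2.0):
    """Pick the next theta by maximizing J = E[l] + K * sqrt(Var[l]), eq. (2)."""
    mean, var = gp_posterior(np.asarray(X), np.asarray(y), np.asarray(candidates))
    J = mean + K_explore * np.sqrt(var)
    return candidates[int(np.argmax(J))]
```

With K_explore = 0 the rule is purely greedy (it re-queries near the best observed ℓ); larger K pushes queries into unexplored regions, matching the two strategies above.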

GPO Demo

[Figure: GP optimization of the log likelihood over θ ∈ [0, 10]; successive frames show the GP model being refined as evaluations are added.]


Now We Can Learn

1. We can use derivative free optimization to maximize ℓ(θ).
2. We can find the best α, β, and κ.


Experimental Setup

1. Three dynamical systems: sinusoidal dynamics [8], Kitagawa dynamics [1, 4], and pendulum dynamics [1].
2. Compare UKF-D, the EKF, the GP-UKF, the GP-ADF, and the time independent model (TIM).
3. UKF-D used the standard parameters α = 1, β = 0, κ = 2.
4. Methods are evaluated on negative log predictive likelihood (NLL) and mean squared error (MSE).


The Dynamical Systems

Sinusoidal dynamics:

   x_{t+1} = 3 sin(x_t) + w,   w ∼ N(0, 0.1²),   (3)
   z_t = σ(x_t/3) + v,   v ∼ N(0, 0.1²).   (4)

The Kitagawa model:

   x_{t+1} = 0.5 x_t + 25 x_t / (1 + x_t²) + w,   w ∼ N(0, 0.2²),   (5)
   z_t = 5 sin(2 x_t) + v,   v ∼ N(0, 0.01²).   (6)
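Eqs. (3)-(6) are straightforward to simulate; the sketch below assumes σ in (4) denotes the logistic sigmoid (an assumption, since the slide does not define it).

```python
import numpy as np

def kitagawa_step(x, rng):
    """One step of the Kitagawa model, eqs. (5)-(6)."""
    x_next = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 0.2)
    z = 5 * np.sin(2 * x_next) + rng.normal(0, 0.01)
    return x_next, z

def sinusoid_step(x, rng):
    """One step of the sinusoidal model, eqs. (3)-(4); sigma taken as the logistic sigmoid."""
    x_next = 3 * np.sin(x) + rng.normal(0, 0.1)
    z = 1.0 / (1.0 + np.exp(-x_next / 3)) + rng.normal(0, 0.1)
    return x_next, z

def simulate(step, x0=0.5, T=100, seed=0):
    """Roll out a scalar system defined by a step(x, rng) -> (x_next, z) function."""
    rng = np.random.default_rng(seed)
    xs, zs = [], []
    x = x0
    for _ in range(T):
        x, z = step(x, rng)
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)
```

The Kitagawa dynamics are strongly nonlinear around x = 0, which is precisely where sigma point collapse hurts the default UKF.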

Pendulum

1. Track a pendulum (a fairly linear system) with torque u applied at angle φ.
2. Nonlinear measurements.
3. Partially observed (no measurements of angular velocity).
4. Time series of 80 s.

Quantitative Results

Results for accuracy of p(z_{t+1} | z_{1:t−1}):

Sinusoid (T = 500 and R = 10); NLL in units of 10⁻¹, MSE in units of 10⁻²:

  Method    NLL               p-value    MSE               p-value
  UKF-D     −4.58 ± 0.168     <0.0001    2.32 ± 0.0901     <0.0001
  UKF-L ⋆   −5.53 ± 0.243     N/A        1.92 ± 0.0799     N/A
  EKF       −1.94 ± 0.355     <0.0001    3.03 ± 0.127      <0.0001
  GP-ADF    −4.13 ± 0.154     <0.0001    2.57 ± 0.0940     <0.0001
  GP-UKF    −3.84 ± 0.175     <0.0001    2.65 ± 0.0985     <0.0001
  TIM       −0.779 ± 0.238    <0.0001    4.52 ± 0.141      <0.0001

Kitagawa (T = 10 and R = 200); NLL and MSE in units of 10⁰:

  Method    NLL               p-value    MSE               p-value
  UKF-D     3.78 ± 0.662      <0.0001    5.42 ± 0.607      <0.0001
  UKF-L ⋆   2.24 ± 0.369      N/A        3.60 ± 0.477      N/A
  EKF       617 ± 554         0.0149     9.69 ± 0.977      <0.0001
  GP-ADF    2.93 ± 0.0143     0.0001     18.2 ± 0.332      <0.0001
  GP-UKF    2.93 ± 0.0142     0.0001     18.1 ± 0.330      <0.0001
  TIM       48.8 ± 2.25       <0.0001    37.2 ± 1.73       <0.0001

T is the length of the test sequences and R is the number of restarts averaged over.

Quantitative Results Continued

Results for accuracy of p(z_{t+1} | z_{1:t−1}):

Pendulum (T = 200 = 80 s and R = 100); NLL in units of 10⁰, MSE in units of 10⁻¹:

  Method    NLL                p-value    MSE               p-value
  UKF-D     3.17 ± 0.0808      <0.0001    5.74 ± 0.0815     <0.0001
  UKF-L ⋆   0.392 ± 0.0277     N/A        1.93 ± 0.0378     N/A
  EKF       0.660 ± 0.0429     <0.0001    1.98 ± 0.0429     0.0401
  GP-ADF    1.18 ± 0.00681     <0.0001    4.34 ± 0.0449     <0.0001
  GP-UKF    1.77 ± 0.0313      <0.0001    5.67 ± 0.0714     <0.0001
  TIM       0.896 ± 0.0115     <0.0001    4.13 ± 0.0426     <0.0001

T is the length of the test sequences and R is the number of restarts averaged over.

Learned θ:

1. Sinusoid: θ = {α = 2.0216, β = 0.2434, κ = 0.4871}
2. Kitagawa: θ = {α = 0.3846, β = 1.2766, κ = 2.5830}
3. Pendulum: θ = {α = 0.5933, β = 0.1630, κ = 0.6391}

Qualitative Results

[Figure: one-step-ahead predictions over time steps 196-214. Panel (f): default θ; panel (g): learned θ.]

UKF-D vs UKF-L for one-step-ahead prediction of dimension 1 of z_t in the Pendulum model. The red line is the truth; the black line and shaded area are the prediction.

Conclusions

1. Automatic and model based learning of the UKF parameters {α, β, κ}.
2. The UKF can be reinterpreted as a generative process.
3. Learning makes sigma point collapse less likely.
4. UKF-L is significantly better than UKF-D on all error measures and data sets.

http://www.TurnerComputing.com/

References

[1] Marc P. Deisenroth, Marco F. Huber, and Uwe D. Hanebeck. Analytic moment-based Gaussian process filtering. In Proceedings of the 26th International Conference on Machine Learning, pages 225–232, Montreal, QC, 2009. Omnipress.
[2] S. J. Julier and J. K. Uhlmann. Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3):401–422, 2004.
[3] Simon J. Julier and Jeffrey K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proceedings of AeroSense: 11th Symposium on Aerospace/Defense Sensing, Simulation and Controls, pages 182–193, Orlando, FL, 1997.
[4] Genshiro Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
[5] Peter S. Maybeck. Stochastic Models, Estimation, and Control, volume 141 of Mathematics in Science and Engineering. Academic Press, Inc., 1979.
[6] Michael A. Osborne, Roman Garnett, and Stephen J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), Trento, Italy, January 2009.
[7] Carl E. Rasmussen and Zoubin Ghahramani. Bayesian Monte Carlo. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 489–496. The MIT Press, Cambridge, MA, 2003.
[8] Ryan Turner, Marc Peter Deisenroth, and Carl Edward Rasmussen. State-space inference and learning with Gaussian processes. In the 13th International Conference on Artificial Intelligence and Statistics, volume 9, Sardinia, Italy, 2010.
[9] Eric A. Wan and Rudolph van der Merwe. The unscented Kalman filter for nonlinear estimation.
