Tracking Across Multiple Cameras with Overlapping Views Based on Brightness and Tangent Transfer Functions

Chun-Te Chu, Jenq-Neng Hwang
Department of Electrical Engineering, Box 352500, University of Washington, Seattle, WA 98195, USA
{ctchu, hwang}@u.washington.edu

Kung-Ming Lan, Shen-Zheng Wang
Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan 31040, R.O.C.
{blueriver, st}@itri.org.tw

Abstract—The appearance of one object may differ between distinct cameras with overlapping views due to color deviation and perspective difference. In this paper, we study these problems and propose an appearance modeling technique for tracking across multiple cameras. For single camera tracking, an effective integrated Kalman filter and multiple kernels tracking scheme is adopted. For tracking across multiple cameras, we build brightness transfer functions (BTFs) to compensate for the color difference between camera views; the BTF is constructed from the overlapping area during tracking by employing robust principal component analysis (RPCA). Moreover, the perspective difference is compensated by applying tangent transfer functions (TTFs) derived from the homography between the two cameras. We evaluate the proposed method on several real-scenario videos and obtain promising results.

Keywords—brightness transfer, multiple cameras, homography

I. INTRODUCTION

Video surveillance systems have been broadly deployed due to security considerations, and tracking is one of the main issues in such systems. Tracking multiple objects (mainly humans) within a single camera has been investigated for several years. As camera networks grow larger, the demand has gradually shifted to tracking across cameras; that is, when an object appears in one view, it is necessary to know whether it has already shown up in another view. Thus, researchers have focused on linking objects across two camera views. The field of view (FOV) line, as proposed in [1], represents how the view limits of one camera are seen in the others. Javed et al. [2] presented a system using the distance to the FOV line to find the correspondence between objects. Further exploiting Javed's method, the extracted motion trends and appearance of objects were combined in [3]. In Kang's method [4], a spatial-temporal homography was utilized to register multiple trajectories in order to track objects across multiple cameras. Moller et al. [5] suggested a hierarchical relocation procedure based on color information during the hand off stage. Image registration, appearance, and spatial cues were utilized for tracking over multiple cameras in [6].

In this paper, we further extend our previous work in [6][7] and propose a robust system that overcomes color deviation and perspective difference when tracking multiple objects over multiple cameras with overlapping views. The main contributions of this paper are: (i) applying robust principal component analysis (RPCA) [8] to extract a brightness transfer function (BTF) that compensates for the color deviation; (ii) proposing a simple yet effective tangent transfer function (TTF) for dealing with the perspective issue; and (iii) updating the transfer functions adaptively. To our knowledge, no previous work uses RPCA to model the BTF, and the TTF is proposed and employed here for the first time to deal with the perspective problem.


The paper proceeds by describing the method used for multiple cameras tracking in Section II. Experimental results are presented in Section III. Finally, Section IV gives the conclusion and future work.

II. MULTIPLE CAMERAS TRACKING

Most notations in this paper are adopted from [1] and [6]. Suppose there are N cameras $C_i$, $i = 1 \sim N$. For each pair of cameras with overlapping views, the spatial relationship can be built by constructing the FOV lines. This task can be done automatically by employing feature detection and matching techniques [9]. After finding the corresponding points between two views, we can determine the ground plane homography, which represents the coordinate transform between the two images from different views (cameras) [10]. In this way, we obtain the FOV lines between two views, i.e., where the boundaries of one view are located in the other view. Meanwhile, the overlapping area in both views can be identified automatically [6]. After successfully tracking objects within each camera separately, a visibility map for each object in each camera can be established. Let $V_i^j(x, y) = 1$ denote that the location $(x, y)$ in camera $C_i$ is also visible in camera $C_j$; otherwise, $V_i^j(x, y) = 0$. If the n-th object in camera $C_i$ is located at $(x, y)$, the camera subset that also sees this n-th object can be expressed as

$$C_i(n) = \{\, j \mid V_i^j(x, y) = 1,\ j \neq i \,\}. \qquad (1)$$
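For concreteness, here is a minimal NumPy sketch (ours, not the paper's implementation) of how $V_i^j$ might be precomputed from the ground plane homography and how the subset in (1) is read off; using an image-bounds test as the visibility criterion is our assumption.

```python
import numpy as np

def visibility_map(H_ij, width, height):
    """Precompute V_i^j: map every pixel of camera i through the ground
    plane homography H_ij (camera i -> camera j) and test whether it
    lands inside camera j's image bounds."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H_ij.T
    u = pts[..., 0] / pts[..., 2]
    v = pts[..., 1] / pts[..., 2]
    return (u >= 0) & (u < width) & (v >= 0) & (v < height)

def camera_subset(i, x, y, vis_maps, num_cams):
    """Camera subset C_i(n) of Eq. (1) for an object at (x, y) in C_i;
    vis_maps[(i, j)] is the boolean map V_i^j from visibility_map()."""
    return {j for j in range(num_cams)
            if j != i and vis_maps[(i, j)][int(y), int(x)]}
```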

Moreover, the FOV line along side s of camera $C_i$ showing in camera $C_j$ is denoted $L_{j,s}^i$, with $s \in \{\text{left, top, right, bottom}\}$. For camera i, as soon as the n-th object $O_i^n$ enters the view from side s, the system decides whether this position is in the overlapping area by checking the set $C_i(n)$. If the object appears only in camera i, the set $C_i(n)$ is empty, and the system gives it a new global label. Otherwise, the object $O_i^n$ must be visible in the other cameras in $C_i(n)$, and the goal is to search and compare all the objects in each $C_j \in C_i(n)$ in order to assign the same label to the object. In this paper, three kinds of cues are effectively exploited.

A. Vicinity Cue

The distance between the object and the FOV line is the first cue; it can be used to remove impossible candidates [6]. In each camera $C_j \in C_i(n)$, we form the object candidate set in camera $C_j$,

$$C_{j,s}^{i,n} = \{\, k \mid dis(O_j^k, L_{j,s}^i) < T_d \,\}, \qquad (2)$$

where $dis(O_j^k, L_{j,s}^i)$ is the minimum distance between the feet location of the k-th tracked object in camera j and the FOV line along side s of camera i showing in camera j, and $T_d$ is a predefined threshold based on the scene. After deleting the candidates far away from the entering FOV line, we keep the distance value $dis(O_j^k, L_{j,s}^i)$ for later use.
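As an illustration only, a minimal sketch, assuming NumPy, of the candidate set in (2); the track container, the FOV-line representation (two points on the line), and the threshold value are our assumptions, not the paper's.

```python
import numpy as np

def candidate_set(tracks_j, fov_line, T_d=50.0):
    """Candidate set C_{j,s}^{i,n} of Eq. (2) in camera j.

    tracks_j maps a track id k to its feet location (x, y); fov_line is
    a pair of points (p, q) on L_{j,s}^i in camera j's image. T_d is the
    scene-dependent threshold (the default here is hypothetical).
    Returns {k: distance}, keeping the distances for reuse in Eq. (13)."""
    p = np.asarray(fov_line[0], float)
    q = np.asarray(fov_line[1], float)
    n = (q - p) / np.linalg.norm(q - p)       # unit direction of the line
    out = {}
    for k, feet in tracks_j.items():
        v = np.asarray(feet, float) - p
        dist = abs(v[0] * n[1] - v[1] * n[0])  # point-to-line distance
        if dist < T_d:
            out[k] = dist
    return out
```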

Since there may be several people whose distances to the FOV line are similar, we cannot rely on the vicinity cue alone. Moreover, if we use the homography to transform the feet location to its corresponding point in the other view, the imperfection of the homography means we may not get the exact location; when several objects are present, the association becomes difficult. Therefore, besides the cues related to spatial location, more information is necessary.

B. Color Cue

The color histogram can serve as a robust cue when searching for the best match among the candidate set $C_{j,s}^{i,n}$ created in the previous step. However, even the same object can look different due to the cameras' color differences, so the color correspondence between the two views has to be established. For each pair of cameras at each time instant, we can use the cumulative histograms of the overlapping area to compute an instant brightness transfer function btf [6], which is similar to the intensity mapping used in [11]. Since noise usually has a negative impact on the BTF computation, we collect instant BTFs over several frames, forming a set $\{btf_i\}$, $i = 1 \sim m$. Each instant BTF $btf_i$ has $d$ dimensions, where $d$ equals the number of histogram bins. We combine them into a single matrix $D$,

$$D = [\, btf_1 \ \cdots \ btf_m \,], \qquad (3)$$

where $D$ is a $d \times m$ matrix. Note that some of the $\{btf_i\}$ are unreliable (corrupted) due to noisy video data collection.

In order to recover the uncorrupted version from the corrupted matrix $D$, we employ RPCA to extract a low-rank matrix $A$, also $d \times m$, from $D$. Given a corrupted data matrix $D$, the uncorrupted matrix $A$ can be estimated by solving the following optimization problem [8]:

$$\min_{A,E} \ \operatorname{rank}(A) + \gamma \|E\|_0 \quad \text{subject to} \quad A + E = D, \qquad (4)$$

where $E$ represents the error matrix. Due to the characteristics of RPCA, the low-rank matrix $A$ consists of the uncorrupted data, which yields better BTFs for our purpose. RPCA has been applied effectively to video background modeling, shadow removal, etc. [8]. The mean vector of the matrix $A$ is taken as our final BTF, which is more robust to noise in the video.

For each object $O_j^k$ in the candidate set $C_{j,s}^{i,n}$, we evaluate the distance between the histogram of $O_j^k$ and the transferred histogram of $O_i^n$. We call this distance $dis(O_j^k, BTF_{ij}(O_i^n))$, where $BTF_{ij}(O_i^n)$ denotes the histogram of the n-th object in camera $C_i$ after applying the final BTF from camera $C_i$ to $C_j$.
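The paper does not detail the RPCA solver, so as an illustration only, here is a minimal NumPy sketch of the inexact augmented Lagrange multiplier (ALM) method for the standard convex relaxation of (4), $\min_{A,E} \|A\|_* + \lambda\|E\|_1$ subject to $A + E = D$, as advocated in [8]. The function name and parameter defaults are our assumptions, not the authors' implementation.

```python
import numpy as np

def rpca_inexact_alm(D, lam=None, tol=1e-7, max_iter=500):
    """Recover low-rank A and sparse error E with D = A + E by solving
    min ||A||_* + lam * ||E||_1  s.t.  A + E = D  (relaxation of Eq. 4)."""
    d, m = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(d, m))            # common default from [8]
    norm_D = np.linalg.norm(D, 'fro')
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
    mu = 1.25 / np.linalg.norm(D, 2)
    rho = 1.5
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        # A-step: singular value thresholding of (D - E + Y/mu)
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        s = np.maximum(s - 1.0 / mu, 0.0)
        A = (U * s) @ Vt
        # E-step: elementwise soft thresholding
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        R = D - A - E
        Y = Y + mu * R
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(R, 'fro') / norm_D < tol:
            break
    return A, E

# Usage, following the text above: stack m instant BTFs (d-vectors) into
# D and take the column mean of A as the final BTF.
# D = np.stack(instant_btfs, axis=1)   # d x m
# A, E = rpca_inexact_alm(D)
# final_btf = A.mean(axis=1)
```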

C. Edge Cue

The edge direction on the object may also change due to the difference in perspective. Here we propose a transformation between the edge directions (i.e., the tangent values). We already know the ground plane homography between the two camera views from the FOV line detection stage. Writing the coordinates in the two cameras as $(x_1, y_1)$ and $(x_2, y_2)$, the homography transform is

$$\begin{bmatrix} \lambda x_2 \\ \lambda y_2 \\ \lambda \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix}. \qquad (5)$$

We can rewrite Equation (5) in explicit form:

$$x_2 = \frac{a x_1 + b y_1 + c}{\lambda}, \qquad \lambda = g x_1 + h y_1 + 1, \qquad (6)$$

$$x_2 = \frac{a x_1 + b y_1 + c}{g x_1 + h y_1 + 1}, \qquad (7)$$

$$y_2 = \frac{d x_1 + e y_1 + f}{\lambda} = \frac{d x_1 + e y_1 + f}{g x_1 + h y_1 + 1}. \qquad (8)$$

Taking the derivatives of (7) and (8) with respect to $x_1$,

$$\frac{\partial x_2}{\partial x_1} = \frac{\left(a + b\frac{\partial y_1}{\partial x_1}\right)\!\left(g x_1 + h y_1 + 1\right) - \left(a x_1 + b y_1 + c\right)\!\left(g + h\frac{\partial y_1}{\partial x_1}\right)}{\left(g x_1 + h y_1 + 1\right)^2}, \qquad (9)$$

$$\frac{\partial y_2}{\partial x_1} = \frac{\left(d + e\frac{\partial y_1}{\partial x_1}\right)\!\left(g x_1 + h y_1 + 1\right) - \left(d x_1 + e y_1 + f\right)\!\left(g + h\frac{\partial y_1}{\partial x_1}\right)}{\left(g x_1 + h y_1 + 1\right)^2}. \qquad (10)$$

Dividing Equation (10) by (9),

$$\frac{\partial y_2}{\partial x_2} = \frac{\left(d + e\frac{\partial y_1}{\partial x_1}\right)\!\left(g x_1 + h y_1 + 1\right) - \left(d x_1 + e y_1 + f\right)\!\left(g + h\frac{\partial y_1}{\partial x_1}\right)}{\left(a + b\frac{\partial y_1}{\partial x_1}\right)\!\left(g x_1 + h y_1 + 1\right) - \left(a x_1 + b y_1 + c\right)\!\left(g + h\frac{\partial y_1}{\partial x_1}\right)} = \frac{\left(d + e\frac{\partial y_1}{\partial x_1}\right) - y_2\left(g + h\frac{\partial y_1}{\partial x_1}\right)}{\left(a + b\frac{\partial y_1}{\partial x_1}\right) - x_2\left(g + h\frac{\partial y_1}{\partial x_1}\right)}, \qquad (11)$$

where the second equality divides numerator and denominator by $(g x_1 + h y_1 + 1)$ and substitutes (7) and (8). Let $m_1$ and $m_2$ denote the tangent values $\partial y_1 / \partial x_1$ and $\partial y_2 / \partial x_2$, respectively; then (11) becomes

$$m_2 = \frac{(d + e m_1) - y_2 (g + h m_1)}{(a + b m_1) - x_2 (g + h m_1)}. \qquad (12)$$
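For concreteness, a direct transcription of (6)-(8) and (12) into Python, together with a HOT builder using the 30-bin, 0 to 180 degree setup described in Section III; the helper names are ours, not the paper's.

```python
import numpy as np

def transfer_tangent(H, x1, y1, m1):
    """Tangent transfer function (TTF) of Eq. (12).

    H is the 3x3 ground plane homography [[a,b,c],[d,e,f],[g,h,1]] from
    camera 1 to camera 2; m1 is the tangent dy1/dx1 at point (x1, y1)
    in camera 1. Returns the transferred tangent m2 in camera 2."""
    (a, b, c), (d, e, f), (g, h, _) = H
    lam = g * x1 + h * y1 + 1.0                 # Eq. (6)
    x2 = (a * x1 + b * y1 + c) / lam            # Eq. (7)
    y2 = (d * x1 + e * y1 + f) / lam            # Eq. (8)
    num = (d + e * m1) - y2 * (g + h * m1)      # Eq. (12)
    den = (a + b * m1) - x2 * (g + h * m1)
    return num / den

def hot_histogram(tangents, bins=30):
    """Histogram of oriented tangents (HOT) over an object's edge points,
    with angles folded into [0, 180) degrees as in the experiments."""
    angles = np.degrees(np.arctan(np.asarray(tangents))) % 180.0
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 180.0))
    return hist / max(hist.sum(), 1)
```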

Equation (12) is the tangent transfer function (TTF). We use the histogram of oriented tangents (HOT) as our feature to express edge directions, where the tangent vector is obtained from the gradient vector extracted by the Sobel operator, based on the fact that the tangent direction is always perpendicular to the gradient direction. By applying the transfer function, we obtain the tangent value at each point on the object after transferring to the other camera, and the HOT can be built. Comparison is made by computing the distance $dis(O_j^k, TTF_{ij}(O_i^n))$, where $TTF_{ij}(O_i^n)$ denotes the histogram of the n-th object in camera $C_i$ after applying the TTF from camera $C_i$ to $C_j$.

In our simulations, we assume the cameras are mounted at high elevations, which is the normal setup in many public spaces, so the object height is much smaller than the distance between the ground plane and the camera. Hence, the ground plane homography can be used as an approximation of the global homography between the two views [15]; that is, all pairs of corresponding points share the same homography matrix. The performance can be further enhanced if multiple layers of planar homographies are employed: for each point on the object, we choose the homography corresponding to the plane on which the point lies when transferring the tangent value. Multiple-layer homographies have been applied in several works [16][17][18].

D. Histogram Distance

The histogram after transfer deviates slightly from the correct one because the histogram has been quantized. To alleviate this effect, the Earth Mover's Distance (EMD) [12] is chosen to compute the distance between two histograms, since it tolerates some amount of deformation that shifts features in the feature space. Here, we adopt the efficient EMD algorithm [13] to obtain both robust comparison and computational efficiency.

E. Object Matching

By combining the three cues above, we write the likelihood function and determine the best match as the candidate with maximum likelihood:

$$p = \arg\max_{k \in C_{j,s}^{i,n}} \left[ \exp\!\left(-\frac{dis(O_j^k, L_{j,s}^i)^2}{\sigma_v^2}\right) \times \exp\!\left(-\frac{dis(O_j^k, TTF_{ij}(O_i^n))^2}{\sigma_e^2}\right) \times \exp\!\left(-\frac{dis(O_j^k, BTF_{ij}(O_i^n))^2}{\sigma_c^2}\right) \right]. \qquad (13)$$
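A minimal sketch of the matching rule (13), assuming Python/SciPy: the 1-D Wasserstein distance stands in for the efficient EMD of [13], and the container layout (dicts keyed by candidate index) is our assumption. The sigma defaults are the empirical values reported below.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D EMD between histograms

def emd(hist_a, hist_b):
    """1-D Earth Mover's Distance between two bin-aligned histograms;
    bins are positions on a line, histogram masses are the weights."""
    bins = np.arange(len(hist_a))
    return wasserstein_distance(bins, bins, hist_a, hist_b)

def best_match(candidates, fov_dist, btf_pair, ttf_pair,
               sigma_v=15.0, sigma_c=2.0, sigma_e=2.0):
    """Pick the candidate p maximizing the likelihood of Eq. (13).

    fov_dist[k] is dis(O_j^k, L_{j,s}^i) kept from Eq. (2); btf_pair[k]
    and ttf_pair[k] are (candidate histogram, transferred histogram)
    tuples for the color and edge cues, respectively."""
    def likelihood(k):
        d_v = fov_dist[k]
        d_c = emd(*btf_pair[k])
        d_e = emd(*ttf_pair[k])
        return (np.exp(-d_v**2 / sigma_v**2)
                * np.exp(-d_c**2 / sigma_c**2)
                * np.exp(-d_e**2 / sigma_e**2))
    return max(candidates, key=likelihood)
```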

We then assign the label of the p-th object in camera $C_j$ to the n-th object in camera $C_i$. The $\sigma$ values are determined empirically as $\sigma_v = 15$, $\sigma_c = 2$, and $\sigma_e = 2$ in the experiments.

F. Update the Transfer Functions

Since the lighting condition changes over time, the BTF needs to be updated from time to time. At each time instant, an instant BTF is constructed and saved for future updates while the oldest ones are discarded; the system maintains only the K newest instant BTFs. When the update stage takes place, we take these K BTFs and compute the new final BTF using the method described in Section II.B. Here we take K = 100. The update rate is predetermined based on the environment: if the lighting condition changes quickly, we set the rate higher.

The original homography matrix is extracted during the initialization stage described at the beginning of Section II. Since noise may affect the accuracy of the homography computation, the matrix needs to be refined during tracking in order to build a more reliable TTF. After each dependable label hand off with a high enough likelihood value in (13), we use this positive result to update the matrix, since such results have a high probability of being correct correspondences. Assume the original transform matrix is obtained from two sets of corresponding matching feature points of size M in the two camera views:

$$P = \{\, (x_i, y_i) \mid i = 1 \sim M \,\}, \qquad Q = \{\, (x_i', y_i') \mid i = 1 \sim M \,\},$$

where point $(x_i, y_i)$ in camera 1 matches point $(x_i', y_i')$ in camera 2. Suppose we have a positive result; that is, the object at $(\hat{x}, \hat{y})$ in camera 1 is labeled as the same object at $(\hat{x}', \hat{y}')$ in camera 2 with a high enough likelihood in (13). We add this pair to the original sets, giving matching point sets of size M + 1, and the updated homography can be extracted using the least squares method [10]. Since a positive result corresponds to a location that an object actually passed, it is highly probable that other objects will pass near that point in the future. By taking it into account, the homography transform matrix becomes more reliable; therefore, the TTF deviates less around those positions.
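As a sketch of this refit, the standard DLT least squares estimate [10] can be recomputed on the augmented point sets. The paper does not specify the solver, so the helper below, assuming NumPy, is illustrative only.

```python
import numpy as np

def fit_homography(P, Q):
    """Least squares (DLT) homography mapping points P -> Q.

    P, Q are (M, 2) arrays of matched points; returns the 3x3 matrix
    normalized so H[2, 2] = 1, matching the form in Eq. (5)."""
    rows = []
    for (x, y), (xp, yp) in zip(P, Q):
        rows.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp, -xp])
        rows.append([0, 0, 0, x, y, 1, -x * yp, -y * yp, -yp])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)   # least squares: smallest right singular vector
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Refinement after a dependable hand off (x_hat, y_hat) -> (x_hat_p, y_hat_p):
# P = np.vstack([P, [x_hat, y_hat]])
# Q = np.vstack([Q, [x_hat_p, y_hat_p]])
# H = fit_homography(P, Q)
```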

III. EXPERIMENTAL RESULTS

In this section, we first show the effectiveness of our histogram transfer functions, including the BTF and the TTF. After that, the multiple cameras tracking scheme is evaluated on several video clips. All the test videos are captured from two cameras with overlapping views mounted at high elevations, and the image size is 640x480. Some of the simulation results are shown to demonstrate tracking across cameras with the proposed method.

A. Histogram Transfer

The effectiveness of the transfer functions can be seen in Fig. 1 and Fig. 2: the distance between the histograms extracted from the same person in different views is much lower after applying the transfer functions. In our implementation, we use the RGB color space with 32 bins per channel, and the histograms from the three channels are concatenated into a single vector when employing RPCA. The edge direction is divided into 30 bins within the range from 0 to 180 degrees. The number of bins is increased in the figures for demonstration purposes only.

Figure 1. Histogram of color (only one channel is shown). (a)(b) The same object observed in different cameras; it is the same person as the one with ID 2 (green box) in Fig. 4. (c)(d) Histogram comparison: hist2 (blue curve, corresponding to (a)) is unchanged, while hist1 (red curve, corresponding to (b)) is shown before and after transfer in (c) and (d), respectively. The distance is much smaller after applying the BTF.

Figure 2. Histogram of tangent. (a) View of camera 1. (b) View of camera 2. (c)(d) The same object observed from different perspectives. (e)(f) Histogram comparison: hist2 (blue curve, corresponding to (c)) is unchanged, while hist1 (red curve, corresponding to (d)) is shown before and after transfer in (e) and (f), respectively. The distance is much smaller after applying the TTF.

B. Tracking Hand Off Results

The proposed method has been integrated into a fully automatic tracking system. For tracking within each individual camera, we apply the Kalman filter combined with multiple kernels tracking [14], which keeps the tracking reliable even under occlusion or segmentation errors. In this paper, we only include the results during the hand off stage in order to demonstrate the performance of the proposed method.

Fig. 3 shows the result of tracking 3 people across the cameras, and all of them are labeled correctly. To make the people distinguishable, they are labeled with numbers as well as bounding boxes of different colors. In Fig. 4, the color difference between the two cameras is even larger. Three people walk into the left view nearly at the same time; since their geometrical distances to the FOV line are similar, we rely on the color and edge cues, which lead us to the correct results. More people walk across the camera views in Figs. 5 and 6, and some of them even pass the FOV line nearly at the same time, which makes the hand off more challenging. Fig. 7 summarizes the hand off accuracy, defined as

$$\text{accuracy} = \frac{\#\ \text{of people correctly labeled after hand off}}{\#\ \text{of all people walking across the cameras}}.$$

During the hand off procedure, all three cues are considered. The performance is best when both transfer functions are applied.

Figure 3. Two overlapping views from the two cameras are shown in the same row. (a) Three people are about to enter the right view. (b) All people appear in the right view and are labeled correctly.

Figure 4. Two overlapping views from the two cameras are shown in the same row. (a) Three people enter the left view at the same time. (b) All people appear in the left view and are labeled correctly.

Figure 5. Two overlapping views from the two cameras are shown in the same row. The system successfully labels the same person with the same color bounding box in both cameras' views. (a)-(d) are four representative frames in chronological order.

Figure 6. Two overlapping views from the two cameras are shown in the same row. The system successfully labels the same person with the same color bounding box in both cameras' views. (a)-(d) are four representative frames in chronological order.

Figure 7. Hand off accuracy. 1: Without transfer functions. 2: TTF only. 3: BTF only. 4: Proposed method, TTF + BTF.

C. Comparison

We also compared our proposed scheme with two other approaches. In [1], the vicinity cue is the only feature considered during the hand off; in [5], color information without BTF is also utilized in the hand off stage. Table I shows the improvement in hand off accuracy achieved by our method. Note that the accuracy of "Vicinity + Color cue (no BTF)" in Table I (0.79) is similar to that of using all three cues without the transfer functions in Fig. 7 (0.77). If the perspective difference between two cameras is large, utilizing the edge cue without applying the TTF can cause errors, as in Fig. 2. Several videos in our test set have large perspective differences, which explains why the accuracy stays nearly the same after adding one more cue when the transfer functions are not employed.

TABLE I. Hand off accuracy comparison.

Method                               Accuracy
Our proposed method                  0.96
Vicinity cue only [1]                0.72
Vicinity + Color cue (no BTF) [5]    0.79

IV. CONCLUSION

We have built transfer functions for multiple objects tracking across multiple cameras with overlapping views. In order to compensate for the color difference between cameras while accounting for the potential presence of noise, we use RPCA to construct the brightness transfer function. For the perspective issue, the tangent transfer function is proposed by utilizing the homography matrix. The matching score is calculated from the EMD-based histogram distance, and the update stage makes the transfer functions more reliable over time. The promising results on several test videos demonstrate the effectiveness of the proposed method. In the future, we will apply transfer functions for different kinds of features to multiple cameras tracking with non-overlapping views.

REFERENCES

[1] S. Khan and M. Shah, "Consistent labeling of tracked objects in multiple cameras with overlapping fields of view," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1355-1360, Oct. 2003.
[2] O. Javed, Z. Rasheed, O. Alatas, and M. Shah, "KNIGHT: A real time surveillance system for multiple overlapping and non-overlapping cameras," IEEE Intl. Conf. Multimedia and Expo, vol. 1, 2003.
[3] O. Javed, Z. Rasheed, O. Alatas, and M. Shah, "Tracking across multiple cameras with disjoint views," Proc. IEEE Conf. on Computer Vision, France, pp. 952-957, 2003.
[4] J. Kang, I. Cohen, and G. Medioni, "Continuous tracking within and across camera streams," Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, 2003.
[5] B. Moller, T. Plotz, and G. A. Flink, "Calibration-free camera hand-over for fast and reliable person tracking in multi-camera setups," Intl. Conf. on Pattern Recognition, pp. 1-4, 2008.
[6] L. Zhu, J. Hwang, and H. Cheng, "Tracking of multiple objects across multiple cameras with overlapping and non-overlapping views," IEEE Intl. Symposium on Circuits and Systems, pp. 1056-1060, Taipei, 2009.

[7] J. Hwang and V. Gau, "Ch. 7: Tracking of multiple objects over camera networks with overlapping and non-overlapping views," in Distributed Video Sensor Network, Springer, 2011.
[8] J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao, "Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization," Neural Information Processing Systems (NIPS), Dec. 2009.
[9] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Intl. Journal of Computer Vision, pp. 91-110, 2004.
[10] A. Agarwal, C. V. Jawahar, and P. J. Narayanan, "A survey of planar homography estimation techniques," IIIT Technical Report, 2005.
[11] M. D. Grossberg and S. K. Nayar, "Determining the camera response from images: What is knowable?" IEEE Trans. Pattern Analysis and Machine Intelligence, 25(11), pp. 1455-1467, 2003.
[12] Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," Intl. Journal of Computer Vision, 40(2), pp. 99-121, 2000.
[13] H. Ling and K. Okada, "An efficient earth mover's distance algorithm for robust histogram comparison," IEEE Trans. Pattern Analysis and Machine Intelligence, pp. 840-853, 2007.
[14] C. Chu, J. Hwang, H. Pai, and K. Lan, "Robust video object tracking based on multiple kernels with projected gradients," IEEE Conf. on Acoustics, Speech and Signal Processing, Prague, May 2011.
[15] J. Zhou and B. Li, "Homography-based ground detection for a mobile robot platform using a single camera," Proc. IEEE Conf. on Robotics and Automation, May 2006.
[16] S. M. Khan and M. Shah, "Tracking multiple occluding people by localizing on multiple scene planes," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 3, 2009.
[17] R. Eshel and Y. Moses, "Homography based multiple camera detection and tracking of people in a dense crowd," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.
[18] D. Arsic, E. Hristov, N. Lehment, B. Hornler, B. Schuller, and G. Rigoll, "Applying multi layer homography for multi camera person tracking," ACM/IEEE Intl. Conf. on Distributed Smart Cameras, 2008.
