Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, December 7-11, 2011, Phuket, Thailand

Visual Tracking by Partition-based Histogram Backprojection and Maximum Support Criteria

Jae-Yeong Lee and Wonpil Yu

Abstract— Histogram-based mean-shift is an efficient tool for visual object tracking. However, it often fails to locate a target object correctly in complex environments, especially when the background contains colors similar to the object's. In this paper, we present a novel visual tracking method that combines the real-time performance of the mean-shift with the exact localization of template matching, and that is robust to background changes, partial occlusions, and pose changes. The proposed method uses a partition-based object model represented by multiple patch histograms. The method first estimates the densities of the object pixels by histogram backprojection of each patch histogram, which gives a set of patch-wise density estimates. A target object is then located by pixel-wise evaluation of the maximum likelihood, computed as the sum of the densities of the object pixels within a target candidate. The suggested localization criterion overcomes many problems of the conventional mean-shift and gives a significant improvement in tracking performance. The proposed tracker is very fast, and its tracking accuracy is comparable to recent state-of-the-art trackers. Experiments on extensive challenging video sequences confirm the efficiency of our method.

I. INTRODUCTION

Object tracking is an important and challenging problem in computer vision. Reliable tracking is essential for many high-level applications such as surveillance, human-robot interaction, action recognition, and navigation of intelligent vehicles. In natural scenes, however, reliable tracking remains a challenging problem. Histogram-based object tracking methods like the mean-shift have been used widely because of their simplicity and real-time performance [4]. Mean-shift is well suited to the tracking of non-rigid objects and works well for objects whose colors are distinct from the background. However, it is well known that the mean-shift is very sensitive to the background colors and shows low localization accuracy in complex environments. One of the main sources of the problem is the loss of spatial information in the use of histograms. For example, the histogram of an object consisting of a blue upper part and a yellow lower part will only say that the object consists of blue and yellow, losing the spatial information. This issue has been addressed by several works.

This work was supported by the R&D program of the Korea Ministry of Knowledge Economy (MKE) and the Korea Evaluation Institute of Industrial Technology (KEIT). (The Development of Low-cost Autonomous Navigation Systems for a Robot Vehicle in Urban Environment, 10035354.) J. Lee and W. Yu are with the Robot Research Department, Electronics and Telecommunications Research Institute, 138 Gajeongno, Yuseong-gu, Daejeon, Korea. [email protected]



Fig. 1. A sample object model and backprojection images: (a) input image and target object (pedestrian) with 2 × 1 partitions, (b) backprojection image using the upper-part histogram of the object, (c) backprojection image using the lower-part histogram of the object, (d) combined backprojection image of (b) and (c), (e) conventional backprojection image using a single histogram.

Yang et al. [11] introduced a new similarity measure that takes into account both colors and pixel positions. Their approach is a kind of template matching that is able to produce a motion vector from the current estimate of the target location, which enables target localization by the mean-shift iteration. Birchfield and Rangarajan [12] also addressed this issue by introducing the concept of the spatiogram, a single histogram in which each bin is spatially weighted by the mean and covariance of the locations of the pixels that contribute to that bin. Another category of approaches addressing this issue uses multiple histograms to model an object. Pérez et al. [10] introduced a multi-part color modeling to capture the rough spatial layout ignored by global histograms. In their approach, the similarity is computed as the sum of Bhattacharyya coefficients between the corresponding patch histograms of the object model and image regions. A similar approach using a partition-based object model was recently reported by Adam et al. [1], where the similarity is measured by the Earth Mover's Distance (EMD) between the corresponding patch histograms; to reduce the intensive computational load of building patch histograms for every candidate location, an integral histogram is introduced. The Histogram of Oriented Gradients (HOG) by Dalal and Triggs [13], which is popular in object detection, also uses a set of histograms to model an object.

In this paper, we present a novel partition-based visual object tracking method that combines the real-time performance of the mean-shift with the exact localization of template matching. The proposed visual tracking method is based on partition-based histogram backprojection and a new target localization criterion. The object model is represented

by multiple patch histograms, and a target is located by pixel-wise evaluation of the maximum likelihood, computed as the sum of the densities of the object pixels within a target candidate. The suggested localization criterion copes with conventional problems of the mean-shift, such as partial occlusions and inaccurate localization, giving more reliable results. In our method, the object model is represented by multiple patch histograms, similar to the previous approaches [10], [1], but target objects are located in a different way. In [10] and [1], patch histograms are built for every candidate location and compared with the model patch histograms to locate the target, involving a high computational burden. In our approach, we do not build patch histograms for the input image but instead estimate the densities of the object pixels in a single pass by histogram backprojection (see Sect. III-B). We then locate targets by evaluating a likelihood based on the density estimates rather than by directly comparing patch histograms. In the next section, we briefly describe the mean-shift method; we then present our approach in Sect. III and Sect. IV. Experimental results on real video sequences are given in Sect. V. Finally, we conclude the paper in Sect. VI.

II. HISTOGRAM-BASED MEAN-SHIFT TRACKING

In this section, we briefly review the histogram-based mean-shift tracking method. Mean-shift is a mathematical tool for seeking the mode of the density of a sample distribution [3]. In a tracking framework, a target object is located by iteratively moving a target candidate from the current location \hat{x}_{old} to the new location \hat{x}_{new} according to the relation

    \hat{x}_{new} = \frac{\sum_i K(x_i - \hat{x}_{old}) w(x_i) x_i}{\sum_i K(x_i - \hat{x}_{old}) w(x_i)},   (1)

where K(·) is a radially symmetric kernel defining an influence zone and w(x) is the sample weight. Typically, w(x) is determined by histogram backprojection as

    w(x) = \sqrt{h_m(I(x)) / h_c(I(x))},   (2)

where I(x) is the pixel color, and h_m and h_c are density estimates of the pixel color from the histograms of the target model and the target candidate, respectively. Figure 1(e) shows a typical backprojection image, in which w(x) is displayed as a pixel value. Theoretically, the mean-shift vector \Delta x = \hat{x}_{new} - \hat{x}_{old} computed from (1) is proportional to the normalized density gradient estimate and points in the direction that maximizes the Bhattacharyya coefficient between the two histograms of the target model and the target candidate [4].

III. PROPOSED APPROACH

A. Object Model

While the traditional mean-shift [4] uses a single-histogram model, our object model is represented by multiple patch histograms. For a given target, the object region (typically a bounding box) is divided into fixed partitions, and a histogram is built from each partition, forming the object model. In contrast with traditional part-based models, the division is arbitrary and is not based on the semantic parts of the object. As described previously, the traditional mean-shift loses spatial information in the use of histograms. Our object model is a kind of template that consists of patch histograms; the spatial information is therefore preserved at the partition level.

B. Histogram Backprojection

Let an object model be composed of m patch histograms, H_1, ..., H_m. Given the current frame I, we wish to compute the sample probabilities of the object model for every histogram. Let S be the current search region, defined as a rectangular region surrounding the previous target location. First, we build a histogram H_S from the current search region to model the color distribution of the background. For each patch histogram H_k, we then compute the sample probabilities w_k(x) by projecting H_k onto the current search region:

    w_k(x) = \sqrt{h_k(I(x)) / h_S(I(x))},   k = 1, ..., m.   (3)
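To make the backprojection step concrete, the following minimal Python/NumPy sketch (our own illustration, not the authors' code; the function names and the square-root form of the weighting in (2)-(3) are assumptions) builds a normalized patch histogram and computes a backprojection map w_k over the search region:

    import numpy as np

    def build_histogram(pixels, bins=16):
        # 3-D RGB histogram, normalized to a density; pixels is an (N, 3) uint8 array.
        h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
        return h / max(len(pixels), 1)

    def backproject(search_region, h_k, h_S, bins=16):
        # w_k(x) = sqrt(h_k(I(x)) / h_S(I(x))), cf. (3); search_region is H x W x 3 uint8.
        idx = (search_region // (256 // bins)).reshape(-1, 3)
        num = h_k[idx[:, 0], idx[:, 1], idx[:, 2]]
        den = h_S[idx[:, 0], idx[:, 1], idx[:, 2]] + 1e-6   # avoid division by zero
        return np.sqrt(num / den).reshape(search_region.shape[:2])

    # Hypothetical usage for a 2 x 1 partition: one histogram per half of the object box.
    # upper, lower = object_box[:h // 2], object_box[h // 2:]
    # h_S = build_histogram(search_region.reshape(-1, 3))
    # maps = [backproject(search_region, build_histogram(p.reshape(-1, 3)), h_S)
    #         for p in (upper, lower)]

Note that all m maps come from a single pass over the search region pixels, in contrast to building a patch histogram per candidate location as in [10], [1].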


Fig. 2. The surface that represents the distribution of voting scores.

We thus have m backprojection images of the same size as the current search region. These backprojection results are used as the sample probabilities of the object pixels in the subsequent search step (Subsect. III-C). Figure 1 illustrates this procedure for 2 × 1 partitions. Figures 1(b) and 1(c) show the backprojection images computed from the upper and lower patch histograms, respectively, while Fig. 1(e) shows the conventional backprojection image using a single-histogram model. With the partition-based model, the object is locally better discriminated from the background in every backprojection image than with the single-histogram model. This is because each partition of the object region has a smaller color variance than the whole object region, and thus a higher chance of being discriminated from the background. The presented partition-based object model contributes to the tracking performance in two respects.

Fig. 3. Comparison of the maximum support criteria (MSC) and the mean-shift on sample backprojection images. The MSC results are depicted by red boxes and the mean-shift results by dashed green boxes. (a) The density mean differs from the target center. (b) The background has a similar color distribution. (c) Occlusion.

First, it enables more reliable tracking, as the object region is better discriminated from the background. Second, we can locate a target object more accurately, since the spatial information of each partition is preserved.

C. Target Localization by Maximum Support Criteria

Based on the sample weights w_k(x) computed from (3), we evaluate every possible position of the target in the current search region. For the evaluation, we interpret the sample weights as the voting values of the pixels for a target candidate. Let {x_i}, i = 1, ..., n, be the pixel locations in the region defined as a target candidate, centered at x̂. The function g : R^2 → {1, ..., m} associates with the pixel at location x the index g(x) of its partition in the target model. Every pixel in the candidate region votes for the target candidate with its corresponding sample weight w_k(x_i). The voting score for the target candidate is then computed as

    P(\hat{x}) = \sum_{i=1}^{n} \sum_{k=1}^{m} K(x_i - \hat{x}) \, w_k(x_i) \, \delta[g(x_i - \hat{x}) - k],   (4)

where δ is the Kronecker delta function. Figure 2 shows the surface obtained by computing the voting scores with the backprojection images in Figs. 1(b) and 1(c). Finally, the target object is located at the target candidate that obtains the maximal support:

    \hat{x}^* = \arg\max_{\hat{x}} P(\hat{x}).   (5)
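Concretely, with a uniform kernel K and the m backprojection maps w_k precomputed over the search region, the Kronecker delta in (4) simply restricts each map to its own partition's sub-rectangle inside the candidate box. Below is a minimal sketch of this global search (our own illustrative code, not the authors' implementation; part_boxes is an assumed structure holding each partition's offset and size relative to the candidate's top-left corner):

    def msc_score(maps, part_boxes, x, y):
        # P(x_hat) of (4) with a uniform kernel: partition k contributes the sum
        # of its backprojection map w_k over its own sub-rectangle of the candidate.
        score = 0.0
        for w_k, (dx, dy, bw, bh) in zip(maps, part_boxes):
            score += w_k[y + dy : y + dy + bh, x + dx : x + dx + bw].sum()
        return score

    def locate_target(maps, part_boxes, box_w, box_h):
        # Exhaustive evaluation of (5) over all valid top-left corners (x, y).
        H, W = maps[0].shape
        best_score, best_xy = -1.0, (0, 0)
        for y in range(H - box_h + 1):
            for x in range(W - box_w + 1):
                s = msc_score(maps, part_boxes, x, y)
                if s > best_score:
                    best_score, best_xy = s, (x, y)
        return best_xy

Since each partition's contribution is a rectangle sum, precomputing an integral image of every w_k would reduce each candidate evaluation from O(n) to O(m); this speed-up is our own suggestion, orthogonal to the greedy search of Sect. IV.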

D. Comparison with Mean-shift

As described previously, the mean-shift procedure locates a target object by moving it toward a local density mean of the sample distribution. When applied with histogram backprojection, this procedure implicitly assumes that the backprojection weights w_k(x) represent density estimates of the target location. However, it is more natural to interpret the weights as the object probabilities of the pixels, not as the probability of the target object being centered at that location. This is why we use the weight sum for target localization rather than the conventional density mean.
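As a toy illustration (our own numbers): consider a 1-D search region with backprojection weights w = (0, 1, 1, 1, 0, 0, 1.5) at pixels 1, ..., 7, where the true 3-pixel target occupies pixels 2-4 and pixel 7 is a bright background distractor. The weighted density mean is (1·2 + 1·3 + 1·4 + 1.5·7)/4.5 ≈ 4.3, so a mean-based estimate drifts off the target toward the distractor, whereas the weight sum over a 3-pixel window attains its maximum of 3 at the true center, pixel 3.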


From a practical point of view, the distribution of the backprojection weights varies greatly depending on the color distributions of the target object and the background, often giving unreliable tracking results with the mean-shift. The suggested maximum support criteria (MSC) is less sensitive to these variations and gives better results in most cases. Figure 3 illustrates three typical cases in which the MSC outperforms the mean-shift: (a) the density mean differs from the target center, (b) the background has colors similar to the target object, and (c) the target object is occluded. The MSC locates the target objects more accurately in all three cases. Note that our method is robust to occlusions even though they are not dealt with explicitly.

IV. GREEDY SEARCH

A direct implementation of the proposed algorithm requires (p − r)(q − s) evaluations of (4), where the search region is p × q pixels and the target object is r × s pixels. This global search can be computationally expensive for a large target and a large search radius. One way to reduce the computation time is to use a greedy search technique. Here, we briefly describe the procedure; an illustrative code sketch is given below, after the experimental setup. Let x̂_0 be the target location in the previous frame. Given x̂_0 and the current frame, we first compute the voting scores at x̂_0 and at its 4-neighbor locations according to (4). Next, we move to the location that obtains the largest voting score. This procedure is repeated until the current location has a larger score than all of its 4-neighbors.

V. EXPERIMENTS

In this section, we evaluate the performance of the proposed tracking system and compare the results with other state-of-the-art trackers. The algorithms compared are the mean-shift tracker (MS) by Comaniciu et al. [4], the Fragment Tracker (FragT) [1], the Incremental Learning Tracker (IncL) [6], and the MIL tracker [2]. For comparison, we implemented a mean-shift tracker based on the algorithm described in [4]. For testing FragT, IncL, and MIL, we used the software provided by the authors with their original parameter settings. The RGB color space was used to build color histograms in our tracker, with 16 × 16 × 16 bins. Our tracker did not distinguish between grayscale and color images in building histograms. The search radius was set to 25 pixels from the previous target position. The scale of a target was set to the initial target size and was not adjusted during tracking. The FragT and MIL trackers were also tested with fixed scale, as the original implementations do not provide scale adaptation, and likewise for the mean-shift tracker (MS). The IncL tracker, however, was tested with scale adaptation according to its original implementation. We tested our tracker and the other trackers on extensive challenging video sequences, all of which are publicly available (Subsect. V-A). The tracking accuracies were measured by the average overlap ratio between the tracking results and the ground truth (Subsect. V-B). The experimental results are given in Subsects. V-C and V-D. All tests were performed on a 3 GHz Intel Pentium PC with 2 GB RAM.
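As promised in Sect. IV, here is a minimal sketch of the greedy hill-climbing search, reusing the illustrative msc_score above (again our own code, not the authors' implementation):

    def greedy_locate(maps, part_boxes, x0, y0, box_w, box_h):
        # Hill-climb over 4-neighbors until the current location beats all of them.
        H, W = maps[0].shape
        x, y = x0, y0
        best = msc_score(maps, part_boxes, x, y)
        while True:
            moved = False
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx <= W - box_w and 0 <= ny <= H - box_h:
                    s = msc_score(maps, part_boxes, nx, ny)
                    if s > best:
                        best, x, y, moved = s, nx, ny, True
            if not moved:
                return x, y

The loop terminates because the score strictly increases at every move and the search region is finite.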

TABLE I
PERFORMANCE OF THE PROPOSED TRACKER WITH VARYING PARTITIONING AND SEARCHING OPTIONS

                          1×1    2×2    3×3    4×4    5×5    Average
Accuracy (%)   greedy      39     45     50     54     55     49
               global      42     41     53     56     56     50
Runtime (ms)   greedy      13     19     11     14     18     15
               global     235    238    222    263    241    240

A. Dataset

The experimental dataset consisted of eighteen publicly available video sequences (two video clips from FragT [1], five from IncL [6], and the rest from MIL [2]). We used the original ground truths provided by the authors for our experiments. For the video clips missing ground truth ("car11", "car4", "fish", and "trellis70"), we manually labeled the ground truths. All video clips except three ("face", "woman", and "surfer") were grayscale. Note that the dataset consists of all the video sequences that were used by the authors of the compared algorithms. The purpose is to gauge absolute performance and to avoid the possible bias of an algorithm-specific dataset.

TABLE II
COMPARISON OF TRACKING PERFORMANCES. RED INDICATES BEST PERFORMANCE.

Video Clip          Source   MS    FragT   IncL   MIL   Proposed
face                FragT    72      90     79    65      68
woman               FragT     3      72      2     8      72
car11               IncL     14       7     67    24      33
car4                IncL     15      34     65    39      28
dudek               IncL     29      48     58    61      50
fish                IncL     40      38     40    31      71
trellis70           IncL     19      35     44    34      53
cliffbar            MIL       6      25     29    54      67
david               MIL      42      44     32    43      58
dollar              MIL      68      37     86    62      65
coke11              MIL       4       8     16    34      18
faceocc2            MIL      46      59     53    64      76
girl                MIL      73      66     30    52      50
surfer              MIL      47      25     32    71      72
sylv                MIL      58      57     40    59      60
tiger1              MIL      11      21     10    42      29
tiger2              MIL       2      16     13    34      59
twinnings           MIL      68      66     31    71      64
Avg. Accuracy (%)            34      42     40    47      55
Avg. Runtime (ms)            10     289    418   234      18

B. Accuracy Measure

The performance of visual trackers has conventionally been evaluated by the average center location error (in pixels) from the ground truth [1], [2], [5], [6]. One problem with this accuracy measure is that the performance is affected by a random factor of displacement error during wrong tracking: a background region close to the ground-truth position is credited more than a background region far away from it. This is clearly undesirable, as any wrong tracking is simply a tracking failure regardless of its center error. Another problem is that the same center error can have a different meaning depending on the target size. In our experiments, for a stricter evaluation, the tracking accuracy was measured by the overlap ratio between a search result and the corresponding ground truth, expressed as a percentage:

    \lambda_{accuracy} = \frac{|R \cap G|}{|R \cup G|} \times 100,   (6)

where R denotes a searched region and G a ground truth. This accuracy measure accounts for both translation and scale error and is commonly used in the vision community for object detection problems [7], [8], [9].
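Computing (6) for axis-aligned boxes takes only a few lines; a minimal sketch (our own hypothetical helper):

    def overlap_accuracy(r, g):
        # Boxes given as (x, y, w, h); returns 100 * |R ∩ G| / |R ∪ G|, cf. (6).
        rx, ry, rw, rh = r
        gx, gy, gw, gh = g
        iw = max(0, min(rx + rw, gx + gw) - max(rx, gx))
        ih = max(0, min(ry + rh, gy + gh) - max(ry, gy))
        inter = iw * ih
        union = rw * rh + gw * gh - inter
        return 100.0 * inter / union if union else 0.0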

C. Experiment I

In the first experiment, we tested our tracker with two search options, global search and greedy search, and with varying partitions. The purpose of Experiment I is to identify the influence of the parameters of our tracker on the tracking performance. We used uniform k × k partitioning of the object region with k = 1, ..., 5, as shown in Fig. 4. For each search method and partitioning, we measured the average tracking accuracy by (6) and the average runtime for each video sequence in the dataset. The runtime is the time for processing one image frame in milliseconds. We then averaged the resulting accuracies and runtimes over all video clips in the dataset, giving the final performance measure for each parameter setting. Table I shows the experimental results. The global search and the greedy search show similar tracking accuracies: the global search performed slightly better, with 50% accuracy on average against 49% for the greedy search. Runtimes, however, were greatly reduced by the greedy search. The results show that using the greedy search does not degrade the tracking accuracy of the proposed tracker, implying that the distribution of the local weight sums forms a surface smooth enough for the tracker to converge to the global peak by greedy search. As for the partitioning parameter, the results show that the accuracy improves slightly as the number of partitions increases, while the runtime does not change much over different partitionings. In conclusion, the performance of the proposed tracker does not vary much with the parameter setting (search method and partition size), suggesting the greedy search option with 5 × 5 partitioning for best performance.

Fig. 4. Partitions used in Experiment I.


D. Experiment II

In Experiment II, we compared the tracking performance of our system with the other trackers. We applied the mean-shift (MS), Fragment Tracker (FragT), Incremental Learning Tracker (IncL), and MIL tracker under the same conditions and measured average accuracies and runtimes for every video clip in the test dataset. Table II shows the experimental results; the source of each video clip ('Video Source') is given for reference. The accuracies and the average runtime for the proposed system are those measured with the greedy search option and 5 × 5 partitioning; this choice of representative performance is based on the result of Experiment I. Note that the results for our system were obtained with a fixed parameter setting, i.e., we did not change parameters for different test video clips. The same was true for the other trackers, with the exception of IncL: for IncL, we used the tuned parameters given by its authors for their video clips ('car11', 'car4', 'dudek', 'fish', 'trellis70') and the default parameters for the other videos. Our tracker showed the best performance, with 55% tracking accuracy on average, while the other trackers showed relatively good performances on their own video clips. In fact, the average accuracy of 55% is not low when we consider that a search result having 50% overlap with the ground truth scores only 33% accuracy by (6). As for runtime, our tracker required 18 milliseconds per frame (i.e., 55 fps) on average; this is slightly slower than the mean-shift (MS) but much faster than FragT, IncL, and MIL. Figure 5 shows several examples of tracking results.

E. Discussions

The proposed tracker showed the best performance with 5 × 5 partitioning in Experiment I. However, we empirically found that our tracker generally performs better with a small number of partitions for non-rigid objects such as pedestrians. Our tracker was originally developed to track pedestrians and vehicles in a driving environment, and we obtained more reliable results with 2 × 1 partitioning for pedestrian tracking. In fact, 2 × 1 partitioning is well suited for modeling pedestrians, as the color distribution of most pedestrians is clustered into an upper half and a lower half. Moreover, the lower half of a pedestrian undergoes large deformations during walking, so finer partitioning can degrade the performance. For rigid objects, the performance did not vary much but was slightly better with fine partitioning. Note that most of the tracking targets in the experimental dataset were basically rigid objects, which explains the preference for fine partitioning in the experimental results. In conclusion, we recommend using our tracker with 2 × 1 partitioning for non-rigid objects and with 4 × 4 or 5 × 5 partitioning for general rigid objects.


Fig. 5. Examples of tracking results. The results are drawn as red boxes for our tracker, white for MS, yellow for FragT, blue for IncL, and green for MIL. The frames are from 'face', 'woman', 'car11', 'car4', 'dudek', 'fish', 'trellis70', 'cliffbar', 'david', 'dollar', 'coke11', 'faceocc2', 'girl', 'surfer', 'sylv', 'tiger1', 'tiger2', and 'twinnings', in order.

Experiment II showed that the proposed tracker gave the best results on the experimental dataset compared with FragT, IncL, and MIL. However, this result should be regarded only as a reference performance measure, since the dataset cannot represent all possible video categories. An important point is that the proposed tracker is very fast and simple (the implementation is straightforward), while its accuracy is comparable to recent state-of-the-art trackers, encouraging the use of our method in real applications.

VI. CONCLUSIONS

In this work, we presented a novel visual tracking method based on partition-based histogram backprojection and maximum support criteria. The proposed backprojection method using a partition-based histogram model enables reliable tracking by minimizing the dependency on the background. Another main contribution of our work is the introduction of a new criterion for target localization. The suggested maximum support criterion copes with conventional problems of the mean-shift, such as partial occlusions and inaccurate localization, giving more reliable results. The experimental

results on extensive challenging video sequences confirm the efficiency of our method. The proposed method is very fast (55 fps) and robust to partial occlusions and pose changes, and the tracking accuracy of our system is shown to be comparable to recent state-of-the-art trackers.

REFERENCES

[1] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 798-805, 2006.
[2] B. Babenko, M. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 983-990, December 2010.
[3] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603-619, May 2002.
[4] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 564-575, 2003.
[5] J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1269-1276, 2010.
[6] D. A. Ross, J. Lim, R. Lin, and M. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, pp. 125-141, August 2008.
[7] The PASCAL Visual Object Classes (VOC) Challenge, http://pascallin.ecs.soton.ac.uk/challenges/VOC/
[8] L. D. Bourdev, S. Maji, T. Brox, and J. Malik, "Detecting people using mutually consistent poselet activations," in Proc. European Conf. on Computer Vision (ECCV), pp. 168-181, 2010.
[9] C. Gu, J. Lim, P. Arbelaez, and J. Malik, "Recognition using regions," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2009.
[10] P. Pérez, C. Hue, J. Vermaak, and M. Gangnet, "Color-based probabilistic tracking," in Proc. European Conf. on Computer Vision (ECCV), pp. 661-675, 2002.
[11] C. Yang, R. Duraiswami, and L. Davis, "Efficient mean-shift tracking via a new similarity measure," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2005.
[12] S. Birchfield and S. Rangarajan, "Spatiograms versus histograms for region-based tracking," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2005.
[13] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2005.
