Comparing Alternatives for Capturing Dynamic Information in Bag-of-Visual-Features Approaches Applied to Human Actions Recognition

Ana Paula B. Lopes #∗, Rodrigo Silva Oliveira ∗, Jussara M. de Almeida ∗, Arnaldo de A. Araújo ∗

# Exact and Technological Sciences Department – State University of Santa Cruz, Rodovia Ilhéus-Itabuna, km 16 – Pavilhão Jorge Amado, CEP 45600-000, Ilhéus, BA, Brazil
∗ Computer Science Department – Federal University of Minas Gerais, Av. Antônio Carlos, 6627, Pampulha, CEP 31270-901, Belo Horizonte, MG, Brazil

{paula, rsilva, jussara, arnaldo}@dcc.ufmg.br

Abstract—Bag-of-Visual-Features (BoVF) representations have achieved great success when used for object recognition, mainly because of their robustness to several kinds of variation and to occlusion. Recently, a number of BoVF approaches have also been proposed for the recognition of human actions from videos. One important issue that arises when using BoVF for videos is how to take dynamic information into account, and most proposals rely on 3D extensions of 2D visual descriptors for this. However, we envision alternative approaches based on 2D descriptors applied to the spatio-temporal video planes, instead of the purely spatial planes traditionally explored by previous work. Thus, in this paper, we address the following question: what is the cost-effectiveness of a BoVF approach built from such 2D descriptors when compared to one based on the state-of-the-art 3D Spatio-Temporal Interest Points (STIP) descriptor? We evaluate the recognition rate and time complexity of alternative 2D descriptors applied to different sets of spatio-temporal planes, as well as of the state-of-the-art STIP descriptor. Experimental results show that, with proper settings, 2D descriptors can yield the same recognition results as those provided by STIP, but at a significantly higher time complexity.

I. INTRODUCTION

Recognizing different human actions from videos is an important task for several applications, such as surveillance systems, which must be able to automatically detect abnormal events, high-level indexing aimed at Content-Based Video Retrieval (CBVR), or gesture recognition for enhancing human-computer interfaces. The vast majority of proposed solutions to human action recognition from videos rely on modeling the moving objects in the scene and then trying to associate different sets of model parameters with different actions. Such solutions can be grouped into three major approaches. The first group consists of approaches based on explicit models of moving objects, like stick models for the human body or part-based models.

The second group is based on implicit models: regions where the moving objects are supposed to be are detected, for example as silhouettes or bounding boxes, and these regions are then described in terms of low-level features. The third class of approaches comprises techniques that do not rely on modeling the internal structure of the moving objects, but rather their trajectories. The moving objects of interest can be the entire human body, body parts or other objects related to the application domain, like airplanes or automobiles, or even unlabeled moving regions. Recent surveys on implicit and explicit model-based approaches and on trajectory-based ones are available in [1], [2], [3], [4].

Approaches based on detecting and then modeling and/or tracking moving objects share the common drawback of depending on intermediate tasks which are themselves open research issues, like segmentation and tracking. The lack of general solutions to these tasks leads to techniques for human action recognition that end up making too many assumptions about what is in the scene, being applicable only to very constrained settings. For instance, spatio-temporal volumetric approaches like the one described in [5] rely on the entire body of a single person appearing against a static and uncluttered background for the silhouettes to be adequately extracted. To avoid this, a number of authors have proposed model-free techniques, in which no previous assumption about the scene content is made. Mostly, these techniques are based on the statistics of low-level features. Among model-free approaches, those based on bag-of-visual-features (BoVF) have proved to be the most consistently successful (due to a lack of standard terminology, such approaches have also been called bag-of-visual-words, bag-of-keypoints, bag-of-features or bag-of-words in the literature).

Bag-of-visual-features (BoVF) representations are inspired by traditional textual Information Retrieval (IR) techniques, in which the feature vectors that represent each text document are typically histograms of word occurrences [6].

In practice, only the most discriminative words are taken into account, so as to keep the feature vectors at manageable sizes. Additionally, words are normally grouped into word families. In the case of images, the equivalent of such word families are small image patches clustered by visual similarity. In most cases, only patches around a sparse selection of interest points are considered. Representations based on a vocabulary of such patches have been used in object recognition, showing good robustness to scale, lighting, pose and occlusion [7].

The success of BoVF approaches for object recognition gave rise to extensions for videos, mainly aimed at human action recognition. For videos to be represented by BoVFs, though, an important issue that arises is how to represent dynamic information. Most existing proposals consider the video as a spatio-temporal volume and then describe “volumetric patches” around 3D interest points. However, we envision alternative approaches based on 2D descriptors applied to the spatio-temporal video planes, as explained in Section IV. Thus, in this paper, we investigate the cost-effectiveness of those alternatives when compared to an approach based on the state-of-the-art 3D Spatio-Temporal Interest Points (STIP) descriptor [8]. This is done by measuring the recognition rate and time complexity of different combinations of 2D descriptors and sets of spatio-temporal planes. Experimental results on the Weizmann database [9], detailed in Section V, show that, with proper settings, the 2D descriptors can indeed yield the same recognition results as those provided by STIP, but at a significantly higher time complexity.

II. RELATED WORK

A number of authors have tried to extend BoVF to human action classification in videos, mostly using spatio-temporal descriptors. In [10], points are selected with the Spatio-Temporal Interest Points (STIP) algorithm [8] and described by spatio-temporal jets. In [11], the interest point selection is based on separable linear filters, and cuboids are then defined around those points. Different descriptors for the cuboids are evaluated, and brightness gradients in all three dimensions are the most successful for action recognition. Another BoVF approach for human action recognition is proposed in [12], based on an extension of the Scale-Invariant Feature Transform (SIFT) descriptor [13]. The new descriptor adds temporal information by extending the original SIFT descriptor to a 3D spatio-temporal space. In [14], the local descriptors are based on the responses to a bank of 3D Gabor filters, followed by a MAX-like operation, and the BoVF histograms are generated by quantizing the orientations into nine directions. Instead of a sparse selection of interest points, the features of that work are computed on patches delimited by a sliding window. In [15], another BoVF representation built from STIPs is proposed; the descriptors are built on the spatio-temporal volumes around these interest points by computing coarse histograms of oriented gradients (HoG) and of optical flow (HoF).

In [16], a simple BoVF representation based on the brightness gradient features of [11] is used together with generative models, instead of discriminative ones, for action recognition. Also using features similar to those of [11], [17] proposed a Maximization of Mutual Information (MMI) criterion to merge the cuboid clusters output by k-means; these new clusters are called Video Word Clusters (VWC).

As can be seen, all the proposals above treat a video as a spatio-temporal volume and build their BoVF representations from local descriptors which are spatio-temporal extensions of 2D descriptors. Such extensions are aimed at capturing local motion information, which is fundamental for action recognition because of the intrinsically dynamic nature of the concepts being classified. In this work, we reason that the dynamic information lies along the temporal axis and, therefore, that it is possible to include the dynamic aspect of the videos by gathering 2D descriptors from the planes formed by one spatial dimension and the temporal one. This idea is illustrated in Figure 1 and detailed in Section IV. A preliminary version of this work was presented in [18]; the present paper adds the comparison to the STIP-based BoVF, which allows a more complete and elaborate analysis of the results.

Fig. 1: The video can be seen as an xyt volume that can be “sliced” into frames along different directions: (a) xy planes (traditional frames); (b) xt planes or “frames”; (c) yt planes or “frames”.

III. BUILDING BAG-OF-VISUAL-FEATURES REPRESENTATIONS

Figure 2 shows the main steps for creating a BoVF representation for visual items (images or videos).

The first step (a) is the selection of specific points. Point selection can be done densely, by sliding a window over the whole image, but this makes the BoVF computation too expensive. A sparser point selection can be done either by random sampling or, more commonly, by applying an interest point detector like the widely used SIFT. The next step (b) is to describe the region around the selected points. Again, there are a number of alternatives for this step, ranging from raw gray-level values to the more sophisticated descriptors usually delivered by interest point detectors. The next step (c) in Figure 2 consists of clustering similar descriptors, so that each cluster can be considered a unique word in the visual vocabulary. In most cases, the clustering step is preceded by a dimensionality reduction procedure, usually Principal Component Analysis (PCA) [19] (the PCA reduction is omitted in Figure 2 because, although commonly used, it is not an essential step for building the BoVF). In this work, because of the computational cost, not all descriptors are considered for clustering, but only a percentage of randomly chosen ones. After the vocabulary is defined, every descriptor in the image or video is associated with one word from this vocabulary (step d), and the words are then counted to form the histogram that constitutes the BoVF representation of the image or video (step e).

Our BoVF implementation is able to use any point selector/descriptor as input. The vocabulary is created by the k-means clustering algorithm [20], and the vocabulary size k is defined empirically. The final BoVF histogram is normalized to sum to one by computing the relative frequency of each word. Given the BoVF representations of the videos, any modeling/classification scheme can be applied; in the present work, a linear Support Vector Machine (SVM) classifier [21] is used.
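The pipeline above maps directly onto a few lines of code. The following is a minimal sketch of steps (c)–(e) and the final classifier, assuming the local descriptors have already been extracted; scikit-learn's k-means and LinearSVC stand in for the clustering and the LIBSVM classifier used in the paper [21], and names such as build_vocabulary and train_bovf_classifier are illustrative, not the authors' implementation.

```python
# Minimal sketch of the BoVF pipeline of Figure 2 (steps c-e), under the
# assumptions stated above. Descriptor extraction (steps a-b) is external.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(all_descriptors, k, sample_fraction=0.1, seed=0):
    """Cluster a random subset of the descriptors into k visual words (step c)."""
    rng = np.random.default_rng(seed)
    n = len(all_descriptors)
    size = min(n, max(k, int(sample_fraction * n)))   # only a fraction is clustered
    idx = rng.choice(n, size=size, replace=False)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors[idx])

def bovf_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest word and build a normalized histogram (steps d-e)."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()                           # relative frequencies

def train_bovf_classifier(descriptors_per_video, labels, k=480, C=1.0):
    """descriptors_per_video: list of (n_i x d) arrays; labels: action classes."""
    vocab = build_vocabulary(np.vstack(descriptors_per_video), k)
    X = np.array([bovf_histogram(d, vocab) for d in descriptors_per_video])
    clf = LinearSVC(C=C).fit(X, labels)                # linear SVM on the BoVF histograms
    return vocab, clf
```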

IV. CAPTURING DYNAMIC INFORMATION

A direct extension of BoVF from still images to videos would be to apply the process to every frame and then compute the histogram over all frames. The problem with such a simplistic approach is that only appearance information is gathered, and therefore the dynamics of the scene is not taken into account. As discussed in Section II, most authors include dynamic information by using 3D extensions of some traditional 2D descriptor. In this work, we argue that, since the dynamic information lies along the temporal axis, it can be captured by 2D descriptors applied to the planes composed of one spatial dimension (x or y) and the temporal axis, i.e., the xt and yt planes (Figure 1). Such planes are called spatio-temporal frames throughout this work. In Section V, this idea is tested on the task of human action recognition. The 2D descriptors chosen are the well-known SIFT descriptor [13] and a newer competitor, Speeded-Up Robust Features (SURF) [22]. The best results achieved with SIFT and SURF applied to the spatio-temporal frames are compared to a BoVF built on the 3D STIP descriptors used in [15].

SIFT is an interest point detector and descriptor which looks for points that are invariant to position, scale and rotation, besides being robust to affine transformations and illumination changes. These characteristics have made SIFT interest points quite successful for object recognition and make them a natural candidate for extensions to video. The SURF algorithm pursues similar goals, but is modified for better performance: its authors claim that it achieves results comparable to SIFT's at a lower computational cost. Since the computational effort for processing videos is always potentially huge, this work experimentally verifies whether the claimed SURF results still hold in this specific setting. As a baseline for our proposal, the STIP algorithm proposed in [8] was selected. This algorithm extends the Harris feature detector to the 3D space composed of the spatial frames and the time dimension. The features sought by this detector maximize variations of gray-level gradients and have shown their adequacy for motion estimation in [23] and [15].

V. EXPERIMENTAL EVALUATION

To compare the STIP, SIFT and SURF descriptors in terms of their ability to properly capture the dynamic information contained in videos, a number of experiments were performed on the Weizmann database [9]. This database has been used by several authors and has become a de facto standard for the evaluation of human action recognition. The premise here is that a descriptor which better captures dynamic information is also better suited for action recognition. The Weizmann database is comprised of short video segments containing nine different people performing ten different actions: bending to the floor, jumping-jacking, jumping forward, jumping in place, running, walking laterally, jumping on one foot, walking, waving with one hand and waving with two hands. A snapshot can be seen in Figure 1.

The issue of the best combination of frames among the xy (traditional frames), xt and yt (spatio-temporal frames), for both SURF and SIFT, is addressed in the first round of experiments. Then, the best results from this first round are compared to the baseline, based on STIP descriptors. In all cases, an extensive search for the best vocabulary size (the k-means k value) and for the SVM penalty parameter is performed.

A. Do points from spatio-temporal frames really capture video dynamics?

In this subset of experiments, interest points coming from the three different sets of 2D frames (xy, xt and yt) are tested separately and in different combinations (xy + xt, xy + yt, xt + yt, xy + xt + yt). Both SURF- and SIFT-based BoVFs are tried out.
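To make the notion of spatio-temporal frames concrete, the sketch below shows one way the xy, xt and yt slices of Figure 1 could be produced and fed to an off-the-shelf 2D detector. It assumes an OpenCV build in which cv2.SIFT_create is available (SURF could be substituted where licensing allows); the helper names are illustrative and not taken from the authors' code.

```python
# Slicing the xyt video volume along different axes and describing each slice
# with a 2D detector/descriptor; a sketch under the assumptions stated above.
import cv2
import numpy as np

def video_to_volume(path):
    """Stack the grayscale frames of a video into a single (t, y, x) array."""
    cap, frames = cv2.VideoCapture(path), []
    ok, frame = cap.read()
    while ok:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        ok, frame = cap.read()
    cap.release()
    return np.stack(frames)

def spatio_temporal_descriptors(volume, planes=("xy", "yt")):
    """Run a 2D detector on xy, xt and/or yt slices and pool all descriptors."""
    sift = cv2.SIFT_create()
    slices = {
        "xy": [volume[t] for t in range(volume.shape[0])],        # traditional frames
        "xt": [volume[:, y, :] for y in range(volume.shape[1])],  # one image row over time
        "yt": [volume[:, :, x] for x in range(volume.shape[2])],  # one image column over time
    }
    descriptors = []
    for p in planes:
        for img in slices[p]:
            _, desc = sift.detectAndCompute(np.ascontiguousarray(img), None)
            if desc is not None:                                  # some slices yield no keypoints
                descriptors.append(desc)
    return np.vstack(descriptors)
```

Concatenating histograms built from different plane sets, as done for the combinations above, then amounts to calling this routine once per plane set and stacking the resulting BoVF vectors.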

Fig. 2: Steps for creating a BoVF representation for images or videos. See the text for details on each step (this figure is best viewed in color).

For every set of frames and interest point algorithm, the experiments are carried out as follows: the whole process presented in Figure 2 is performed for several values of the vocabulary size k. In the case of plane combinations, the BoVFs obtained for each plane set are concatenated to form the final BoVF. Sizes between 60 and 900 for the final BoVF representation are tried, in steps of 60. For each vocabulary size, an extensive search for the SVM penalty parameter C that provides the highest recognition rate is performed. A logarithmic scale between 10^-10 and 10^10, with 10 as the multiplicative step, is used for the C values. Every recognition rate, for every k and C, is measured on a 5-fold cross-validation run. Once the best k and coarse C are found, a finer search over C values is performed around the previous best one, this time with a multiplicative step of 10^-1. With the best k and best C at hand, ten 5-fold cross-validations are run on the database, varying the random selection of the folds. The mean recognition rates found in these ten runs are averaged to compute the confidence intervals.

Figure 3 shows the confidence intervals for the best recognition rates achieved with SURF points using different combinations of frame sets (at a 95% confidence level). The combinations are indicated on the y axis of this graph, while the recognition rates are indicated on the x axis. From this graph, it is possible to see that simply using information from one of the spatio-temporal frame sets to build the video BoVF gives a significant improvement in recognition rate over the BoVF created from points detected on the original xy frames only (about 11% higher). Also, the combination of points coming from the xy (purely spatial) frames together with one of the spatio-temporal frame sets (xt or yt) performs even better (about 5% more). Nevertheless, combining the points from all the frame sets together does not provide further improvement in recognition rate, as might be expected at first sight. As can be seen in Figure 3, the recognition rates provided by the combinations xy + xt, xy + yt, xt + yt and xy + xt + yt have no statistically significant difference. This result indicates that, while the purely spatial and the spatio-temporal frames are complementary, spatio-temporal frames from different directions carry redundant information.
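The hyper-parameter search protocol described at the beginning of this subsection can be summarized in a few lines. The sketch below is one possible reading of it, assuming a hypothetical helper histograms_for_k that returns the BoVF matrix for a given vocabulary size; the granularity of the finer C sweep is an assumption, since the text only states that a finer multiplicative step is used around the coarse optimum.

```python
# Coarse-then-fine search for the vocabulary size k and SVM penalty C,
# each candidate evaluated with 5-fold cross-validation (a sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def best_k_and_C(histograms_for_k, labels):
    coarse_C = [10.0 ** e for e in range(-10, 11)]     # 1e-10 ... 1e10, step x10
    best = (None, None, -np.inf)                       # (k, C, mean accuracy)
    for k in range(60, 901, 60):                       # vocabulary sizes 60..900
        X = histograms_for_k(k)
        for C in coarse_C:
            acc = cross_val_score(LinearSVC(C=C), X, labels, cv=5).mean()
            if acc > best[2]:
                best = (k, C, acc)
    # Finer sweep around the best coarse C (granularity assumed, see lead-in).
    k, C0, _ = best
    X = histograms_for_k(k)
    fine_C = [C0 * f for f in (0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0)]
    accs = [cross_val_score(LinearSVC(C=C), X, labels, cv=5).mean() for C in fine_C]
    return k, fine_C[int(np.argmax(accs))]
```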

Fig. 3: Confidence intervals for the recognition rates obtained with SURF points gathered from different frame sets, at a confidence level of 95%.

Finally, the xy + yt combination provided the best results.

Figure 4 shows the results of the equivalent experiments with SIFT descriptors. As can be noticed, these results are quite consistent with the SURF ones, including the fact that the recognition rates achieved by using points from the xy and yt frames together are the best ones. Since the Weizmann database is specifically focused on human actions, these results provide a strong indication that 2D interest point descriptors can indeed be used to capture dynamic information in videos when applied to the spatio-temporal frames. Yet, by comparing Figures 3 and 4, it is clear that SIFT points provide significantly higher recognition rates. This is probably due to the fact that SIFT selects more points, providing a denser sampling than SURF. This result contradicts the claims of SURF's authors since, at least in this scenario, SIFT performs better.
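The statements about statistical (in)significance in this section rest on whether the 95% confidence intervals overlap. A minimal sketch of that computation over the ten cross-validation run means is given below; the use of the Student t-distribution is an assumption, as the paper does not state the exact interval formula.

```python
# 95% confidence interval over the means of the ten 5-fold cross-validation
# runs, and an overlap test between two settings (a sketch, assumptions above).
import numpy as np
from scipy import stats

def confidence_interval(run_means, confidence=0.95):
    """Return the mean and half-width of the CI over independent CV runs."""
    m = np.mean(run_means)
    sem = stats.sem(run_means)                          # standard error of the mean
    half = sem * stats.t.ppf((1 + confidence) / 2.0, df=len(run_means) - 1)
    return m, half

def indistinguishable(ci_a, ci_b):
    """Two settings are treated as statistically indistinguishable if their CIs overlap."""
    (ma, ha), (mb, hb) = ci_a, ci_b
    return abs(ma - mb) <= ha + hb
```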

Fig. 4: Confidence intervals for the recognition rates obtained with SIFT points gathered from different frame sets, at a confidence level of 95%.

B. Comparing SURF, SIFT and STIP

The next sequence of experiments is aimed at comparing the best results of SURF and SIFT on spatio-temporal frames with those that can be achieved with the 3D STIP descriptor, considered as the baseline. Figure 5 shows the results for various vocabulary sizes. As can be seen, except for small values of k, the spatio-temporal SIFT approach (using the xy and yt frames) is consistently better than the other two, and the spatio-temporal SURF is consistently the worst among the three alternatives.

Fig. 5: Comparing results for spatio-temporal SURF, spatio-temporal SIFT and STIP at various vocabulary sizes.

The best results with the STIP-based BoVF are achieved at a vocabulary size of 540 words, while the best result using the spatio-temporal SIFT is obtained at around 480 words. The values are 88.5±4% and 90.7±5%, respectively, thus showing no statistically significant difference. These rates are also comparable to the best results found in the literature for this database with BoVF approaches: [24] reported 89.3% and [16] reported 90%. Nevertheless, in [24] the BoVF features are fused with features based on body silhouettes, thus losing the generic nature of a pure BoVF, and in [16] the descriptors are smoothed at varied scales, while we did not consider multi-scale descriptors in our tests.

Taking a closer look at the graph of Figure 5, it is possible to see another, similar peak in the spatio-temporal SIFT results at a much smaller vocabulary size of 180. Figure 6 includes the results for both peaks, showing that there is no statistically significant difference between STIP and spatio-temporal SIFT at either vocabulary size. In other words, SIFT can provide the same recognition rates as STIP with a third of the vocabulary size. This result suggests some advantage for the spatio-temporal SIFT, but such an assumption requires further examination.

Fig. 6: Comparing the best results achieved for each descriptor type, including both similar peaks found for SIFT.

As can be seen in Table I, the STIP algorithm is much faster than SIFT and generates far fewer interest points. This also yields a considerably lower histogram computation time for STIP (differences in PCA reduction and clustering times, as well as in SVM training and classification times, are at least two orders of magnitude smaller and were disregarded in this analysis). In other words, although it is indeed possible to achieve a state-of-the-art recognition rate on this database with a BoVF based on SIFT descriptors applied to spatio-temporal frames, this result is achieved at a much higher time complexity than with STIP. In addition, the apparent advantage of a smaller final BoVF representation in favor of spatio-temporal SIFT is somewhat unstable, as can be seen in Figure 5, and is also very specific to this experimental setting.
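For reference, the quantities reported in Table I can be gathered with straightforward instrumentation. The sketch below is a hypothetical outline of such a measurement, not the authors' benchmarking code; extract_descriptors and bovf_histogram stand for the descriptor-extraction and histogram routines sketched earlier.

```python
# Measuring selection+description time, total number of interest points, and
# histogram computation time for one descriptor type (a sketch, see lead-in).
import time
import numpy as np

def profile_descriptor(videos, extract_descriptors, vocabulary, bovf_histogram):
    t0 = time.perf_counter()
    per_video = [extract_descriptors(v) for v in videos]
    t_extract = time.perf_counter() - t0               # selection + description time

    n_points = sum(len(d) for d in per_video)          # total number of interest points

    t0 = time.perf_counter()
    hists = [bovf_histogram(d, vocabulary) for d in per_video]
    t_hist = time.perf_counter() - t0                  # histogram computation time
    return t_extract, n_points, t_hist, np.array(hists)
```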

VI. CONCLUSIONS

In this paper, different alternatives for capturing dynamic information in BoVF representations of videos are compared by computing human action recognition rates. The underlying assumption is that dynamic information is essential for action recognition and, therefore, that a BoVF representation carrying more dynamic information is better suited for this task.

TABLE I: Comparison between spatio-temporal SIFT and STIP.

                               ST-SIFT    STIP
Selection + Description Time   1327 s     582 s
Number of Points               504766     10886
Histogram Computation Time     1395 s     113 s

We envisioned a scheme to gather dynamic information which does not depend on sophisticated 3D extensions of 2D detectors and descriptors, but instead collects 2D descriptors from the spatio-temporal frames. We compared this approach, using both SIFT and SURF 2D interest points, to one using the 3D STIP algorithm. Our results show that: a) the points detected on the spatio-temporal frames are indeed able to carry dynamic information from the video into the BoVF representations; b) SIFT descriptors perform better than SURF descriptors, probably due to the greater number of points selected; c) BoVFs built on 2D spatio-temporal frames indeed provide the same recognition rates as those based on a state-of-the-art 3D descriptor, but at a higher time complexity. Future work includes the validation of these results on other action databases and a deeper analysis of the issue of vocabulary formation for BoVF representations.

VII. ACKNOWLEDGMENTS

The authors thank the Brazilian funding agencies CNPq, CAPES and FAPEMIG.

REFERENCES

[1] J. K. Aggarwal and S. Park, "Human motion: Modeling and recognition of actions and interactions," in 3DPVT '04. Washington, DC, USA: IEEE Computer Society, 2004, pp. 640–647.
[2] T. B. Moeslund, A. Hilton, and V. Krüger, "A survey of advances in vision-based human motion capture and analysis," Comput. Vis. Image Underst., vol. 104, no. 2, pp. 90–126, 2006.
[3] W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," SMC-C, vol. 34, no. 3, pp. 334–352, August 2004.
[4] R. Chellappa, A. K. Roy-Chowdhury, and S. K. Zhou, "Recognition of humans and their activities using video," Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 1, no. 1, pp. 1–173, 2005.
[5] A. Mokhber, C. Achard, and M. Milgram, "Recognition of human behavior by space-time silhouette characterization," Pattern Recogn. Lett., vol. 29, no. 1, pp. 81–89, 2008.
[6] R. A. Baeza-Yates and B. A. Ribeiro-Neto, Modern Information Retrieval. ACM Press / Addison-Wesley, 1999.

[7] Y.-G. Jiang, C.-W. Ngo, and J. Yang, "Towards optimal bag-of-features for object categorization and semantic video retrieval," in CIVR '07, 2007, pp. 494–501.
[8] I. Laptev and T. Lindeberg, "Space-time interest points," in ICCV '03, 2003, pp. 432–439.
[9] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," TPAMI, vol. 29, no. 12, pp. 2247–2253, December 2007.
[10] C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: a local SVM approach," in ICPR '04, 2004, pp. III: 32–36.
[11] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie, "Behavior recognition via sparse spatio-temporal features," in ICCCN '05. Washington, DC, USA: IEEE Computer Society, 2005, pp. 65–72.
[12] P. Scovanner, S. Ali, and M. Shah, "A 3-dimensional SIFT descriptor and its application to action recognition," in MULTIMEDIA '07. New York, NY, USA: ACM, 2007, pp. 357–360.
[13] D. Lowe, "Object recognition from local scale-invariant features," in ICCV '99, vol. 2, 1999, pp. 1150–1157.
[14] H. Ning, Y. Hu, and T. Huang, "Searching human behaviors using spatial-temporal words," in Proceedings of the IEEE International Conference on Image Processing, 2007, pp. 337–340.
[15] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, "Learning realistic human actions from movies," in CVPR '08, June 2008, pp. 1–8.
[16] J. C. Niebles, H. Wang, and L. Fei-Fei, "Unsupervised learning of human action categories using spatial-temporal words," IJCV, vol. 79, no. 3, pp. 299–318, 2008.
[17] J. Liu and M. Shah, "Learning human actions via information maximization," in CVPR '08, June 2008.
[18] A. P. B. Lopes, R. S. Oliveira, J. M. de Almeida, and A. de Albuquerque Araújo, "Spatio-temporal frames in a bag-of-visual-features approach for human actions recognition," in SIBGRAPI '09. IEEE Computer Society, 2009.
[19] D. C. Lay, Linear Algebra and Its Applications, 3rd ed. New York: Addison-Wesley, 2002.
[20] T. M. Mitchell, Machine Learning. New York: McGraw-Hill, 1997.
[21] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, 2001, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-up robust features (SURF)," CVIU, vol. 110, no. 3, pp. 346–359, 2008.
[23] I. Laptev, B. Caputo, C. Schuldt, and T. Lindeberg, "Local velocity-adapted motion events for spatio-temporal recognition," CVIU, vol. 108, no. 3, pp. 207–229, December 2007.
[24] J. Liu, S. Ali, and M. Shah, "Recognizing human actions using multiple features," in CVPR '08, June 2008, pp. 1–8.
