
IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 15, NO. 3, APRIL 2013

A Gram-Based String Paradigm for Efficient Video Subsequence Search

Zi Huang, Jiajun Liu, Bin Cui, and Xiaoyong Du

Abstract—The unprecedented increase in the generation and dissemination of video data has created an urgent demand for large-scale video content management systems that can quickly retrieve videos of users' interest. Traditionally, video sequence data are managed by high-dimensional indexing structures, most of which suffer from the well-known "curse of dimensionality" and lack support for subsequence retrieval. Inspired by the high efficiency of string indexing methods, in this paper we present a string paradigm called VideoGram for large-scale video sequence indexing to achieve fast similarity search. In VideoGram, the feature space is modeled as a set of visual words. Each database video sequence is mapped into a string. A gram-based indexing structure is then built to tackle the effect of the "curse of dimensionality" and to support video subsequence matching. Given a high-dimensional query video sequence, retrieval is performed by transforming the query into a string and then searching for the matched strings in the index structure. By doing so, expensive high-dimensional similarity computations can be completely avoided. An efficient sequence search algorithm with upper bound pruning power is also presented. We conduct an extensive performance study on real-life video collections to validate the novelties of our proposal.

Index Terms—High-dimensional indexing, sequence indexing, similarity search, video subsequence search.

I. INTRODUCTION

WITH the advances of video technologies in both hardware and software, the amount of video data has grown rapidly in many fields, such as broadcasting, advertising, filming, personal video archives, and medical/scientific video repositories. In addition, the Web has generated enormous impact by enabling the publishing and sharing of videos online. Online delivery of video content has surged to an unprecedented level. According to comScore1, a leader in measuring the digital world, U.S. Internet users viewed more than 12 billion online videos in May 2008 alone, a 45% increase over the previous year. The average duration of an online video was 2.7 minutes.

Manuscript received February 01, 2012; revised June 22, 2012, August 20, 2012; accepted September 14, 2012. Date of publication December 24, 2012; date of current version March 13, 2013. This work was supported in part by the Natural Science Foundation of China under Grants 61073019 and 60933004. The associate editor coordinating the review of this manuscript and approving it for publication was Francesco G. B. De Natale. Z. Huang and J. Liu are with the School of Information Technology & Electrical Engineering, The University of Queensland, Australia (e-mail: huang@itee.uq.edu.au; jiajun@itee.uq.edu.au; [email protected]). B. Cui is with the Department of Computer Science and Technology, Peking University, Beijing 100871, China (e-mail: [email protected]). X. Du is with the School of Information, Renmin University of China, Beijing 100871, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TMM.2012.2236307 1http://www.comscore.com

The wide availability of video content fuels many novel applications, such as near-duplicate video detection [33], [35], visual event detection [31], in-video advertising [18], [20], video recommendation [34], and so on. For these demanding applications, how to index large-scale video databases and how to search similar video content are of utmost importance. A video can be defined as a sequence of (key)frames, and each frame is represented as a high-dimensional image feature vector. Video data is large and complex in nature, mainly because of the high dimensionality of the feature vectors and the rich subsequence information inherent in a video. In the database literature, while many high-dimensional indexing methods have been proposed to manage multimedia databases [3], [10], [12], [22], [23], [30], most of them suffer from the well-known "curse of dimensionality" and incur expensive high-dimensional (dis)similarity computations. Furthermore, they do not consider the sequence information in the video. As a result, subsequence retrieval is not well supported. Although some spatial-temporal indexing structures have been designed for indexing time series and moving objects [5], [13], [21], they are not applicable to high-dimensional video sequence data. In fact, the lack of efficient indexing methods has been recognized as one of the bottlenecks in existing content-based video search engines [9]. The high complexity of video data, coupled with the large volume, poses a huge challenge for large-scale video subsequence search. As the volume of video data continues to grow at a fast pace, the demand for efficient video indexing becomes increasingly imperative.

Interestingly, string indexing has been actively researched over the last few decades and has achieved great success in supporting fast keyword/document search over large text collections. Many string indexing methods have been proposed [16], [36]. Compared with high-dimensional indexing methods for float-valued feature vectors, string indexing methods are highly efficient, mainly because a string is a sequence of discrete characters.

In this paper, by utilizing string indexing methods for efficient string matching, we propose a simple yet highly efficient string paradigm called VideoGram for indexing video sequence databases to support video sequence search based on content similarity. VideoGram consists of an off-line indexing component and an online search component. In the indexing component, the high-dimensional feature space is modeled as a set of visual words, each of which represents a patch of the feature space containing similar frames. Based on the visual word vocabulary, each high-dimensional video sequence in the database is transformed into a Sequence-of-Visual-Words (SoVW), which is further abstracted as a string of visual word IDs. A gram-based indexing structure is then built for all the strings representing database videos.


In the search component, a high-dimensional query video sequence is first expanded into multiple Sequences-of-Likelihood-Visual-Words (SoLVW), in which each visual word is also associated with a likelihood indicating its spatial similarity to the corresponding query frame. Video sequence search is then performed by matching the query SoLVW with candidate SoVW. A new Likelihood-based Video Similarity (LVS) measure based on string matching is proposed to capture the spatial and temporal similarity between a query SoLVW and a candidate SoVW.

Obviously, quantizing the continuous visual feature space into discrete visual words and representing high-dimensional sequences as strings can cause spatial information loss among different visual words. To ensure high search accuracy in VideoGram, rational query expansion is the key. Motivated by the document language model in information retrieval, we propose a novel visual word language model to estimate a likelihood which accurately indicates the spatial similarity between a query frame and a visual word. Based on the visual word language model, our query expansion algorithm expands each high-dimensional query frame into its similar visual words, with their likelihoods conveyed to restore the spatial similarity in computing the overall video similarity. To prevent the over-expansion problem, which potentially expands too many visual words for a frame, we apply the philosophy of the F-measure to optimize the trade-off between search accuracy and cost with flexible importance control.

Powered by the effective query expansion method, which has small overhead, VideoGram achieves the following novelties: 1) it alleviates the effect of the "curse of dimensionality" by using the gram-based indexing structure, 2) it avoids high-dimensional sequence similarity computation by using string matching, and 3) it supports subsequence search by using gram matching. To further speed up the process, the sequence search algorithm is also equipped with upper bound pruning power. We conduct an extensive performance study to validate the novelties of VideoGram on large-scale video collections. Our results show that the string paradigm, enabled by the proposed query expansion method, improves the performance of existing video indexing methods significantly.

The rest of the paper is organized as follows. In Section II, we review related work. In Section III, video sequence representation is discussed. The string paradigm VideoGram is presented in Section IV, followed by the query expansion method and the sequence search algorithm in Sections V and VI respectively. Experiment results are given in Section VII. We conclude the paper in Section VIII.

II. RELATED WORK

Our work is closely related to content-based video similarity search, high-dimensional indexing and string indexing. Content-based multimedia similarity search has been extensively studied in the last few decades [8], [15]. Due to the high redundancy and complexity of video features, one primary task for efficient video similarity search is to summarize a video into a compact representation.


The video similarity is then approximated based on the compact representations for practical search. Many proposals have been presented, such as keyframe representation [40], video signatures by sampling frames [6], video triplets by clustering frames [23], Gaussian distribution functions by computing probability density functions [4], bounded coordinate systems by analyzing frame distributions [11], etc. However, most of the derived representations, if not all, are still high-dimensional feature vectors. Therefore, the "curse of dimensionality" remains.

Towards effective database support for content-based multimedia similarity search, a lot of research effort has been witnessed in the database community. Various categories of high-dimensional indexing methods have been proposed to tackle the "curse of dimensionality", including tree-like structures such as the M-tree [7], transformation based methods such as iDistance [12], data compression methods such as the VA-file [32], dimensionality reduction methods such as MMDR [26], and hashing based methods for approximate results such as LSH [1]. Some methods have also been tested on video databases [25], [11], [23]. In [23], a method to choose an optimal reference point for one-dimensional transformation is proposed to maximally preserve the original distance between two high-dimensional points, which leads to the optimal performance of the B+-tree. As a further improvement of [23], a two-dimensional transformation method called Bi-Distance Transformation (BDT) is introduced in [11] to utilize the power of two far-away optimal reference points. It transforms a high-dimensional point into two distance values with respect to two optimal reference points. Although certain success has been achieved, the performance on large-scale databases is still not very satisfactory due to the hardly breakable "curse of dimensionality". Furthermore, the above methods do not consider the video sequence information. The requirement of sequence matching tends to further reinforce the hardness of the problem. Although some attempts have been made to identify short subsequences from a long video sequence [24], indexing varying-length videos to support subsequence matching has not been adequately addressed.

More recently, Yeh and Cheng [37], [38] have proposed to represent videos using ordered lists and to measure video similarities using approximate string matching techniques. The approach unifies visual appearance and ordering information in a holistic manner, and can also be used for identifying local alignments between two video sequences. The Levenshtein distance and its extension to local alignment on high-dimensional video sequences are used, which are very expensive and may limit search efficiency. Moreover, a vocabulary tree is used to index the feature vectors, and a fast matching algorithm based on the dot-plot visualization method is used to examine only a subset of feature vectors in order to speed up the matching process. In contrast, in this paper we present a gram-based string indexing approach for efficient video search. By representing video sequences as strings, video search can be performed by quick string matching, without involving any expensive distance computations among high-dimensional video sequences.


String indexing has achieved great success in supporting fast string matching over large collections [16], [36], due to the string nature of being a sequence of discrete characters. One main idea is to partition the string into multiple fixed-length grams (q-grams) or variable-length grams (v-grams), which are indexed with highly effective tries and inverted files or other structures. Given a query string, it is partitioned into grams as well. String or substring matching is then efficiently performed by integrating the matched grams. Inspired by string-matching techniques, in this paper we model a database video sequence as a string and propose a new string paradigm for large-scale video sequence indexing.

III. VIDEO SEQUENCE REPRESENTATION

In this section, we introduce the video sequence representation model called Sequence-of-Visual-Words (SoVW). Before that, we first look at the generation of visual words.

A. Visual Words Generation

In content-based video similarity search, video content is typically represented by various visual features extracted from frames, such as color histograms, wavelet textures, shape descriptors, etc. Towards semantic-based retrieval, high-level features such as concepts or events are often used together with local interest points or motion flow [31]. In this paper, we frame our work in content-based similarity search using visual features, based on which a visual word vocabulary is constructed. Recently, the usage of visual words has become popular for image and video classification [14], [29]. Analogous to a word in a text corpus, a visual word corresponds to a cluster in a visual feature space. Text indexing techniques such as inverted lists can then be applied to facilitate retrieval [28]. All frames of the same visual word are expected to be similar. A predefined distance threshold ε can be used to determine if two frames are similar. Formally, a visual word can be defined as below.

Definition 1 (Visual Word): A visual word, denoted as VW, represents a partition of the visual feature space and is described by a quartet (c, o, r, s), where c is the cluster ID, o is the cluster center, r is the cluster radius, and s is the cluster size.

A visual word can also be understood as a hypersphere centered at o with radius r in the high-dimensional feature space. To generate visual words in a visual feature space, all the frames in the database videos are partitioned into clusters, each of which corresponds to a visual word. It is understood that different clustering methods may produce different sets of visual words with different qualities. Typically, classical k-means and its variants are used to generate the visual words [14], [29]. Due to the scalability issue, here we use a simple hierarchical k-means method to quickly generate clusters whose radii are small enough with respect to ε to guarantee that all frames within a cluster are similar [23]. All visual words are identified by their unique IDs and form a visual word vocabulary. It is expected that more effective clustering methods can generate higher quality visual word vocabularies. We will confirm in the experiments that our proposed string paradigm is able to achieve significant performance improvement even if such a simple clustering method is used.
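For concreteness, the following is a minimal sketch of how such a radius-bounded vocabulary could be generated by recursively splitting the frame features with k-means. It is an illustration only: the class and function names, the branching factor, and the use of scikit-learn's KMeans are our assumptions, not details given in the paper.

```python
# Hypothetical sketch of visual word generation by recursive (hierarchical) k-means.
# Names (VisualWord, build_vocabulary) and the branching factor are illustrative.
import numpy as np
from dataclasses import dataclass
from sklearn.cluster import KMeans

@dataclass
class VisualWord:
    cid: int            # cluster ID (the "character" used in video strings)
    center: np.ndarray  # cluster center o
    radius: float       # cluster radius r
    size: int           # cluster size s (number of frames)

def build_vocabulary(frames, max_radius, branch=8):
    """Recursively split frame feature vectors until every cluster's radius <= max_radius."""
    vocab, queue = [], [frames]
    while queue:
        chunk = queue.pop()
        center = chunk.mean(axis=0)
        radius = np.linalg.norm(chunk - center, axis=1).max()
        if radius <= max_radius or len(chunk) <= 1:
            vocab.append(VisualWord(len(vocab), center, radius, len(chunk)))
            continue
        labels = KMeans(n_clusters=min(branch, len(chunk)), n_init=5).fit_predict(chunk)
        queue.extend(chunk[labels == b] for b in range(labels.max() + 1))
    return vocab
```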

B. Sequence-of-Visual-Words Representation

Given a video sequence V = (f_1, f_2, ..., f_n), where f_i is a high-dimensional frame feature vector, by mapping each frame f_i into its visual word vw_i (i.e., the closest visual word whose range covers f_i), V is represented as a Sequence-of-Visual-Words (SoVW), i.e., V = (vw_1, vw_2, ..., vw_n). Note that if a frame is not covered by any visual word, a noise symbol is assigned to the frame, and the noise symbol does not match any symbol, including itself. Interestingly, since each visual word can be uniquely identified by its ID, the video representation can be further simplified into a sequence of characters by declaring the ID attribute as character type (or integer type in the actual implementation). It is now clear that we have represented a video sequence as a sequence of one-dimensional characters, i.e., a string, which can be efficiently indexed and searched as we will discuss in Section IV.

On the other hand, this visual-word-based representation also has some shortcomings. Most seriously, the spatial information among different visual words is lost. Visual word generation quantizes the continuous feature space into a number of visual words identified by discrete IDs. It is well recognized that clusters in high-dimensional spaces are often not well separated from each other. Overlaps may occur among clusters, and a frame may have similar frames in multiple clusters. Given a high-dimensional query video sequence, each of its query frames may have similar frames in different visual words with different likelihoods. To recover the information lost to this quantization effect, we propose a novel query expansion method based on the visual word language model, which is presented in Section V.

In video content analysis, there have been some attempts to represent a video sequence as a string [2]. However, their focus is on classification; efficient indexing has not been touched. Some proposals such as Video Google [28] also represent a frame with multiple visual words for object matching. The database size is increased, and all visual words for a frame have equal importance. In contrast, our method is able to compute different weights from visual words to a query frame with the visual word language model. In time series indexing, symbolic representations such as SAX and its variants [17], [27] for one-dimensional sequences have recently been proposed. The general idea for generating symbols is to divide the domain range into intervals, each of which is represented by a symbol. However, it is not clear how to map a high-dimensional video sequence into a symbol sequence while preserving spatial closeness.
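A minimal sketch of the SoVW mapping described above is given below, assuming the vocabulary produced by the previous sketch; the linear scan over the vocabulary and the noise symbol "*" are illustrative simplifications (the paper indexes the vocabulary for fast lookup).

```python
# Minimal sketch of mapping a high-dimensional frame sequence to an SoVW string.
import numpy as np

NOISE = "*"  # placeholder for frames not covered by any visual word

def to_sovw(frames, vocab):
    """Map each frame to the ID of the closest visual word whose range covers it."""
    chars = []
    for f in frames:
        dists = [np.linalg.norm(f - vw.center) for vw in vocab]
        best = int(np.argmin(dists))
        if dists[best] <= vocab[best].radius:
            chars.append(str(vocab[best].cid))
        else:
            chars.append(NOISE)
    return chars  # the video "string": a sequence of visual word IDs
```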


Fig. 1. The overview of VideoGram. It contains an off-line indexing component and an online search component. The off-line indexing component transforms database videos into strings and indexes them in a gram-based structure. The online search component takes a query video as input, transforms it into a string, and searches the gram-based structure for quick gram matching to find the similar videos.

To support high-dimensional video sequence matching, existing high-dimensional indexing methods need to be further extended. While all the keyframes are indexed by a high-dimensional indexing structure, one direct way to enable sequence search is to also include the position information of each keyframe in the data structure. Given a query video sequence, its keyframes are first searched in the high-dimensional indexing structure to find their similar keyframes, each of which then identifies a candidate database subsequence based on the corresponding position information within the whole video sequence. The video similarities between the query sequence and the candidate subsequences are finally computed and ranked. We call this approach, which relies on high-dimensional indexing structures, the traditional high-dimensional paradigm.

Different from the traditional high-dimensional paradigm, the proposed string paradigm VideoGram utilizes the high efficiency of string matching. Fig. 1 depicts an overview of VideoGram, which mainly consists of an indexing component for video databases and a search component for video queries. The general idea is to index high-dimensional database sequences with a gram-based indexing structure and process high-dimensional query sequences by means of string matching, while the spatial and temporal information is preserved in computing the video similarity.

The indexing component is an off-line process. Given a video sequence database, all the high-dimensional frames are first clustered into visual words (step 1). All the visual words form a vocabulary (step 2), based on which a high-dimensional video sequence is then mapped into a Sequence-of-Visual-Words (SoVW), which is further represented as a string (step 3).

Finally, after all database video sequences are transformed into strings, a string indexing structure is built to facilitate string matching (step 4). Various string indexing methods have been proposed in the literature. Note that video strings may have different lengths spanning a wide range, from a few characters to thousands or more. To support efficient subsequence matching, we apply the gram-based indexing method to manage our video strings, where the gram dictionary is indexed as a trie and linked with inverted lists. Interestingly, grams also preserve local sequential proximity in video strings. Details on constructing and searching the gram dictionary can be found in [16]. The number of visual words is expected to be smaller than the number of high-dimensional frames by multiple orders of magnitude. The scale of the visual word vocabulary is typically around a few thousand, so that frames can be quickly mapped to their corresponding visual words. A high-dimensional data structure can be used to index the vocabulary (step 5).
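The following sketch shows the spirit of step 4 with a fixed-length q-gram inverted index; a plain dictionary stands in for the trie-based gram dictionary with inverted lists of [16], and the gram length q and the posting format (video ID, position) are assumptions made for illustration.

```python
# Simplified q-gram inverted index over video strings (step 4).
from collections import defaultdict

def build_gram_index(video_strings, q=3):
    """video_strings: {video_id: list of visual word IDs}. Returns gram -> [(video_id, pos)]."""
    index = defaultdict(list)
    for vid, chars in video_strings.items():
        for pos in range(len(chars) - q + 1):
            gram = tuple(chars[pos:pos + q])
            index[gram].append((vid, pos))
    return index

def lookup(index, query_chars, q=3):
    """Return the postings of every query gram that occurs in the database."""
    hits = {}
    for pos in range(len(query_chars) - q + 1):
        gram = tuple(query_chars[pos:pos + q])
        if gram in index:
            hits[gram] = index[gram]
    return hits
```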


The search component is an online process. Since database video sequences are represented and indexed in the form of strings, video sequence search is eventually performed based on string matching. Given a query video, its frame features are first extracted (step 6). Query expansion is then performed to expand the query video sequence into multiple Sequences-of-Likelihood-Visual-Words (SoLVW), which are able to restore the similarity between the query video sequence and a database video sequence in the original feature space, by searching the visual word index (steps 7 and 8). In an SoLVW, each visual word is associated with a likelihood which preserves its spatial similarity to a high-dimensional query frame when the query frame is represented by this visual word; it is therefore called a Likelihood-Visual-Word (LVW). The way to estimate this likelihood by consulting the visual word vocabulary affects search quality. After query expansion, the expanded query SoLVW are generated (step 9), followed by string sequence search on the gram-based indexing structure to find candidate sequences that match the SoLVW under the underlying similarity measure (steps 10 and 11). A filter-and-refine strategy is applied to further speed up the retrieval and return the results (step 12).

In short, VideoGram has the following distinct features. First, the video sequence database is transformed and indexed by a gram-based string indexing structure. Second, a query video sequence is expanded into multiple strings associated with likelihood information based on query expansion. Third, string matching is performed to compute the video similarity by taking the likelihood information into consideration. Compared with the traditional high-dimensional paradigm, which organizes all the keyframes in a high-dimensional indexing structure for video sequence search, the string paradigm has the following advantages. First and most importantly, the effect of the "curse of dimensionality" is alleviated by using the gram-based indexing structure, which is much smaller in size and can be accessed efficiently. Second, high-dimensional sequence similarity computations are avoided by using string matching. Third, video subsequence search can be well supported by using gram matching. As an overhead, the vocabulary index needs to be accessed for query expansion in order to establish the spatial relationship between a high-dimensional query frame and a visual word. Due to the small vocabulary size, this overhead is expected to be small.

V. QUERY EXPANSION

In this section, we first propose a novel visual word language model to estimate the similarity between a high-dimensional query frame and a visual word, based on which the query expansion method is presented, followed by the optimization of the trade-off between search accuracy and cost to prevent the problem of over-expansion.

A. Visual Word Language Model

A statistical language model is a probability distribution over all linguistic units in a language, which can be used in information retrieval for query generation to improve retrieval accuracy. The general idea is to construct a language model for each document in the database and rank the documents based on how likely the query could be generated from these document models. One popular language model is the document language model [19]. In the document language model, given a word w in the query, its probability of being generated from a document d is typically defined as the sum of two components:

p(w|d) = λ · tf(w, d)/|d| + (1 − λ) · tf(w, Coll)/|Coll|    (1)

where tf(w, d) is the term frequency of w in the document d, tf(w, Coll) is the term frequency of w in the document collection Coll, |d| is the size of document d in words, and |Coll| is the size of the document collection in words. tf(w, d)/|d| is called the maximum likelihood estimate of word w in d, indicating the importance of w in d, and tf(w, Coll)/|Coll| is the maximum likelihood estimate of word w in Coll, indicating the importance of w in Coll. λ is a general symbol for smoothing the maximum likelihood estimate by considering the overall importance of w in the whole collection. It adjusts the maximum likelihood estimate so as to correct the inaccuracy caused by data sparseness. For simplicity, in this paper we use simple Jelinek-Mercer smoothing [39], where λ is an arbitrary weight between 0 and 1, usually set to 0.5. By applying the document language model, the relevance between a query word w and a document d can be defined as p(w|d), which can be further integrated with the relevance of other query words for better retrieval results.

Motivated by the document language model for measuring the relevance between words and documents in text retrieval, we propose a visual word language model to effectively estimate the similarity between a query frame and a visual word for query expansion purposes. The basic idea is to estimate how likely a frame can be generated from a visual word in the high-dimensional feature space. Given a frame f and a visual word VW, the visual word language model estimating the probability of f being generated from VW is defined as the sum of the following two components:

p(f|VW) = λ · ff(f, VW)/|VW| + (1 − λ) · ff(f, Voca)/|Voca|    (2)

where ff(f, VW) is the frame frequency of f in VW, ff(f, Voca) is the frame frequency of f in the visual word vocabulary Voca, |VW| is the size of VW in frames (i.e., the cluster size s), and |Voca| is the size of Voca in frames (i.e., the total number of frames in the database). ff(f, VW)/|VW| and ff(f, Voca)/|Voca| can also be understood as the maximum likelihood estimates of frame f in VW and in Voca, respectively.

While a frame and a visual word in the visual word language model are analogous to a word and a document in the document language model, our visual word language model has one essential difference, which comes from the definition of frame frequency. Intuitively, the term frequency of a word in a document can correspond to the frame frequency of a frame in a visual word VW. However, different from a word whose term frequency can be directly counted from a document, the relationship between a high-dimensional frame and a visual word is indicated by their spatial information. We define the frame frequency as the number of similar frames for a frame. Since each visual word is represented by a quartet (c, o, r, s) and f is a high-dimensional point in the feature space, estimating the number of similar frames for f can be achieved by constructing a Virtual-Visual-Word (VVW) centered at f, denoted as VVW_f. Recall that two frames are similar if their distance is not greater than a predefined threshold ε. Therefore, by setting the radius of VVW_f to ε, all similar frames of f in VW definitely lie in the intersection between VVW_f and VW.


After computing the volume of the intersection between the two hyperspheres representing VVW_f and VW in the high-dimensional feature space based on the formulas in [23], the frame frequency of f in VW can be estimated as:

ff(f, VW) = (Vol(VVW_f ∩ VW) / Vol(VW)) · s    (3)

where Vol(VVW_f ∩ VW) is the volume of the intersection between VVW_f and VW, and Vol(VW) is the volume of VW. Then the maximum likelihood estimate of frame f in VW becomes:

ff(f, VW)/|VW| = Vol(VVW_f ∩ VW) / Vol(VW)    (4)

Fig. 2. An example of using the visual word language model.

To better understand the proposed model, Fig. 2 shows a sample query video sequence with three frames (q1, q2, q3), where the dotted circles represent three visual words (A, B, C) in the feature space and the dashed circles represent the query frames' Virtual-Visual-Words with radius ε. In Fig. 2(a), q1's Virtual-Visual-Word contains A and intersects with B. Obviously, all the frames in A are similar to q1, and so are the frames in the intersection between VVW_{q1} and B. Based on the visual word language model, the maximum likelihood estimates of q1 in A and B are 1 and Vol(VVW_{q1} ∩ B)/Vol(B) respectively. Then we have p(q1|A) > p(q1|B).

Similarly, in Fig. 2(b), q2 is in the range of both A and B, i.e., all frames in A and B are similar to q2. Based on the visual word language model, we have p(q2|A) = p(q2|B). As we can see, A and B have the same probability with respect to q2, since all the frames in both visual words are similar to q2. In Fig. 2(c), since q3's Virtual-Visual-Word contains C only and does not intersect with A or B, we straightforwardly have p(q3|C) > p(q3|A) = p(q3|B), where the latter two contain only the smoothing term.

Clearly, the higher the probability is, the more likely the frame is classified into the visual word. Note that although C is completely outside of q1's and q2's search ranges, p(q1|C) and p(q2|C) are not zero due to the second term of (2), which indicates the overall importance of C in the whole database. As explained for the language model, this adjusts the maximum likelihood estimate so as to correct the inaccuracy caused by data sparseness in the high-dimensional space. Since |Voca| is far larger than the size of any visual word, the second term actually plays a minor role in the estimation.
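A small sketch of (2)–(4) is given below, assuming the intersection volume Vol(VVW_f ∩ VW) is supplied by a helper based on the closed-form formulas of [23] (not reproduced here); the function names and the λ = 0.5 setting follow the Jelinek-Mercer discussion above.

```python
# Sketch of the visual word language model (2)-(4).
import math

def hypersphere_volume(radius, dim):
    # Volume of a dim-dimensional hypersphere of the given radius.
    return (math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)) * radius ** dim

def frame_frequency(intersection_volume, vw, dim):
    """Eq. (3): estimated number of frames of `vw` similar to the query frame,
    given Vol(VVW_f ∩ VW) computed elsewhere (e.g., via the formulas of [23])."""
    return intersection_volume / hypersphere_volume(vw.radius, dim) * vw.size

def vw_language_model(ff_in_vw, vw_size, ff_in_voca, voca_size, lambda_=0.5):
    """Eq. (2): p(f|VW) = λ·ff(f,VW)/|VW| + (1-λ)·ff(f,Voca)/|Voca|."""
    return lambda_ * ff_in_vw / vw_size + (1 - lambda_) * ff_in_voca / voca_size
```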

B. The Query Expansion Algorithm

As mentioned, representing a high-dimensional video sequence as an SoVW in the form of a string loses the spatial information. Fortunately, given a high-dimensional query frame, the visual words that overlap with its Virtual-Visual-Word can be found by referring to the visual word vocabulary, and the visual word language model provides a statistical means to estimate the similarity between the frame and those visual words. By expanding each query frame into the visual words that overlap with the frame's Virtual-Visual-Word and conveying the estimated similarities into the video sequence similarity measure, we can remedy the spatial information loss caused by the SoVW representation from the similarity search point of view.

Based on the visual word language model, the query expansion method is outlined in Algorithm 1. Given a query video sequence Q, for each of its frames q_i, a range search on the index of the visual word vocabulary is first performed to obtain the visual words that overlap with q_i's Virtual-Visual-Word (line 2). For each obtained visual word VW with respect to q_i, its Likelihood-Visual-Word (LVW) for q_i is computed (lines 4–6), where an LVW is a pair consisting of the visual word and the likelihood p(q_i|VW). However, if the frame frequency (i.e., the estimated number of similar frames) of q_i in VW computed by (3) is less than 1, the visual word is regarded as disqualified for q_i and discarded from the set (lines 7–9). If the number of derived Likelihood-Visual-Words for q_i is 0, then q_i is treated as a noise frame and is assigned the special noise symbol with likelihood 0 (lines 12–15). Note that the noise symbol does not match any symbol, including itself. Finally, the Sequences-of-Likelihood-Visual-Words (SoLVW) for Q are generated by considering all the possible sequence combinations (lines 17–18).


Recall the query example shown in Fig. 2. Our query expansion algorithm expands both q1 and q2 into the two visual words A and B, and expands q3 into the single visual word C, assuming all their frame frequencies are not less than 1. Four SoLVW are therefore generated, namely AAC, ABC, BAC and BBC, with their respective likelihoods computed by the visual word language model.

Algorithm 1: The Query Expansion Algorithm
Input: a query video sequence Q = (q_1, ..., q_m)
Output: the set S of expanded SoLVW
1: for each query frame q_i in Q do
2:   VWSet_i ← RangeSearchInVocaIndex(q_i);
3:   for each VW in VWSet_i do
4:     if ff(q_i, VW) ≥ 1 then
5:       l ← p(q_i | VW);
6:       LVWSet_i ← LVWSet_i ∪ {(VW, l)};
7:     else
8:       remove VW from VWSet_i;
9:       continue;
10:    end if
11:  end for
12:  if |LVWSet_i| = 0 then
13:    mark q_i as a noise frame;
14:    LVWSet_i ← {(noise symbol, 0)};
15:  end if
16: end for
17: S ← all sequence combinations of LVWSet_1 × ... × LVWSet_m;
18: return S;

Such a query expansion process based on the proposed visual word language model quantizes a high-dimensional query video sequence into multiple strings (i.e., SoLVW) associated with likelihood information which preserves the spatial closeness between query frames and visual words. To reflect the spatial similarity and also consider the temporal information at the video sequence level, we propose a new video similarity measure called Likelihood-based Video Similarity (LVS) that matches a query SoLVW against a database SoVW and is robust to alignment, gaps and noise. Different from the standard Edit distance in string matching, the definition of LVS also brings the estimated likelihood information into the video similarity to restore the spatial similarity when two characters are matched (i.e., they are the same visual word).

Definition 2 (Likelihood-Based Video Similarity): Given a query video sequence represented by an SoLVW, i.e., Q = (lvw_1, ..., lvw_m) with likelihoods (l_1, ..., l_m), and a database video sequence represented by an SoVW, i.e., V = (vw_1, ..., vw_n), where m and n are the numbers of frames in Q and V respectively, the Likelihood-based Video Similarity (LVS) between Q and V is defined in a recursive manner as below:

LVS(Q, V) = 0, if Q or V is empty;
LVS(Q, V) = max{ LVS(Rest(Q), Rest(V)) + l_1 · match(lvw_1, vw_1), LVS(Rest(Q), V), LVS(Q, Rest(V)) }, otherwise    (5)

where match(lvw_1, vw_1) is 1 if lvw_1 and vw_1 are the same visual word and 0 otherwise, the Rest() function returns the subsequence obtained by removing the first visual word from a sequence, and lvw_1 and vw_1 are the first visual words in Q and V respectively. For a query represented by multiple SoLVW, all the SoLVW are matched with V and the maximum similarity value is used.
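The recursion in (5), as reconstructed above, can be evaluated with a standard dynamic program; the sketch below is a likelihood-weighted, LCS-style matcher and should be read as an illustration of the idea rather than the paper's exact implementation.

```python
# Dynamic-programming sketch of the LVS recursion (5): a likelihood-weighted
# LCS-style score; gaps and the noise symbol contribute 0.
def lvs(query_lvw, query_likelihoods, db_sovw, noise="*"):
    m, n = len(query_lvw), len(db_sovw)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = (query_lvw[i - 1] == db_sovw[j - 1]
                     and query_lvw[i - 1] != noise)
            dp[i][j] = max(
                dp[i - 1][j - 1] + (query_likelihoods[i - 1] if match else 0.0),
                dp[i - 1][j],
                dp[i][j - 1],
            )
    return dp[m][n]

# Example: the SoLVW "ABC" with likelihoods matched against a database SoVW.
# print(lvs(list("ABC"), [1.0, 0.6, 1.0], list("AXBC")))  # -> 2.6
```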

Based on the SoLVW representation together with the Likelihood-based Video Similarity (LVS) measure, the spatial and temporal closeness between query videos and database videos can be accurately measured. Therefore, while high-dimensional video sequences are indexed and matched in a string manner, retrieval quality can be assured. However, the problem of over-expansion, which generates too many SoLVW for a single query, may also arise due to the full expansion of each individual query frame. Apparently, more vigorous query expansion leads to better search accuracy, but higher search cost as well. Next, we discuss how to achieve an optimal trade-off between search accuracy and cost for query expansion.

C. Optimization on Query Expansion

The number of SoLVW generated by the query expansion algorithm for a query is the product of the numbers of Likelihood-Visual-Words of its individual frames. Potentially this number could be very large, especially for long queries. Assume each individual gram consumes a single unit of search cost to access the gram indexing structure, without considering possible optimizations from processing multiple grams concurrently. Clearly, a large number of SoLVW will generate many grams to be matched in the indexing structure, and may consequently disable the string paradigm. Therefore, it is essential to keep the number of SoLVW reasonably small to assure the high efficiency of the string paradigm while search quality is least affected.

As the SoLVW are constructed by considering all the possible sequence combinations of LVW, the number of SoLVW increases exponentially. The observation that an SoLVW may share most of its grams with others indicates that the number of grams grows much more slowly than the number of SoLVW. Our preliminary statistics also suggest that the number of grams increases almost linearly. Search cost on the indexing structure depends on the number of distinct grams generated from the SoLVW. Hence, by applying a logarithmic scale to the number of SoLVW, we approximate the number of grams for search cost estimation as the logarithm of the number of SoLVW, which increases linearly as the number of SoLVW increases exponentially. On the other hand, in the query expansion process, a Likelihood-Visual-Word (LVW) for a high-dimensional query frame also carries the spatial similarity, which can be preserved when mapping the frame into the visual word. Obviously, the amount of similarity information retained affects search accuracy. When the query is fully expanded, the maximum information gain is achieved, i.e., the total likelihood of all LVW, and the highest search accuracy is expected. From the Likelihood-based Video Similarity measure, it is also easy to notice that the video similarity between an SoLVW and a database SoVW reaches its maximum when the SoLVW is fully matched. Intuitively, the larger the likelihood is, the more likely the SoLVW finds similar video sequences for the query. Those Likelihood-Visual-Words having very small likelihoods have little importance in computing the video similarity and can thus be regarded as redundancies.
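The following toy illustration shows why the number of distinct grams grows much more slowly than the number of SoLVW: the expanded sequences share most of their grams. The per-frame expansion lists and the gram length are made up for the example.

```python
# Illustration of the over-expansion analysis: the number of SoLVW is the product of
# the per-frame expansion sizes, while the number of *distinct* grams grows far more
# slowly because the SoLVW share most of their grams.
from itertools import product

def enumerate_solvw(per_frame_lvw):
    """per_frame_lvw: list of lists of visual word IDs (one list per query frame)."""
    return ["".join(combo) for combo in product(*per_frame_lvw)]

def distinct_grams(strings, q=3):
    return {s[i:i + q] for s in strings for i in range(len(s) - q + 1)}

per_frame = [["A", "B"], ["A", "B"], ["C"], ["D"], ["E"], ["F"]]
solvw = enumerate_solvw(per_frame)             # 2 * 2 * 1 * 1 * 1 * 1 = 4 sequences
print(len(solvw), len(distinct_grams(solvw)))  # 4 SoLVW, only 8 distinct 3-grams out of 16 occurrences
```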


Given a high-dimensional query video sequence, we intend to keep the number of SoLVW reasonably small by removing the least significant Likelihood-Visual-Words (LVW), so as to ensure the efficiency of the string paradigm without affecting search quality noticeably. Naturally, after the most important LVW (i.e., the one with the highest likelihood) for each query frame is selected, the remaining LVW can be ranked by their likelihoods in descending order, and the top-k LVW are used to construct the SoLVW. The key is how to determine the value of k so that the trade-off between search accuracy and cost is optimized.

We adopt the philosophy of the F-measure in information retrieval, which aims to maximize two performance measures, precision and recall, where precision typically decreases as recall increases. The general formula of the F-measure for non-negative real β is:

F_β = (1 + β²) · precision · recall / (β² · precision + recall)    (6)

F_β measures the retrieval effectiveness with respect to a user who attaches β times as much importance to recall as precision. F_1 is also called the balanced F-measure, where precision and recall are equally weighted. A high F_β score requires both high precision and high recall. In our problem domain, for analytical purposes, we assume that search accuracy is proportional to the amount of similarity information preserved and that cost is proportional to the number of grams generated. Thus search accuracy can be measured as the percentage of similarity information preserved (i.e., the total likelihood of the top-k LVW over the total likelihood of all the LVW) and search cost as the percentage of grams generated (i.e., the total number of grams generated from the SoLVW constructed by the top-k LVW over the total number of grams generated from the SoLVW constructed by all the LVW). However, both accuracy and cost increase monotonically as k goes up. To adapt the F-measure to our problem, we simply replace precision with accuracy and recall with (1 − cost) in the formula, i.e.,

F_β = (1 + β²) · accuracy · (1 − cost) / (β² · accuracy + (1 − cost))    (7)

In the modified F_β-measure, accuracy increases as (1 − cost) decreases. Similar to the standard F-measure, the modified F_β-measure aims to maximize both accuracy and (1 − cost). In other words, it is designed to maximize accuracy while minimizing cost. By selecting the k which has the maximum F_β score, the F_β-measure achieves the best trade-off between search accuracy and cost. For the extreme case when all the LVW are used, cost becomes 1 and F_β gets its worst value 0.

Consider again the example shown in Fig. 2 and assume that q1's frame likelihood in B, i.e., p(q1|B), is the smallest. With the F_β-measure based optimization, if q1's expansion to B is removed, then only two SoLVW are finally generated, namely AAC and ABC, with their respective likelihood information.

Typically, k is not a very large number. Thus the optimal value can be quickly identified by simply testing the integers from 1 to the total number of LVW and selecting the one with the maximal F_β score. Actually, in the implementation, it is not necessary to test all the possible integers from 1 to the total number of LVW. As k increases from 1 to the total number of LVW, the F_β score first increases, and then decreases continuously after it reaches its maximum. Therefore, the optimal k can be quickly determined by detecting the peak of the F_β score. Typically, the maximal F_β score occurs for a very small k value in our experiments, which results in a reasonably small number of SoLVW with satisfactory accuracy and low cost. Furthermore, we also have the flexibility to adjust the importance of accuracy and cost for different requirements by setting different β values, either manually or automatically. In the experiments, we will further look at the performance of this optimization together with the effect of β.
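A sketch of this top-k selection by peak detection of the modified F_β score is shown below; the accuracy and cost proxies follow the percentages defined above, while gram_cost is a placeholder for the gram-counting step and is our assumption.

```python
# Sketch of the top-k LVW selection using the modified F-beta measure (7).
def f_beta(accuracy, one_minus_cost, beta=1.0):
    denom = beta ** 2 * accuracy + one_minus_cost
    return (1 + beta ** 2) * accuracy * one_minus_cost / denom if denom else 0.0

def choose_k(ranked_likelihoods, gram_cost, beta=1.0):
    """ranked_likelihoods: LVW likelihoods in descending order;
    gram_cost(k): fraction of grams generated when only the top-k LVW are kept."""
    total = sum(ranked_likelihoods)
    best_k, best_score, kept = 1, -1.0, 0.0
    for k, l in enumerate(ranked_likelihoods, start=1):
        kept += l
        score = f_beta(kept / total, 1.0 - gram_cost(k), beta)
        if score < best_score:   # the score is unimodal: stop once past the peak
            break
        best_k, best_score = k, score
    return best_k
```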


VI. SEQUENCE SEARCH

In this section, we present the sequence search algorithm given the expanded SoLVW of a high-dimensional query video sequence. Since both the query and the database videos are now represented in the form of strings, efficient string matching can be performed on the gram-based indexing structure to generate candidate sequences, which are further filtered by the established upper bounds on their Likelihood-based Video Similarity (LVS) and refined by their exact LVS values. Here we focus on k-NN search, i.e., finding the k most similar sequences from the database.

Given the set of SoLVW produced by the query expansion algorithm, their candidate sequences in the database are first generated by searching the gram-based indexing structure. At this step, the grams of all the SoLVW are generated and matched in the indexing structure to obtain their respective lists of video IDs with occurring positions. Each matched gram between a query SoLVW and a database SoVW invokes the generation of a candidate sequence from the SoVW. As videos may have different lengths, we use a subsequence window to allocate a subsequence from the SoVW. In this paper, we set the subsequence window size to the query length. Given a matched gram between an SoLVW and an SoVW, the subsequence window contains the subsequence of the SoVW in which the gram occurs at the same position as in the SoLVW. For a short SoVW, the special noise symbol is used to fill up the subsequence window. Certainly, a larger window size can also be used to allow for minor temporal tolerance. The subsequence in the window is then regarded as a candidate for the SoLVW. Other matched grams between the SoLVW and the SoVW are also identified by checking the position information. Finally, all candidates paired with their respective SoLVW are generated.

To further avoid unnecessary quadratic LVS computations between SoLVW and their candidates, in the next step we derive an upper bound of LVS for each candidate to enable upper bound pruning. From (5), the LVS between a candidate and its corresponding SoLVW reaches its maximum when the two sequences are completely matched. Therefore, given a candidate V_c of an SoLVW Q, an upper bound of the LVS for V_c, denoted as UB(V_c), is the sum of all visual word likelihoods in Q, i.e.,

UB(V_c) = Σ_{j=1}^{m} l_j    (8)


where l_j is the likelihood of the j-th Likelihood-Visual-Word in Q and m is the length of Q. Meanwhile, since the common grams shared by Q and V_c are also known, the upper bound can be further tightened. Like a candidate, an upper bound for a gram is the sum of all visual word likelihoods in the gram. Consequently, UB(V_c) can be further refined as:

UB(V_c) = Σ_{j=1}^{g} UB(G_j)    (9)

where G_j is the j-th common gram shared by Q and V_c, and g is the number of common grams.

Next, all candidates are ranked in descending order of their derived similarity upper bounds, and the KNN results are initialized to the top-k candidates with their exact LVS values. Before the exact LVS value between the next candidate and its corresponding SoLVW is computed, its upper bound is first checked against the k-th result's LVS value for upper bound based filtering. If the upper bound is less than the k-th result's LVS value, the remaining candidates can be filtered without any further comparisons by directly returning the current top-k results, since the correctness of the results is guaranteed. Otherwise, the exact LVS value has to be computed and compared with the k-th result's LVS value, followed by the necessary refinement of the current top-k results. Apart from the quick candidate generation from accessing the gram-based indexing structure, our search algorithm thus deploys a filter-and-refine strategy to further speed up the process by avoiding unnecessary LVS computations. As the refinement goes on, the k-th result's LVS value keeps increasing and the next candidate's upper bound keeps decreasing. When the two values converge, the search is terminated. It is expected that a large number of candidates having small upper bounds can be filtered.
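The filter-and-refine loop can be sketched as below, reusing the lvs dynamic program from Section V; the candidate tuples and their precomputed upper bounds (8)–(9) are assumed inputs, and the heap-based bookkeeping is an illustrative choice rather than the paper's implementation.

```python
# Sketch of the filter-and-refine k-NN search with upper-bound pruning.
import heapq

def knn_search(candidates, query_likelihoods, k):
    """candidates: list of (upper_bound, query_lvw, db_subsequence)."""
    candidates.sort(key=lambda c: c[0], reverse=True)   # descending upper bounds
    results = []                                        # min-heap of (LVS, subsequence)
    for ub, q_lvw, sub in candidates:
        if len(results) == k and ub <= results[0][0]:
            break                                       # no remaining candidate can enter the top-k
        score = lvs(q_lvw, query_likelihoods, sub)      # exact LVS (dynamic program above)
        if len(results) < k:
            heapq.heappush(results, (score, sub))
        elif score > results[0][0]:
            heapq.heapreplace(results, (score, sub))
    return sorted(results, reverse=True)
```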

VII. EXPERIMENTS

A. Set Up

We use two real-life video datasets for our experiments. 1) CC_WEB_VIDEO (http://vireo.cs.cityu.edu.hk/webvideo/): a public near-duplicate web video dataset containing 12,790 videos collected from YouTube, Google Video and Yahoo! Video. 2) OWN_VIDEO: our own video collection containing 70,000 videos, which are TV commercials and episodes with a total duration of nearly 2,000 hours. The videos are very diverse across various categories, with lengths varying from 5 seconds to 10 minutes. Each video is represented as a sequence of keyframes of 32-dimensional RGB color histograms in the range [0, 1].

To evaluate VideoGram, both search effectiveness and efficiency need to be measured and compared with the high-dimensional paradigm discussed in Section IV. To measure search effectiveness, standard precision and recall are used, where precision is defined as the number of similar videos retrieved by the search divided by the total number of videos retrieved, and recall is defined as the number of similar videos retrieved by the search divided by the total number of existing similar videos in the database. To measure search efficiency, we use the most intuitive measure, total search time. Due to the quadratic complexity of the similarity measures in sequence matching, our preliminary results show that the computational cost actually dominates the I/O cost. Total search time reflects how fast high-dimensional sequence queries can be processed in practice on a large-scale video database.

For comparison with the string paradigm, we implement the high-dimensional paradigm with the recently proposed Bi-Distance Transformation (BDT) method [11], the M-tree [7], and the inverted list, all of which can return exact range search results. Other methods such as LSH and SASH are only able to return approximate results in the high-dimensional paradigm. Any existing high-dimensional indexing structure can be used in the traditional high-dimensional paradigm for video search. BDT has been employed to index video sequence databases, and the M-tree is one of the most popular high-dimensional indexing structures in the database literature. The inverted list is a classical method in information retrieval and has also been extended to index visual words for efficient multimedia retrieval. Note that the position information of each keyframe within its video sequence is also maintained in the leaf nodes of the tree structures in order to support sequence matching. Given a query video sequence, its keyframes are first searched in the tree to find their similar keyframes, each of which then identifies a candidate (sub)sequence, as a matched gram does in the string paradigm. The video similarities between the query sequence and the identified candidate sequences are then computed and ranked. Finally, the top-k results are returned. To compute the similarity between two video sequences in the original high-dimensional feature space, we use a sophisticated measure obtained by modifying the match condition in the standard Edit distance. Specifically, the High-dimensional Video Similarity (HVS) between a query sequence Q and a database sequence V is defined as:

HVS(Q, V) = 0, if Q or V is empty;
HVS(Q, V) = max{ HVS(Rest(Q), Rest(V)) + match(q_1, v_1), HVS(Rest(Q), V), HVS(Q, Rest(V)) }, otherwise    (10)

where match(q_1, v_1) = 1 if d(q_1, v_1) ≤ ε and 0 otherwise, and d(q_1, v_1) is the Euclidean distance between q_1 and v_1, the first keyframes in Q and V respectively. Two keyframes are matched if their distance is not greater than ε; otherwise, their similarity is 0. Such a sophisticated similarity measure can capture the visual similarity in the original high-dimensional feature space. Different from LVS, HVS involves actual high-dimensional distance computations between keyframes in order to find similar videos.

All the experiments were performed on Windows XP with 2.0 GB of RAM and a 2.6 GHz Duo CPU. We use a page size of 4 KB and maintain the datasets on hard disk. In the query-by-example approach to content-based video search, query videos are typically short clips. In our experiments, queries of different lengths, from 5 to 30 keyframes (about 10 to 60 seconds), are used. By default, ε is set to 0.02 (see Section VII-B) and β to 1. Each reported result is the average over 20 randomly selected queries.


Fig. 3. Effect of ε on precision and recall. (a) ε = 0.02, (b) ε = 0.04, (c) ε = 0.06, (d) ε = 0.08.

B. Effectiveness

In this subsection, we evaluate search effectiveness measured by precision and recall, for which the ground truth of each query must be known. Content-based near-duplicate video retrieval has recently been a hot research topic due to its wide range of applications and well-defined ground truth [33], [35]. Since our work is in the context of content-based video similarity search, we conduct the task of near-duplicate video retrieval to evaluate search quality. Here we use the CC_WEB_VIDEO dataset, for which the near-duplicate ground truth is known.

1) Effect of ε: We first study the effect of ε. As ε increases, the number of visual words decreases, since more keyframes are classified into each visual word. The number of grams decreases as well, because a smaller number of visual words leads to fewer gram combinations. Typically, the size of the visual word vocabulary should be relatively small (e.g., thousands) for quick lookup [29]. An extremely small ε value is highly undesirable for our dataset, since it will generate an overly large number of clusters, many of which might be singletons. Fig. 3 shows the effect of ε on precision and recall when ε changes from 0.02 to 0.08.

From Fig. 3, we have the following three main observations. Note that M-tree and BDT return exactly the same search results in the high-dimensional paradigm. First of all, VideoGram returns very satisfactory results from the retrieval point of view. Its precision is higher than 80% before recall reaches 60% for small ε values. Compared with the high-dimensional paradigm, which performs sequence matching in the original feature space, VideoGram even achieves better search quality for small ε values (i.e., 0.02 and 0.04). This proves the effectiveness of our query expansion method in recovering the content similarity lost when quantizing high-dimensional sequences into strings. Note that queries are not fully expanded under the default optimization setting. The clustering method used to partition the feature space into visual words may also affect effectiveness, since different clustering methods generate different visual word vocabularies; more effective methods are expected to yield better quality visual words.

Second, as ε increases from 0.02 to 0.08 (Fig. 3(a) to (d)), the precision-recall curve of VideoGram drops slightly, while that of the high-dimensional paradigm goes up. When ε is small (e.g., 0.02 and 0.04), VideoGram outperforms the high-dimensional paradigm. However, the situation is reversed when ε gets larger (e.g., 0.06 and 0.08). In our visual word language model, the likelihood of a query frame in a visual word is estimated based on the overlap between the visual word and the query frame's Virtual-Visual-Word (VVW).


The likelihood is then propagated into the video similarity computation, which determines the accuracy. Clearly, a larger ε value leads to a larger visual word, which is coarser than a compact visual word in representing the (key)frames inside it. As a consequence, the estimation made on coarser visual words is expected to be less accurate, resulting in lower accuracy. In the high-dimensional paradigm, ε has the opposite impact: a larger ε value brings more frames to be matched with a query frame. Consequently, the number of candidate sequences increases. This reduces the chance of excluding true positives from the results. Given a smaller ε value, the chance is higher for a near-duplicate video with moderate editing to be mismatched with the query video in the high-dimensional paradigm. Interestingly, the query expansion method in VideoGram is able to enlarge the search range by matching the query frame's Virtual-Visual-Word with its overlapping visual words, thus reducing the negative effect of smaller ε values while more accurate visual word representations are used.

Third, the precision of both VideoGram and the high-dimensional paradigm decreases as recall increases. When all near-duplicates are returned (i.e., recall reaches 100%), precision drops well below 50%. Since the RGB color space is used as our feature space for representing frames, color changes have a relatively large impact on the RGB feature vectors. Based on color content similarity, near-duplicate videos with color changes are less similar to the query and thus much harder to retrieve. It is noted that some near-duplicates of web videos have strong color degradation caused by amateur users. Retrieval based on more sophisticated features is expected to return more accurate results; this is beyond the scope of this paper. In view of Fig. 3, we set the default value of ε to 0.02.

2) Effect of Query Length: We next look at the effect of query length on precision and recall. Fig. 4 shows the results for different query lengths with the default ε and β values. As we can see from Fig. 4(a) to (d), as the query length increases from 5 to 30, both VideoGram and the high-dimensional paradigm perform better and better. Clearly, longer queries carry more sequence information, and matching two longer sequences can better compensate for the content information loss of individual frames. Meanwhile, shorter queries, which have fewer grams, are more sensitive to noise and un-indexed grams. It is also observed that VideoGram consistently outperforms the high-dimensional paradigm by large margins when recall is high, mainly due to the compact visual word representation and the effective query expansion method.


Fig. 4. Effect of query length on precision and recall. (a)–(d) correspond to query lengths increasing from 5 to 30 keyframes.

Fig. 5. Effect of query length and ε on total search time. (a)–(d) correspond to different query lengths.

This experiment further confirms the superiority of VideoGram over the high-dimensional paradigm in search accuracy when ε is small.

C. Efficiency

In this subsection, we focus on the evaluation of search efficiency measured by total search time. To test scalability, here we use the larger dataset OWN_VIDEO, which contains 70,000 videos. The total number of keyframes extracted from the whole dataset is about 3,400,000. For each query, the top-20 most similar sequences are returned. In addition to the high-dimensional paradigm implemented with BDT [11] and the M-tree [7], we also test the performance of inverted lists built directly on visual words, as used in [28]. Sequential scan, which matches a query sequence in a sliding-window fashion, is also compared.

Fig. 5 compares the five methods on efficiency for queries of different lengths as ε varies. Note that the y-axis is on a logarithmic scale. As we can see clearly, VideoGram outperforms the high-dimensional paradigm (with BDT and M-tree respectively) and the inverted lists on visual words by more than an order of magnitude, and these in turn improve greatly over the sequential scan for different query lengths and ε values. In VideoGram, two major factors contribute to the improvement. The first is the exact gram search in the gram-based indexing structure: compared with the high-dimensional paradigm, exact gram search is far more efficient than similarity search in a high-dimensional indexing structure. The second is the Likelihood-based Video Similarity (LVS) measure (5) based on character matching: compared with the High-dimensional Video Similarity (HVS) measure (10) used in the high-dimensional paradigm, character matching is much faster than high-dimensional distance computation.


We also notice that the inverted lists on the visual words achieve performance similar to the high-dimensional paradigm: although the videos containing the query visual words can be identified quickly from the inverted lists, the number of candidate sequences is huge. As the search range increases for the same query length, the total search time of the sequential scan shows no noticeable change, since the number of its similarity computations is invariant to the range. In contrast, the efficiency of the remaining methods deteriorates quickly. In VideoGram, a larger range value leads to a larger Virtual-Visual-Word (VVW) for each query frame, which potentially overlaps with more visual words during query expansion. Consequently, the query is expanded into a larger number of sequences and a higher search cost is expected. Nevertheless, VideoGram consistently improves on the high-dimensional paradigm and the inverted lists, which also suffer from a larger range value. Given a query keyframe in the high-dimensional paradigm, the number of its similar keyframes increases as the search range becomes larger, and thus many more high-dimensional candidate sequences are generated for the query video sequence; the high-dimensional paradigm therefore becomes much more expensive when larger range values are used. In the inverted lists of visual words, a larger range also corresponds to a larger number of candidate sequences, since each individual visual word tends to contain more frames. When the query length increases for the same range value, the total search time of all methods grows rapidly, as indicated from Fig. 5(a) to (d). This is mainly because sequence similarity measures have time complexity quadratic in the sequence length; meanwhile, a longer query also brings in more candidate sequences for comparison.
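As a concrete illustration of this quadratic cost, the sketch below pairs a generic edit-distance-style alignment (a stand-in for the sequence similarity measures, not the paper’s exact LVS/HVS definitions) with the sliding-window sequential scan used as a baseline. Every window comparison for a length-q query costs O(q²), which is why the total search time climbs steeply from Fig. 5(a) to (d).

```python
def align_cost(a, b):
    """Generic O(len(a) * len(b)) alignment (edit distance over visual word IDs),
    a stand-in for a quadratic-time sequence similarity measure."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[m][n]

def sequential_scan(db_strings, query, k=20):
    """Sliding-window sequential scan: align the query against every window of
    the same length in every database string and keep the k best matches."""
    results = []
    q = len(query)
    for vid, s in db_strings.items():
        for start in range(len(s) - q + 1):
            results.append((align_cost(s[start:start + q], query), vid, start))
    return sorted(results)[:k]
```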


In fact, the improvement gaps achieved by VideoGram over the existing methods are enlarged for longer queries, which incur more expensive similarity computations. Note that the gaps appear smaller than they are due to the logarithmic scale of the y-axis. On average, VideoGram improves on the high-dimensional paradigm and the inverted lists on visual words by more than an order of magnitude, and on the sequential scan by nearly two orders of magnitude on our video dataset. This demonstrates the performance superiority of the string paradigm in managing large-scale high-dimensional video sequence databases. Although further optimizations of the high-dimensional paradigm and the inverted lists are possible, the order-of-magnitude gap is unlikely to be affected.

VIII. CONCLUSION

In this paper, by utilizing the high efficiency of string indexing approaches, we present VideoGram, a string paradigm for indexing and searching large-scale video sequence databases in high-dimensional spaces. In the new paradigm, the video feature space is modeled as a set of visual words, based on which each database video sequence is mapped into a string, and a gram-based indexing structure is then built. Given a high-dimensional query video sequence, it is first expanded into multiple strings which also capture the spatial closeness between query frames and visual words. Video sequence search is performed by matching the query SoLVW and candidate SoVW. To ensure both effectiveness and efficiency of the string paradigm, we propose a novel query expansion method based on the visual word language model to offset the quantization effect and prevent over-expansion by optimizing the trade-off between search accuracy and cost. An extensive performance study on large-scale video datasets confirms the novelties of the proposal.

REFERENCES

[1] A. Andoni and P. Indyk, “Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions,” CACM, vol. 51, no. 1, pp. 117–122, 2008.
[2] L. Ballan, M. Bertini, A. D. Bimbo, and G. Serra, “Video event classification using bag of words and string kernels,” in Proc. ICIAP, 2009, pp. 170–178.
[3] C. Böhm, S. Berchtold, and D. A. Keim, “Searching in high-dimensional spaces: Index structures for improving the performance of multimedia databases,” ACM Computing Surveys, vol. 33, no. 3, pp. 322–373, 2001.
[4] C. Böhm, M. Gruber, P. Kunath, A. Pryakhin, and M. Schubert, “Prover: Probabilistic video retrieval using the Gauss-tree,” in Proc. ICDE, 2007, pp. 1521–1522.
[5] L. Chen, M. T. Ozsu, and V. Oria, “Robust and fast similarity search for moving object trajectories,” in Proc. SIGMOD, 2005, pp. 491–502.
[6] S. S. Cheung and A. Zakhor, “Efficient video similarity measurement with video signature,” IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 1, pp. 59–74, Feb. 2003.
[7] P. Ciaccia, M. Patella, and P. Zezula, “M-tree: An efficient access method for similarity search in metric spaces,” in Proc. VLDB, 1997, pp. 426–435.
[8] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image retrieval: Ideas, influences, and trends of the new age,” ACM Computing Surveys, vol. 40, no. 2, pp. 5:1–5:60, 2008.
[9] D. C. Gibbon and Z. Liu, Introduction to Video Search Engines. Berlin, Germany: Springer, 2008.
[10] M. E. Houle and J. Sakuma, “Fast approximate similarity search in extremely high-dimensional data sets,” in Proc. ICDE, 2005, pp. 619–630.
[11] Z. Huang, H. T. Shen, J. Shao, X. Zhou, and B. Cui, “Bounded coordinate system indexing for real-time video clip search,” ACM Trans. Inf. Syst., vol. 27, no. 3, 2009.


[12] H. V. Jagadish, B. C. Ooi, K.-L. Tan, C. Yu, and R. Zhang, “iDistance: An adaptive B+-tree based indexing method for nearest neighbor search,” ACM Trans. Database Syst., vol. 30, no. 2, pp. 364–397, 2005.
[13] C. Jensen, D. Lin, and B. Ooi, “Query and update efficient B+-tree based indexing of moving objects,” in Proc. VLDB, 2004, pp. 768–779.
[14] Y.-G. Jiang and C.-W. Ngo, “Visual word proximity and linguistics for semantic video indexing and near-duplicate retrieval,” Comput. Vis. Image Understanding, vol. 113, no. 3, pp. 405–414, 2009.
[15] M. S. Lew, N. Sebe, C. Djeraba, and R. Jain, “Content-based multimedia information retrieval: State of the art and challenges,” ACM Trans. Multimedia Computing, Commun. Applic., vol. 2, no. 1, pp. 1–19, 2006.
[16] C. Li, B. Wang, and X. Yang, “VGRAM: Improving performance of approximate queries on string collections using variable-length grams,” in Proc. VLDB, 2007, pp. 303–314.
[17] J. Lin, E. J. Keogh, L. Wei, and S. Lonardi, “Experiencing SAX: A novel symbolic representation of time series,” Data Min. Knowl. Discov., vol. 15, no. 2, pp. 107–144, 2007.
[18] H. Liu, S. Jiang, Q. Huang, and C. Xu, “A generic virtual content insertion system based on visual attention analysis,” in Proc. ACM Multimedia, 2008, pp. 379–388.
[19] X. Liu and W. B. Croft, “Cluster-based retrieval using language models,” in Proc. SIGIR, 2004, pp. 186–193.
[20] T. Mei, X.-S. Hua, L. Yang, and S. Li, “VideoSense—Towards effective online video advertising,” in Proc. ACM Multimedia, 2007, pp. 1075–1084.
[21] S. Rasetic, J. Sander, J. Elding, and M. A. Nascimento, “A trajectory splitting model for efficient spatio-temporal indexing,” in Proc. VLDB, 2005, pp. 934–945.
[22] Y. Sakurai, M. Yoshikawa, S. Uemura, and H. Kojima, “The A-tree: An index structure for high-dimensional spaces using relative approximation,” in Proc. VLDB, 2000, pp. 516–526.
[23] H. T. Shen, B. C. Ooi, X. Zhou, and Z. Huang, “Towards effective indexing for very large video sequence database,” in Proc. SIGMOD, 2005, pp. 730–741.
[24] H. T. Shen, J. Shao, Z. Huang, and X. Zhou, “Effective and efficient query processing for video subsequence identification,” IEEE Trans. Knowl. Data Eng., vol. 21, no. 3, pp. 321–334, Aug. 2009.
[25] H. T. Shen, X. Zhou, Z. Huang, and J. Shao, “Statistical summarization of content features for fast near-duplicate video detection,” in Proc. ACM Multimedia, 2007, pp. 164–165.
[26] H. T. Shen, X. Zhou, and A. Zhou, “An adaptive and dynamic dimensionality reduction method for high-dimensional indexing,” VLDB J., vol. 16, no. 2, pp. 219–234, 2007.
[27] J. Shieh and E. J. Keogh, “iSAX: Indexing and mining terabyte sized time series,” in Proc. KDD, 2008, pp. 623–631.
[28] J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” in Proc. ICCV, 2003, pp. 1470–1477.
[29] P. Tirilly, V. Claveau, and P. Gros, “Language modeling for bag-of-visual words image categorization,” in Proc. CIVR, 2008, pp. 249–258.
[30] E. Valle, M. Cord, and S. Philipp-Foliguet, “High-dimensional descriptor indexing for large multimedia databases,” in Proc. CIKM, 2008, pp. 739–748.
[31] F. Wang, Y.-G. Jiang, and C.-W. Ngo, “Video event detection using motion relativity and visual relatedness,” in Proc. ACM Multimedia, 2008.
[32] R. Weber, H.-J. Schek, and S. Blott, “A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces,” in Proc. VLDB, 1998, pp. 194–205.
[33] X. Wu, A. G. Hauptmann, and C. W. Ngo, “Practical elimination of near-duplicates from web video search,” in Proc. ACM Multimedia, 2007, pp. 218–227.
[34] S. Xu, H. Jiang, and F. C. Lau, “Personalized online document, image and video recommendation via commodity eye-tracking,” in Proc. ACM Conf. Recommender Syst., 2008, pp. 83–90.
[35] Y. Yan, B. C. Ooi, and A. Zhou, “Continuous content-based copy detection over streaming videos,” in Proc. ICDE, 2008, pp. 853–862.
[36] X. Yang, B. Wang, and C. Li, “Cost-based variable-length-gram selection for string collections to support approximate queries efficiently,” in Proc. SIGMOD, 2008, pp. 353–364.
[37] M.-C. Yeh and K.-T. Cheng, “A string matching approach for visual retrieval and classification,” in Proc. Multimedia Inf. Retrieval, 2008, pp. 52–58.
[38] M.-C. Yeh and K.-T. Cheng, “Fast visual retrieval using accelerated sequence matching,” IEEE Trans. Multimedia, vol. 13, no. 2, pp. 320–329, Mar. 2011.
[39] C. Zhai and J. D. Lafferty, “A study of smoothing methods for language models applied to information retrieval,” ACM Trans. Inf. Syst., vol. 22, no. 2, pp. 179–214, 2004.
[40] X. Zhu, X. Wu, J. Fan, A. K. Elmagarmid, and W. G. Aref, “Exploring video content structure for hierarchical summarization,” Multimedia Syst., vol. 10, no. 2, pp. 98–115, 2004.


Zi Huang is a Lecturer and Australian Postdoctoral Fellow in the School of Information Technology and Electrical Engineering, The University of Queensland. She received her B.Sc. degree from the Department of Computer Science, Tsinghua University, China, and her Ph.D. in computer science from the School of Information Technology and Electrical Engineering, The University of Queensland. Dr. Huang’s research interests include multimedia search, information retrieval, and knowledge discovery.

Jiajun Liu is a Ph.D. candidate in the School of Information Technology and Electrical Engineering, The University of Queensland. His research interests mainly include social multimedia analysis, near-duplicate video retrieval, and multimedia indexing.

Bin Cui is a faculty member in the School of EECS and Vice Director of the Institute of Network Computing and Information Systems at Peking University. He obtained his B.Sc. from Xi’an Jiaotong University (Pilot Class) in 1996 and his Ph.D. from the National University of Singapore in 2004. His research interests include database performance issues, query and index techniques, multimedia databases, Web data management, and data mining. He currently serves on the editorial boards of the VLDB Journal, TKDE, and DAPD.

Xiaoyong Du is a Professor and Dean of the School of Information, Renmin University of China. He received his Ph.D. from the Nagoya Institute of Technology, Japan. His research interests mainly include high-performance database systems, intelligent information retrieval, and Semantic Web and knowledge engineering. He is a member of the Expert Group of the Information Division of NSFC, and has served extensively on many database conferences.
