A novel shot boundary detection framework

Wujie Zheng∗, Jinhui Yuan, Huiyi Wang, Fuzong Lin and Bo Zhang
State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China

ABSTRACT

Shot boundary detection serves as a preliminary step in structuring the content of videos. Up to now, a large number of methods have been proposed. We give a brief overview of previous work from a novel viewpoint, focusing on solutions to the two main disturbances, i.e., abrupt illuminance change and great camera or object motion. This paper then presents a novel shot boundary detection framework consisting of three components: a fade out/in (abbreviated as FOI) detector, a cut detector and a gradual transition (abbreviated as GT) detector. The key technique of the FOI detector is the recognition of monochrome frames. For cut detection, a second-order difference method is first applied to obtain candidate cuts, and a post-processing procedure then eliminates the false positives. In the GT detector, the twin-comparison approach is employed to detect short gradual transitions, which last less than six frames, while for long gradual transitions an improved twin-comparison algorithm is designed. Firstly, to effectively reduce the false alarms caused by quick motion, the lower threshold is self-adaptive to a motion feature. Secondly, an FSA (finite state automaton) model is adopted to replace the twin-comparison strategy. The framework makes good use of various features and successfully integrates all the modules. Finally, the system is evaluated on the TRECVID benchmarking platform and the experimental results reveal the effectiveness of our system.

Keywords: shot boundary, second-order difference, motion-based threshold, finite state automata

1. INTRODUCTION

Recent advances in multimedia compression technology, coupled with the significant increase in computer performance and the growth of the Internet, have led to the widespread use and availability of digital videos. Rapidly expanding applications such as digital libraries, distance learning, video-on-demand, digital video broadcast, interactive TV and multimedia information systems have spurred a growing demand for new technologies and tools for efficiently indexing, browsing and retrieving video data. The area of content-based video retrieval, which aims to automate the indexing, retrieval and management of video, has attracted extensive research during the last decade.1, 2 To achieve automatic video content analysis, shot boundary detection is a prerequisite step, since the shot-level organization of a video sequence is considered appropriate for browsing and content-based retrieval.1 A shot consists of a frame sequence captured by a single camera action, in which all the frames occur at the same spot. According to whether the transition between shots is abrupt or not, shot boundaries are categorized into two types: cuts and gradual transitions. Furthermore, gradual transitions are classified into wipe, fade out/in (FOI) and dissolve by the differences of editing effect.3 Up to now, a large number of methods have been proposed to perform shot boundary detection. Hanjalic, Gargi and Lienhart have each given comprehensive and comparative surveys of the most representative approaches.4–7 All the existing methods are designed based on the fact that the frames within the same shot maintain some consistency in visual content, while the frames surrounding shot boundaries

∗ Supported by the National Natural Science Foundation of China (60135010), the National Natural Science Foundation of China (60321002) and the Chinese National Key Foundation Research Development Plan (2004CB318108).

Send correspondence to Wujie Zheng. E-mail: [email protected], Telephone: +86 10 6277 7702.

Visual Communications and Image Processing 2005, edited by Shipeng Li, Fernando Pereira, Heung-Yeung Shum, Andrew G. Tescher, Proc. of SPIE Vol. 5960 (SPIE, Bellingham, WA, 2005) · 0277-786X/05/$15 · doi: 10.1117/12.631547

exhibit a significant change of visual content. For cut detection, the main challenge lies in distinguishing real cut transitions from abrupt illumination changes and from foreground objects that are large or fast-moving. Meanwhile, since a gradual transition may span dozens of frames and the variation of visual content between two consecutive frames may be considerably small, the identification of gradual transitions is far more difficult. More importantly, large object movement in the frames and great camera motion may both cause content variation similar to that of gradual transitions. The related literature usually considers the identification of cuts to be largely solved, whereas the detection of gradual transitions remains a difficult problem.2 In fact, the performance of the existing approaches is far from perfect. On the one hand, most of them focus on the detection of one specific transition type such as cut, wipe or dissolve; little attention has been paid to integrating different techniques. However, an integration framework is more difficult to build, since it has to distinguish all the transition types from each other and is therefore a multi-classification rather than a binary classification problem. On the other hand, due to the difficulty of annotating training and testing data sets, most of the methods are evaluated on small test collections; when applied to larger collections, they may not perform as well. Finally, although several surveys have given thorough overviews of the representative methods, few studies have covered the up-to-date enhancements to the existing methods. In this paper, we first present a brief overview of the most up-to-date techniques, and then introduce a novel shot boundary detection framework.
In the presented framework, a novel monochrome frame detector is designed to identify fade out/in (abbreviated as FOI) transitions. Then a second-order difference technique is adopted to distinguish real cuts from various disturbances more clearly. The twin-comparison approach proposed by Zhang et al.8 is employed to detect short gradual transitions, i.e., those lasting less than six frames. For long gradual transitions, we design a finite state automaton model, which uses a motion-based adaptive threshold to determine the input signal of the model. The framework makes good use of various features and successfully integrates all the modules. Finally, the system is evaluated on the TRECVID benchmarking platform, and the experimental results reveal the effectiveness of our system. The remainder of this paper is organized as follows. Section 2 gives a brief review of related work. Section 3 presents an overview of the proposed framework. Section 4 introduces the novel FOI, cut and GT detectors and describes how the various modules collaborate. The experimental evaluation is presented in Section 5, and Section 6 concludes the paper.

2. PREVIOUS WORK

Various shot boundary detection methods have been proposed in the last decade. Since manipulating digital video requires intensive computation, some works improve efficiency by performing shot boundary detection directly in the compressed domain of MPEG video. Huang et al.9 have given a thorough survey of the compressed-domain techniques; here we focus on those in the uncompressed domain. All the uncompressed-domain approaches detect shot boundaries by identifying the variation of visual content. To characterize the variation, several schemes such as color histogram, pixel difference,8 edge change ratio10 and motion11 have been proposed. The published surveys of existing techniques usually investigate the various methods by separating them into categories according to criteria such as compressed vs. uncompressed domain, the features used to characterize the visual content, or the editing effects to detect.4–7 Little work has been done to group the existing methods by their ways of eliminating the disturbances. In this section, we give a brief overview of previous work using this novel grouping criterion, focusing on recent advances in shot boundary detection. Abrupt illuminance change such as flashlight constitutes one of the main disturbances to cut detection. The common practice to reduce such disturbances is to design a flashlight detector to sift the candidate shot boundaries. Based on the fact that a flashlight usually occurs during several consecutive frames and vanishes quickly, Yeo et al. detect the flashlight by recognizing the two sharp peaks caused by its appearance and disappearance.12 Similarly, Zhang et al. distinguish cuts from flashlights according to an ideal cut model and an ideal flash model.13 To improve the accuracy, more sophisticated methods based on edge matching


strategy have been proposed, but edge matching requires more intensive computation and is also sensitive to great object or camera motion.14, 15 These methods are effective for flashlight detection, but they cannot handle more general illumination variation. To overcome this drawback, Qing et al. propose an illumination-invariant metric to measure content variation, and their experiments reveal its effectiveness. Besides illumination change, great camera motion and object movement can also cause moderately large content variation, i.e., the variations caused by real cuts and those caused by motion may overlap. Based on the observation that a cut usually causes an isolated sharp peak in the curve of feature variation while motion usually leads to continuous peaks over dozens of frames, several techniques to reduce this overlap have been suggested. Jun et al. propose to first transform the feature variation signal with a median filter and then compare the original signal with the transformed one to obtain a cleaner measurement.16 Similarly, Leszczuk et al. implement a so-called differential motion factor to achieve this.17 A more popular idea is to replace the global fixed threshold, which decides whether a variation is a cut, with a local adaptive thresholding scheme, in which the threshold varies according to the activity of the feature variation in the neighborhood.18–20 Thus, during sequences containing many frames with motion, the threshold is raised automatically to reduce the disturbances. All the methods mentioned above perform shot boundary detection via pair-wise comparison of consecutive frames. To overcome their sensitivity to disturbances, Cooper21 proposed a multi-pair comparison strategy based on similarity analysis.
Different from the above ones, Ngo,22 Chung23 and Jamil24 have transformed the video segmentation problem into a problem of pattern detection on a 2D image, called visual rhythm or spatio-temporal slice. The visual rhythm provides better efficiency. Meanwhile, based on the scheme, various methods for detecting cut, wipe and dissolve can be designed.

3. OVERVIEW OF THE NOVEL FRAMEWORK

Most existing algorithms are developed based on the observation that frames surrounding a boundary generally display a significant change of visual content, while those within the same shot usually maintain some consistency. However, this observation is neither a sufficient nor a necessary condition for shot boundaries. Firstly, abrupt illumination change or large object/camera movement may also lead to significant change of visual content. Furthermore, some statistical features adopted to represent the visual content are not expressive enough to reflect shot transitions occurring in the same scene. Therefore, to achieve high performance, a shot boundary detection system has to reduce the various disturbances effectively as well as adopt sufficiently expressive features. To make full use of the various features, our system uses a two-stage implementation: a feature extraction stage and a shot boundary detection stage. Firstly, the required features, which form a compact representation of each frame's visual content, are extracted from either the compressed domain or the uncompressed domain. Then the shot boundary detectors, including the FOI detector, cut detector and GT detector, analyze the feature variation and determine whether a shot transition occurs. Dividing the system into two stages allows decisions to be made through global analysis of the video, such as global threshold selection or various local adaptive thresholding schemes. Fig. 1 depicts the overall architecture of our system. Five kinds of features, as a trade-off between efficiency and expressiveness, have been adopted to represent the visual content of each frame. A color histogram, with 16 bins for each channel of the RGB space, is extracted and compared by the histogram intersection method. A pixel-wise difference feature, introducing spatial information, is calculated as a supplement to the color histogram.
To detect flashlight effect and monochrome frame, the mean and the standard deviation of pixel intensities in each frame are calculated. Besides features from uncompressed domain, motion vectors from compressed domain are also extracted and synthesized to reflect the global motion of a frame.
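The histogram feature and its comparison can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the use of NumPy are our choices, and the 16-bins-per-RGB-channel layout and histogram intersection follow the description above.

```python
import numpy as np

def rgb_histogram(frame, bins=16):
    # 16 bins per RGB channel (48 bins total), concatenated and
    # normalized so the whole histogram sums to 1.
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    # Similarity in [0, 1]; identical normalized histograms give 1.
    return float(np.minimum(h1, h2).sum())

def frame_difference(h1, h2):
    # Feature variation used by the detectors: 1 - intersection.
    return 1.0 - histogram_intersection(h1, h2)
```

Because the histogram discards spatial layout, two visually different frames can have similar histograms, which is why the pixel-wise difference is kept as a supplementary feature.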

4. SYSTEM COMPONENTS

In this section, the components for detecting the specific types of shot transition are discussed. In order to make full use of multiple cues to detect a shot boundary, both the cut detector and the GT detector are composed of two steps: candidate selection and candidate sifting. In the candidate selection procedure, a basic technique is used to obtain a collection of candidate boundaries. Then various cues are analyzed to help judge whether each candidate is a real boundary or not.


[Figure 1 here: block diagram. Video clips feed a feature extractor (color histogram, pixel-wise difference, motion vector, mean and standard deviation of pixel intensities); the features drive the FOI, CUT and GT detectors, whose outputs pass through a coherence filter to produce the final result.]

Figure 1. System overview of shot boundary detection task.

4.1. FOI Detector

In a fade out/in effect, the first shot fades out into a sequence of dark monochrome frames and then the next shot fades in. Meanwhile, dark monochrome frames seldom appear elsewhere. Thus, the FOI detection problem turns into the recognition of monochrome frames. Here, we adopt the monochrome frame detection approach proposed by Lienhart.25 In this method, monochrome frames are identified by calculating the standard deviation σI of the pixel intensities in each frame. For a perfect monochrome frame, σI should be zero. In practice, in the presence of noise, a frame is regarded as monochrome if σI drops below a small threshold TMFσ. To identify the dark monochrome frames, the average intensity µI is also required to be below a small threshold TMFµ. In formulas:

MF(I) = dark monochrome frame, if (σI < TMFσ) and (µI < TMFµ); other monochrome frame, if (σI < TMFσ) and (µI ≥ TMFµ); polychrome frame, otherwise. (1)

Having recognized all the monochrome frames, the rough positions of the FOI transitions can be determined. To locate the boundaries of the fade out and fade in, instead of the line regression method described by Lienhart,5 the task is accomplished by the GT detector described in Sect. 4.3. The FOI detection process is as follows:

Step 1. Detect monochrome frames according to Equation 1.
Step 2. Judge whether the entering transition is abrupt or gradual. If gradual, search for the fade-out boundary of the previous shot with the GT detector.
Step 3. Judge whether the exiting transition is abrupt or gradual. If gradual, track the fade-in boundary of the next shot with the GT detector.
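Equation 1 can be sketched as a small classifier. This is a minimal sketch, not the paper's implementation; the threshold values are illustrative placeholders for TMFσ and TMFµ, which the paper determines heuristically.

```python
import numpy as np

def classify_frame(gray, t_sigma=10.0, t_mu=40.0):
    # Classify a frame per Equation 1. `gray` is a 2-D array of pixel
    # intensities in [0, 255]; t_sigma and t_mu are placeholder values
    # for the thresholds TMFsigma and TMFmu.
    sigma, mu = gray.std(), gray.mean()
    if sigma < t_sigma and mu < t_mu:
        return "dark monochrome"
    if sigma < t_sigma:
        return "other monochrome"
    return "polychrome"
```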

4.2. Cut Detector

In a cut transition, the last frame of one shot is followed immediately by the first frame of the next shot. This abrupt transition is reflected by a sharp peak on the curve of feature variation. Most existing methods compare the variation with a threshold and declare a cut if the variation exceeds it. However, it is difficult to select an appropriate threshold. As mentioned in Sect. 2, there are two types of techniques to distinguish cuts from various disturbances more clearly: either reduce the overlap of cuts and non-cuts by pre-processing the feature variation signal, or employ a local adaptive threshold. Corresponding to the first type, in the next section we propose a novel method, the second-order difference, to separate cuts from other disturbances more clearly. To further reduce the disturbances, several post-processing modules have been implemented to sift the candidate shot boundaries.


[Figure 2 here: two plots over 30 frames for video sequences 1 and 2. Left: the first-order feature difference. Right: the corresponding second-order difference.]
Figure 2. Left: The curve of feature variation for video sequence 1 and 2. Right: The curve of second-order difference for the two sequences. In the video sequence 1, there is one cut occurring at the 15-th frame, while no cuts in sequence 2, which is a segment of basketball game with great camera motion.

4.2.1. Second-order Difference Method

Formally, let fk denote the feature of the k-th frame and ∆fk = fk+1 − fk be the feature variation between the k-th frame and the next. In traditional methods, the feature variation ∆fk is directly compared with a threshold Tcut; if ∆fk surpasses Tcut, a cut is declared. If all videos only included sequences similar to "Video Sequence 1" in Fig. 2, it would be easy to determine a fixed threshold Tcut. Unfortunately, there are also many sequences similar to "Video Sequence 2", which is caused by great movement of camera or objects. In those sequences, the feature variation ∆fk is frequently comparable to that of a cut, and it is difficult for the threshold Tcut to distinguish them. Similar to Leszczuk's idea,17 we propose a second-order difference method to reduce the overlap between real cuts and various disturbances. Let ∆²fk = ∆fk+1 − ∆fk be the second-order difference. In this method, ∆²fk instead of ∆fk is compared to the threshold Tcut; if ∆²fk exceeds the threshold, a candidate cut is declared. As shown in the right plot of Fig. 2, the values of the disturbances are successfully depressed, while the sharp peak of a cut is preserved, in the form of a positive and a negative peak.

4.2.2. Post Processing Modules

Abrupt illumination change such as the flashlight effect and large object/camera movement may cause feature variations exceeding Tcut. The second-order difference method is not able to eliminate all of these false positives, so we further design several post-processing modules, including a flashlight detector and a gradual transition filter, to sift the candidate cuts. Firstly, according to the ideal flash model and ideal cut model described by Zhang,13 a simple flashlight detector is implemented. The mean value of pixel intensities reflects the frame's illumination.
Once the increment of average illumination between two successive frames is relatively large, the flashlight detector examines the illumination of the next several frames. If the average illumination falls back to a low value, the event is regarded as a flashlight effect. Secondly, short gradual transitions and great camera movement often cause feature variations comparable to those of cuts. Based on the observation that the feature variation caused by a cut is usually an isolated sharp peak while that of disturbances may consist of several sharp peaks in the neighborhood, a gradual transition filter is designed to reduce the false positives: if the feature variations surrounding a candidate cut are unstable and most of their magnitudes are comparable to Tcut, the candidate is not declared a cut.
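The candidate selection of Sect. 4.2.1 and the flashlight sift above can be sketched as follows. This is a minimal sketch, not the paper's implementation; the function names, thresholds and window sizes are our illustrative choices, not the paper's tuned values.

```python
def candidate_cuts(diffs, t_cut=0.1):
    # diffs[k] is the first-order variation Δf_k between frames k and
    # k+1. An isolated sharp peak at Δf_{k+1} appears in the
    # second-order difference Δ²f_k = Δf_{k+1} - Δf_k as a positive
    # peak immediately followed by a negative one, while runs of large
    # motion-induced diffs largely cancel out.
    cuts = []
    for k in range(len(diffs) - 2):
        d2 = diffs[k + 1] - diffs[k]            # Δ²f_k
        d2_next = diffs[k + 2] - diffs[k + 1]   # Δ²f_{k+1}
        if d2 > t_cut and d2_next < -t_cut:
            cuts.append(k + 1)                  # candidate at Δf_{k+1}
    return cuts

def is_flashlight(means, k, jump=30.0, window=5, tol=10.0):
    # means[i] is the mean pixel intensity of frame i. A flashlight
    # raises the illumination sharply and returns it to roughly the
    # original level within a few frames; a real cut does not.
    if k + 1 >= len(means) or means[k + 1] - means[k] < jump:
        return False
    base = means[k]
    return any(abs(means[i] - base) < tol
               for i in range(k + 2, min(k + 2 + window, len(means))))
```

Candidates that pass the flashlight test would then go through the gradual transition filter before being declared cuts.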


4.3. Gradual Transition Detector

During a gradual transition, two shots are superimposed (dissolve), or the former gradually becomes totally black and then the latter shot gradually appears (wipe or FOI). To date, gradual transition detection remains a hard problem due to its gradualness and versatility. The twin comparison method proposed by Zhang8 works well for short gradual transitions. However, it has difficulties with long gradual transitions. On the one hand, it cannot effectively eliminate the disturbances caused by camera/object movement. On the other hand, it often truncates long gradual transitions, because the detection process terminates once the difference of adjacent frames fails to exceed the lower threshold. To overcome these shortcomings, we divide gradual transitions by length into two categories and treat them separately: short gradual transitions, whose length is less than six frames, and long gradual transitions. The twin comparison method is employed for detecting the short ones, and a novel approach, described by a finite state automaton model, for identifying the long ones. The novel method is an improved version of the twin comparison algorithm, whose lower threshold is self-adaptive based on the motion feature from the compressed domain. In addition, the new method possesses more robust conditions for both entering and exiting the gradual transition process.

4.3.1. Motion-Based Self-adaptive Threshold

Feature variation caused by object/camera movement exhibits characteristics similar to those of gradual transitions, so the gradual transition detector has to successfully reduce the disturbance of motion.
An intuitive idea is to adjust the lower threshold of twin comparison self-adaptively according to the magnitude of motion: the system raises it if large movement occurs and lowers it if little or no movement occurs. In our method, the motion vectors of compressed MPEG video are extracted to characterize the motion feature. In frames of type I, the macro-blocks are intra-frame encoded and thus contain no motion vectors; the motion feature of such a frame is interpolated from the neighboring forward and backward frames of type B or P. In addition, for frames of type B or P, some macro-blocks are not encoded through motion compensation. In our algorithm, only the macro-blocks with motion vectors are used to estimate the global motion, which is characterized by the mean of the absolute motion vectors in the horizontal and vertical directions:

MVh = (1/N) Σ_{i=1..N} |mvh_i|,  MVv = (1/N) Σ_{i=1..N} |mvv_i|  (2)

where N denotes the number of macro-blocks with motion vectors in the current frame, and |mvh_i| and |mvv_i| are the absolute values of the i-th macro-block's horizontal and vertical motion vectors respectively. The lower threshold Tl is then adapted to the motion features by a linear equation:

Tl = α + β × (MVh + MVv)  (3)

Here α and β are fixed coefficients that are heuristically determined; α is set to about 0.05 to avoid Tl ≈ 0 when the frame is static. A straightforward implementation of this idea is as follows:

Step 1. For frames of type P or B, extract the motion vectors of predictively encoded macro-blocks from the compressed domain. For I-type frames, interpolate the motion vectors.
Step 2. Calculate the mean values of the horizontal and vertical motion vectors by Equation 2.
Step 3. Obtain the current threshold according to Equation 3.

4.3.2. Finite State Automata Model

The principle of the improved twin comparison method can be demonstrated by the finite state automaton (FSA) model illustrated in Fig. 3. As the figure shows, there are nine states in the FSA model and two signals, '0' and '1', in the alphabet. The model has only one accept state, Buffer. The state set {Prepare1, Prepare2, Prepare3} contains the entering states of a gradual transition; to reach Prepare3, at least three '1' signals are required. The states in {Check1, Check2, Check3, Check4} are the exiting states. To go from Prepare3 to Buffer, at least three '0' signals within five consecutive signals are required; otherwise, the model cycles at state Prepare3. Its working process is summarized as follows.


[Figure 3 here: state diagram of the FSA, with states Normal, Prepare1–Prepare3, Check1–Check4 and the accept state Buffer, and transitions labeled with the input signals '0' and '1'.]
Figure 3. Finite state automata model.

Step 1. When the detection process starts, the FSA is at the Normal state by default.
Step 2. The lower threshold Tl is determined via the method described in Sect. 4.3.1.
Step 3. If the feature variation exceeds Tl, the input signal is set to '1'; otherwise it is set to '0'.
Step 4. According to the current state and input signal, the FSA moves to the next state.
Step 5. If the FSA reaches the accept state Buffer, go to Step 6; otherwise go to Step 2.
Step 6. Compare the accumulated variation with the higher threshold Th. Only if the variation exceeds Th is the candidate declared a real gradual transition.

Overall, the FSA model has three advantages over the twin comparison method. Firstly, through the motion-based self-adaptive threshold, the system effectively reduces the disturbance of object/camera motion. Secondly, the condition for entering the gradual transition detection process is more robust, since only three or more consecutive variations exceeding Tl indicate a suspect gradual transition. Finally, the condition for exiting the gradual transition is also more robust: the system considers a gradual transition over only when three variations fall below Tl within five consecutive frames. Therefore, the method successfully suppresses both the false positives and the truncation problem.
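The six steps above, together with the motion-based threshold of Sect. 4.3.1, can be sketched as follows. This is a minimal sketch under stated assumptions: the transition table is one plausible encoding of the nine-state model (the exact edges follow Fig. 3 of the paper), and α, β, Th and all input values are illustrative.

```python
def lower_threshold(mv_h, mv_v, alpha=0.05, beta=0.5):
    # Equation 3: Tl adapts to the mean absolute motion vectors of
    # Equation 2; alpha keeps Tl away from 0 on static frames, and
    # beta = 0.5 is an illustrative value, not the paper's.
    return alpha + beta * (mv_h + mv_v)

# One plausible encoding of the nine-state FSA: three '1' signals are
# needed to reach Prepare3, and roughly three '0' signals within five
# move the model from Prepare3 to the accept state Buffer.
TRANSITIONS = {
    ("Normal", "0"): "Normal",     ("Normal", "1"): "Prepare1",
    ("Prepare1", "0"): "Normal",   ("Prepare1", "1"): "Prepare2",
    ("Prepare2", "0"): "Normal",   ("Prepare2", "1"): "Prepare3",
    ("Prepare3", "0"): "Check1",   ("Prepare3", "1"): "Prepare3",
    ("Check1", "0"): "Check2",     ("Check1", "1"): "Check3",
    ("Check2", "0"): "Buffer",     ("Check2", "1"): "Check4",
    ("Check3", "0"): "Check4",     ("Check3", "1"): "Prepare3",
    ("Check4", "0"): "Buffer",     ("Check4", "1"): "Prepare3",
}

def detect_long_gt(diffs, motion, t_high=0.4):
    # diffs[k]: feature variation at frame k; motion[k]: (MVh, MVv).
    # Returns the frame indices where a candidate long gradual
    # transition is accepted (Steps 1-6 above).
    state, acc, ends = "Normal", 0.0, []
    for k, d in enumerate(diffs):
        t_l = lower_threshold(*motion[k])              # Step 2
        signal = "1" if d > t_l else "0"               # Step 3
        state = TRANSITIONS[(state, signal)]           # Step 4
        acc = acc + d if state != "Normal" else 0.0    # accumulate
        if state == "Buffer":                          # Steps 5-6
            if acc > t_high:
                ends.append(k)
            state, acc = "Normal", 0.0
    return ends
```

Note how motion enters the decision: the same sequence of variations that triggers a detection on a static scene produces only '0' signals when large motion vectors raise Tl, so camera pans are ignored.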

4.4. Collaboration of Separate Modules

The components introduced above do not work independently; they collaborate to solve the multi-classification problem. The relation of these modules is illustrated in Fig. 4. As the figure shows, the FOI detector has the highest priority. When a frame arrives, it is first checked whether it is monochrome. If so, the FOI detector is activated and runs until the FOI transition is over. Otherwise, the frame is processed by the two parallel modules, the cut detector and the GT detector. In the cut detection process, the second-order difference method is employed to obtain the candidate cuts, and a post-processing module then eliminates the false positives among them. Note that the non-cut candidates are not discarded but are sent to the GT detector for further recognition. In the gradual transition procedure, the twin-comparison method and the FSA model are adopted to detect short and long gradual transitions respectively. At the last stage, a coherence filter named "Buffer" harmonizes the detection results of the cut detector and the GT detector, so that only one shot boundary is declared at any position.
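The module priority described above can be sketched as a top-level loop. This is a hypothetical skeleton, not the paper's code: every callable here is a placeholder standing in for the corresponding detector, and the coherence filter is reduced to a single callback.

```python
def detect_boundaries(frames, is_monochrome, run_foi, run_cut, run_gt,
                      coherence_filter):
    # Mirrors Fig. 4: the FOI detector has priority on monochrome
    # frames; other frames go to the cut and GT detectors in parallel,
    # and a coherence filter keeps at most one boundary per position.
    boundaries = []
    for k, frame in enumerate(frames):
        if is_monochrome(frame):
            boundaries.extend(run_foi(k, frame))   # FOI has priority
        else:
            boundaries.extend(run_cut(k, frame) + run_gt(k, frame))
    return coherence_filter(boundaries)
```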

5. EXPERIMENT AND EVALUATION

To examine the effectiveness of the proposed framework, we evaluate it in the shot boundary detection task of TRECVID, a TREC-style video retrieval evaluation benchmarking platform.26 All the 2003 and 2004 TRECVID test collections for the shot boundary detection task are used. They are programs from CNN Headline News and ABC World News Tonight from January to June 1998, in MPEG-1 format. The 2003 test collection, about 3.05 gigabytes comprising 8 videos of about half an hour each, is employed to develop the system. And the 4.23-gigabyte


[Figure 4 here: flow diagram. The frame sequence enters an FOI checker; monochrome frames activate the FOI detector, while other frames pass to the CUT detector (second-order difference plus post-processing) and the GT detector (twin comparison and FSA model); a buffer exchanging {Delay, Reset, OK} signals merges the CUT, GT and FOI decisions into the shot boundary list.]

Figure 4. Collaboration of separate modules.

Table 1. Evaluation result of the ten runs (ranked by F-measure)

Sysid   | All transitions   | Cuts              | Gradual transitions | GT frame accuracy
        | Rcl   Prc   F#    | Rcl   Prc   F#    | Rcl   Prc   F#      | Rcl   Prc   F#
thuai15 | 0.884 0.896 0.890 | 0.928 0.931 0.929 | 0.792 0.820 0.806   | 0.824 0.860 0.842
thuai14 | 0.888 0.890 0.890 | 0.928 0.930 0.929 | 0.803 0.807 0.805   | 0.829 0.848 0.838
thuai10 | 0.888 0.890 0.890 | 0.925 0.939 0.932 | 0.809 0.790 0.799   | 0.839 0.835 0.837
thuai16 | 0.896 0.881 0.888 | 0.929 0.926 0.927 | 0.827 0.790 0.808   | 0.824 0.848 0.836
thuai17 | 0.903 0.872 0.887 | 0.929 0.923 0.926 | 0.846 0.775 0.809   | 0.820 0.848 0.834
thuai07 | 0.889 0.885 0.887 | 0.928 0.930 0.929 | 0.808 0.791 0.799   | 0.839 0.836 0.837
thuai02 | 0.888 0.885 0.886 | 0.925 0.931 0.928 | 0.808 0.791 0.799   | 0.839 0.836 0.837
thuai05 | 0.884 0.888 0.886 | 0.919 0.938 0.928 | 0.811 0.787 0.799   | 0.839 0.836 0.837
thuai19 | 0.902 0.864 0.883 | 0.902 0.864 0.920 | 0.821 0.789 0.805   | 0.817 0.859 0.837
thuai08 | 0.898 0.866 0.882 | 0.898 0.866 0.919 | 0.801 0.801 0.801   | 0.838 0.848 0.843

Table 2. Description and analysis of the ten runs

thuai02: Produced by the baseline system, with the default architecture and default parameter settings.
thuai05: The second-order difference method is employed in the short GT detection module. In theory, this should lead to higher precision and lower recall of GT, but the actual result does not confirm this.
thuai07: This run loosens the condition of the short GT detector. Thus, the recall of short GT should increase while the precision should decline; the effect is not evident.
thuai08: This run removes the gradual transition filter module. Thus, the recall of cuts should increase while their precision should decline, with the opposite effect expected for GT detection. The result confirms this inference.
thuai10: This run extends the duration of the post-cut processing modules. In thuai02, before declaring a cut, the post-cut module examines the next four frames of the candidate cut and declares a cut only if they satisfy certain conditions. In thuai10, the post-cut module examines the next five frames, so the precision of cuts increases.
thuai14: Produced by a new baseline system derived from thuai10. The β value is tuned higher than that of thuai10, so the recall of gradual transitions declines while the precision increases.
thuai15: An even higher β value; the gradual transition recall declines further and the precision increases.
thuai16: A lower threshold in the post-processing module, so the recall of GT increases while the precision declines.
thuai17: An even lower threshold in the post-processing module, so the recall of GT increases further while the precision declines.
thuai19: This run replaces the second-order difference with the first-order difference, so the recall of cuts increases abruptly while the precision declines abruptly.

The TRECVID test collection of 2004, lasting about 6 hours and comprising 12 videos, is utilized to test the performance of the proposed system. As in other information retrieval tasks, the performance is evaluated by the recall and precision criteria, representing the fraction of relevant documents retrieved and the fraction of retrieved documents that are relevant, respectively. These two criteria are calculated by an evaluation tool provided by TRECVID. To rank the performance of the different algorithms, the F1 measure, a harmonic average of recall and precision, is introduced. The F1 measure, combining recall and precision with equal weight, takes the following form27:

F1(recall, precision) = (2 × recall × precision) / (recall + precision).    (4)
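For concreteness, Eq. (4) can be computed directly from a run's recall and precision (a minimal sketch; the function name is ours, not part of the TRECVID tool):

```python
def f1_measure(recall: float, precision: float) -> float:
    """Harmonic mean of recall and precision, as in Eq. (4)."""
    if recall + precision == 0:
        return 0.0  # convention: F1 is 0 when both criteria are 0
    return 2 * recall * precision / (recall + precision)

# Example: a run with recall 0.8 and precision 0.9
print(f1_measure(0.8, 0.9))  # about 0.847
```

Being a harmonic mean, F1 is dominated by the weaker of the two criteria, which is why it is suitable for ranking runs that trade recall against precision.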

To investigate the contributions and drawbacks of each module, we develop and test the modules separately and then assemble them in different manners to constitute a system. By evaluating systems of different configurations, we arrive at the framework described above. All eight videos of the 2003 test collection are treated as training data to tune the parameter settings of the system, from which the optimal parameter setup is obtained. Ten runs, most of which correspond to variations of a baseline system, have been evaluated. In addition to tuning parameters, some of the runs are produced by changing the structure of the baseline system. The evaluation results and the corresponding analysis are listed in Table 1 and Table 2. Compared with the evaluation results submitted by the other participants, our system is among the best.28

Proc. of SPIE Vol. 5960 596018-9

6. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel shot boundary detection framework and evaluated it on the TRECVID benchmarking platform. The evaluation on a sufficiently large test collection reveals the effectiveness of the system. The main advantages of the proposed system can be summarized as follows:
• Utilization of multiple complementary features.
• Improvements on existing algorithms, including the second-order difference and the finite state automata model.
• In-depth exploitation of the differences between real shot transitions and the two main disturbances. Most importantly, the paper presents a novel manner of utilizing the motion feature, i.e., a motion-based self-adaptive threshold.
Although the system has performed well, there is still much room for improvement. Firstly, most of the thresholds used in the system are global and heuristically determined; replacing the global thresholds with locally adaptive ones may further boost the performance. Secondly, the flash detector is simple; a more accurate method such as edge matching may be tried. Thirdly, although the motion feature has been utilized, its use is somewhat straightforward; a more sophisticated method such as global motion estimation seems more reliable. Finally, detectors for specific editing effects, such as wipes, may be designed to achieve better results.

REFERENCES
1. S. W. Smoliar and H. Zhang, "Content-based video indexing and retrieval," IEEE MultiMedia 1(2), pp. 62–72, 1994.
2. C.-W. Ngo, H.-J. Zhang, and T.-C. Pong, "Recent advances in content-based video analysis," International Journal of Image and Graphics 1(3), pp. 445–468, 2001.
3. J. S. Boreczky and L. A. Rowe, "Comparison of video shot boundary detection techniques," in Proc. SPIE, Storage and Retrieval for Still Image and Video Databases IV, 2670, pp. 170–179, Mar. 1996.
4. U. Gargi, R. Kasturi, and S. H. Strayer, "Performance characterization of video-shot-change detection methods," IEEE Transactions on Circuits and Systems for Video Technology 10, February 2000.
5. R. Lienhart, "Comparison of automatic shot boundary detection algorithms," in Image and Video Processing VII, 3656, pp. 290–301, Proc. SPIE, Jan. 1999.
6. R. Lienhart, "Reliable transition detection in videos: A survey and practitioner's guide," International Journal of Image and Graphics 1(3), pp. 469–486, 2001.
7. A. Hanjalic, "Shot boundary detection: unraveled and resolved?," IEEE Transactions on Circuits and Systems for Video Technology 12(2), pp. 90–105, 2002.
8. H. Zhang, A. Kankanhalli, and S. W. Smoliar, "Automatic partitioning of full-motion video," Multimedia Systems, pp. 10–28, 1993.
9. X. Huang, M. Fisher, and D. Smith, "Experimental comparison of existing video shot detection techniques in compressed video," in Proceedings of the Portuguese Conference on Pattern Recognition, (University of Aveiro, Portugal), June 2002.
10. R. Zabih, J. Miller, and K. Mai, "A feature-based algorithm for detecting and classifying scene breaks," in Proc. ACM Multimedia 95, pp. 189–200, (San Francisco, CA), November 1995.
11. P. Bouthemy, M. Gelgon, and F. Ganansia, "A unified approach to shot change detection and camera motion characterization," IEEE Transactions on Circuits and Systems for Video Technology 9, pp. 1030–1044, October 1999.
12. B.-L. Yeo and B. Liu, "Rapid scene analysis on compressed video," IEEE Transactions on Circuits and Systems for Video Technology 5, pp. 533–544, December 1995.
13. D. Zhang, W. Qi, and H.-J. Zhang, "A new shot boundary detection algorithm," in PCM '01: Proceedings of the Second IEEE Pacific Rim Conference on Multimedia, pp. 63–70, Springer-Verlag, (London, UK), 2001.
14. S. H. Kim and R.-H. Park, "Robust video indexing for video sequences with complex brightness variations," in Proc. IASTED Int. Conf. Signal and Image Processing, pp. 410–414, (Kauai, Hawaii, USA), August 2002.
15. W. J. Heng and K. N. Ngan, "High accuracy flashlight scene determination for shot boundary detection," Signal Processing: Image Communication 18, pp. 203–219, March 2003.


16. S.-C. Jun and S.-H. Park, "An automatic cut detection algorithm using median filter and neural network," in ITC-CSCC 2000, July 2000.
17. M. Leszczuk and Z. Papir, "Accuracy vs. speed trade-off in detecting of shots in video content for abstracting digital video libraries," in Lecture Notes in Computer Science, 2515, pp. 176–189, Springer-Verlag, (London, UK), 2002.
18. Y. Yusoff, W. Christmas, and J. Kittler, "Video shot cut detection using adaptive thresholding," in Proceedings of the 11th British Machine Vision Conference, (Bristol, UK), 2000.
19. B. T. Truong, C. Dorai, and S. Venkatesh, "New enhancements to cut, fade, and dissolve detection processes in video segmentation," in Proceedings of the Eighth ACM International Conference on Multimedia, pp. 219–227, (Los Angeles, USA), 2000.
20. A. Miene, A. Dammeyer, T. Hermes, and O. Herzog, "Advanced and adaptive shot boundary detection," in ECDL, 2001.
21. M. Cooper, "Video segmentation combining similarity analysis and classification," in ACM Multimedia, October 2004.
22. C. W. Ngo, T. C. Pong, and R. T. Chin, "Camera breaks detection by partitioning of 2D spatio-temporal images in MPEG domain," in IEEE Multimedia Systems (ICMCS'99), 1, pp. 750–755, (Italy), 1999.
23. M. G. Chung, H. Kim, and S. M.-H. Song, "A scene boundary detection method," in Proceedings of the International Conference on Image Processing, 3, pp. 933–936, (Vancouver), September 2000.
24. S. J. F. Guimarães and M. Couprie, "Video segmentation based on 2D image analysis," Pattern Recognition Letters 24(7), pp. 947–957, 2003.
25. R. Lienhart, C. Kuhmünch, and W. Effelsberg, "On the detection and recognition of television commercials," in Proc. IEEE Conf. on Multimedia Computing and Systems, pp. 509–516, (Ottawa, Canada), June 1997.
26. A. F. Smeaton, P. Over, and W. Kraaij, "TRECVID: Evaluating the effectiveness of information retrieval tasks on digital video," in Proceedings of the ACM MM'04, pp. 652–655, (New York, NY, USA), October 10–16, 2004.
27. C. J. van Rijsbergen, Information Retrieval, Butterworths, London, 1979.
28. A. Smeaton and P. Over, "TRECVID 2004: Shot boundary detection task overview," slides, Dublin City University and NIST, 2004, http://www-nlpir.nist.gov/projects/tvpubs/tvpapers04/tv4.sbslides-as.ps.
