OBJECT-BASED SCENE SEGMENTATION COMBINING MOTION AND IMAGE CUES

Yun-Ting Lin, Yen-Kuang Chen, and S. Y. Kung
Princeton University, Princeton, NJ 08544, USA

ABSTRACT

This paper presents an object-based scene segmentation algorithm which combines temporal information (e.g. motion) from video with image cues from individual frames. First, a motion-based segmentation is computed by the Hierarchical Principal Component Split (HPCS) algorithm for multi-moving-object motion classification. HPCS is a binary-tree-structured recursive procedure which clusters feature blocks according to the principal component of their feature track matrix. Tracking of feature blocks over multiple frames (> 2) can be processed effectively, resulting in a more accurate rigid-motion classification. Experiments show that with motion alone, some mostly homogeneous blocks may fit more than one motion class well, so that ambiguity occurs. Such blocks are categorized into the so-called "undetermined" region (U-region) for further processing. An image segmentation scheme using local pixel statistics of blocks in the U-region (U-blocks) is applied to find "valid voting regions" (VVRs). A VVR has a mostly homogeneous interior and is surrounded by a closed contour consisting of relatively high-gradient points, which offers the discriminating power needed to assign each VVR to its object class by motion voting. Combining the motion-based segmentation with the classification result of the VVRs yields the final object-based scene segmentation. Simulation results are presented.



1. INTRODUCTION

Object-based scene segmentation from video is essential for many video applications, such as content/object-based retrieval from video, video composition, video understanding, object recognition, object-based video coding, and so on. Literally speaking, a scene is a division of an act presenting continuous action in one place. In video processing, scene segmentation can be defined analogously: find the divisions of the video such that each division presents a continuous action of one object (or group of objects, such as the background in the case of camera movement). Supplementary to the static cues (e.g. edge, boundary, color, texture) that a still image can offer, one can benefit much more from the rich temporal information (e.g. motion, context) in video, so that a higher degree of video analysis and understanding can be achieved by an automatic scheme. One problem that motion-based schemes often face is that "homogeneity" causes "ambiguity". An interesting observation is that successful motion tracking often requires the existence of prominent or distinctive features; image segmentation schemes, on the other hand, seem to seek the opposite condition. While distinctiveness is important for motion tracking, similarity or homogeneity is essential for image segmentation: pixels with similar statistical characteristics (intensity, color, or texture) tend to be clustered together by an image segmentation scheme. This can be exploited to resolve the ambiguity encountered by motion-based approaches.

Our proposed scene segmentation method is motivated by the above observation: motion and image cues are used at different stages. Because texture segmentation techniques are computationally expensive, we apply motion analysis for motion-based segmentation first. Only those blocks which cannot be classified by motion are then handled by techniques using image cues. First, a motion-based segmentation is computed by the Hierarchical Principal Component Split (HPCS) algorithm described in Sec. 2. Since the motion cue alone is not adequate for extracting the real object boundary, Sec. 3 introduces two different approaches to the final segmentation with the aid of image cues. Simulation results conclude this work.

2. MOTION CLASSIFICATION BY HIERARCHICAL PRINCIPAL COMPONENT SPLIT (HPCS) ALGORITHM

2.1. Principal Component Clustering Algorithm

Here we briefly review the Principal Component Clustering Algorithm [1] for separating independently moving objects. It has been successfully applied to separate two different moving objects for motion-based scene segmentation. The clustering is based on the Principal Singular Vector (PSV, or principal component, PC) of the centered feature track matrix $\tilde{W}$, which consists of both the shape (the first two rows record the initial coordinates) and the motion information (all other rows record the displacements) of feature blocks tracked from the video. Suppose there are K feature blocks and F frames,

$$\tilde{W} = \begin{bmatrix} w(1,1) & \cdots & w(1,K) \\ \vdots & & \vdots \\ w(F,1) & \cdots & w(F,K) \end{bmatrix} - \frac{1}{K} \begin{bmatrix} \sum_{p=1}^{K} w(1,p) \\ \vdots \\ \sum_{p=1}^{K} w(F,p) \end{bmatrix} \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}$$

$$w(1,p) = \epsilon \begin{bmatrix} x(1,p) \\ y(1,p) \end{bmatrix}, \qquad w(f,p) = \begin{bmatrix} x(f,p) - x(f-1,p) \\ y(f,p) - y(f-1,p) \end{bmatrix}$$

for $p = 1, \ldots, K$ and $f = 2, \ldots, F$. Here $\epsilon$ is the neighborhood sensitivity, which defines the ratio of the influence of shape to that of motion.
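As an illustration, the track matrix above can be assembled in a few lines of NumPy. This is a sketch: the array layout, the centering convention, and the function name are our own assumptions, not from the paper.

```python
import numpy as np

def build_track_matrix(x, y, eps=0.1):
    """Assemble the centered feature track matrix W~ (shape 2F x K).

    x, y : (F, K) arrays of tracked feature-block coordinates
           over F frames for K blocks.
    eps  : neighborhood sensitivity weighting shape vs. motion rows.
    """
    F, K = x.shape
    W = np.empty((2 * F, K))
    W[0], W[1] = eps * x[0], eps * y[0]       # shape rows: initial coordinates
    W[2::2] = np.diff(x, axis=0)              # motion rows: x displacements
    W[3::2] = np.diff(y, axis=0)              # motion rows: y displacements
    return W - W.mean(axis=1, keepdims=True)  # center each row across blocks
```

The row mean subtracted at the end implements the centering term $\frac{1}{K}\sum_p w(f,p)$ above.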

In the 2-object case, by taking the singular value decomposition ($\tilde{W} = U \Sigma V^T$), the first right PSV ($V_1$, or PC) has the following form (see [1] for the mathematical derivation):

$$V_1 = \begin{bmatrix} \alpha_t \mid \alpha_{r_a} \mid \alpha_{r_b} \end{bmatrix} \begin{bmatrix} \frac{M}{N+M} E_a & -\frac{N}{N+M} E_b \\ S_a & 0 \\ 0 & S_b \end{bmatrix}$$

Here $\alpha_t$, $\alpha_{r_a}$, $\alpha_{r_b}$ denote the scaling effects of the translation difference and of the rotation changes of object $a$ and object $b$, respectively. $N$ and $M$ denote the numbers of feature blocks of objects $a$ and $b$. $S_a$ and $S_b$ are the 2-D (or 3-D) shape matrices, and $E_a$ and $E_b$ are all-'1' matrices of size $1 \times N$ and $1 \times M$.


Figure 1: (a) Flow chart of our scene segmentation algorithm. (b) The binary tree structure of HPCS Algorithm. After the pruning phase, each clean node represents one consistent motion and is ready for the merge phase. (c) The recursive procedure for each node in the HPCS tree. (d) The basic operation of the split phase.

$$V_1 = \begin{bmatrix} \alpha_t \frac{M}{N+M} E_a + \alpha_{r_a} S_a \;\Big|\; -\alpha_t \frac{N}{N+M} E_b + \alpha_{r_b} S_b \end{bmatrix}$$

When the motion is translation-dominant, $\alpha_t$ is expected to dominate $\alpha_{r_a}$ and $\alpha_{r_b}$. As a consequence, the entries of $V_1$ tend to take different signs for the two objects. Therefore, $V_1 = 0$ provides an ideal decision boundary for separating two independently moving targets.
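A small synthetic experiment illustrates this sign-based decision boundary. This is a sketch with made-up translations and noise levels, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
F, N, M = 5, 8, 6                        # frames; blocks in objects a and b
# Per-frame displacements: object a translates by (+2, 0), object b by (0, -2).
dx = np.hstack([np.full((F - 1, N), 2.0), np.zeros((F - 1, M))])
dy = np.hstack([np.zeros((F - 1, N)), np.full((F - 1, M), -2.0)])
W = np.vstack([dx, dy]) + 0.05 * rng.standard_normal((2 * (F - 1), N + M))
W -= W.mean(axis=1, keepdims=True)       # center across feature blocks
v1 = np.linalg.svd(W, full_matrices=False)[2][0]  # first right PSV
labels = v1 > 0                          # V1 = 0 as the decision boundary
```

The entries of `v1` take one sign for the first N blocks and the opposite sign for the remaining M blocks (the global sign of an SVD vector is arbitrary, so only the two-group split is meaningful).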

After HPCS split and pruning, feature blocks are classified into motion clusters so that blocks in the same cluster share a common motion, i.e. they can be described by the same motion parameters. Clusters sharing similar motion parameters will be merged in the merge phase (as shown in Fig.1(b)), and thereafter a new set of motion parameters will be determined collectively by the combined clusters.

2.2. Hierarchical Principal Component Split (HPCS) Algorithm

The Principal Component Clustering Algorithm can be extended to the multi-object case. If the translation term dominates, then for 3 objects the entries in $V_1$ split into 3 clusters:

$$V_1 = \begin{bmatrix} E_a \mid E_b \mid E_c \end{bmatrix} + \text{perturbation}$$

As long as the perturbation terms are not very large, we can expect the three clusters to be roughly ordered along the PC direction. This implies that any overlapping region in the PC domain will contain feature blocks from at most two objects. Therefore, for the multi-object case, the main challenge still lies in separating two neighboring clusters, and it is natural to extend the original PC Clustering Method to the multi-moving-object case. This leads to the Hierarchical Principal Component Split (HPCS) Algorithm.

The flow of our motion-based segmentation is depicted in Fig. 1(a). First, feature blocks are selected and tracked [2]. The structure of HPCS can be illustrated by a binary tree (the HPCS tree) as depicted in Fig. 1(b). Each node in the HPCS tree contains a set of feature blocks represented by a group of columns from $\tilde{W}$. Naturally, the root of the tree is represented by the complete $\tilde{W}$. Each node is assigned a "cleanness" level based on its motion estimation error. In our experiments, an LS estimator is used to find the 2-D affine motion parameters for each node, and a node's "cleanness" level is then determined by the norm of its error matrix. As shown in Fig. 1(b), two types of nodes are defined according to the "cleanness" level:

- The nodes in grey are labeled "non-clean". They have to go through the split phase to be split into two child nodes. During each split, $\epsilon$ can be adaptively changed to adjust the weights between shape and motion information.
- The nodes in white are "clean": they have passed the motion consistency check, so no further split is needed. All clean nodes go through the pruning phase to remove extraneous blocks, if there are any.

Split Phase. During each split, a node is split into its left and right child by examining its $V_1$, as illustrated in Fig. 1(d). The stopping rule relies on the motion consistency check: if a node's motion error is larger than a preset threshold $t_2$, it has to split again; otherwise, it has passed the motion consistency check and is marked "clean", so that no further split is performed. The clean nodes are then processed in the pruning phase.

Pruning Phase. All clean nodes have to be "pruned" before entering the merge phase. Although each clean node contains one major motion, there may still exist some extraneous blocks bearing a motion different from that of the majority, and their presence can greatly degrade the LS estimation. To achieve a better motion classification and estimation, a clean node should therefore be pruned. The main reason for resorting to pruning, as opposed to further splitting, is that too many splits may leave a clean node with too few blocks, yielding an unreliable motion estimate. In the pruning phase, inconsistent blocks are removed from the clean node until the motion error is reduced to within a stricter threshold $t_1$ ($t_1 < t_2$). The recursive procedure for each node in the HPCS tree is depicted in Fig. 1(c).

Merge Phase. The goal of the merge phase is to recombine clean nodes with similar motion, so that they can be represented by the same motion parameters. The clean nodes are merged in the motion parameter space by VQ. Instead of the conventional distance measure, a scaled distance [3] is used for normalization to the image size. Associated with each merged node, a confidence measure based on the combined factors of (1) the motion error and (2) the number of blocks in the merged node is used to determine whether the merged node can adequately represent one major motion class.
Only a merged node with high confidence will be confirmed and retained for the scene segmentation process as one of the motion hypotheses.
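The split-and-check recursion above can be sketched as follows. The motion-consistency test is abstracted into a caller-supplied `motion_error` function, and the threshold name mirrors $t_2$; all of this is illustrative, not the paper's implementation.

```python
import numpy as np

def hpcs_split(W, motion_error, t2, min_blocks=4):
    """Recursively split the columns (feature blocks) of W by the sign
    of the first right singular vector, until every node passes the
    motion-consistency check (motion_error < t2).

    Returns a list of column-index arrays, one per 'clean' node.
    """
    def split(cols):
        if len(cols) < min_blocks or motion_error(W[:, cols]) < t2:
            return [cols]                             # clean (or too small)
        sub = W[:, cols] - W[:, cols].mean(axis=1, keepdims=True)
        v1 = np.linalg.svd(sub, full_matrices=False)[2][0]
        left, right = cols[v1 < 0], cols[v1 >= 0]
        if len(left) == 0 or len(right) == 0:         # degenerate split
            return [cols]
        return split(left) + split(right)
    return split(np.arange(W.shape[1]))
```

Here `motion_error` stands in for the norm of the LS affine-fit error matrix; any per-node consistency measure with the same monotone behavior works for the sketch.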

Figure 2: A scene segmentation scheme with combined motion and intensity/edge cues. Motion-based segmentation of the image S produces motion regions S1, S2, ..., Sk and an undetermined region U; clustering of homogeneous regions (statistics-based or edge-based) produces valid voting regions R1, R2, ..., RL, which are classified by the motion classifier to give the final segmentation.

Figure 3: An example of direct (a) and diagonal (b) inter-cluster pixel pairs. The numbers are the pixels' cluster labels.

3. SCENE SEGMENTATION USING MULTIPLE CUES

3.1. Initial Segmentation by Motion Hypothesis Testing

After the motion classification from HPCS followed by the merge phase, we have found K motion classes $V_1, \ldots, V_K$. As depicted in Fig. 2, the whole image S can now be segmented into K motion regions $S_1, \ldots, S_K$ and an "undetermined" region U composed of blocks which cannot be reliably segmented by motion alone. Each (8x8) block in S is labeled based on a score function. Given the motion hypotheses $V_1, \ldots, V_K$, the score function for the i-th block $b_i$ warped by $V_k$ depends on the residue of the central block and those of its 4-neighbors:

$$\mathrm{score}(k \mid b_i) = T_1\, \delta\big(k, \arg\min_j \mathrm{Res}(V_j, b_i)\big) + T_2 \sum_{b \in \mathcal{N}(b_i)} \delta\big(k, \arg\min_j \mathrm{Res}(V_j, b)\big)$$

where $\delta$ is the Kronecker delta function, and $T_1$ and $T_2$ are Gaussian-like weights for the central block and its 4-neighbors. Suppose $\arg\max_j \mathrm{score}(j \mid b_i) = k$ and $\arg\max_{j \neq k} \mathrm{score}(j \mid b_i) = l$ (the first runner-up), and let $\mathrm{label}(b_i) = 0$ denote that $b_i$ is assigned to the U-region. The labeling function is defined as:

$$\mathrm{label}(b_i) = \begin{cases} k & \text{if } \mathrm{score}(k \mid b_i) - \mathrm{score}(l \mid b_i) \geq \text{threshold} \\ 0 & \text{otherwise} \end{cases}$$

3.2. Segmentation by Image Cues

Fig. 4 shows the initial motion segmentation of the table-tennis sequence. Although the motion segmentation result is mostly correct, it is inadequate for extracting the real object boundary. Especially in the presence of homogeneous regions, the motion cue provides very little discriminability, causing most of the errors in the segmentation result. Experimental evidence also shows that the U-region contains mainly homogeneous blocks. To correctly classify the U-region, we introduce the notion of the valid voting region (VVR). A VVR, by definition, must offer discriminating power (in terms of motion-compensated residue) when warped by different motions. Although the interior of a VVR is mostly homogeneous, it is surrounded by a closed contour consisting of relatively high-gradient points, offering the discriminating power needed to classify each VVR by motion voting. Two different methods for generating VVRs ($R_1, \ldots, R_L$ in Fig. 2) from U are proposed.

3.2.1. Clustering of Homogeneous Regions

In image processing, a homogeneous region is one which contains pixels with small variance, or with no edge points. These two properties lead to two different scenarios for generating VVRs. The first is based on local image statistics: homogeneous clusters are found for each U-block and described by their pixel means through minimizing a SNAKE-like energy function. In the second method, edge extraction and global thresholding are applied to find edge points in the image. Both methods are followed by a region growing scheme to find the VVRs.

Region Growing Based on Image Statistics. To find a proper statistical representation for the possible homogeneous clusters in U, an energy function is developed and minimized. The SNAKE-like energy function [4] consists of two major components: the external energy (image force) is an RBF term which forces the pixel values in a cluster to be close to the cluster mean, while the internal energy plays the role of a regularization term for contour relaxation. Since the same operation is performed on every U-block, for simplicity only one U-block is considered in the remainder of this section. Mathematically, the energy function can be expressed as:

$$E = E_{\mathrm{ext}} + E_{\mathrm{int}} = \sum_j \sum_{y_i \in R_j} (y_i - m_j)^2 + \beta\, (n_1 \alpha + n_2 \hat{\alpha})$$

where $\beta$ is a parameter balancing the external and internal energy levels, $y_i$ denotes the intensity of pixel i, and $m_j$ is the mean pixel value of cluster $R_j$ in the U-block. Pairs of adjacent pixels assigned to different clusters are called inter-cluster pairs, and each is associated with a cost $\alpha$ or $\hat{\alpha}$ according to the relative position of its pixels: $\alpha$ is the cost for direct inter-cluster pairs and $\hat{\alpha}$ for diagonal inter-cluster pairs, as depicted in Fig. 3. Relating the boundary cost to the potential contour length created by such a labeling, $\hat{\alpha}$ is assigned as $\sqrt{2}\,\alpha$. $n_1$ and $n_2$ denote the numbers of direct and diagonal inter-cluster pairs in the U-block, respectively. This energy function has a dual analogy to the SNAKE algorithm, in which the image force comes from high-gradient points that attract a normally fixed number of snake points, while the internal force smooths the contour. Different from SNAKE, our goal here is to find homogeneous clusters instead of an active contour: the image force now comes from the cluster mean, and the internal force is represented by the boundary cost. The formulation is also analogous to probabilistic approaches to image segmentation [5, 6]. In those approaches an MAP criterion is used to maximize the a posteriori density function $p(R \mid Y)$. By Bayes' rule, maximizing $p(R \mid Y)$ is equivalent to maximizing $p(Y \mid R)\, p(R)$, since $p(Y)$ is fixed for the current image. By assuming that the conditional pdf $p(Y \mid R)$ is Gaussian and using a Gibbs random field to describe the a priori probability of the partition $p(R)$, the MAP criterion can be formulated. It is impractical to look for the global minimum of our energy function by an exhaustive scheme; we therefore aim at finding a local minimum. Since the energy function is not differentiable, we propose an SOR-type (Successive Over-Relaxation) iterative approach. During initialization, the block mean is calculated, and according to each pixel's deviation from it, 3 initial clusters are formed.
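The block-labeling rule of Sec. 3.1 can be sketched as follows. This is a toy version: the motion-compensated residues Res(V_k, b) are assumed precomputed, a 1-D neighbor layout stands in for the 4-neighborhood, and T1, T2 are fixed weights rather than Gaussian-like ones.

```python
import numpy as np

def label_blocks(res, threshold, t1=1.0, t2=0.5):
    """Score-based block labeling (a sketch; residues assumed given).

    res : (K, B) array, res[k, i] = Res(V_k, b_i) for block i under
          motion hypothesis k.  Neighbors of block i are taken to be
          i-1 and i+1 (a 1-D stand-in for the 4-neighbors).
    Returns labels in 1..K, or 0 for the undetermined U-region.
    """
    K, B = res.shape
    best = np.argmin(res, axis=0)           # winning hypothesis per block
    labels = np.zeros(B, dtype=int)
    for i in range(B):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < B]
        score = np.array([t1 * (best[i] == k)
                          + t2 * sum(best[j] == k for j in nbrs)
                          for k in range(K)], dtype=float)
        order = np.argsort(score)[::-1]     # winner, then first runner-up
        if score[order[0]] - score[order[1]] >= threshold:
            labels[i] = order[0] + 1        # motion class (1-based)
        # else: stays 0 -> U-region
    return labels
```

Blocks whose winning score does not beat the runner-up by the threshold margin are deferred to the U-region, exactly as in the labeling function above.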

Figure 4: (a) The original frame from the table-tennis sequence. (b) Motion region 1, corresponding to the background. (c) Motion region 2, corresponding to the ball. (d) Motion region 3, corresponding to the arm and the racket. (e) The undetermined U-region.

Figure 5: (a) Final segmentation (shown in grayscale) combining motion and statistics-based region growing. (b) Final segmentation (shown in grayscale) combining motion and edge-based region growing. While the statistics-based method correctly identifies the racket, the edge-based method successfully clusters the whole arm. A better result is obtained by further combining both region growing methods; the refined segmentation of the background, ball, arm, and racket is shown in (c)-(e).

The update (if any) in each iteration is performed by changing a pixel's label to match one of its 4-neighbors'. If such a change does not result in a net decrease of the local energy, no update is performed; otherwise, the change yielding the largest reduction of the local energy is applied. The iterative process stops when there is no net decrease of the total energy within the block. At the end of the iterations, at most three clusters are formed in each U-block. Region growing then starts from each cluster based on its pixel mean, and the resulting VVRs are guaranteed to have uniform pixel values with a small variance.

Region Growing Based on Edge Points. It is commonly accepted that object boundaries usually correspond to high-gradient edge points, while a connected homogeneous region tends to lie in the interior of one object. Our second method for generating VVRs is based on finding those edge points and then applying the region growing scheme. An edge point is defined by applying edge extraction followed by global thresholding. Region growing starts from the center of each U-block and stops whenever an edge point exists within a 3x3 neighborhood of the current pixel.
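The greedy label-update loop of the statistics-based method might look like this in outline. It is a sketch: only the direct-pair boundary cost is kept (the diagonal sqrt(2) term is omitted for brevity), the cluster means are held fixed, and the three-cluster initialization is assumed done elsewhere.

```python
import numpy as np

def relax_labels(y, labels, means, beta=2.0, max_iter=50):
    """Greedy local relaxation of cluster labels in one U-block.

    A pixel may switch to a 4-neighbor's label when that strictly
    lowers its local energy: (y - mean)^2 plus beta per direct
    inter-cluster pair it participates in.
    """
    H, W = y.shape
    def local_energy(i, j, lab):
        e = (y[i, j] - means[lab]) ** 2
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != lab:
                e += beta                   # boundary cost for this pair
        return e
    for _ in range(max_iter):
        changed = False
        for i in range(H):
            for j in range(W):
                cur = labels[i, j]
                cands = {labels[i + di, j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < H and 0 <= j + dj < W}
                best = min(cands | {cur}, key=lambda l: local_energy(i, j, l))
                if best != cur and local_energy(i, j, best) < local_energy(i, j, cur):
                    labels[i, j] = best
                    changed = True
        if not changed:                     # no net decrease anywhere: stop
            break
    return labels
```

An isolated mislabeled pixel inside a flat region pays both a large data term and a boundary cost, so the relaxation snaps it back to the surrounding cluster.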

3.2.2. Classification by Motion Voting

To classify each VVR $R_j$ to its belonging object, a vote is conducted, and the decision is made according to the minimum motion-compensated-residue criterion:

$$\mathrm{winner}(R_j) = \arg\min_{V_k,\; k = 1, \ldots, K} \frac{\sum_{(x,y) \in R_j} \mathrm{Res}(V_k, (x,y))}{N(R_j)}$$

where $N(R_j)$ denotes the number of pixels in $R_j$.

3.3. Final Segmentation

The final segmentation is achieved by combining the motion-based segmentation with the classification result of the VVRs. Fig. 5(a) and (b) compare the final segmentations from the statistics-based and edge-based schemes. Interestingly, the statistics-based method correctly identifies the racket boundary, which does not have a distinctive edge in the original frame, but misses part of the arm. The edge-based method, on the other hand, successfully clusters the whole arm but fails to find the racket. The foreground object segmentation can be further refined by combining the two, simply with an OR function. The refined segmentation of the background, ball, arm, and racket is shown in Fig. 5(c)-(e).

4. SUMMARY AND CONCLUSION

In this work, we have presented a multi-cue object-based segmentation scheme for extracting different objects in video. It is first shown that the motion cue alone is not sufficient for our goal. It is then demonstrated that static image cues can provide complementary information to improve object-based segmentation. Two types of image cues are proposed and compared. The statistics-based method appears more capable of extracting non-distinctive object boundaries when the foreground and background have similar intensity levels; the edge-based method performs better on objects with clear edges (in contrast to the background) and faint interior texture. For a further improved and more efficient segmentation algorithm, we plan to investigate a scheme which, at an earlier stage, combines multiple image cues from intensity statistics and edges, as well as color and texture.
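The voting rule of Sec. 3.2.2 reduces to an argmin over size-normalized residues, e.g. (a sketch; the per-hypothesis residue values are assumed given):

```python
import numpy as np

def winner(residues):
    """Pick the motion class for one VVR.

    residues : (K, P) array, residues[k, i] = Res(V_k, pixel_i) for the
               P pixels of region R_j under each of the K hypotheses.
    Returns the index k minimizing the mean motion-compensated residue
    (dividing by N(R_j) = P normalizes regions of different sizes).
    """
    return int(np.argmin(residues.mean(axis=1)))
```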

5. REFERENCES

[1] S. Y. Kung, Yun-Ting Lin, and Yen-Kuang Chen, "Motion-based segmentation by principal singular vector (PSV) clustering method", Proc. ICASSP'96, Atlanta, May 1996.
[2] Yen-Kuang Chen, Yun-Ting Lin, and S. Y. Kung, "A feature tracking algorithm using neighborhood relaxation with multi-candidate pre-screening", Proc. ICIP'96, Lausanne, Switzerland, Sep. 1996.
[3] J. Y.-A. Wang and E. H. Adelson, "Representing moving images with layers", IEEE Trans. on Image Processing, Vol. 3, No. 5, pp. 625-638, Sept. 1994.
[4] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models", Intl. J. Comput. Vision, Vol. 1, No. 4, pp. 321-331, 1988.
[5] Charles Bouman and Bede Liu, "Multiple resolution segmentation of textured images", IEEE Trans. on PAMI, Vol. 13, No. 2, Feb. 1991.
[6] Thrasyvoulos N. Pappas, "An adaptive clustering algorithm for image segmentation", IEEE Trans. on Signal Processing, Vol. 40, No. 4, Apr. 1992.
