
Interactive segmentation based on iterative learning for multiple-feature fusion

Lei Zhou a, Yu Qiao a, Yijun Li a, Xiangjian He b, Jie Yang a,*

a Shanghai Jiao Tong University, Shanghai, China
b University of Technology, Sydney, Australia

Article history: Received 18 June 2013; received in revised form 23 October 2013; accepted 15 December 2013. Communicated by Mingli Song.

Abstract

This paper proposes a novel interactive segmentation method based on a conditional random field (CRF) model that utilizes the location and color information contained in user input. The CRF is configured with the optimal weights between two features: a color Gaussian mixture model (GMM) and a probability model of location information. To construct the CRF model, we propose a method to collect samples for the training tasks of learning the optimal weights on a single-image basis and updating the parameters of the features. To refine the segmentation results iteratively, our method applies an active learning strategy to guide the CRF model updating, or to guide users to input minimal training data for learning the optimal weights and updating the feature parameters. Experimental results show that the proposed method achieves qualitative and quantitative improvements over state-of-the-art interactive segmentation methods. The proposed method is also a convenient tool for interactive object segmentation.

Keywords: Interactive object segmentation; Conditional random field; Parameter learning; Active learning

1. Introduction

Image segmentation is an important problem in computer vision; its task is to partition image pixels into distinct regions. There are mainly two classes of algorithms. One class applies automatic algorithms, such as [1-3]. The other class comprises interactive methods with user guidance. Automatic segmentation is a difficult task, primarily because it is hard to model and integrate visual patterns including color, texture and other gestalt characteristics. In interactive segmentation, users can label pixels as foreground or background, and such guidance helps reduce both the complexity of pattern modeling and the ambiguity of segmentation.

In the past few years, various interactive segmentation methods have been proposed [4-12]. The variations of interactive segmentation algorithms are primarily built on a set of core methods, such as graph cut [6,13], geodesic segmentation [4] and random walker [8]. Different algorithms have their own advantages, and how to combine the strengths of different algorithms has also been studied extensively. A unified framework [14] treats these algorithms as instances of a more general seeded segmentation framework with different values of a parameter q. The framework of

☆ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-No Derivative Works License.
* Corresponding author. E-mail address: [email protected] (J. Yang).

[14] was extended in [15]: the optimal spanning forest algorithm for watershed in [15] can also be treated as an instance with different parameter settings. For the combination of geodesic segmentation and graph cut, the authors of [11,16,17] have proposed methods that inherit the complementary strengths of both approaches from different perspectives. The motivation lies in the complementary metrics of geodesic segmentation and graph cut. Graph cut has a well-known bias towards shorter paths, especially when it short-cuts across the interior of an object. Methods such as geodesic segmentation avoid this boundary-length bias, but they are sensitive to seed placement. To address this problem, a model was designed that unifies geodesic distance, as a region term, with a graph-cut optimization framework [11]; it incorporates the ability of seed-expansion approaches to fill coherent regions without regard to boundary length. The results in [11] indicate that distance information can help improve performance. In [16], a parallel filtering operator built upon efficient geodesic distance computation produces a set of spatially smooth and contrast-sensitive segmentation hypotheses; a CRF is then used to find the segmentation with minimum energy. In [17], a conditional random field (CRF) model combines the location information and color information contained in user input from the perspective of feature fusion, and the best cues are selected to generate better segmentation results. However, the performance in [17] is easily affected by the chosen parameters, and more robust combination strategies are desirable.


There are also many segmentation methods formulated from the perspective of segment composition, in two main categories. One category focuses on classifier composition [18]; the other composes figure-ground segment hypotheses [19-21]. In [18], segments are generated via basic detector and segmentor classifiers, and a boosting algorithm learns an ensemble classifier with a cascade decision strategy from the base classifier pool. Instead of combining classifiers, a mid-level statistical model is suggested in [20] for image segmentation; this model composes multiple figure-ground hypotheses, obtained by applying CPMC [19], into larger interpretations (tilings) of the entire image. In [21], a hierarchical segmentation tree represents the image, and a "pylon" model combines segments coming from different layers of the tree.

Many segmentation models are formulated using CRFs, a flexible framework for incorporating low-level or high-level cues [22,23]. When multiple features are used in a CRF model, its effectiveness depends heavily on the setting of the model parameters (such as the weights between features). Parameter learning in CRFs has been investigated extensively; it is often formulated as an optimization problem involving the construction of a training database [22,24]. The general form of CRF construction models the relationships between different features, but as the image size and the dimension of the feature vector grow, the computational complexity increases exponentially. To decrease the computational load, piecewise training was proposed in [25]: the CRF model is divided into pieces and each piece is trained independently. In [22], Szummer et al. proposed a technique capable of learning dozens of parameters from millions of pixels in minutes, where the optimal parameters are found under standard regularized objective functions that ensure good generalization. To learn multiple CRF parameters from samples collected from a single image, Kuang et al. [26] proposed a learning strategy applying structural SVM; the underlying assumption is that there exists a constant parameter setting that yields optimal segmentation results over all pixels in an image.

In recent years, much work has focused on combining a discriminative model with a graphical model to construct the CRF. As reported in [27], a two-stage SVM/CRF model can improve classification accuracy. In [28], Lee et al. combined a discriminative random field (DRF) with a support vector machine (SVM) to segment brain tumors: the SVM is trained as a classifier expressing local features and is combined with the DRF model for image labeling. In [29], SVMs provide local patch evidence and predict a class label for the whole image; the image structure and the global classifications are then used to construct the CRF.

In interactive segmentation, user input locates where the object is (location information), and the color information contained in the scribbles provides prior knowledge of the object's appearance. Encouraged by the above related works, this paper proposes an iterative interactive segmentation strategy based on feature and CRF model updating, focusing on making better use of the location and color information contained in user input.
In the proposed method, the CRF is configured with the optimal weights between two features: a color Gaussian mixture model (GMM) and a discriminative model of location information. We take the classification results of the two features (color GMM and geodesic map) as two segments, which are used to collect samples for learning the CRF parameters and updating the feature parameters on a single-image basis. Furthermore, we explore ways of improving the segmentation accuracy by updating the CRF model adaptively or by adding user input through an active learning strategy. The overall framework is displayed in Fig. 1.

The rest of the paper is organized as follows. Section 2 introduces the features used in segmentation, and Section 3 describes how the CRF model for segmentation is formulated.

Fig. 1. Overall architecture of the proposed method for interactive object segmentation.

Section 4 describes the method for learning the CRF parameters on a single-image basis. Section 5 presents the iterative segmentation framework, Section 6 reports the experimental results, and Section 7 concludes the paper.

2. Image features for interactive segmentation

2.1. Color information

We choose the RGB color vector $I_p$ as the basic feature of pixel $p$. The color information contained in the foreground scribbles and background scribbles is modeled with Gaussian mixture models (GMM). Let a color model be represented by the GMM $\{\alpha_c, \mu_c, \Sigma_c\}_{c=1}^{C}$ in the RGB color space, where $\{\alpha_c, \mu_c, \Sigma_c\}$ are the weight, mean color and covariance matrix of the $c$-th component. A set of GMM parameters is learned for each of the two kinds of user input. The mixture distribution of $I_x$ is a linear superposition of Gaussians:

$$V(I_x \mid l) = \sum_{c=1}^{C} \alpha_{cl}\, \mathcal{N}(I_x \mid \mu_{cl}, \Sigma_{cl}), \quad l \in \{FG, BG\}, \tag{1}$$

where $\{\alpha_{cl}, \mu_{cl}, \Sigma_{cl}\}$ are the weight, mean color and covariance matrix of the $c$-th component learned from the color information of class $l$; FG denotes foreground and BG denotes background. In our experiments, a GMM with 5 components represents the color model of each class. The posterior probability at each pixel $p$ of the image is then

$$P_{gmm}(F_p = l \mid I_p) = \frac{V(I_p \mid l)}{V(I_p \mid FG) + V(I_p \mid BG)}, \quad l \in \{FG, BG\}. \tag{2}$$
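As a concrete illustration, below is a minimal sketch of Eqs. (1) and (2) using scikit-learn's GaussianMixture; the function names are ours, not from the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_models(fg_pixels, bg_pixels, n_components=5):
    """Fit one 5-component GMM per class on the RGB values of the
    scribbled pixels (Eq. (1)); inputs are (M, 3) float arrays."""
    fg_gmm = GaussianMixture(n_components=n_components).fit(fg_pixels)
    bg_gmm = GaussianMixture(n_components=n_components).fit(bg_pixels)
    return fg_gmm, bg_gmm

def gmm_posterior(image, fg_gmm, bg_gmm):
    """Per-pixel posterior P_gmm(F_p = FG | I_p) of Eq. (2)."""
    pix = image.reshape(-1, 3).astype(float)
    v_fg = np.exp(fg_gmm.score_samples(pix))  # mixture density V(I_p | FG)
    v_bg = np.exp(bg_gmm.score_samples(pix))
    return (v_fg / (v_fg + v_bg + 1e-12)).reshape(image.shape[:2])
```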

2.2. Location information

2.2.1. Distance related to scribbles

Geodesic segmentation labels a pixel via its geodesic distance from the nearest foreground and background seeds. The geodesic distance is computed on the length of a discrete path [9]:

$$L(\Gamma) = \sum_{i=1}^{n-1} \sqrt{(1 - \gamma_g)\, d(\Gamma^i, \Gamma^{i+1})^2 + \gamma_g \,\|\nabla(\Gamma^i)\|^2}, \tag{3}$$

where $\Gamma$ is an arbitrary discrete path with $n$ pixels, $\{\Gamma^1, \ldots, \Gamma^n\}$. $d(\Gamma^i, \Gamma^{i+1})$ is the Euclidean distance between the two points $(\Gamma^i, \Gamma^{i+1})$, and $\|\nabla(\Gamma^i)\|^2$ is a finite-difference approximation of the image gradient between these two points. The parameter $\gamma_g$ weighs the two kinds of distances, the Euclidean distance and the distance computed on the image gradient, as done in [9]. The geodesic distance is then defined, following [16], as

$$d_g(a, b) = \min_{\Gamma \in \mathcal{P}_{a,b}} L(\Gamma), \tag{4}$$

where $\mathcal{P}_{a,b}$ denotes the set of all discrete paths between two points $a$ and $b$. We define a distance vector $g_i = [g_b(i), g_f(i)]^T$ for pixel $i$, where $g_b(i)$ and $g_f(i)$ are the distances from pixel $i$ to the nearest background and foreground seeds respectively.
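To make Eqs. (3)-(4) concrete, here is a small sketch (names ours) that computes a geodesic distance map from a seed mask with Dijkstra's algorithm on a 4-connected grid; the per-step cost follows Eq. (3) with unit Euclidean steps.

```python
import heapq
import numpy as np

def geodesic_map(gradient_mag, seeds, gamma=0.5):
    """Geodesic distance (Eqs. (3)-(4)) from every pixel to the nearest
    seed. `gradient_mag` is the per-pixel image gradient magnitude,
    `seeds` a boolean mask of seed pixels, `gamma` the blend weight."""
    h, w = gradient_mag.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, p) for p in zip(*np.nonzero(seeds))]
    for _, p in heap:
        dist[p] = 0.0
    heapq.heapify(heap)
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # unit Euclidean step blended with the gradient term, Eq. (3)
                step = np.sqrt((1 - gamma) + gamma * gradient_mag[ny, nx] ** 2)
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist
```

Running this once with the foreground seeds and once with the background seeds yields the components $g_f(i)$ and $g_b(i)$ of the distance vector $g_i$.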

2.2.2. Definition of location probability map

The location probability map (also called the geodesic map) models the discriminative observation at a single image site given the geodesic distance values. The geodesic map is defined as

$$P_{loc}(F_i = l \mid g_i) = \frac{\exp(D_l\, g_i)}{\sum_{l' \in \{FG, BG\}} \exp(D_{l'}\, g_i)}, \tag{5}$$

where the parameter vector $D_l = [D_{l1}, D_{l2}]$, $l \in \{FG, BG\}$, is the weight vector associated with a class. We simply set $D_{FG} = [1, 0]$ and $D_{BG} = [0, 1]$ in our experiments.
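With these weight vectors, $D_{FG}\,g_i = g_b(i)$ and $D_{BG}\,g_i = g_f(i)$, so Eq. (5) reduces to a logistic function of the distance difference; a minimal sketch (name ours):

```python
import numpy as np

def location_probability(g_b, g_f):
    """P_loc(F_i = FG | g_i) of Eq. (5) with D_FG = [1, 0], D_BG = [0, 1],
    written as a sigmoid of g_b - g_f for numerical stability."""
    return 1.0 / (1.0 + np.exp(-(g_b - g_f)))
```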


3. Formulation of interactive segmentation using the CRF model

3.1. CRF model for segmentation

Image segmentation can be modeled with a random field. Consider a random field $F = \{F_1, F_2, \ldots, F_N\}$, a set of random variables defined on the image. The domain of each variable is a set of labels $L = \{\ell_1, \ell_2, \ldots, \ell_k\}$. Let $I = \{I_1, \ldots, I_N\}$ be the observed data corresponding to the image information, where $I_i$ is the feature vector at pixel $i$, $F_i$ is the label assigned to pixel $i$, and $N$ is the number of pixels in the image. A CRF model is described by a Gibbs distribution:

$$P(F \mid I, w) = \frac{1}{Z(F, w)}\, e^{-\sum_{c \in C_G} E(F_c \mid I_c, w)}, \tag{6}$$

where $G = (\nu, \varepsilon)$ is a graph on $F$ and $c$ is a clique belonging to a set of neighboring cliques $C_G$. In our model, two features are used and we assume that the weight for each pixel can be set adaptively: $w$ is an $N \times 2$ matrix with $w = \{w_1, w_2\}$, where $w_i = [w_{i1}, w_{i2}, \ldots, w_{iN}]^T$ is an N-dimensional vector related to one feature. $Z$ is the normalizing coefficient. We formulate the segmentation problem as a binary labeling task, and the energy function $E(F \mid I, w)$ takes the form

$$E(F \mid I, w) = w_1^T E_1^{Col}(F \mid I) + w_2^T E_1^{Loc}(F \mid I) + \tau E^{Pair}(F \mid I). \tag{7}$$

$E_1^{Col}(F \mid I)$ is the energy related to color information and $E_1^{Loc}(F \mid I)$ represents the single-site clique potentials related to location information. $E^{Pair}(F \mid I)$ is the pair-site clique potential and the parameter $\tau$ is a control weight for the pairwise constraint.

3.2. Unary term

The energy function related to geodesic distance can be written as an N-dimensional vector summarizing the labeling cost at each pixel:

$$E_1^{Loc}(F \mid I) = [V_1^{Loc}(F_1), \ldots, V_N^{Loc}(F_N)]^T. \tag{8}$$

$V_p^{Loc}(F_p = l)$ measures the cost of assigning the label $l$ to pixel $p$ given the geodesic distance information:

$$V_p^{Loc}(F_p = l) = -a_p \log\!\big(P_{loc}(F_p = l \mid g_p)\big), \tag{9}$$

where $a_p$ is a geodesic weight that weighs the geodesic term based on its confidence. As described in [11], the geodesic weight at pixel $i$ is

$$a_i = \frac{|g_b(i) - g_f(i)|^r}{|g_b(i) + g_f(i)|^r}. \tag{10}$$

The equation indicates that when the geodesic information at pixel $i$ is discriminative, the geodesic information term is maintained; otherwise its importance decreases. In the experiments, we set $r = 2$. Similarly, the energy function related to the color GMM is defined as an N-dimensional vector:

$$E_1^{Col}(F \mid I) = [V_1^{Col}(F_1), \ldots, V_N^{Col}(F_N)]^T, \quad V_q^{Col}(F_q = l) = -\log\!\big(P_{gmm}(F_q = l \mid I_q)\big). \tag{11}$$

3.3. Pairwise term

The second-order clique potential of the smoothness regularization term is formed as

$$E^{Pair}(F \mid I) = \sum_{p \in \nu} \sum_{q \in N_p} V^P(F_p, F_q \mid I), \qquad V^P(F_p, F_q \mid I) = \frac{\exp\!\left(-\dfrac{(I_p - I_q)^T \Phi (I_p - I_q)}{2}\right)}{\lambda_{pq}\, |d_{geo}(p, q)| + (1 - \lambda_{pq})\, |d_{euc}(p, q)|}, \tag{12}$$

where $N_p$ is the set of neighbor pixels of $p$ and $\Phi$ is the covariance matrix representing the amount of variability allowed between the feature vectors (RGB color) of two neighboring pixels within an object. $d_{geo}(p, q)$ and $d_{euc}(p, q)$ denote the geodesic and Euclidean distances between points $p$ and $q$ respectively. $\lambda_{pq}$ is the weighting term for the geodesic distance measurement, with $\lambda_{pq} = \sqrt{a_p a_q}$. It can easily be shown that all integrated constraints are sub-modular in nature; therefore the energy function can be optimized using max-flow/min-cut [13].
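Since the energy is sub-modular, it can be minimized exactly with a single graph cut. Below is a hedged sketch using the PyMaxflow library (an assumed dependency; all names other than PyMaxflow's own API are ours). The per-pixel weights $w$ would be folded into `cost_fg`/`cost_bg` before the call.

```python
import numpy as np
import maxflow  # PyMaxflow; assumed available via `pip install PyMaxflow`

def minimize_energy(cost_fg, cost_bg, pair_right, pair_down, tau=1.5):
    """Minimize a binary energy of the form of Eq. (7) with graph cut.
    cost_fg / cost_bg are (H, W) combined unary costs; pair_right and
    pair_down hold the Eq. (12) potentials toward the right and lower
    neighbor on a 4-connected grid."""
    g = maxflow.Graph[float]()
    ids = g.add_grid_nodes(cost_fg.shape)
    g.add_grid_tedges(ids, cost_fg, cost_bg)  # t-links: the two unary costs
    right = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
    down = np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]])
    g.add_grid_edges(ids, tau * pair_right, structure=right, symmetric=True)
    g.add_grid_edges(ids, tau * pair_down, structure=down, symmetric=True)
    g.maxflow()
    return g.get_grid_segments(ids)  # boolean map: one side of the min cut
```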

4. Learning CRF parameters on a single-image basis

The efficiency of our segmentation method depends mainly on the constructed CRF model. Generally, model parameters are learned by maximizing the conditional likelihood of the class labels given training data. Methods such as gradient descent can achieve this goal, but computing the gradient of the likelihood with respect to each parameter requires evaluating marginals over the class labels for every training image. In this paper, we propose a method to update the weights on a single-image basis. First, to collect the training samples, the image space is divided into four non-overlapping regions based on the classification results of the two features. Then, a linear SVM is trained to obtain the weights between the two features, and the relative importance of the features is transformed into the parameter $w$ of Eq. (7).


4.1. Sample collection based on image subspaces

4.1.1. Image subspaces

The image region is divided into four non-overlapping parts via a simple classifier. We define $Fea = \{Fea^{(n)}\}_{n \in [1, \ldots, N]}$ as the feature set of the whole image, where $Fea^{(p)} = [P^{(p)}_{gmm}, P^{(p)}_{loc}]^T$ is the instance corresponding to the color model and the geodesic map at pixel $p$. The basic classifier is a function from the image space to the figure-ground classification space, defined as

$$H(p(F_i) \mid I_i) = \begin{cases} 1, & p(F_i = FG \mid I_i) > p(F_i = BG \mid I_i) \\ 0, & p(F_i = FG \mid I_i) \le p(F_i = BG \mid I_i), \end{cases} \tag{13}$$

where $p(F_i = FG \mid I_i)$ is the probability value associated with label $F_i$ at pixel $i$. According to the basic classifier, two pixel sets $A = \{i \mid P_{gmm}(F_i = FG \mid I_i) > P_{gmm}(F_i = BG \mid I_i)\}$ and $B = \{i \mid P_{loc}(F_i = FG \mid I_i) > P_{loc}(F_i = BG \mid I_i)\}$ can be defined, representing the pixels classified as foreground by the color model and by the geodesic map respectively. $A$ and $B$ are also treated as two segments for composition. The whole image space is then divided into four subspaces, as described in Fig. 2: C1 is the set of pixels identified as foreground in both A and B; C2 is identified as foreground in A and background in B; C3 is foreground in B and background in A; and C4 is background in both A and B. The four pixel sets can be expressed as

$$C1 = \{i \mid i \in A \cap B\}, \quad C2 = \{i \mid i \in A \cap \bar{B}\}, \quad C3 = \{i \mid i \in B \cap \bar{A}\}, \quad C4 = \{i \mid i \in \bar{A} \cap \bar{B}\}. \tag{14}$$

Fig. 2. Image subspaces. The thick line and thin line represent the generation of foreground (FG) and background (BG) hypotheses respectively.

4.1.2. Estimation of initial target region

The initial target region is obtained according to the average probability

$$P_{ave}(F_i \mid I_i) = \frac{P_{gmm}(F_i \mid I_i) + P_{loc}(F_i \mid I_i)}{2}. \tag{15}$$

A segmentation map INITS is then generated by

$$INITS(I_i) = \begin{cases} 1, & P_{ave}(F_i = FG \mid I_i) > P_{ave}(F_i = BG \mid I_i) \\ 0, & P_{ave}(F_i = FG \mid I_i) \le P_{ave}(F_i = BG \mid I_i). \end{cases} \tag{16}$$

The object region is estimated as the pixel set $INITR = \{i \mid INITS(I_i) = 1\}$ and the background region is $INITB = \{i \mid INITS(I_i) = 0\}$. In the sets C1 and C4, the two features exhibit similar cues. For the training task, the positive samples $S_p$ are randomly drawn from the pixels in $SA = C1 \cup INITR$ and the negative samples $S_n$ from the pixels in $SB = C4 \cap INITB$.

4.2. Learning localized CRF parameters

4.2.1. Review of SVM

Given a training set of instance-label pairs $(Fea^{(i)}, y_i)$, $i = 1, \ldots, M$, where $y \in \{1, -1\}$ and $M$ is the number of samples, an SVM maximizes the margin between classes by finding the optimal $a_i$ in the following optimization problem:

$$\max \sum_{i=1}^{M} a_i - \frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{M} a_i a_j y_i y_j\, k(Fea^{(i)}, Fea^{(j)}) \quad \text{subject to } a_i \ge 0 \text{ and } \sum_{i=1}^{M} a_i y_i = 0, \tag{17}$$

where $k(Fea^{(i)}, Fea^{(j)})$ is the kernel function, which maps a feature vector to a higher-dimensional space. A new data point $Fea^{(i)}$ is classified using the learned parameters $a_i$ and bias $b$ by evaluating the sign of the decision function $f_{svm}$:

$$f_{svm}(Fea^{(i)}) = \sum_{n=1}^{M} a_n y_n\, k(Fea^{(i)}, Fea^{(n)}) + b. \tag{18}$$

For a linear SVM, the decision function can be described as

$$f_{svm}(Fea^{(i)}) = W_{svm}\, Fea^{(i)} + b, \tag{19}$$

where $W_{svm}$ is the linear weight learned by the SVM.

4.2.2. Learning local weights using linear SVM

We use the liblinear support vector machine to train a model on the positive and negative training samples collected from a single image. We use models with linear kernels because we found experimentally that they perform well for our specific task; linear models are also faster to compute and the resulting feature weights are easier to interpret. We set the misclassification cost $c$ to 1 to obtain the parameters $W_{svm}$ and $b$ in Eq. (19). For training, we adopt the learning framework proposed in [30] on the samples mentioned above. Our experiments show that the resulting weights differ very little across kernels. Moreover, the ratio of positive to negative samples is 1:1: $K$ positive samples $S_p$ and $K$ negative samples $S_n$ are randomly collected from the sets SA and SB (introduced in Section 4.1) respectively, and we set $K = \min(|SA|, |SB|)/2$ in the experiments. In the testing procedure, the same normalization parameters should be used to normalize the testing features first. The methods for incorporating the decision values $f_{svm}$ (Eq. (19)) into the CRF model are introduced in the next section.
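Below is a minimal sketch of this training step using scikit-learn's LinearSVC (a liblinear wrapper). The function names are ours, and the L1 normalization of the weight vector is an assumption inferred from the {1, 0.5} → {0.67, 0.33} example given later in Section 5.1.2.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_fusion_weights(p_gmm, p_loc, SA, SB, seed=0):
    """Train a linear SVM on the 2-D per-pixel features [P_gmm, P_loc]
    (Section 4.2.2) and return normalized fusion weights plus the
    per-pixel margins f_svm of Eq. (19)."""
    rng = np.random.default_rng(seed)
    K = min(len(SA), len(SB)) // 2                 # balanced 1:1 sampling
    pos = rng.choice(SA, size=K, replace=False)    # from SA = C1 ∪ INITR
    neg = rng.choice(SB, size=K, replace=False)    # from SB = C4 ∩ INITB
    feats = np.stack([np.ravel(p_gmm), np.ravel(p_loc)], axis=1)
    X = feats[np.concatenate([pos, neg])]
    y = np.concatenate([np.ones(K), -np.ones(K)])
    svm = LinearSVC(C=1.0).fit(X, y)               # misclassification cost c = 1
    W = svm.coef_.ravel()
    w = W / np.abs(W).sum()                        # e.g. {1, 0.5} -> {0.67, 0.33}
    return w, svm.decision_function(feats)
```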

5. Iterative segmentation framework

5.1. Model updating using active learning

Active learning [31,32] lets the learning algorithm proactively select its training data. An active learner may pose queries, in the form of unlabeled data instances, to be labeled by an oracle (e.g., the scribbles input by users). In the proposed method, an SVM model is trained in each iteration on the newly collected training samples, and the confidence of the classification can be computed from the decision function (Eq. (19)). On the one hand, we use this confidence to select the pixels whose inter-feature weights are updated; on the other hand, users are guided to input scribbles that refine the initial classifier over multiple subsequent iterations. In SVM classification, the data points close to the decision boundary play a much more important role in improving classification accuracy than those farther away. Rather than passively receiving training data, the active learner keeps recomputing the boundary by sampling more points around it once the initial boundary is defined. In our method, the active learning scheme is based on the classification results of the linear SVM: before the binary classification, the margin $f_{svm}$ (Eq. (19)) is computed and serves as the confidence of each point. The points with relatively high confidence are picked out, and the parameter $w$ is updated for them.

5.1.1. Updating parameters of features

At the beginning of the first iteration, the initial scribbles are used to train the initial feature parameters $P_{gmm}$ (Eq. (2)) and $P_{loc}$ (Eq. (5)). Four subspaces, C1(1), C2(1), C3(1) and C4(1), are then generated, and the FG samples $SF_{gmm}(1) = INITR(1)$ and BG samples $BG_{gmm}(1) = INITB(1) \cap C4(1)$ are collected for training the GMM parameters (Eq. (1)). During the segmentation process, once a segmentation result $S(t)$ is generated at the end of the $t$-th iteration, it is used to update the feature parameters in the $(t+1)$-th iteration: the FG samples are $SF_{gmm}(t+1) = C1(t) \cup S(t)$ and the BG samples are $BG_{gmm}(t+1) = INITB(t) \cap C4(t)$. Note that $P_{loc}$ (Eq. (5)) is not updated during the iterations.
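A sketch of this iteration-t refit (names ours), using the training sets just described:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def refit_color_models(image, C1, C4, seg, INITB, n_components=5):
    """Refit the two color GMMs at iteration t (Section 5.1.1).
    C1, C4, INITB are flat pixel-index arrays; seg is the (H, W)
    boolean segmentation result S(t)."""
    fg_idx = np.union1d(C1, np.flatnonzero(seg.ravel()))   # C1(t) ∪ S(t)
    bg_idx = np.intersect1d(INITB, C4)                     # INITB(t) ∩ C4(t)
    pix = image.reshape(-1, 3).astype(float)
    return (GaussianMixture(n_components).fit(pix[fg_idx]),
            GaussianMixture(n_components).fit(pix[bg_idx]))
```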

5.1.2. Energy updating

There are two strategies for energy updating. Strategy 1 integrates the classification result $f_{svm}$ into the CRF model directly (similar to the strategies in [28,29]). Our method instead uses the updated weights to combine the features (strategy 2). First, the decision value $f_{svm}$ is projected onto $[-1, 1]$; the transformed value is denoted $f^{*}_{svm}$. The transformed conditional probability is

$$P_{svm}(F_i = FG \mid I_i) = 0.5 + 0.5\, f^{*}_{svm}(I_i), \quad P_{svm}(F_i = BG \mid I_i) = 1 - P_{svm}(F_i = FG \mid I_i). \tag{20}$$

The energy function in Eq. (7) can then be rewritten as $E(F \mid I, w) = E^{SVM}(F \mid I) + \tau E^{Pair}(F \mid I)$, where

$$E_1^{SVM}(F \mid I) = [V_1^{SVM}(F_1), \ldots, V_N^{SVM}(F_N)]^T, \quad V_q^{SVM}(F_q = l) = -\log\!\big(P_{svm}(F_q = l \mid I_q)\big), \quad q \in [1, \ldots, N]. \tag{21}$$

The second strategy calculates the relative importance between the features and incorporates it into the CRF energy function, together with the updated features. In strategy 2, energy updating consists of two aspects. On one hand, the unary terms $E_1^{Loc}$ in Eq. (8) and $E_1^{Col}$ in Eq. (11) are updated according to the updated features. On the other hand, $w = \{w_1, w_2\}$ is computed according to the trained SVM model. For example, if the weights $W_{svm}$ of Eq. (19) learned by the SVM are {1, 0.5}, they are normalized to the CRF weights $w = W_{svm} / \|W_{svm}\|$ with value w = {0.67, 0.33}. The efficiency of both strategies is compared and evaluated in our experiments (see Fig. 10), and the second strategy is applied in our method.

At initialization, every pixel takes the same weights w = {0.5, 0.5}; the weights of some pixels are then updated in the subsequent iterations. Given the initial weights, the CRF weights are updated for the pixels with higher classification confidence, and this pixel set is denoted US. First, the candidate pixel set $S_{can}$ comprises 50% of the pixels of subspaces C1 and C4, selected at random, plus all the pixels of sets C2 and C3. To ensure the flexibility of our algorithm, we choose from the candidate pool $S_{can}$ the pixels whose margin $f_{svm}$ lies outside the interval between the prescribed thresholds Th2 and Th1:

$$US = \{i \mid (f_{svm}(i) \ge Th1 \,\cup\, f_{svm}(i) \le Th2) \,\cap\, i \in S_{can}\}, \tag{22}$$

where we set Th1 = 0.15 and Th2 = -0.15 in the experiments. In the $t$-th iteration, using the newly learned features and the obtained weights, the weight vector $w_i(t)$ and the features' unary terms $E_1^{Col}$ and $E_1^{Loc}$ are both updated (Eq. (7)) for the pixels $i \in US$. For the pixels $i \notin US$, only the features' unary terms are updated and the weight vector is kept unchanged, $w_i(t) = w_i(t-1)$. As displayed in the example of Fig. 3, the pixels labeled red in subfigures (e) and (j) have low confidence, and the related weights are not updated for them (Fig. 4).
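A sketch of the Eq. (22) selection under these settings (the confident set is $|f_{svm}| \ge 0.15$ on the candidate pool; helper names ours):

```python
import numpy as np

def select_update_set(f_svm, C1, C2, C3, C4, th=0.15, seed=0):
    """Build US of Eq. (22): half of C1/C4 selected at random plus all
    of C2/C3, keeping only confidently classified pixels (|f_svm| >= th)."""
    rng = np.random.default_rng(seed)
    half = lambda s: rng.choice(s, size=len(s) // 2, replace=False)
    s_can = np.concatenate([half(C1), half(C4), C2, C3])
    return s_can[np.abs(f_svm[s_can]) >= th]
```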

Fig. 3. The proposed iterative segmentation framework without user guidance. (a) The initial scribbles. (b) The initial probability map using 0.5·Pgmm + 0.5·Ploc (brighter pixels indicate higher probability). (c) The set SA (marked in red) and the set SB (marked in blue) used to select training samples for the linear SVM in the first iteration. (d) The probability map Psvm learned from the SVM. (e) The pixels with low confidence (marked in red, |fsvm| < 0.15). (f) The updated probability map with optimal weights, w1·Pgmm + w2·Ploc. (g) The segmentation result of the first iteration. (h) The sample collection sets for the second iteration. (i) The probability map Psvm learned from the SVM in the second iteration. (j) The pixels with low classification confidence. (k) The updated probability map. (l) The second segmentation result. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)


Fig. 4. The subspaces of the examples in Fig. 3. (a) is set A (corresponding to the pixels marked white). (b) is set B. (c), (d), (e) and (f) are sets C1, C2, C3 and C4 respectively. (g) is the set of collected training samples. (h), (i), (j), (k), (l) and (m) are sets A, B, C1, C2, C3 and C4 respectively.

5.1.3. User guidance using active learning

Once an initial classification result has been generated, we can also run multiple subsequent iterations with additional user input to refine the initial classifier. The user can draw relatively long scribbles that assign labels to multiple query pixels at once, instead of labeling only a few pixels. In addition, the user can provide labels that correct incorrectly classified pixels, which leads to a more accurate classification result in the next iteration. The process for selecting the query pixels is described below.

Input: X (unlabeled pixel set), k (the number of randomly generated pixels in each round) and Q (the number of query pixels).
begin:
  Initialize S_p and S_n (introduced in Section 4.1) for SVM training.
  Obtain an SVM classifier f_svm on S_p and S_n.
  The query set S_g = [].
  for i = 1 to Q do
    Randomly generate k pixels to form the set O, O ⊂ X \ {S_p, S_n};
    Pick the pixel x* with minimal margin: x* = argmin_{x ∈ O} |f_svm(x)|;
    S_g(i+1) = union(S_g(i), x*);
  end
Output: the set of query pixels S_g.
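A direct transcription of this query-selection procedure in Python (a sketch; names ours):

```python
import numpy as np

def select_queries(f_svm, unlabeled, k, Q, seed=0):
    """Section 5.1.3: in each of Q rounds, draw k random candidate pixels
    and keep the one closest to the SVM decision boundary, i.e. with the
    minimal margin |f_svm|."""
    rng = np.random.default_rng(seed)
    pool = np.asarray(unlabeled)
    queries = []
    for _ in range(Q):
        cand = rng.choice(pool, size=k, replace=False)
        queries.append(cand[np.argmin(np.abs(f_svm[cand]))])
    return np.asarray(queries)
```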

5.2. Summary of iterative segmentation framework

The complete iterative segmentation framework is shown in Fig. 1. The initial scribbles are used to generate the initial feature parameters and the initial learned weights. Once a segmentation result S is obtained, it is used to update the feature parameters in the next iteration. At the beginning of the $t$-th iteration ($t > 1$), the features' unary terms $E_1^{Col}(t)$ and $E_1^{Loc}(t)$ are updated and new subspaces are generated. We then collect samples $S_p$ and $S_n$ to train the optimal weights w(t) using a linear SVM. The optimal weights w(t) are used to update the energy function $E(F \mid I, w)$ for the pixels in US with high classification confidence; for the other pixels, the weights w(t-1) and the new features are used to update the energy function. If additional user inputs are given, they are applied to train the feature parameters and the CRF weights. The segmentation is then generated by performing inference on the CRF model iteratively. We use the model's energy to control the iteration process, or the user can stop the iteration once a satisfactory result is obtained. Fig. 3 displays the process of refining the segmentation results over two iterations; in the second iteration, a more discriminative probability map (Fig. 3(k)) is generated compared with Fig. 3(f) from the first iteration.
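To show how the pieces fit together, here is a simplified driver for the loop of Fig. 1, built from the earlier sketches. For brevity it thresholds the fused probability w1·Pgmm + w2·Ploc instead of minimizing the full CRF energy of Eq. (7) with graph cut, and it applies a single global weight pair rather than per-pixel weights; all helper names are ours, not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def iterate_segmentation(image, grad, fg_seeds, bg_seeds, n_iter=2):
    """Simplified iteration loop (Fig. 1) wiring the sketches above."""
    fg_gmm, bg_gmm = fit_color_models(image[fg_seeds], image[bg_seeds])
    p_loc = location_probability(geodesic_map(grad, bg_seeds),
                                 geodesic_map(grad, fg_seeds))
    w = np.array([0.5, 0.5])                        # initial equal weights
    for _ in range(n_iter):
        p_gmm = gmm_posterior(image, fg_gmm, bg_gmm)
        seg = w[0] * p_gmm + w[1] * p_loc > 0.5     # stand-in for Eq. (7)
        # pixels where both cues agree play the role of C1 / C4
        agree_fg = np.flatnonzero(((p_gmm > 0.5) & (p_loc > 0.5)).ravel())
        agree_bg = np.flatnonzero(((p_gmm <= 0.5) & (p_loc <= 0.5)).ravel())
        w, _ = learn_fusion_weights(p_gmm, p_loc, agree_fg, agree_bg)
        pix = image.reshape(-1, 3).astype(float)    # refit the color GMMs
        fg_gmm = GaussianMixture(5).fit(pix[seg.ravel()])
        bg_gmm = GaussianMixture(5).fit(pix[~seg.ravel()])
    return seg
```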

6. Experimental results and discussion

In the experiments, we set τ = 1.5 in Eq. (7); as noted in Section 3.3, all integrated constraints are sub-modular, so the energy function is optimized using max-flow/min-cut [13]. Our algorithm (abbreviated SCRF) is compared with state-of-the-art segmentation algorithms: post-processing graph cut (GC) using the color GMM as the only feature [33], the shortest-path method (SP) of Bai and Sapiro [4], geodesic graph cut (Ggc) [11] and random walker (RW) [8].

6.1. Experiment: evaluation of the learning strategy on a single-image basis

We first evaluate the efficiency of the strategies for learning localized CRF weights. Here the segmentation results are not updated iteratively, nor is any additional user input required.

6.1.1. Subjective comparison

For qualitative comparison, the seeds are placed randomly. For quantitative comparison, the seeds provided by the databases are used so that we can compare with more existing methods. The first to third examples shown in Fig. 5 demonstrate the case in which partial regions of the object are similar to the background.


Fig. 5. Interactive segmentation with randomly placed scribbles. White and red strokes represent the foreground and background seeds respectively. Results using graph cut [33], random walker [8], shortest paths method [4], geodesic graph cut [11] and our method are displayed in the first column to the fifth column. The output segmentations are displayed as two-colored boundary. The red boundary is toward the foreground side and the green is toward the background side. The learned weights of SCRF for the six examples are [0.2118, 0.7882], [0.1869, 0.8131], [0.2507, 0.7493], [0.541, 0.459], [0.4280, 0.5720] and [0.1462, 0.8538]. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Graph cut fails to classify pixels (such as those in shadow regions) correctly because of their confusing color model. Geodesic segmentation can label the pixels in shadow correctly, but it suffers from over-segmentation (see the elephant image) or short-cutting (see the banana and ceramic images). The results of geodesic graph cut are not satisfactory either, due to the inherent limitation of its fusing strategy, while our method inherits the strengths of both kinds of features and achieves better segmentation results. The fourth example shows a case in which geodesic segmentation provides partially useful and partially misleading cues; we want to preserve the complementary information and eliminate the useless information so as to provide more discriminative cues for segmentation. Our method relies less on an averaged combination of the color model and the location probability, and instead applies the learned localized weights. A shortcoming of SCRF is that it may misclassify small, isolated regions, such as small regions along the boundary of the train. The fifth and sixth examples in Fig. 5 show the segmentation results of images selected from the VOC

database [34], on which SCRF achieves better segmentation results. In a complicated scene, even with only a single object, graph cut tends to produce disconnected or irregular segments, and similar background is clustered into foreground regions (in the experiments on the cat and car images). The location information, on the contrary, can serve as a location constraint that improves graph cut's performance. Geodesic segmentation's result is connected, but it fails to localize the boundary correctly. In this case, by adaptively combining the color information with the constraint on the object's location or shape, the objects extracted by SCRF have a more regular shape. Fig. 6 shows the results on images with complex background or foreground; the four examples listed are typically hard images selected from the GrabCut database. The segmentation cues of graph cut and geodesic segmentation are useful and complementary. SCRF distinguishes and combines such useful details better than geodesic graph cut. Moreover, some missing cues that are ignored by both methods (such as the


Fig. 6. Examples with complex foreground or background. The "Lasso" form of the trimap, used for initialization, is listed in the first column. The second to the sixth columns show the results using post-processing graph cut [33], random walker [8], the shortest paths method [4], geodesic graph cut [11] and our method SCRF. The learned weights of SCRF are [0.3621, 0.6379], [0.1482, 0.8518], [0.1230, 0.8770] and [0.1465, 0.8535].

closed region between the two legs in the soldier's example) can be reconstructed correctly, primarily due to the feature-updating and dynamic optimal-weight learning strategies.

6.1.2. Quantitative evaluation

70 images selected from the MSRC ground-truth dataset (50 images) and the PASCAL VOC'09 segmentation challenge (20 images) [34] are used for testing and quantitatively evaluating the proposed method. In particular, twenty VOC images with complex scenes are used to demonstrate the effectiveness of SCRF. The quantitative evaluation is defined by comparison with the labeled ground truth in the database. Six criteria are used: the true positive fraction (TPF), the false positive fraction (FPF), the true negative fraction (TNF), the false negative fraction (FNF), the overlap score (OV) and the error rate (ER):

$$TPF = \frac{|A_S \cap A_G|}{|A_G|}, \quad FPF = \frac{|A_S \cap \bar{A}_G|}{|\bar{A}_G|}, \quad TNF = \frac{|\bar{A}_S \cap \bar{A}_G|}{|\bar{A}_G|}, \quad FNF = \frac{|A_G \setminus A_S|}{|A_G|},$$
$$OV = \frac{|A_S \cap A_G|}{|A_S \cup A_G|}, \quad ER = \frac{\text{no. of misclassified pixels}}{\text{no. of pixels in the unclassified region}}, \tag{23}$$

where $A_S$ represents the foreground area segmented by the tested method and $\bar{A}_S$ its complement, and $A_G$ is the foreground area in the ground truth and $\bar{A}_G$ its complement.

Table 1
The TPF, FNF, TNF, FPF and overlap score (OV) of the different methods. The rates reported here are all from experiments implemented by us; randomly placed seeds are used for initialization.

Methods | TPF (%) | FNF (%) | TNF (%) | FPF (%) | OV (%)
GC      | 94.58   | 5.42    | 96.70   | 3.30    | 83.71
SP      | 94.45   | 5.55    | 98.51   | 1.49    | 88.21
RW      | 91.80   | 8.20    | 98.18   | 1.82    | 85.50
Ggc     | 93.62   | 6.38    | 98.38   | 1.62    | 88.24
SCRF    | 97.04   | 2.96    | 98.58   | 1.42    | 91.16

As shown in Table 1, graph cut has a high true positive rate, but it also exhibits the highest false positive rate, mainly because it easily suffers from over-segmentation. For the overlap rate, geodesic graph cut performs better than either graph cut or geodesic segmentation.

In the evaluation, SCRF achieves an average TPF of 97.04% and a TNF of 98.58%. SCRF generates the highest overlap score, 91.16%, primarily due to the carefully designed fusion strategy; however, it struggles when both features fail to provide discriminative segmentation cues. The highest FPF of graph cut indicates that it suffers most easily from over-segmentation in the experiments, while methods such as random walker, geodesic graph cut and geodesic segmentation tend to short-cut, as shown in the examples listed in Figs. 5 and 6.

To compare with state-of-the-art interactive segmentation methods, the error rate on the GrabCut database is reported as well, using the seeds provided by the database. Over all 50 images, the average error rate of SCRF is 4.11%, better than geodesic graph cut (4.8%), graph cut (6.48%) and geodesic segmentation (5.13%). SCRF's error rate is also lower than the results of the methods reported in [5,35,36]. The relationship between the evaluation criteria can be summarized as

$$OV = \frac{TPF}{1 + \left(\frac{1}{\mu} - 1\right) FPF}, \qquad ER = \frac{(1 - \mu)\, FPF + \mu\, FNF}{1 - \nu}, \tag{24}$$


where μ is the fraction of FG pixels in an image and ν is the fraction of pixels labeled as seeds. A method that achieves a balance between TPF and TNF will generate a higher overlap score. Graph cut has the highest FPF among the five algorithms; consequently it has the lowest overlap score although it has the second-highest TPF. SCRF has the highest TPF and TNF, the highest overlap score and the lowest error rate (Table 2).
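For reference, the criteria of Eq. (23) are straightforward to compute from boolean masks; a minimal sketch (names ours):

```python
import numpy as np

def evaluate(seg, gt, seeds):
    """Eq. (23) criteria from boolean masks: segmentation result,
    ground-truth foreground, and user-labeled seed pixels."""
    tpf = (seg & gt).sum() / gt.sum()
    fnf = (~seg & gt).sum() / gt.sum()
    tnf = (~seg & ~gt).sum() / (~gt).sum()
    fpf = (seg & ~gt).sum() / (~gt).sum()
    ov = (seg & gt).sum() / (seg | gt).sum()
    er = (seg != gt).sum() / (~seeds).sum()   # misclassified / unclassified
    return dict(TPF=tpf, FNF=fnf, TNF=tnf, FPF=fpf, OV=ov, ER=er)
```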

6.1.3. More visual results

Fig. 7 shows the segmentation results of the different algorithms on four images selected from the Berkeley Segmentation Dataset [37], which contains 300 images with more complex backgrounds or weak boundaries. On one hand, color information plays the more important role for images with well-separated colors (the image in the fourth row) or when the location constraint is less effective (the images in the first and third rows). On the other hand, location information serves as an important segmentation cue for images with weak boundaries (the image in the second row), and it is complementary to the color information. Even when the strokes are drawn sparsely, our approach works well by learning and fusing the extracted features effectively, and the good visual results indicate that the performance of our method does not rely on a carefully designed trimap. To compare the segmentation results quantitatively, the overlap scores (OV, introduced in Eq. (23)) are computed as well.

Table 2
The segmentation is initialized using the 'Lasso' form of the trimap provided by the database. Error rates reported here are either computed by us ([33,4] and our method) or were previously reported in [5,35].

Segmentation model | Error rate (%)
GMMRF [5] | 7.9
Post-processing (PP) graph cut [33] | 6.48
Random walker (RW) [8] | 5.4
Segmentation by transduction [35] | 5.4
Geodesic segmentation [4] | 5.13
Geodesic graph cut (Ggc) [11] | 4.8
Result of [36] | 4.34
SCRF (proposed method) | 4.11


The OV rates for the four selected images are reported in Table 3. SCRF, configured with the optimal weighting strategy, clearly outperforms the other methods by fusing the features effectively.

Table 3
The overlap scores of four examples corresponding to different methods.

Image | GC | SP | RW | SCRF
Image in the first row | 0.7168 | 0.1941 | 0.6778 | 0.8405
Image in the second row | 0.4971 | 0.4098 | 0.5633 | 0.7346
Image in the third row | 0.2335 | 0.2011 | 0.6277 | 0.9164
Image in the fourth row | 0.3006 | 0.3515 | 0.2919 | 0.8612

To demonstrate the effectiveness of integrating location information into the segmentation model, Fig. 8 displays examples of extracting a single object from multiple similar objects. We select two images with complex scenes from the SIFT-flow database [38]. The location information plays an important role in separating the surrounding buildings from the targeted building, and the learned weights confirm the contribution of location information to improving the segmentation performance.

6.2. The iterative segmentation framework

The iterative segmentation framework is assessed and typical results are shown in Fig. 9. Note that all of these segmentation results are obtained within two iterations and no additional user input is required. A study is conducted to compare the two strategies for incorporating classification results into the CRF. As shown in the example of Fig. 10, when the classification result of the SVM is misleading (Fig. 10(c)), the segmentation results using strategy 1 may be unsatisfactory. By combining the features using the learned weights, the results in Fig. 10(b) demonstrate that the segmentation is refined gradually. Moreover, the segmentation accuracy can be further improved with additional scribbles guided by the confidence map displayed in the fourth row (such as the pixels in the feet region).

Fig. 7. Segmentation results on images with complex background or weak boundary. The objects extracted by GC, SP, RW and SCRF are listed from the second column to the last column. The learned weights for the four listed examples are [0.5819, 0.4181], [0.4832, 0.5168], [0.7052, 0.2948] and [0.5397, 0.4603].


Fig. 8. Illustration of the role of location information in the tasks of extracting single object from images with complex scenes. The results of GC, SP, RW, and SCRF are listed from the second column to the last column. The learned weights for the two examples listed are [0.4051, 0.5949 ] and [0.4132, 0.5868].

Fig. 9. The segmentation results generated using iterative segmentation strategy without any further user input.

Fig. 10. The iterative results without additional user input. The images in subfigure (a) are the initial scribbles and the iterative results using the SVM probability directly (strategy 1, introduced in Section 5.1.2). Subfigure (b) shows the iterative results using strategy 2. Subfigure (c) displays the classification results using the SVM. Subfigure (d) displays the selected query pixels (marked in green) in each iteration. The weights for strategy 1 are [0.5246, 0.4754], [0.1883, 0.8117], [0.6596, 0.3404], [0.4954, 0.5046]. The optimal weights learned for strategy 2 are [0.5246, 0.4754], [0.2374, 0.7626], [0.4598, 0.5402], [0.6107, 0.3893]. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 11 shows an example in which our method is relatively robust to the seed position. In the iterative process, the collected samples gradually come to contain most of the pixels that reflect the statistics of the object, so the learned weights reflect the relationship between the features more correctly. During the iterative process of strategy 2, the weights for color increase from the second to the fourth iteration; this is primarily because the descriptive ability of the updated color model improves, and better segmentation results are achieved.

6.3. Segmentation refinement with additional user guidance

The examples of Figs. 12 and 13 illustrate the efficiency of additional user inputs. In the experiments, the additional user

input scribbles cover the query pixels, but the user need not cover all of them. It can be observed that the probability map Psvm learned from the SVM generates more discriminative segmentation cues when the additional labels are taken as training samples in the second iteration. In the fish example (Fig. 12), the Psvm learned with additional scribbles (in the black rectangle) highlights the object pixels more uniformly. In the person-segmentation example (second row in Fig. 12), the black hair is misclassified in the first iteration, and most of the query pixels are selected from the hair region; the segmentation is refined with additional user input across the pixels in the hair region in the second iteration. The examples in Fig. 13 show how additional user inputs affect the learned weights.


Fig. 11. Segmentation results under different user interactions. Results of PP [33], shortest paths method [4] and SCRF are listed.

Fig. 12. (a) The initial scribbles. (b) The classification probability Psvm in the first iteration. (c) The segmentation result applying graph cut. (d) Pixels marked in green show the automatically selected query pixels using active learning. (e) The additional user input (in the black rectangle). (f) The classification probability Psvm learned from the SVM in the second iteration. (g) The segmentation result. (h) The automatically selected query pixels in the second round.

Fig. 13. In this experiment, the process of segmentation refinement with additional user input is illustrated in subfigures (a)–(d). In each subfigure, the label in the black rectangle is the scribble added and the first column displays the initial scribbles. The segmentation results by applying the basic classifier (Eq. (13)) of color and location information are listed in the second and third columns. In the fourth column, pixels marked in green are the automatically selected query pixels using active learning. The fifth column is the segmentation results. In the four iterations, the learned weights are [0.1465, 0.8535], [0.4035, 0.5965], [0.4061, 0.5939] and [0.4052, 0.5948]. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

When more user inputs are added, the features' discriminative ability tends to become more powerful (as reflected in the classification results on the single features) and the weights between the two features tend to become balanced.

6.4. Experiments: medical image segmentation

In many medical images, the intensity distributions of different structures overlap, which makes medical image segmentation difficult. Our algorithm can utilize location information to improve the accuracy of liver segmentation in clinical CT images. The segmentation results are listed in Fig. 14. We select five CT slices, and the foreground seeds of the objects are placed randomly in this experiment. For the background seeds, we use anatomical knowledge: the boundary pixels, the spine region (the pixels with the highest intensity) and the pixels in the gaps between organs (with intensity approaching 0) are more likely to be background. The gap pixels between organs contribute little to the construction of the color GMM, but they play an important role as location constraints. The geodesic distance is computed on a likelihood image in these experiments. The experimental results show that graph cut fails to distinguish organs (such as the kidney) with color distributions similar to the liver (see the first, second, third and fifth images); in this case, the color model alone is insufficient for separating nearby tissues or organs from the liver. Geodesic segmentation suffers from short-cutting, but it can serve as a robust location constraint, and the proposed localized weighting strategy fuses the two features effectively. It is observed that distance information provides useful cues in medical image segmentation and can be combined with color information through a robust fusion strategy to improve the segmentation accuracy.


Fig. 14. Examples of interactive liver segmentation. The original images (a), the results of GC [33] (b), SP [4] (c) and our method SCRF (d) are displayed from the first row to the fourth row.

7. Conclusion

In this paper, a novel interactive object segmentation method is proposed. Two features, a color model and a location probability, are selected to construct the CRF model. To train the CRF on a single-image basis, we collect the training samples from the image subspaces generated via the classification results of those two features. The feature parameters and the CRF model are then updated iteratively until the segmentation results become satisfactory. Furthermore, we explore ways of improving the segmentation accuracy by updating the energy adaptively or by adding user guidance through an active learning strategy. The experiments show the efficiency of the proposed system: it is an effective tool for interactive segmentation, improving both segmentation accuracy and efficiency.

Acknowledgments This research is partly supported by NSFC, China (No. 61375048, 31100672). References [1] V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, Int. J. Comput. Vis. 22 (1) (1997) 61–79. [2] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 22 (8) (2000) 888–905. [3] Xiang-Yang Wang, Qin-Yan Wang, Hong-Ying Yang, Juan Bu, Color image segmentation using automatic pixel classification with support vector machine, Neurocomputing 74 (18) (2011) 3898–3911. [4] X. Bai, G. Sapiro, Geodesic matting: a framework for fast interactive image and video segmentation and matting, Int. J. Comput. Vis. 82 (2) (2009) 113–132. [5] A. Blake, C. Rother, M. Brown, P. Pérez, P. Torr, Interactive image segmentation using an adaptive GMMRF model, in: Proceedings of the 8th European Conference on Computer Vision (ECCV), 2004, pp. 428–441. [6] Y.Y. Boykov, M.P. Jolly, Interactive graph cuts for optimal boundary & region segmentation of objects in nd images, in: Proceedings of 8th IEEE International Conference on Computer Vision, 2001 (ICCV 2001), vol. 1, IEEE, 2001, pp. 105–112. [7] D. Comaniciu, P. Meer, Mean shift: a robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell. 24 (5) (2002) 603–619. [8] L. Grady, Random walks for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 28 (11) (2006) 1768–1783. [9] V. Gulshan, C. Rother, A. Criminisi, A. Blake, A. Zisserman, Geodesic star convexity for interactive image segmentation, in: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2010, pp. 3129–3136. [10] Taiyong Li, Zhilong Xie, Jiang Wu, Jingwen Yan, Li Shen, Interactive object extraction by merging regions with k-global maximal similarity, Neurocomputing 120 (2013) 610–623. [11] B.L. Price, B. Morse, S. Cohen, Geodesic graph cut for interactive image segmentation, in: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2010, pp. 3161–3168. [12] S. Xiang, C. Pan, F. Nie, C. Zhang, Interactive image segmentation with multiple linear reconstructions in windows, IEEE Trans. Multimedia 13 (2) (2011) 342–352. [13] Y. Boykov, V. Kolmogorov, An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, IEEE Trans. Pattern Anal. Mach. Intell. 26 (9) (2004) 1124–1137.

[14] A.K. Sinop, L. Grady, A seeded image segmentation framework unifying graph cuts and random walker which yields a new algorithm, in: IEEE 11th International Conference on Computer Vision, 2007 (ICCV 2007), IEEE, 2007, pp. 1–8. [15] C. Couprie, L. Grady, L. Najman, H. Talbot, Power watersheds: a unifying graphbased optimization framework, IEEE Trans. Pattern Anal. Mach. Intell. 33 (2011) 1384–1399. [16] A. Criminisi, T. Sharp, A. Blake, Geos: Geodesic image segmentation, in: Proceedings of the 10th European Conference on Computer Vision (ECCV): Part I, IEEE, 2008, pp. 99–112. [17] L. Zhou, Y. Qiao, J. Yang, X. He, Learning geodesic CRF for image segmentation, in: 2012 Proceedings of 19th IEEE International Conference on Image Processing, IEEE, 2012, pp. 1565–1568. [18] B. Wu, R. Nevatia, Simultaneous object detection and segmentation by boosting local shape feature based classifier, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2007, pp. 1–8. [19] J. Carreira, C. Sminchisescu, Constrained parametric min-cuts for automatic object segmentation, in: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2010, pp. 3241–3248. [20] A. Ion, J. Carreira, C. Sminchisescu, Image segmentation by figure-ground composition into maximal cliques, in: 2011 IEEE International Conference on Computer Vision (ICCV), IEEE, 2011, pp. 2110–2117. [21] V. Lempitsky, A. Vedaldi, A. Zisserman, A pylon model for semantic segmentation, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2011. [22] M. Szummer, P. Kohli, D. Hoiem, Learning CRFs using graph cuts, in: Proceedings of the 10th European Conference on Computer Vision (ECCV), 2008, pp. 582–595. [23] J. Shotton, J. Winn, C. Rother, A. Criminisi, Textonboost for image understanding: multi-class object recognition and segmentation by jointly modeling texture, layout, and context, Int. J. Comput. Vis. 81 (1) (2009) 2–23. [24] P. Krähenbühl, V. Koltun, Efficient inference in fully connected crfs with gaussian edge potentials, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2011. [25] C. Sutton, A. McCallum, Piecewise training of undirected models, in: 21st Conference on Uncertainty in Artificial Intelligence, 2005. [26] Z. Kuang, D. Schnieders, H. Zhou, K.Y.K. Wong, Y. Yu, B. Peng, Learning imagespecific parameters for interactive segmentation, in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2012, pp. 590–597. [27] G. Hoefel, C. Elkan, Learning a two-stage svm/crf sequence classifier, in: Proceeding of the 17th ACM Conference on Information and Knowledge Management, ACM, 2008, pp. 271–278. [28] C.H. Lee, M. Schmidt, A. Murtha, A. Bistritz, J. Sander, R. Greiner, Segmenting brain tumors with conditional random fields and support vector machines, Comput. Vis. Biomed. Image Appl. (2005) 469–478. [29] N. Plath, M. Toussaint, S. Nakajima, Multi-class image segmentation using conditional random fields and global classification, in: Proceedings of the 26th Annual International Conference on Machine Learning (ICML), ACM, 2009, pp. 817–824. [30] Tilke Judd, Krista Ehinger, Fredo Durand, Antonio Torralba, Learning to predict where humans look, in: 2009 IEEE 12th International Conference on Computer Vision (ICCV), IEEE, 2009, pp. 2106–2113. 
[31] V.S. Iyengar, C. Apte, T. Zhang, Active learning using adaptive resampling, in: Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2000, pp. 91–98.
[32] H.S. Seung, M. Opper, H. Sompolinsky, Query by committee, in: Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT), ACM, 1992, pp. 287–294.
[33] J. Liu, J. Sun, H.Y. Shum, Paint selection, ACM Trans. Graph. 28 (3) (2009) 69.
[34] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, A. Zisserman, The PASCAL visual object classes (VOC) challenge, Int. J. Comput. Vis. 88 (2) (2010) 303–338.
[35] O. Duchenne, J.Y. Audibert, R. Keriven, J. Ponce, F. Ségonne, Segmentation by transduction, in: 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2008, pp. 1–8.
[36] T.H. Kim, K.M. Lee, S.U. Lee, Nonparametric higher-order learning for interactive segmentation, in: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2010, pp. 3201–3208.


[37] D. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, in: Proceedings of the 8th IEEE International Conference on Computer Vision (ICCV 2001), vol. 2, IEEE, 2001, pp. 416–423.
[38] C. Liu, J. Yuen, A. Torralba, J. Sivic, W.T. Freeman, SIFT flow: dense correspondence across different scenes, in: Proceedings of the 10th European Conference on Computer Vision (ECCV), Springer, 2008, pp. 28–42.

Lei Zhou received the B.E. degree from Wuhan University of Technology, Wuhan, China, in 2008. He is currently pursuing the Ph.D. degree in the School of Electronic Information and Electrical Engineering at Shanghai Jiao Tong University, Shanghai, China. His research interests include image segmentation and machine learning.

Yu Qiao received the B.E. and M.E. degrees from Shanghai Jiao Tong University in 1991 and 1997, respectively, and the Ph.D. degree from the National University of Singapore in 2004. He is currently an associate professor with the Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University. His research interests include medical image processing, machine learning, pattern recognition, signal processing and data mining.

Yijun Li received the B.E. degree in control science and engineering from Zhejiang University, Hangzhou, China, in 2012. He is currently an M.S. student in automation at Shanghai Jiao Tong University. His research interests include medical image processing and saliency detection.


Xiangjian He has, as a chief investigator, received various research grants, including four national research grants awarded by the Australian Research Council (ARC). He is the Director of the Computer Vision and Recognition Laboratory and the Deputy Director of the Research Centre for Innovation in IT Services and Applications (iNEXT) at the University of Technology, Sydney (UTS). He is a senior member of the IEEE and has been awarded the title of 'Internationally Registered Technology Specialist' by the International Technology Institute (ITI). His research covers image processing, network security, pattern recognition and computer vision, and he is a leading researcher in image processing based on the hexagonal structure. He has served as a chair of various international conferences, including IEEE CIT, IEEE AVSS and ICARCV. He is a guest editor for international journals such as the Journal of Computer Networks and Computer Applications (Elsevier) and serves on the editorial boards of various international journals. He supervises postdoctoral research fellows and Ph.D. students. Since 1985, he has held academic, visiting professor, adjunct professor, postdoctoral and senior researcher positions at various universities and institutions, including Xiamen University, China; the University of New England, Australia; the University of Georgia, USA; the Electronics and Telecommunications Research Institute (ETRI), Korea; the University of Aizu, Japan; and the Hong Kong Polytechnic University.

Jie Yang was born in Shanghai, China, in August 1964. He received the B.S. degree in automatic control and the M.S. degree in pattern recognition and intelligent systems from Shanghai Jiao Tong University, Shanghai, in 1985 and 1988, respectively, and the Ph.D. degree from the University of Hamburg, Hamburg, Germany, in 1994. He is currently a professor and the director of the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University. He has been the principal investigator of more than 30 national and ministerial research projects in computer vision, pattern recognition, data mining, and artificial intelligence, including two 973 projects, three national 863 projects, three National Natural Science Foundation projects, and three international cooperative projects with France, Korea, and Japan. He has published more than 300 articles in national and international academic journals and conference proceedings. To date, he has supervised two postdoctoral fellows, 26 doctoral students and 34 master's students, and has been awarded three research achievement prizes by the Ministry of Education, China, and the Shanghai municipality.

