Biomedical Signal Processing and Control 19 (2015) 68–76


Robust point matching method for multimodal retinal image registration

Gang Wang, Zhicheng Wang∗, Yufei Chen, Weidong Zhao

CAD Research Center, Tongji University, No. 4800, Cao'an Highway, Shanghai 201804, China

Article history: Received 15 December 2014; Received in revised form 27 February 2015; Accepted 10 March 2015

Keywords: Image registration; Multimodal retinal image; Robust point matching; PIIFD; SURF

Abstract

In this paper, motivated by the problem of multimodal retinal image registration, we introduce and improve the robust registration framework based on the partial intensity invariant feature descriptor (PIIFD), and present a registration framework based on the speeded up robust features (SURF) detector, PIIFD, and robust point matching, called SURF–PIIFD–RPM. Existing retinal image registration algorithms cannot handle every case, such as complex multimodal images, poor quality images, and nonvascular images. The Harris-PIIFD framework usually fails to correctly align color retinal images with other modalities when faced with large content changes. Our proposed registration framework solves this problem robustly. Firstly, the SURF detector extracts more repeatable and scale-invariant interest points than the Harris detector. Secondly, a single Gaussian robust point matching model, based on the kernel method of reproducing kernel Hilbert spaces, estimates the mapping function in the presence of outliers. Most importantly, our improved registration framework performs well even when confronted with a large number of outliers in the initial correspondence set. Finally, multiple experiments on our 142 multimodal retinal image pairs demonstrate that our SURF–PIIFD–RPM outperforms existing algorithms and is quite robust to outliers. © 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Image registration is an important element in the fields of computer vision, pattern recognition, and medical image analysis. In this problem, two or more images are aligned in the same spatial coordinate system to obtain a comprehensive understanding. In this paper, we focus on digital retinal images, which are widely used to diagnose a variety of diseases, such as diabetic retinopathy, glaucoma, and age-related macular degeneration [1,2]. Computer-assisted retinal image registration thus helps doctors diagnose diseases and plan treatment. There are four main retinal registration applications: mono-modal registration, multimodal registration, temporal registration, and multi-image fusion. Mono-modal and multimodal retinal images are captured by the same sensor (e.g. a fundus camera) and by different sensors (e.g. red-free and fluorescein angiography) at the same time, respectively, while temporal retinal images are captured at different times. These applications align images to create a wider view and integrated data information.

∗ Corresponding author. Tel.: +86 13636524116; fax: +86 02165983989. E-mail addresses: [email protected] (G. Wang), [email protected] (Z. Wang), [email protected] (Y. Chen), [email protected] (W. Zhao). http://dx.doi.org/10.1016/j.bspc.2015.03.004 1746-8094/© 2015 Elsevier Ltd. All rights reserved.

Recently, many registration approaches have been proposed for retinal image registration; they can typically be classified into three classes: area-based, feature-based, and hybrid approaches. Area-based approaches are widely used in image registration; they mainly use a similarity metric, such as mutual information (MI) [3–5], cross correlation (CC) [6], the entropy correlation coefficient (ECC) [7], or phase correlation [8,9], to measure the intensity agreement of image pairs. To optimize the similarity measure, optimization methods such as simulated annealing [10] and genetic algorithms are applied. However, there are several shortcomings. (1) The size of the search space depends on the complexity of the transformation model. (2) The similarity metric is disturbed by nonoverlapping areas when the overlap is too small. (3) The optimization often meets local minima when handling high-order transformations, and the huge search space may become a computational bottleneck. What is more, the performance of area-based approaches degrades when confronted with illumination, content, and texture changes. Feature-based approaches extract features, such as the bifurcations of the retinal vasculature, the fovea, the optic disc, and corners, whose number is much smaller than the number of pixels, which makes them more appropriate for retinal image registration. Feature extraction and transformation estimation are the two key components of these approaches. Typically, bifurcations [11–15], the fovea, and the optic


disc [16,17] are common features in retinal image registration. Bifurcations are invariant to intensity, scale, rotation, and illumination variations, but they depend on vasculature detection [11]: the vascular tree is detected, bifurcations are labeled with the surrounding vessel orientations, and an angle-based invariant is then used to give a probability for every matching bifurcation pair. However, it is difficult to extract bifurcations, the fovea, and the optic disc in poor quality and unhealthy retinal images [18]. Thus, feature-based approaches rely on the assumption that certain features are easy to extract. The bifurcations are used as landmarks, but some bifurcations have more than one correspondence; the transformation can then be estimated by a hierarchical strategy, which makes the registration approach robust to unmatchable features and mismatches between image pairs [12]. Local features, such as Harris corners [19], the scale invariant feature transform (SIFT) [20–22], and speeded up robust features (SURF) [23], are also widely used general features and are easier to extract than bifurcations; however, these feature descriptors are not appropriate for multimodal registration. More precisely, the SIFT and SURF descriptors are designed for monomodal retinal image registration [24]. Shape context (SC) [25] uses only the locations of feature points to describe a point set in log-polar histogram bins; it is rotation, scale, and affine invariant, but highly sensitive to outliers. Hybrid approaches integrate area-based with feature-based approaches to improve registration performance. For instance, [7] combines the mutual information technique and bifurcations to register retinal images. Chen et al. [18] presented a partial intensity invariant feature descriptor (PIIFD) for multimodal registration, even for poor quality images. It is a hybrid area-feature descriptor because the surrounding area of each corner point is used to extract a structural outline. However, the Harris-PIIFD registration framework cannot detect sufficiently repeatable and scale invariant key points, and it is sensitive to a large number of mismatches. Ghassabi et al. [26] analyzed the problems related to SIFT, proposed a uniform and robust scale invariant feature extraction (UR-SIFT) to replace the Harris detector, and obtained an efficient UR-SIFT-PIIFD registration approach. From the point matching perspective, the iterative closest point (ICP) algorithm [27] is widely used to register retinal images. Stewart et al. [28] proposed the dual-bootstrap ICP, which performs three steps in each bootstrap region: refining the transformation estimate, expanding the bootstrap region, and switching to a higher order transformation model. Yang et al. [29] used the dual-bootstrap ICP algorithm to refine each estimate and proposed the generalized dual-bootstrap ICP (GDB-ICP) algorithm. The edge-driven dual-bootstrap ICP (ED-DB-ICP) [30] combines SIFT key points and vascular features to register multimodal retinal images. All of these approaches need only one initial correct match to run the iterative registration process successfully; however, their performance degrades when faced with poor quality images. Deng et al. [31] introduced a graph matching method for retinal image registration and also combined graph-based matching and ICP into a registration framework called GM-ICP. These methods require sufficient feature points to obtain good performance.
Although many approaches to retinal image registration exist, several challenges remain. Firstly, how to extract reliable, repeatable, and distinctive features in retinal images of different modalities. Secondly, how to find correspondences between multimodal retinal pairs, i.e. how to design a robust descriptor for matching control point candidates. Thirdly, how to remove outliers, since the initial matching correspondences are contaminated by outliers, which strongly affect the registration accuracy. As mentioned earlier, Harris-PIIFD is designed for multimodal and poor quality images. It can register multimodal images successfully, but the fused images contain some degree of dislocation and ghosting; the two main underlying problems are feature repeatability and outlier removal, due to the limitations of the Harris corner detector and of the outlier rejection strategy, respectively.

In this paper, we improve the Harris-PIIFD registration framework by using SURF key points to address feature repeatability, and we propose a novel robust point matching algorithm to reject outliers and estimate the transformation robustly. The improved SURF–PIIFD extracts repeatable local features that are rotation and scale invariant, and partially invariant to intensity and affine changes. In the outlier rejection process, we assume that the inliers follow a single Gaussian distribution, and we then search for the optimal mapping function in a reproducing kernel Hilbert space with a Gaussian radial basis kernel function. A novel low-rank Gram matrix approximation is proposed to construct control points and speed up our algorithm. The resulting robust automatic multimodal retinal image registration framework is named SURF–PIIFD–RPM.

The rest of the paper is organized as follows. In Section 2, we introduce the improved retinal image registration framework. In Section 3, we present the improved SURF–PIIFD feature descriptor. In Section 4, we describe the proposed robust point matching method for outlier removal and transformation estimation. In Section 5, we describe the experimental settings and report the results. In Section 6, we give a discussion and conclusion.

2. Retinal image registration framework

In this paper, we concentrate on the hybrid area-feature based registration method for multimodal retinal images. Our improved multimodal retinal image registration framework, shown in Fig. 1 and based on SURF–PIIFD and RPM, contains the following four main parts:

(1) Locate local feature points with the SURF detector.
(2) Extract feature descriptors based on PIIFD.
(3) Match features and remove mismatches using RPM.
(4) Estimate the transformation using weighted least squares.

Note that image preprocessing is applied before extracting the local feature descriptors, since the SURF detector cannot detect key points on color images directly. We therefore select the green component of the input RGB image and scale its intensities to the full eight-bit range [0, 255]. To reduce the sensitivity to algorithm parameters and to scale differences, the Harris-PIIFD framework suggests zooming the input image out to a fixed size; in our improved framework, we instead use the original image size to preserve lossless intensities and repeatable key points. Following the matching algorithm of Harris-PIIFD, we use bilateral matching based on the unilateral best-bin-first (BBF) method [32]. In this way we obtain more accurate matching point pairs, though we lose some pairs. For our subsequent robust point matching method, the threshold of the nearest neighbor criterion is set to 0.96 in this paper, to obtain more candidate matching pairs; note that the larger the threshold, the more outliers may be obtained. Another important operation is to tune the corresponding control point locations using cross correlation (e.g. the function 'cpcorr' in Matlab); the refined matching point locations are then used to estimate the transformation parameters. Transformation models, such as rigid, affine, and second order polynomial (quadratic), are chosen adaptively to register image pairs according to the number of matches. The reason we use the quadratic transformation is that the surface of the retina is approximately spherical [12]. Nonetheless, a difference between [18] and our framework is that the former iteratively proceeds in a hierarchical style from linear conformal to affine or second order polynomial transformations, whereas our framework registers images depending only on the initial matching points. Thus our improved framework can estimate the transformation parameters more quickly than the former [18].
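For concreteness, the green-channel selection and intensity scaling described above can be sketched in Matlab as follows. This is a minimal illustration with a hypothetical function name, not the authors' implementation:

% Preprocessing sketch (hypothetical helper, not the authors' code):
% select the green component of an RGB retinal image and stretch its
% intensities to the full eight-bit range [0, 255].
function g = preprocess_retina(rgb)
    g = double(rgb(:, :, 2));                                % green channel
    g = (g - min(g(:))) ./ max(max(g(:)) - min(g(:)), eps);  % rescale to [0, 1]
    g = uint8(round(255 .* g));                              % eight-bit range [0, 255]
end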


Fig. 1. Flowchart of the multimodal retinal image registration framework. Two multimodal retinal images are input, one as the moving image and the other as the fixed image. Local feature points are located by the SURF detector, and feature descriptors are extracted based on PIIFD. Features are matched and mismatches are removed using RPM, and the transformation is estimated using weighted least squares. Finally, the two retinal images are aligned in the same spatial coordinate system by a rigid, affine, or second order polynomial transformation.

The detailed improvements of the multimodal retinal image registration framework are explored in the following sections.

3. SURF–PIIFD for feature representation

3.1. Detect local features by SURF

The Harris corner detector is the most widely used local feature detector in image processing. It is based on the second moment matrix M. Given a point x = (x, y) in an image I, the point is considered a corner if and only if det(M_{x,y}) − κ tr²(M_{x,y}) > 0, where κ is determined empirically and depends on the images to be analyzed [33]; its value is usually adopted in the range [0.04, 0.16]. However, the Harris corner is not scale invariant, and it has low feature repeatability. Motivated by this problem of local feature detectors, this paper introduces and analyzes a widely used robust scale invariant feature, called speeded up robust features (SURF) [23]. In the feature-based retinal registration framework, the performance of registration approaches may be enhanced by highly repeatable local feature detectors. Note that local feature detectors only provide the locations of key points for our registration framework. In this paper, we consider repeatability to be more important than a uniform distribution of key points in multimodal retinal images.

The SURF interest point detector is based on integral images [34], the Hessian matrix [35], and scale space theory. The core idea of SURF is that interest point candidates can be detected from the maxima of the determinant of the Hessian matrix. Given an arbitrary point x = (x, y) in an image I, the 2 × 2 Hessian matrix H_L(x, σ) at x and scale σ is defined as follows:

$$H_L(x, \sigma) = \begin{bmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{bmatrix} \qquad (1)$$

where L(x, σ) = G(x, σ) ∗ I(x), ∗ denotes the convolution of the image I with the Gaussian kernel, the Gaussian function is G(x, σ) = 1/(2πσ²) exp(−(x² + y²)/(2σ²)), and L_xx, L_xy, and L_yy denote the convolutions of the Gaussian second order derivatives with the image I at the point x.

From the scale space, a new scale space can be constructed using the difference of Gaussians (DoG) function, as used in SIFT [20–22]:

$$D(x, \sigma) = L(x, k\sigma) - L(x, \sigma) \qquad (2)$$

where k denotes the constant scale ratio between neighboring images in each octave. SURF, however, approximates the determinant of the Hessian matrix H_L(x, σ) by using box filters, constructing the approximation to det(H_L(x, σ)) directly. More precisely,

$$\det(H_{\mathrm{approx}}) = D_{xx} D_{yy} - (w D_{xy})^2 \qquad (3)$$

where the relative weight w of the filter responses is used to balance the expression of the determinant of the Hessian matrix; Bay et al. [23] suggest w = 0.9. Using the integral image, SURF approximates the different levels of the scale space D(x, σ) by adjusting the size of the box filters instead of resampling the original image as done in SIFT. Finally, interest points are found by non-maximum suppression in a 3 × 3 × 3 neighborhood around each sample point. Following SIFT, a second-order Taylor expansion around each key point is used to refine its location. More details of the SURF detector are given in [23,36].

3.2. Extract feature descriptor by PIIFD

To our knowledge, PIIFD is one of the best descriptors independent of the vascular tree for multimodal retinal image registration. It is based on two assumptions: (1) similar anatomical structure regions of one image consist of similar outlines in the corresponding regions of another image, and (2) the gradient orientations at corresponding control point locations point in the same or opposite directions in multimodal retinal image pairs. Compared with the most popular feature descriptor, SIFT, there are several differences. (1) PIIFD uses continuous averaged squared gradients to calculate the main orientation instead of a discrete orientation histogram, since the former improves accuracy and computational efficiency. (2) PIIFD uses a fixed neighborhood size instead of selecting it automatically from the scale of the control point, since the scale changes of retinal images are slight. (3) PIIFD converts the orientation histogram with 16 bins (0°, 22.5°, ..., 337.5°) to a degraded eight-bin orientation histogram (0°, 22.5°, ..., 157.5°) by computing the sum of the opposite orientations. (4) A linear combination of two sub-descriptors is used to resolve the

opposite main orientations of the corresponding control points. The 4 × 4 array of orientation histograms with eight bins is defined as follows:

$$P = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & P_{34} \\ P_{41} & P_{42} & P_{43} & P_{44} \end{bmatrix} \qquad (4)$$

where P_{ij} denotes an orientation histogram with 8 bins, and the combined descriptor can be denoted as follows:

$$D_p = \begin{bmatrix} P_1 + \mathrm{rot}(P_1, \pi) \\ P_2 + \mathrm{rot}(P_2, \pi) \\ a \left| P_3 - \mathrm{rot}(P_3, \pi) \right| \\ a \left| P_4 - \mathrm{rot}(P_4, \pi) \right| \end{bmatrix} \qquad (5)$$

where P_1 denotes the first row of the orientation histogram matrix, and similarly for P_2, P_3, and P_4. rot(P, π) rotates the orientation histogram matrix by 180°. a is used to tune the proportion of magnitude in the local descriptor, and it is defined as follows:

$$a = \frac{\max\left( P_i + \mathrm{rot}(P_i, \pi) \right)}{\max\left( \left| P_i - \mathrm{rot}(P_i, \pi) \right| \right)}, \quad i = 3, 4 \qquad (6)$$

Therefore, the dimension of PIIFD is 4 × 4 × 8 = 128, and the descriptor is finally normalized to unit length. Note that PIIFD is rotation invariant, and partially invariant to intensity, affine, and viewpoint changes.
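The combination in Eqs. (4)-(6) can be sketched in Matlab as follows. We assume, for illustration only, that the histograms are stored as a 4 × 4 × 8 array P (grid cells in the first two dimensions, bins in the third); this layout and the function name are our own choices, not specified in [18]:

% Sketch of the PIIFD sub-descriptor combination, Eqs. (4)-(6).
% P is assumed to be a 4-by-4-by-8 array of orientation histograms.
function Dp = combine_piifd(P)
    R  = P(end:-1:1, end:-1:1, :);        % rot(P, pi): 180-degree rotation of the grid
    Dp = zeros(4, 32);
    for i = 1:4
        Pi = reshape(P(i, :, :), 1, []);  % i-th row of histograms (1-by-32)
        Ri = reshape(R(i, :, :), 1, []);
        if i <= 2
            Dp(i, :) = Pi + Ri;           % sum cancels opposite main orientations
        else
            a = max(Pi + Ri) / max(abs(Pi - Ri) + eps);  % Eq. (6); eps avoids 0/0
            Dp(i, :) = a * abs(Pi - Ri);  % magnitude-tuned difference
        end
    end
    Dp = Dp(:)';                          % 4 x 4 x 8 = 128 dimensions
    Dp = Dp / norm(Dp);                   % normalize to unit length
end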

4. Robust point matching method

In this section, we first construct a single Gaussian model to eliminate outliers, and then estimate the transformation parameters using weighted regularized least squares. Note that outliers exist in both point sets, i.e. the input data is an initial correspondence set contaminated by outliers. More precisely, we assume that the inliers follow a Gaussian distribution. In multimodal retinal image registration, initial correspondences are obtained by matching SURF–PIIFDs using the bilateral BBF algorithm, and incorrect matches (i.e. outliers) may sometimes be falsely accepted as inliers by existing outlier removal methods. Our robust point matching method is proposed to solve this problem.

4.1. Problem formulation

Given two point sets, the moving point set X_{M×D} = (x_1, ..., x_M)^T and the fixed point set Y_{N×D} = (y_1, ..., y_N)^T, and following the idea of robust point matching [37–39], we use a slightly simpler form to estimate the mapping function f from the two point sets:

$$E(f, \sigma^2) = -\frac{1}{MN} \sum_{i=1}^{N} \sum_{j=1}^{M} \mathcal{N}\left( y_i - f(x_j) \mid 0, \sigma^2 I \right) \qquad (7)$$

where we assume that y_i − f(x_j) follows a single Gaussian distribution, the double sum running over all tentative correspondences between the two point sets. In this paper we have obtained the correspondence set C = {(x_l, y_l)}_{l=1}^{L} after the initial matching; note that some proportion of the pairs in C are outliers. We can then rewrite the objective function as

$$E(f, \sigma^2) = -\frac{1}{L} \sum_{l=1}^{L} \mathcal{N}\left( y_l - f(x_l) \mid 0, \sigma^2 I \right) \qquad (8)$$

where L ≤ min(M, N) is the number of correspondences.

4.2. Kernel function selection

Here we introduce a special feature space, the reproducing kernel Hilbert space (RKHS) [40], and search for the functional form of the mapping model f using the calculus of variations [41]. In the RKHS, the moving and fixed points satisfy X ∈ R^D and Y ∈ R^D, respectively. We define an RKHS H with a positive definite kernel function k; in this paper we use the well-known Gaussian radial basis kernel k(x_i, x_j) = exp(−β ‖x_i − x_j‖²), where β is a constant that controls the strength of interaction between the moving points. We can thus define the kernel matrix (Gram matrix) K:

$$K = \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_L) \\ \vdots & \ddots & \vdots \\ k(x_L, x_1) & \cdots & k(x_L, x_L) \end{bmatrix} \qquad (9)$$

According to the representer theorem [42], the solution of the Tikhonov-regularized [43] risk minimization ε = min_{f∈H} E(f, σ²) + (λ/2)‖f‖²_K, where λ > 0 denotes a trade-off parameter, can be written as f*(·) = Σ_{l=1}^{L} h_l k(x_l, ·) for some h_l ∈ R^D. Thus our final objective function can be rewritten in the following form:

$$\tilde{E}(H, \sigma^2) = -\frac{1}{L (2\pi\sigma^2)^{D/2}} \sum_{l=1}^{L} \exp\left( -\frac{\left\| y_l - (KH)_l \right\|^2}{2\sigma^2} \right) + \frac{\lambda}{2} \operatorname{tr}\left( H^T K H \right) \qquad (10)$$

where tr(·) denotes the trace, H = (h_1, ..., h_L)^T is a coefficient matrix of size L × D, (KH)_l denotes the l-th row of KH, and D = 2 is the dimension of the point sets in 2D retinal image registration.
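A minimal sketch of building the Gaussian Gram matrix of Eq. (9) in Matlab, for L moving points stored row-wise in an L × 2 matrix X; the helper name is ours, and the implementation is kept toolbox-free:

% Gaussian Gram matrix of Eq. (9): K(i,j) = exp(-beta * ||x_i - x_j||^2).
function K = gram_matrix(X, beta)
    sq = sum(X.^2, 2);                          % squared norms of the rows
    d2 = bsxfun(@plus, sq, sq') - 2 * (X * X'); % pairwise squared distances
    K  = exp(-beta .* max(d2, 0));              % clamp tiny negatives from round-off
end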

4.3. Low-rank matrix-valued kernel approximation

The matrix-valued kernel [44] plays a major role in regularization theory, as it provides an easy way to choose a preferable RKHS. However, the computational complexity of the robust point matching method is O(M³); fortunately, a low-rank kernel matrix approximation [45] can yield a large increase in speed with little loss of accuracy. The low-rank approximation K̂ is the closest rank-ρ matrix to K, optimal under both the L2 and Frobenius norms. Using the eigenvalue decomposition of K, the approximation can be written as K̂ = QΛQ^T, where Λ is a diagonal matrix of size ρ × ρ containing the ρ largest eigenvalues, and Q is an L × ρ matrix of the corresponding eigenvectors. The objective function of our method can therefore be rewritten as:

$$\tilde{E}(\tilde{H}, \sigma^2) = -\frac{1}{L (2\pi\sigma^2)^{D/2}} \sum_{l=1}^{L} \exp\left( -\frac{\left\| y_l - (U\tilde{H})_l \right\|^2}{2\sigma^2} \right) + \frac{\lambda}{2} \operatorname{tr}\left( \tilde{H}^T \Lambda \tilde{H} \right) \qquad (11)$$

where U_{L×ρ} = QΛ, and the parameter matrix H̃ of size ρ × D replaces the original matrix H.

4.4. Implemental details

In this paper, the aforementioned cost function is convex in the neighborhood of the optimal position and, most importantly, always differentiable. Thus, the numerical optimization problem can be solved by employing gradient-based optimization methods, such as the quasi-Newton method [46]. The derivative of the final objective function with respect to the coefficient matrix H̃ is given by:

$$\frac{\partial \tilde{E}}{\partial \tilde{H}} = \frac{U^T}{L \sigma^2 (2\pi\sigma^2)^{D/2}} \left[ (U\tilde{H} - Y) \circ \left( \exp\left( -\frac{\operatorname{diag}\left( (U\tilde{H} - Y)(U\tilde{H} - Y)^T \right)}{2\sigma^2} \right) \otimes \mathbf{1} \right) \right] + \lambda \Lambda \tilde{H} \qquad (12)$$

where 1 is a 1 × D row vector of all ones, ∘ denotes the Hadamard product, and ⊗ denotes the tensor product. The iteration process uses the deterministic annealing technique, a heuristic method to escape from the trap of local minima. Following the annealing framework [38], we set a large initial value for the bandwidth,

$$\sigma^2 = \frac{1}{MND} \sum_{i=1}^{N} \sum_{j=1}^{M} \left\| y_i - x_j \right\|^2,$$

and then reduce it according to σ² = α × σ², where α = 0.75 in all our experiments. The penalty parameter λ = 0.1 is used to trade off the regularization term against the empirical risk, which resolves the ill-posed problem in point matching. The kernel parameter β = 5.0 controls the strength of interaction between points: small values of β produce locally smooth transformations, while large values correspond to nearly pure translations. The parameter ρ controls the complexity of the method. In practice ρ ≪ M, and we set ρ = 15; the computational complexity of our method is then reduced approximately to O(N) when the number of points is relatively large and well clustered. In other words, this enables our method to be applied to larger point sets. Most importantly, all input point sets are normalized to zero mean and unit variance, a linear rescaling of the initial correspondences.
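The deterministic annealing loop of this subsection can be outlined in Matlab as follows. The routine rpm_cost, which would evaluate Eq. (11) and its gradient Eq. (12), is hypothetical, and fminunc (Optimization Toolbox) stands in for the quasi-Newton solver [46]; this is a sketch of the procedure under those assumptions, not the authors' code:

% Annealing outline. X and Y are the moving and fixed point sets, U and Lam
% come from the low-rank sketch above, and rpm_cost is a hypothetical routine
% returning the objective of Eq. (11) and the gradient of Eq. (12).
M = size(X, 1);  N = size(Y, 1);  D = 2;
lambda = 0.1;  alpha = 0.75;                 % trade-off and annealing rate
s = 0;
for i = 1:N                                  % initial bandwidth, Section 4.4
    d = bsxfun(@minus, X, Y(i, :));
    s = s + sum(d(:).^2);
end
sigma2 = s / (M * N * D);
Ht   = zeros(size(U, 2), D);                 % rho-by-D coefficient matrix
opts = optimset('GradObj', 'on', 'Display', 'off');  % quasi-Newton via fminunc
for it = 1:50                                % 50 iterations, as in Section 5.3
    obj = @(h) rpm_cost(h, U, Lam, Y, sigma2, lambda); % hypothetical objective
    Ht  = reshape(fminunc(obj, Ht(:), opts), size(Ht));
    sigma2 = alpha * sigma2;                 % anneal: shrink the bandwidth
end
fX = U * Ht;                                 % mapped moving points f(x_l)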

4.5. Transformation parameters estimation

Given two images of the retinal surface, we use weighted least squares to estimate the parameters of a geometric transformation such as the rigid, affine, or quadratic model:

$$\tilde{\Theta} = \operatorname*{arg\,min}_{\Theta} \sum_{l=1}^{L} w(x_l, y_l) \left\| y_l - \varphi(x_l, \Theta) \right\|^2 \qquad (13)$$

where w(x_l, y_l) = 1 if exp(−‖y_l − f(x_l)‖² / (2σ²)) ≥ τ and w(x_l, y_l) = 0 otherwise; more precisely, w(x_l, y_l) = 1 denotes an inlier, while w(x_l, y_l) = 0 denotes an outlier. In this paper, we set τ = 0.9 to reject outliers. Following the weighted least squares model, the rigid transformation between two retinal images relates X and X′ as follows:

$$X' = sRX + t, \quad \text{or} \quad \varphi(x_l, \Theta) = s R x_l + t, \quad \text{s.t.} \;\; R^T R = I, \; \det(R) = 1 \qquad (14)$$

where Θ = {s, R, t}, R_{2×2} is a rotation matrix, t_{2×1} is a translation vector, and s is a scaling parameter. Similarly, the affine transformation is defined as φ(x_l, Θ) = B x_l + t, where B_{2×2} denotes the affine matrix. In retinal image registration, the second order polynomial transformation is also widely used, because the surface of the retina is almost spherical [12]; it is defined as φ(x_l, Θ) = P x_l² + Q x_l + t. All three transformations can be rewritten in matrix form:

$$\varphi(X, \Theta) = \Theta X, \qquad X = \left( 1, \; x_1, \; x_2, \; x_1 x_2, \; x_1^2, \; x_2^2 \right)^T \qquad (15)$$

where

$$\Theta = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & 0 & 0 & 0 \\ \theta_{21} & -\theta_{13} & \theta_{12} & 0 & 0 & 0 \end{pmatrix}, \; \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & 0 & 0 & 0 \\ \theta_{21} & \theta_{22} & \theta_{23} & 0 & 0 & 0 \end{pmatrix}, \; \text{and} \; \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} & \theta_{15} & \theta_{16} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} & \theta_{25} & \theta_{26} \end{pmatrix}$$

for the rigid, affine, and quadratic transformations, respectively. The degree of freedom (DoF) of the rigid, affine, and second order polynomial models is 4, 6, and 12, respectively. In other words, a successful estimation needs at least 2, 3, and 6 point pairs for the rigid, affine, and quadratic transformations, respectively.
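A minimal sketch of the inlier weighting and the weighted least-squares fit for the affine case of Eqs. (13) and (14); fX denotes the mapped points f(x_l) from Section 4.4, and the helper name is our own:

% Inlier weights from the fitted mapping (tau = 0.9, Section 4.5), then a
% weighted least-squares affine fit phi(x) = B*x + t over the inliers.
function [B, t, w] = fit_affine_wls(Xl, Yl, fX, sigma2)
    w = exp(-sum((Yl - fX).^2, 2) ./ (2 * sigma2)) >= 0.9;  % 0/1 weights
    A = [Xl(w, :), ones(nnz(w), 1)];   % design matrix [x1 x2 1], inliers only
    T = A \ Yl(w, :);                  % least-squares solve, 3-by-2
    B = T(1:2, :)';                    % 2-by-2 affine matrix
    t = T(3, :)';                      % 2-by-1 translation vector
end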

5. Experiments and results

In order to evaluate the performance of our proposed SURF–PIIFD–RPM, we implemented it in Matlab R2012b and tested it on a laptop with a Pentium Core I5 2.4 GHz CPU and 4 GB of RAM. In this section, we compare our proposed method with SURF,1 GDB-ICP,2 I2K Retina Pro,3 and Harris-PIIFD [18] on 142 pairs of multimodal retinal images. Note that we also combined SURF with our robust point matching method to register image pairs, called SURF-RPM. We downloaded the binary executable of GDB-ICP written in C++ and used the special command '-complete' to register our multimodal retinal images. I2K Retina Pro is a commercial software application; we downloaded its trial version to test our retinal image datasets. Harris-PIIFD is implemented in Matlab, and its parameters are set as suggested by the authors.

5.1. Data and evaluation criterion

Two groups of multimodal retinal image datasets are used to evaluate our proposed SURF–PIIFD–RPM. The first contains 122 pairs of multimodal images provided by the Xinshijie Eye Hospital and the Zhongshan Hospital of Shanghai, China; these images are of two modalities: red-free (RF, ordinary fundus photography with a green filter) and fundus autofluorescence (FAF). Most of these pairs were taken at the same time, while a few pairs were taken at different times. The images have resolutions ranging from 640 × 480 to 1280 × 960 pixels, and their overlap ranges from 20% to 100%. The second dataset contains 20 pairs of gray multimodal images collected from the Internet. Compared to the first dataset, these image pairs are of poor quality; they range in size from 300 × 260 to 500 × 500 pixels.

To evaluate the registration results, we need a reliable and fair criterion to measure the performance of registration approaches, because public multimodal retinal image registration datasets with preferable ground truth are lacking. In this paper, we tried three evaluation methods. The first is subjective evaluation; the second is the centerline error measure (CEM), which requires extracting the vascular tree from the retinal images; and the last measures the root mean square error (RMSE) against manual ground truth. However, it is very hard to measure slight differences with the first method when two registration approaches both perform well, and the centerline of the vasculature is extremely difficult to extract from some retinal images in our dataset, so CEM is not a preferable criterion for our data. More precisely, the manual ground truth is constructed in four steps: (1) select at least six control point pairs in each retinal image pair, (2) compute the transformation from the manual correspondences, (3) transform the matched points in the moving image to the fixed image by the forward spatial transformation, and (4) calculate the Euclidean distances between the transformed points and the reference points in the fixed image. Note that we select 12 matched points manually using Matlab R2012b to generate the manual ground truth. Following the idea of CEM [28], we measure the median error (MEE), maximum error (MAE), and RMSE of the obtained distances [18]. Sometimes the median value is low while the maximum value is large; using all three measures avoids this ambiguity. Considering our dataset, in our experiments we classify the registration results

1 http://www.chrisevansdev.com/computer-vision-opensurf.html
2 http://www.vision.cs.rpi.edu/gdbicp/exec/
3 http://www.dualalign.com/index.php


Fig. 2. Plotting the percentage of successful multimodal retinal image registrations.

as incorrect (MAE > 10 pixels), inaccurate (MAE ≤ 10 and MEE > 1.5 pixels), and acceptable (MAE ≤ 10 and MEE ≤ 1.5 pixels) groups.
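The three error measures above reduce to a few lines of Matlab, given the transformed points T and the manually selected reference points Rf (both n × 2 matrices; the variable names are ours):

% MEE, MAE and RMSE of the point-wise Euclidean distances.
d    = sqrt(sum((T - Rf).^2, 2));   % Euclidean distance per point pair
MEE  = median(d);                   % median error
MAE  = max(d);                      % maximum error
RMSE = sqrt(mean(d.^2));            % root mean square error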

Fig. 3. Plotting the percentage of successful multimodal retinal image registrations as a function of overlapping area between images. The plots cover all image pairs in our datasets, and show the percentages of GDB-ICP, I2K Retina Pro, Harris-PIIFD, and our proposed SURF–PIIFD–RPM for each interval.

5.2. Registration performance evaluation

Success rate [28] is the first and most important quantitative criterion of overall performance; it denotes the percentage of image pairs for which the registration approach obtains enough matching point pairs, i.e. two for similarity, three for affine, and six for quadratic transformations. Fig. 2 shows the success percentages of GDB-ICP, I2K Retina Pro, Harris-PIIFD, and SURF–PIIFD–RPM on our multimodal retinal image datasets. We also registered our retinal image datasets with SURF-RPM; however, although SURF-RPM obtains a 100% success rate, most of its results are incorrect, because the SURF descriptor is not suitable for multimodal retinal images [24]. Our proposed SURF–PIIFD–RPM uses the SURF detector and the PIIFD descriptor to obtain initial correspondences, and then our novel single Gaussian robust point matching model removes outliers, since the correct matching point pairs need to be identified within the initial correspondence set.

Overlapping area [12] always impacts the results in retinal registration studies. However, we consider that the performance of registration approaches based on feature descriptors, such as SIFT, SURF, and PIIFD, is not affected by the overlapping area directly. As shown in Fig. 3, we can register two images correctly if enough corresponding feature points are detected, even at a very small overlapping area; conversely, it is difficult to obtain sufficient information to register images at extremely low overlaps. Note that I2K Retina Pro obtains a high success rate because it is based on the retinal vessel tree, provided the vessel tree is easy to extract. Thus we consider that a more repeatable feature detector is needed to solve this problem. The success rate of SURF–PIIFD–RPM drops to 0 at the lowest overlap in our datasets, as there is insufficient information in the overlapping area. An example of multimodal registration at approximately 25% overlap is shown in Fig. 4: the median error of Harris-PIIFD is 41.56 pixels, while the median error of our method is 4.23 pixels.

Since PIIFD is rotation invariant and invariant to small scale changes, our SURF–PIIFD–RPM performs well on the datasets when faced with rotated or scaled image pairs. Fortunately, the rotation and scale between image pairs in our datasets change very little: the rotation angle is below 30° and the scaling factor is less than 1.5. So we can say that our proposed SURF–PIIFD–RPM is still preferable for multimodal retinal image registration.

Registration results are classified into three groups: acceptable (MAE ≤ 10 and MEE ≤ 1.5 pixels), inaccurate (MAE ≤ 10 and MEE > 1.5 pixels), and incorrect (MAE > 10 pixels). Some examples are shown in Fig. 5(a)–(c), respectively. Note that the first column is the moving image and the second column is the fixed image. We applied our SURF–PIIFD–RPM algorithm on our entire test datasets, and the third column shows the registration results. In our experiments, we mainly compare our method with Harris-PIIFD.

Fig. 4. A retinal image pair from the overlapping area test. The overlapping area of the fixed and moving images is approximately 25%. (a) The initial image pair. (b) The registration result of Harris-PIIFD; its median error is 41.56 pixels. (c) The registration result of SURF–PIIFD–RPM; the median error is 4.23 pixels. We use color fusion images to present the registration results.


Fig. 5. (a) Acceptable (MAE ≤ 10 and MEE ≤ 1.5 pixels) multimodal retinal registration results using our SURF–PIIFD–RPM. The first three image pairs are from the first dataset, and the last two image pairs belong to our second dataset. (b) Inaccurate (MAE ≤ 10 and MEE > 1.5 pixels) multimodal retinal registration results using our SURF–PIIFD–RPM. (c) Incorrect (MAE > 10 pixels) multimodal retinal registration results using our SURF–PIIFD–RPM. Note that the first column denotes the moving images, the second column denotes the fixed images, and the last column denotes the fusion images registered by our method.

Table 1
Means and standard deviations (std) of RMSE and median error (MEE), in pixels, for all outputs of Harris-PIIFD and SURF–PIIFD–RPM.

Group        Criterion   Harris-PIIFD    SURF–PIIFD–RPM
Incorrect    RMSE        36.51 ± 55.77   14.77 ± 13.36
Incorrect    MEE         13.88 ± 9.32     6.87 ± 7.79
Inaccurate   RMSE         4.17 ± 1.51     3.53 ± 0.82
Inaccurate   MEE          3.11 ± 1.10     2.77 ± 1.02
Acceptable   RMSE         2.21 ± 0.77     1.51 ± 0.29
Acceptable   MEE          1.18 ± 0.07     1.13 ± 0.26
Overall      RMSE        20.75 ± 42.85    8.07 ± 10.57
Overall      MEE          8.48 ± 8.78     4.24 ± 5.62

We measured the accuracy of Harris-PIIFD and our method by RMSE and MEE, as shown in Table 1. Our SURF–PIIFD–RPM achieves higher accuracy than Harris-PIIFD in the acceptable, inaccurate, incorrect, and overall groups. Note that both methods obtain approximately one-pixel MEE accuracy in the acceptable group, but Harris-PIIFD performs badly in the incorrect group. Thus our method improves on the performance of Harris-PIIFD.

Table 2
Means and standard deviations (std) of runtime for all outputs of GDB-ICP, Harris-PIIFD, and SURF–PIIFD–RPM.

Evaluation criterion   GDB-ICP         Harris-PIIFD   SURF–PIIFD–RPM
Runtime (s)            90.09 ± 74.94   14.87 ± 3.85   31.35 ± 8.74

However, our SURF–PIIFD–RPM is less computationally efficient than Harris-PIIFD, because the images are not zoomed to a smaller size and our robust point matching is an iterative algorithm. In addition, feature detection takes more time in Matlab. The runtimes of GDB-ICP, Harris-PIIFD, and our method are shown in Table 2; GDB-ICP is much less computationally efficient than Harris-PIIFD and our method. We can therefore state that our proposed method strikes a balance between performance and computational complexity in our experiments.

Other feature descriptors can also be plugged into our proposed robust point matching method, such as shape context [25] with bifurcations in images with distinctive vessels. We also implemented an Affine-PIIFD descriptor based on the framework of Affine-SIFT

Fig. 6. Left: comparison of RANSAC and our single Gaussian RPM method by precision and recall on twenty-two examples with inlier rates from 0.04 to 0.53. Right: comparison of RANSAC and our single Gaussian RPM method by F1-measure on ten examples with low inlier rates (0.06–0.20).


Fig. 7. The failure of RANSAC when faced with many-to-one matching cases. This result was obtained with our implementations of SURF and RANSAC.

[47]. It can obtain more reliable matching correspondences, but it is very slow in Matlab.

5.3. Robust matching analysis

In the outlier removal process, Harris-PIIFD uses the main orientations to determine which matches are incorrect; however, this makes it extremely dependent on the main orientations. Our robust point matching algorithm is a general method for matching point pairs in the presence of outliers. Note that RANSAC (RANdom SAmple Consensus) [48] is the most popular method for removing outliers, but our single Gaussian robust point matching method performs better than RANSAC, especially when faced with a large number of outliers. The matching threshold is set to 0.96 (0.9 for Harris-PIIFD) in our experiments, so more outliers are present in our initial correspondences. The number of iterations is set to 50, after which we obtain the minimum value. Compared with RANSAC, our single Gaussian RPM performs better when the initial correspondence set contains a large number of outliers. The left part of Fig. 6 shows the precision and recall on twenty-two examples from our multimodal retinal images, whose inlier rates range from 0.04 to 0.53. The right part of Fig. 6 illustrates that our method obtains a higher F1-measure than RANSAC in most cases, where the inlier rates lie in [0.06, 0.20]. When many points are matched to the same point, the result of RANSAC can be very bad, as shown in Fig. 7. This comparison demonstrates that our proposed single Gaussian RPM outperforms the RANSAC algorithm in outlier removal, and that our method is quite robust to outliers.
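For reference, the precision, recall, and F1-measure reported in Fig. 6 can be computed as follows, given logical vectors kept (matches retained by the method) and inlier (ground-truth inliers); the variable names are ours:

% Precision, recall and F1-measure for outlier removal.
tp        = sum(kept & inlier);                    % true positives
precision = tp / max(sum(kept), 1);
recall    = tp / max(sum(inlier), 1);
F1        = 2 * precision * recall / max(precision + recall, eps);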

6. Conclusion

In this paper, we focused on multimodal retinal image registration and identified the problems of previous registration approaches; our proposed method not only extracts sufficient information from image pairs but also removes incorrect matches (outliers) robustly. Multiple experiments on our 142 multimodal retinal image pairs demonstrate that our proposed SURF–PIIFD–RPM outperforms the SURF, GDB-ICP, I2K Retina Pro, and Harris-PIIFD algorithms. However, 12 image pairs in our datasets still fail to register with our method. The main reason is that the overlapping areas between them are extremely small, i.e. there is insufficient information in the overlaps; moreover, several failed image pairs do not satisfy the assumptions of the PIIFD descriptor. In future work, we will concentrate on designing a descriptor more robust than PIIFD for multimodal retinal images, and on applying our method to generalized image registration.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC, no. 61103070). The authors would like to acknowledge Dr. J. Chen and Dr. Z. Ghassabi. The authors also sincerely acknowledge the help of Dr. Jing Chen and Chunyan Mu of the Xinshijie Eye Hospital, and Dr. Yong Zhang of the Zhongshan Hospital, Shanghai, China.

References

[1] C. Sanchez-Galeana, C. Bowd, E.Z. Blumenthal, P.A. Gokhale, L.M. Zangwill, R.N. Weinreb, Using optical imaging summary data to detect glaucoma, Ophthalmology 108 (10) (2001) 1812–1818.
[2] L. Zhou, M.S. Rzeszotarski, L.J. Singerman, J.M. Chokreff, The detection and quantification of retinopathy using digital angiograms, IEEE Trans. Med. Imaging 13 (4) (1994) 619–626.
[3] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, P. Suetens, Multimodality image registration by maximization of mutual information, IEEE Trans. Med. Imaging 16 (2) (1997) 187–198.
[4] J.P. Pluim, J.A. Maintz, M.A. Viergever, Mutual-information-based registration of medical images: a survey, IEEE Trans. Med. Imaging 22 (8) (2003) 986–1004.
[5] P.A. Legg, P.L. Rosin, D. Marshall, J.E. Morgan, Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation, Comput. Med. Imaging Graph. 37 (7) (2013) 597–606.
[6] A.V. Cideciyan, Registration of ocular fundus images, IEEE Eng. Med. Biol. 14 (1) (1995) 52–58.
[7] T. Chanwimaluang, G. Fan, S.R. Fransen, Hybrid retinal image registration, IEEE Trans. Inf. Technol. Biomed. 10 (1) (2006) 129–142.
[8] J.-Z. Huang, T.-N. Tan, L. Ma, Y.-H. Wang, Phase correlation based iris image registration model, J. Comput. Sci. Technol. 20 (3) (2005) 419–425.
[9] R. Kolar, V. Harabis, J. Odstrcilik, Hybrid retinal image registration using phase correlation, Imaging Sci. J. 61 (4) (2013) 369–384.
[10] L. Ingber, Simulated annealing: practice versus theory, Math. Comput. Model. 18 (11) (1993) 29–57.
[11] F. Zana, J.-C. Klein, A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform, IEEE Trans. Med. Imaging 18 (5) (1999) 419–428.
[12] A. Can, C.V. Stewart, B. Roysam, H.L. Tanenbaum, A feature-based robust hierarchical algorithm for registering pairs of images of the curved human retina, IEEE Trans. Pattern Anal. Mach. Intell. 24 (3) (2002) 347–364.
[13] F. Laliberte, L. Gagnon, Y. Sheng, Registration and fusion of retinal images: an evaluation study, IEEE Trans. Med. Imaging 22 (5) (2003) 661–673.
[14] G.K. Matsopoulos, P.A. Asvestas, N.A. Mouravliansky, K.K. Delibasis, Multimodal registration of retinal images using self organizing maps, IEEE Trans. Med. Imaging 23 (12) (2004) 1557–1563.
[15] B. Fang, Y.Y. Tang, Elastic registration for retinal images based on reconstructed vascular trees, IEEE Trans. Biomed. Eng. 53 (6) (2006) 1183–1187.
[16] W.E. Hart, M.H. Goldbaum, Registering retinal images using automatically selected control point pairs, in: Proceedings of the IEEE International Conference on Image Processing (ICIP-94), vol. 3, IEEE, 1994, pp. 576–580.
[17] J.C. Nunes, Y. Bouaoune, E. Delechelle, P. Bunel, A multiscale elastic registration scheme for retinal angiograms, Comput. Vis. Image Underst. 95 (2) (2004) 129–149.
[18] J. Chen, J. Tian, N. Lee, J. Zheng, R. Smith, A.F. Laine, A partial intensity invariant feature descriptor for multimodal retinal image registration, IEEE Trans. Biomed. Eng. 57 (7) (2010) 1707–1718.
[19] C. Harris, M. Stephens, A combined corner and edge detector, in: Alvey Vision Conference, vol. 15, Manchester, UK, 1988, p. 50.
[20] D.G. Lowe, Object recognition from local scale-invariant features, in: Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, IEEE, 1999, pp. 1150–1157.
[21] D.G. Lowe, Local feature view clustering for 3D object recognition, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, IEEE, 2001, pp. 682–688.
[22] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[23] H. Bay, T. Tuytelaars, L. Van Gool, SURF: Speeded Up Robust Features, Springer, 2006, pp. 404–417.
[24] P.C. Cattin, H. Bay, L. Van Gool, G. Szekely, Retina Mosaicing Using Local Features, Springer, 2006, pp. 185–192.
[25] S. Belongie, J. Malik, J. Puzicha, Shape matching and object recognition using shape contexts, IEEE Trans. Pattern Anal. Mach. Intell. 24 (4) (2002) 509–522.
[26] Z. Ghassabi, J. Shanbehzadeh, A. Sedaghat, E. Fatemizadeh, An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors, EURASIP J. Image Video Process. 2013 (1) (2013) 1–16.
[27] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[28] C.V. Stewart, C.-L. Tsai, B. Roysam, The dual-bootstrap iterative closest point algorithm with application to retinal image registration, IEEE Trans. Med. Imaging 22 (11) (2003) 1379–1394.
[29] G. Yang, C.V. Stewart, M. Sofka, C.-L. Tsai, Registration of challenging image pairs: initialization, estimation, and decision, IEEE Trans. Pattern Anal. Mach. Intell. 29 (11) (2007) 1973–1989.
[30] C.-L. Tsai, C.-Y. Li, G. Yang, K.-S. Lin, The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence, IEEE Trans. Med. Imaging 29 (3) (2010) 636–649.
[31] K. Deng, J. Tian, J. Zheng, X. Zhang, X. Dai, M. Xu, Retinal fundus image registration via vascular structure graph matching, J. Biomed. Imaging 2010 (2010) 14.
[32] J.S. Beis, D.G. Lowe, Shape indexing using approximate nearest-neighbour search in high-dimensional spaces, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 1997, pp. 1000–1006.
[33] F. Bellavia, D. Tegolo, C. Valenti, Improving Harris corner selection strategy, IET Comput. Vis. 5 (2) (2011) 87–96.
[34] P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, IEEE, 2001, pp. 511–518.
[35] T. Lindeberg, Feature detection with automatic scale selection, Int. J. Comput. Vis. 30 (2) (1998) 79–116.
[36] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, Speeded-up robust features (SURF), Comput. Vis. Image Underst. 110 (3) (2008) 346–359.
[37] A.W. Fitzgibbon, Robust registration of 2D and 3D point sets, Image Vis. Comput. 21 (13) (2003) 1145–1153.
[38] H. Chui, A. Rangarajan, A new point matching algorithm for non-rigid registration, Comput. Vis. Image Underst. 89 (2) (2003) 114–141.
[39] B. Jian, B.C. Vemuri, Robust point set registration using Gaussian mixture models, IEEE Trans. Pattern Anal. Mach. Intell. 33 (8) (2011) 1633–1645.
[40] C.M. Bishop, Pattern Recognition and Machine Learning, vol. 1, Springer, New York, 2006.
[41] A. Myronenko, X. Song, Point set registration: coherent point drift, IEEE Trans. Pattern Anal. Mach. Intell. 32 (12) (2010) 2262–2275.
[42] C.A. Micchelli, M. Pontil, On learning vector-valued functions, Neural Comput. 17 (1) (2005) 177–204.
[43] A.N. Tikhonov, V.Y. Arsenin, Solutions of Ill-posed Problems, Winston, Washington, DC, 1977.
[44] D. Tschumperle, R. Deriche, Vector-valued image regularization with PDEs: a common framework for different applications, IEEE Trans. Pattern Anal. Mach. Intell. 27 (4) (2005) 506–517.
[45] I. Markovsky, K. Usevich, Low Rank Approximation, Springer, 2012.
[46] J. Nocedal, S.J. Wright, Conjugate Gradient Methods, Springer, 2006.
[47] J.-M. Morel, G. Yu, ASIFT: a new framework for fully affine invariant image comparison, SIAM J. Imaging Sci. 2 (2) (2009) 438–469.
[48] M.A. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
