Information Fusion 23 (2015) 99–115


Steerable local frequency based multispectral multifocus image fusion

Vijay N. Gangapure, Sudipta Banerjee, Ananda S. Chowdhury *

Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032, India

* Corresponding author. Tel.: +91 33 2457 2405; fax: +91 33 2414 6217. E-mail addresses: [email protected] (V.N. Gangapure), [email protected] (S. Banerjee), [email protected] (A.S. Chowdhury).
http://dx.doi.org/10.1016/j.inffus.2014.07.003

Article history: Received 5 March 2014; received in revised form 28 July 2014; accepted 30 July 2014; available online 8 August 2014.

Keywords: Multispectral focus measure; Multifocus image fusion; Oriented analytic image; Steerable local frequency

Abstract: The design of a focus measure and a fusion algorithm that perform well across different spectra remains an extremely challenging task. In this work, the problem of multispectral multifocus image fusion is addressed using the phase information of the source image pixels at different orientations. We make the local frequency, the spatial derivative of the local phase of the pixels, steerable to obtain a novel and effective focus measure. An oriented analytic image, based on the theory of steerable filters, is constructed for that purpose. A multifocus fusion algorithm is then proposed using this focus measure. Comprehensive experiments clearly demonstrate that both our focus measure and the multifocus fusion algorithm yield promising results across the visual (VIS), near-infrared (NIR) and thermal (TH) spectra. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Image fusion, with several important applications in diverse areas like medicine, surveillance and remote sensing, has generated a lot of interest among researchers over the last decade [1]. Multifocus image fusion aims at integrating multiple images of the same scene captured at different focal settings into a single all-in-focus image. This all-in-focus image can be thought of as an ensemble of the best focused pixels extracted from the set of source images. According to Stathaki [2], multifocus image fusion methods can be broadly classified into spatial domain methods and transform domain methods. In spatial domain methods [3,4], the fusion rules are directly applied to the pixels or to a region in an image. In contrast, in transform domain methods, images are initially processed by discrete cosine, wavelet and similar transforms before the fusion rules are applied [5]. The fundamental step behind multifocus fusion lies in the determination of the focus quality of the source images [6,7]. Zukal et al. introduced the determination of a focus measure via interest point detection [8]. An interest point in an image exhibits significant local variations, such as a corner or a junction. Very recently, interest point detection based focus measures for multispectral images have gained popularity. A majority of these measures are based on intensity [9]. However, some interest point detection methods use frequency and phase congruency [10,11]. In the present work, we compute the local frequency of the pixels from

their local phase at different orientations. The proposed steerable local frequency (local frequency considered at different orientations) based focus measure is shown to perform well in various spectra, namely visual (VIS), near-infrared (NIR) and thermal (TH). We further illustrate that this focus measure yields good performance for multispectral multifocus image fusion.

The rest of the paper is organized in the following manner: in Section 2, we discuss some popular focus measures and multifocus image fusion schemes with their limitations, and highlight our contributions. In Section 3, we provide the necessary theoretical foundations. In Section 4, we describe the proposed method in detail. In Section 5, we compare the performance of the proposed method with several existing approaches. Finally, the paper is concluded in Section 6 with an outline of directions for future research.

2. Related work

Multifocus image fusion consists of two major steps, namely, the focus measure computation and its application in image fusion. We first discuss certain limitations of the existing focus measures. Some problems in the current fusion schemes are analyzed next. We end this section by highlighting our contributions.

Focus measure algorithms are classified into four broad categories: derivative based, statistics based, histogram based and intuition based. In [12], Liu et al. evaluated the performance of eighteen focus measures from the above four categories, e.g., EOG (Energy of Gradient), SML (Sum-Modified Laplacian), EOL (Energy of Laplacian) and TEN (Tenengrad), for microscopic images. All these focus measures are based on variations in pixel intensities only. Such methods have several drawbacks, like performance variation with the spectral content of the source images, insensitivity


to defocus, fluctuation with noise content, and a narrow effective range. To overcome these limitations, Minhas et al. [13] proposed a novel, efficient focus measure for the shape from focus (SFF) application. In another method, Tian and Chen [14] examined the statistics of detail wavelet coefficients to measure sharpness in the input image. Zhao et al. [15] related the degree of gray-level surface curvature to the sharpness of an image region. But these attempts to measure focus are limited to the visual spectrum only. In other spectra, like thermal and near-infrared, fewer focus measure works have been reported, due to the unavailability of scene viewing in the case of manual focusing, limited resolution, and the lack of autofocus features in the cameras. Faundez-Zanuy et al. [16] addressed the problem of determining the optimal focus position in thermal images. Zukal et al. [9] proposed a focus measure based on interest point detection (IPD) to achieve uniform performance in multispectral imaging. The results show that this focus measure performs better than the standard focus measures for thermal images but lags behind in the visual and near-infrared spectra.

Many fusion algorithms are available in the literature. These algorithms operate at pixel level or region level, in the spatial as well as the transform domain. Spatial domain pixel level algorithms are popular due to their computational efficiency [2]. Multiresolution transform based algorithms are preferred nowadays due to their robust performance [17]. Within the transform domain, Discrete Wavelet Transform (DWT) based algorithms, though they perform better than the Laplacian Pyramid Transform (LPT), have limited orientation selectivity [18]. More advanced multiresolution transform techniques include the Stationary Wavelet Transform (SWT), Curvelet Transform (CVT), Contourlet Transform (CT), Dual Tree Complex Wavelet Transform (DTCWT) and Non-Subsampled Contourlet Transform (NSCT) [17,18]. In [17], Li et al. have evaluated the performance of such multiresolution transforms for multifocus image fusion in the visual spectrum only. Benes et al. [19] proposed a new multifocus image fusion algorithm for thermal images in which they employed pixel level weighted averaging based on a modified EOL. But such a linear combination often fails to preserve the original information in the source images, leading to a degradation in fusion performance. So, the existing literature clearly suggests that designing a focus measure and applying it for fusion across different spectra still poses a considerable challenge.

In this paper, we propose a novel focus measure using steerable local frequency based interest point detection. A recent work considers pixel intensities at different orientations for obtaining a focus measure [13]. However, the phase of a pixel carries more useful information than its intensity [20]. To the best of our knowledge, this phase information has not been captured at different orientations earlier. In this paper, we make the local frequency of the pixels, the spatial derivative of the local phase, steerable to obtain a good focus measure. For this purpose, we suggest the construction of the oriented analytic image. The proposed focus measure captures all possible sharp image features in different orientations and hence performs well across all spectra. As a second contribution, we employ our focus measure for multispectral multifocus image fusion.
Detailed experiments reveal much improved multifocus image fusion performance across different spectra.

3. Theoretical foundations

In this section, we provide the theoretical foundations behind the proposed method. In particular, the analytic image and steerable filters are discussed in detail.

3.1. Analytic image

We start with the concept of the analytic signal in 1D, which can be easily extended to higher dimensions [21]. Given a time domain signal s(t) in 1D, its analytic signal is defined as:

$$s_A(t) = s(t) + j\,s_H(t) \qquad (1)$$

where $s_H(t)$ is the Hilbert transform of $s(t)$. An image can be treated as a 2D spatial domain signal. The corresponding analytic image can be expressed as:

$$I_A(x, y) = I(x, y) + j\,I_H(x, y) \qquad (2)$$

where $I_H(x, y)$ is the Hilbert transform of $I(x, y)$. The argument of $I_A(x, y)$, defined in the spatial domain, is referred to as the local phase of $I(x, y)$. Khan et al. [10] have used the local frequency of an image to capture its dominant regions (a dominant region contains many pixels with high local frequencies). The local frequency can be determined easily, as it is the spatial derivative of the local phase. A high value of local frequency at a particular pixel of an image indicates the presence of an interest point at that location. The concept of a quadrature pair of filters can be introduced in this context. A quadrature pair of filters has the same frequency response but differs in phase by an angle of 90°, i.e., in effect the two filters must be Hilbert transforms of each other [24].
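The 1D version of this machinery is easy to reproduce. The following minimal sketch (our illustration, not code from the paper) uses scipy's `hilbert` to form the analytic signal of Eq. (1) and recovers the local frequency as the derivative of the local phase:

```python
import numpy as np
from scipy.signal import hilbert

# 1D illustration of Eq. (1): hilbert() returns the analytic signal
# s_A(t) = s(t) + j*s_H(t); its argument is the local phase, and the
# derivative of the local phase is the local frequency.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = np.cos(2.0 * np.pi * 50.0 * t)              # 50 Hz test signal

s_a = hilbert(s)                                 # analytic signal
local_phase = np.unwrap(np.angle(s_a))           # local phase
local_freq = np.diff(local_phase) / (2.0 * np.pi * (t[1] - t[0]))

print(local_freq.mean())                         # approximately 50
```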



3.2. Steerable Gaussian filter

Steerable filters can be defined as a special class of filters in which an arbitrarily oriented filter can be designed using a linear combination of a set of basis filters [22,23]. The directional derivative of a 2D Gaussian function is steerable because of its circular symmetry. In [24], Freeman and Adelson have shown that the first order x-derivative $G_1^{\theta}$ of a Gaussian filter, oriented at an arbitrary orientation $\theta$, can be expressed as a linear combination of $G_1^{0}$ and $G_1^{90}$ in the following manner:

$$G_1^{\theta} = \cos(\theta)\,G_1^{0} + \sin(\theta)\,G_1^{90} \qquad (3)$$

In the above equation, $G_1^{0}$ and $G_1^{90}$ are the basis filters, and $\cos(\theta)$ and $\sin(\theta)$ are the interpolation functions. Thus an image filtered at any orientation can be expressed as a linear combination of the image convolved with the basis filters (the convolution operation being linear). Then, we can write:



$$R_1^{0} = G_1^{0} * I \qquad (4a)$$

$$R_1^{90} = G_1^{90} * I \qquad (4b)$$

$$R_1^{\theta} = \cos(\theta)\,R_1^{0} + \sin(\theta)\,R_1^{90} \qquad (5)$$

In the above equation, $R_1^{\theta}$ represents the image I filtered with the basis filters at an arbitrary orientation $\theta$, and '*' denotes the convolution operation.
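Because convolution is linear, steering the kernel (Eq. (3)) and steering the responses (Eq. (5)) give identical results. The sketch below verifies this numerically for the first derivative of a Gaussian; it is an illustration under our own choice of kernel size and sigma, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def g1_basis(size=7, sigma=1.0):
    """Return the G1^0 (x-derivative) and G1^90 (y-derivative) basis kernels."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return -x * g, -y * g

rng = np.random.default_rng(0)
image = rng.random((64, 64))

g0, g90 = g1_basis()
r0 = convolve(image, g0)                            # Eq. (4a)
r90 = convolve(image, g90)                          # Eq. (4b)

theta = np.deg2rad(30.0)
r_theta = np.cos(theta) * r0 + np.sin(theta) * r90  # Eq. (5)

# Filtering with the explicitly steered kernel (Eq. (3)) agrees exactly:
k_theta = np.cos(theta) * g0 + np.sin(theta) * g90
assert np.allclose(r_theta, convolve(image, k_theta))
```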


4. Proposed method

Oppenheim and Lim [20] demonstrated the importance of phase in images through a series of experiments. The standard focus measures are mainly based on intensity. In this work, we explore the potential of the local phase information of the pixels in the source images for determining the focus measure. Features in an image can be oriented at any angle $\theta$ (0° ≤ $\theta$ ≤ 180°) [25]. For each pixel, the corresponding responses from the filter at different orientations need to be compared to get the maximum response. The local frequency map obtained from the analytic image does not include any knowledge of orientation. To capture orientation, we introduce the concept of the steerable local frequency map. An oriented analytic image is used to build the steerable local frequency map. The Hilbert transform, realized through the quadrature pair of filters (G4, H4), is used first to obtain the analytic image. The fourth order derivative of a Gaussian (G4) offers higher resolution analysis as it has narrow frequency tuning. In [24], the approximation to the Hilbert transform of G4, denoted by H4, is obtained using a least squares fit of the product of a 5th order polynomial (with six basis functions) and a radially symmetric Gaussian function. To obtain the oriented analytic image, we therefore require a steerable Hilbert kernel. However, since the Hilbert transform itself cannot be made steerable in its present form, we apply the concept of the steerable quadrature pair of filters $(G_4^{\theta}, H_4^{\theta})$. The analytical expression of $G_4^{\theta}$ is given by:

$$G_4^{\theta} = K_a(\theta)G_{4a} + K_b(\theta)G_{4b} + K_c(\theta)G_{4c} + K_d(\theta)G_{4d} + K_e(\theta)G_{4e} \qquad (6)$$

where $G_{4a}$, $G_{4b}$, $G_{4c}$, $G_{4d}$ and $G_{4e}$ constitute the basis set functions, and $K_a(\theta)$, $K_b(\theta)$, $K_c(\theta)$, $K_d(\theta)$ and $K_e(\theta)$ are the interpolation functions. Similarly, the analytic expression for $H_4^{\theta}$ is given by:

$$H_4^{\theta} = K_a(\theta)H_{4a} + K_b(\theta)H_{4b} + K_c(\theta)H_{4c} + K_d(\theta)H_{4d} + K_e(\theta)H_{4e} + K_f(\theta)H_{4f} \qquad (7)$$

where $H_{4a}$, $H_{4b}$, $H_{4c}$, $H_{4d}$, $H_{4e}$ and $H_{4f}$ are the basis set functions and $K_a(\theta)$, ..., $K_f(\theta)$ are the interpolation functions. The equations for the basis and interpolation functions are given in Appendix A. This steerable quadrature pair $G_4^{\theta}$ and $H_4^{\theta}$ is used to filter the original image I(x, y) to obtain the oriented analytic image $I_{A,\theta}(x, y)$ at an arbitrary orientation $\theta$. So, we can write:

$$I_{A,\theta}(x, y) = I_{G4,\theta}(x, y) + j\,I_{H4,\theta}(x, y) \qquad (8)$$

$$I_{G4,\theta}(x, y) = I(x, y) * G_4^{\theta} \qquad (9a)$$

$$I_{H4,\theta}(x, y) = I(x, y) * H_4^{\theta} \qquad (9b)$$

$I_{G4,\theta}(x, y)$ and $I_{H4,\theta}(x, y)$ together constitute the steerable quadrature filtered response of the original image I(x, y). The steerable local phase $\Phi_{\theta}(x, y)$ of the Gaussian filtered image can now be obtained using:

$$\Phi_{\theta}(x, y) = \left|\arctan\left(I_{H4,\theta}(x, y) / I_{G4,\theta}(x, y)\right)\right| \qquad (10)$$

To suppress background noise and distortions, the mean of the steerable local phase map is subtracted from the phase value at each pixel to construct the modified phase map $\Phi'_{\theta}(x, y)$ [25]:

$$\Phi'_{\theta}(x, y) = \Phi_{\theta}(x, y) - \bar{\Phi}_{\theta} \qquad (11)$$

In the above equation, $\bar{\Phi}_{\theta}$ is the mean of the steerable local phase map. The steerable local frequency map is obtained from the gradient of the modified local phase in the following manner:

$$\mathrm{Freq}_{\theta}(x, y) = \sqrt{\left(\frac{\partial \Phi'_{\theta}(x, y)}{\partial x}\right)^2 + \left(\frac{\partial \Phi'_{\theta}(x, y)}{\partial y}\right)^2} \qquad (12)$$

where

$$\partial \Phi'_{\theta}(x, y)/\partial x = \Phi'_{\theta}(x+1, y) - \Phi'_{\theta}(x, y) \qquad (13a)$$

and

$$\partial \Phi'_{\theta}(x, y)/\partial y = \Phi'_{\theta}(x, y+1) - \Phi'_{\theta}(x, y) \qquad (13b)$$

The local frequency maps obtained at different orientations are then max-pooled to obtain the resultant steerable local frequency map, $\mathrm{Freq}_{\theta\max}$:

$$\mathrm{Freq}_{\theta\max}(x, y) = \max\left(\mathrm{Freq}_{\theta_1}(x, y), \mathrm{Freq}_{\theta_2}(x, y), \ldots, \mathrm{Freq}_{\theta_{13}}(x, y)\right) \qquad (14)$$

In the above equation, $\theta_1, \theta_2, \ldots, \theta_{13}$ denote 13 orientations covering the entire range [0°, 180°] in steps of 15°. The number of orientations is chosen experimentally, as described later (Section 5.2.2).

101

We then choose the most suitable threshold T experimentally for thresholding the max-pooled local frequency map, $\mathrm{Freq}_{\theta\max}$, to compute the number of interest points in a source image. The selection of the best performing threshold (T) is also discussed later (Section 5.2.1). The number of interest points n detected in the source image is given by:



$$n = \sum_{x}\sum_{y}\left[\mathrm{Freq}_{\theta\max}(x, y) \geq T\right] \qquad (15)$$

Thus, the proposed focus measure can be normalized in [0, 1] using [9]:

$$FM_{\mathrm{proposed}} = \frac{n - n_{\min}}{n_{\max} - n_{\min}} \qquad (16)$$

Here, $n_{\max}$ is the maximum number of interest points and $n_{\min}$ is the minimum number of interest points detected among all the source images in a set I. We now present Algorithm 1, which lists the steps for obtaining the proposed focus measure; a code sketch follows the listing.

Algorithm 1: Computation of Focus Measure (FM)

Input: I, an image from a visual spectrum (VIS), near-infrared spectrum (NIR) or thermal spectrum (TH) image set.
Output: FM_proposed, the image level focus measure of the input image.

// Obtain local frequency maps at different orientations:
1. for θ = 0°:180° in steps of 15° // θ is the orientation angle
2.   Compute the 7 × 7 quadrature pair kernels G4^θ and H4^θ at orientation θ
3.   Convolve image I with the kernels G4^θ and H4^θ to obtain I_G4,θ(x, y) and I_H4,θ(x, y)
4.   Compute the oriented analytic image I_A,θ(x, y) = I_G4,θ(x, y) + j I_H4,θ(x, y)
5.   Obtain the steerable local phase map of I: Φ_θ(x, y) = abs(arctan(I_H4,θ(x, y)/I_G4,θ(x, y)))
6.   Obtain the modified steerable local phase map of I: Φ'_θ(x, y) = Φ_θ(x, y) − Φ̄_θ // Φ̄_θ is the mean of Φ_θ(x, y)
7.   Obtain the steerable local frequency Freq_θ(x, y) of I using the gradient of Φ'_θ(x, y); see Eqs. (12) and (13)
   end for
// Obtain the max-pooled steerable local frequency map Freq_θmax from the maps Freq_θ:
8. for i = 1:p
9.   for j = 1:q // p × q is the size of the max-pooled steerable local frequency map
10.    Freq_θmax(i, j) = max{Freq_θ1(i, j), Freq_θ2(i, j), ..., Freq_θ13(i, j)}
     end for
   end for
// Obtain interest points from the resultant frequency map:
11. Determine an experimental threshold T to yield interest points
12. n = 0
13. for i = 1:p
14.   for j = 1:q
15.     if Freq_θmax(i, j) >= T // T: threshold
16.       n = n + 1 // (i, j) is an interest point
        end if
      end for
    end for
// Compute the focus measure:
17. FM_proposed = (n − n_min)/(n_max − n_min) // n_max and n_min are the maximum and minimum numbers of interest points detected among all the images in the set to which I belongs
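As an illustration of Steps 5–17, the sketch below implements the per-orientation phase and frequency computations and the final normalization in numpy. It assumes the steerable quadrature responses I_G4,θ and I_H4,θ have already been computed (kernel construction is sketched after Table A2 in Appendix A); the function names, the small epsilon guarding the division in Eq. (10), and the boundary handling of the forward differences are our own choices:

```python
import numpy as np

def local_frequency_map(i_g4, i_h4):
    """Steps 5-7 of Algorithm 1 for a single orientation theta."""
    phase = np.abs(np.arctan(i_h4 / (i_g4 + 1e-12)))      # Eq. (10)
    phase = phase - phase.mean()                           # Eq. (11)
    dx = np.diff(phase, axis=1, append=phase[:, -1:])      # Eq. (13a)
    dy = np.diff(phase, axis=0, append=phase[-1:, :])      # Eq. (13b)
    return np.sqrt(dx**2 + dy**2)                          # Eq. (12)

def focus_measure(responses, threshold, n_min, n_max):
    """Steps 8-17: max-pool over orientations, threshold, normalize.

    responses: list of (i_g4, i_h4) pairs, one pair per orientation
               (13 orientations, 0 to 180 degrees in steps of 15).
    n_min, n_max: extreme interest-point counts over the image set.
    """
    freq = np.stack([local_frequency_map(g, h) for g, h in responses])
    freq_max = freq.max(axis=0)                            # Eq. (14)
    n = int((freq_max >= threshold).sum())                 # Eq. (15)
    return (n - n_min) / float(n_max - n_min)              # Eq. (16)
```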


The proposed multifocus image fusion begins with the assumption that the source images to be fused are pre-registered. At each pixel, the resultant fused image F takes the pixel from the source image having the highest max-pooled local frequency value at that location. Further, a 3 × 3 majority filter is applied for consistency verification [15]. This step ensures that a pixel in the fused image is not allowed to come from a source image if the majority of its neighbors in the two images (fused and source) differ. This measure plays an important role in characterizing the performance of image fusion algorithms. We next present Algorithm 2, which lists the steps of the proposed multifocus image fusion; a code sketch follows the listing.

Algorithm 2: Procedure for Multifocus Image Fusion

Input: I_N, multiple registered source images from the VIS, NIR or TH spectrum.
Output: F, the all-in-focus image.

// Obtain the max-pooled steerable local frequency map of each source image:
1. for k = 1:N // N is the number of source images to be fused
2.   Determine the max-pooled steerable local frequency map Freq_θmax,k for each source image using Algorithm 1 (Step 1 to Step 10)
   end for
// Obtain the final fused image F by selecting, for each pixel (i, j), the source image that yields the maximum local frequency at that pixel:
3. for i = 1:p
4.   for j = 1:q
5.     Q = arg max_{k = 1,...,N} (Freq_θmax,k(i, j))
6.     F(i, j) = I_Q(i, j)
     end for
   end for
7. Perform consistency verification using a 3 × 3 majority filter to obtain the output.
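A compact sketch of Algorithm 2 in numpy/scipy follows. The per-image frequency maps are assumed to come from Algorithm 1 (e.g., the `local_frequency_map` helper sketched earlier); the majority filter is realized here with `scipy.ndimage.generic_filter`, which is one possible implementation of the 3 × 3 consistency verification, not necessarily the authors':

```python
import numpy as np
from scipy.ndimage import generic_filter

def fuse(images, freq_maps):
    """Steps 3-7 of Algorithm 2.

    images:    list of N registered source images (p x q arrays).
    freq_maps: their max-pooled steerable local frequency maps.
    """
    stack = np.stack(images)
    labels = np.argmax(np.stack(freq_maps), axis=0)   # winning source per pixel

    # Consistency verification: replace each label by the majority label
    # in its 3x3 neighborhood, so an isolated pixel cannot come from a
    # source image that its neighbors disagree with.
    def majority(window):
        vals, counts = np.unique(window.astype(int), return_counts=True)
        return vals[np.argmax(counts)]

    labels = generic_filter(labels, majority, size=3).astype(int)
    return np.take_along_axis(stack, labels[None], axis=0)[0]
```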

5. Experimental results

In this section, we first describe the datasets used for the various experiments, along with the performance evaluation criteria. We next discuss how certain parameters are chosen experimentally. We then show a comparative performance analysis of the proposed focus measure. Finally, we demonstrate the improvements in fusion results obtained using our focus measure.

5.1. Datasets and performance metrics

For the evaluation of focus measure performance we use three multispectral datasets, one each from the visual (VIS), near-infrared (NIR) and thermal (TH) spectrum [9]. Each dataset in turn consists of seven sets of images. Some sample image sets are shown in Fig. 1. For the evaluation of multifocus image fusion in the visual spectrum, we use the same image sets as in [17] (see Fig. 2). In the near-infrared, only one image set was found to be suitable from the available NIR dataset [9] (see Fig. 3). We also use a multimodal medical image set, consisting of CT and MRI images of a human brain, to evaluate the proposed fusion method (see Fig. 4). For the thermal spectrum, we experiment with the reduced multifocus thermal image datasets developed by Benes et al. [19]. The original thermal image database consists of five multifocus image sets with 96 images in each set. All the sets contain a scene image with two objects but with different backgrounds, varying temperatures and different object distances. A reduced set of 10 images for each dataset is derived from the original pool of 96 images using EOL based activity level measurement [19]. The reduced image sets for the mobile-interface and the two bulbs are shown in Figs. 5 and 6.

The performance evaluation measures are briefly discussed below. The focus measure is evaluated based on different criteria such as monotonicity, magnitude of slope and smoothness. For this we employ the Q (quality factor) and P (peak of focus curve) performance metrics [9].

1. Q (Quality factor): The quality factor is computed from the focus curve, which is the plot of the focus measure (FM_proposed) against the image index (N). The formula for Q is given below:



$$Q = \frac{1}{N_{\max} - N_{\min} + 1} \qquad (17)$$

$$C_s[N] \geq 0.7079, \quad \text{for } N = N_{\min}, \ldots, N, \ldots, N_{\max} \qquad (18)$$

$C_s[N]$ in Eq. (18) is the focus curve normalized to the range [0, 1]. The number of focus curve samples higher than 0.7079 is used to measure the Q factor. A narrow peak in the focus curve, with a correspondingly high Q factor, is favorable.

2. P (Peak of focus curve): P represents the index of the image having the highest focus, as evaluated from the focus curve $C_s[N]$.
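In code form, Q and P reduce to a few lines (a sketch assuming `cs` is the focus curve already normalized to [0, 1]):

```python
import numpy as np

def q_factor(cs, level=0.7079):
    """Eqs. (17)-(18): Q from the span of samples at or above `level`."""
    above = np.flatnonzero(np.asarray(cs) >= level)
    return 1.0 / (above.max() - above.min() + 1)

def peak(cs):
    """P: the index of the image with the highest focus."""
    return int(np.argmax(cs))
```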

For the evaluation of multifocus image fusion in the visual and near-infrared spectra, we use MI (Mutual Information), $Q^{AB/f}$ and $Q_0$. They are described below:

1. MI (Mutual Information) [26]: MI measures the statistical dependence between two random variables and the amount of information that one variable contains about the other. Here, the MI between source images A and B and fused image F is given by:

$$MI = I_{AF} + I_{BF} \qquad (19)$$

In Eq. (19), $I_{AF}$ is the mutual information between the source image A and the fused image F, whereas $I_{BF}$ is the mutual information between the source image B and the fused image F. A high value of MI indicates a better result.
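A standard joint-histogram estimate of the mutual information (our illustrative sketch; [26] should be consulted for the exact estimator behind the reported numbers) is:

```python
import numpy as np

def mutual_information(a, f, bins=256):
    """Estimate I(A;F) from the joint gray-level histogram of a and f."""
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=bins)
    p_af = joint / joint.sum()
    p_a = p_af.sum(axis=1, keepdims=True)      # marginal of A
    p_f = p_af.sum(axis=0, keepdims=True)      # marginal of F
    nz = p_af > 0
    return float((p_af[nz] * np.log2(p_af[nz] / (p_a @ p_f)[nz])).sum())

# Eq. (19): MI = I(A;F) + I(B;F)
# mi = mutual_information(img_a, fused) + mutual_information(img_b, fused)
```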

2. $Q^{AB/f}$ [27]: This metric reflects the quality of visual information obtained from the fusion of the input images. $Q^{AB/f}$ can be defined as:

$$Q^{AB/f} = \frac{\sum_{n=1}^{N}\sum_{m=1}^{M}\left(Q^{AF}(n, m)\,w^{A}(n, m) + Q^{BF}(n, m)\,w^{B}(n, m)\right)}{\sum_{n=1}^{N}\sum_{m=1}^{M}\left(w^{A}(n, m) + w^{B}(n, m)\right)} \qquad (20)$$

In Eq. (20), A and B denote the source images and f denotes the final fused image. $Q^{AF}$ and $Q^{BF}$ represent the amount of edge information preserved in F from image A and from image B, respectively. $w^{A}$ and $w^{B}$ are weights derived by convolving the Sobel operator with images A and B [27]. $Q^{AB/f}$ varies in the range [0, 1], where a value of 1 corresponds to the best performance.

3. $Q_0$ [28]: This metric is designed by modeling any image distortion as a combination of three factors, namely, loss of correlation, luminance distortion, and contrast distortion. The value of $Q_0$ between source images A, B and fused image F is expressed as:

$$Q_0(A, B, F) = \left(Q_0(A, F) + Q_0(B, F)\right)/2 \qquad (21)$$

where Q0(A, F) is defined as:

$$Q_0(A, F) = \frac{\sigma_{af}}{\sigma_a\,\sigma_f} \cdot \frac{2\,\bar{a}\,\bar{f}}{(\bar{a})^2 + (\bar{f})^2} \cdot \frac{2\,\sigma_a\,\sigma_f}{\sigma_a^2 + \sigma_f^2} \qquad (22)$$


Fig. 1. Sample images from each multispectral dataset used in this paper for focus measure evaluation: (a) ‘Loudspeaker’ (VIS), (b) ‘Head’ (NIR), and (c) ‘Circuit’ (TH).

Fig. 2. Visual multifocus image datasets used in this paper for evaluation of proposed image fusion method: (a) ‘Clock’, (b) ‘Desk’, (c) ‘Lab’, and (d) ‘Pepsi’.

Here, $\bar{a}$ and $\bar{f}$ are the means, and $\sigma_a$ and $\sigma_f$ the standard deviations, of the input image A and the fused image F, respectively; $\sigma_{af}$ denotes the covariance between A and F. The dynamic range of $Q_0(A, B, F)$ is [−1, 1], with the best possible value being 1.

Three metrics, RMSE (Root Mean Square Error), MAE (Mean Absolute Error) and CC (Cross Correlation), as in [19], are employed to evaluate the performance of the proposed fusion method in the thermal spectrum. They are described below.

4. RMSE: The Root Mean Square Error between the fused image F and the reference ground truth image R is given by:

$$\mathrm{RMSE} = \sqrt{\frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M}\left|R(i, j) - F(i, j)\right|^2} \qquad (23)$$

Here, NM is the number of pixels in the image. The lower the value of RMSE, the better the fusion performance.


Fig. 3. Near-infrared (NIR) multifocus image dataset used in this paper for evaluation of proposed image fusion method: ‘keyboard’.

5. MAE: The Mean Absolute Error between the fused image F and the reference ground truth image R is given by:

$$\mathrm{MAE} = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M}\left|R(i, j) - F(i, j)\right| \qquad (24)$$

The lower the value of MAE, the better the fusion performance.

6. CC: The Cross Correlation between the fused image F and the reference ground truth image R can be expressed as:

Fig. 4. Medical image dataset of Brain used in this paper for evaluation of proposed image fusion method. (a) CT and (b) MRI.

$$\mathrm{CC} = \frac{2\sum_{i=1}^{N}\sum_{j=1}^{M} R(i, j)\,F(i, j)}{\sum_{i=1}^{N}\sum_{j=1}^{M} R(i, j)^2 + \sum_{i=1}^{N}\sum_{j=1}^{M} F(i, j)^2} \qquad (25)$$

The dynamic range of CC is [0, 1], with the best possible value being 1.
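The three reference-based metrics translate directly into numpy (a minimal sketch mirroring Eqs. (23)–(25)):

```python
import numpy as np

def rmse(r, f):
    return float(np.sqrt(np.mean(np.abs(r - f) ** 2)))    # Eq. (23)

def mae(r, f):
    return float(np.mean(np.abs(r - f)))                  # Eq. (24)

def cross_correlation(r, f):
    return float(2.0 * np.sum(r * f)
                 / (np.sum(r ** 2) + np.sum(f ** 2)))     # Eq. (25)
```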

Fig. 5. Reduced thermal multifocus image dataset used in this paper for evaluation of proposed image fusion method: Set 1 (Mobile phone and RS 232 interface).

Fig. 6. Reduced thermal multifocus image dataset used in this paper for evaluation of proposed image fusion method: Set 2 (Two bulbs).


Fig. 7. Specimen focus curves for ‘Loudspeaker’ and ‘Mixer’ image sets in VIS spectrum for different threshold values.

Fig. 8. Specimen focus curves for ‘Head’ and ‘Office desk’ image sets in NIR spectrum for different threshold values.

Fig. 9. Specimen focus curves for ‘Circuit breakers’ and ‘Circuit’ image sets in TH spectrum for different threshold values.


Fig. 10. Specimen focus curves for ‘Loudspeaker’ and ‘Mixer’ image sets in VIS spectrum for different number of orientations.

Fig. 11. Specimen focus curves for ‘Head’ and ‘Office desk’ image sets in NIR spectrum for different number of orientations.

Fig. 12. Specimen focus curves for ‘Circuit breakers’ and ‘Circuit’ image sets in TH spectrum for different number of orientations.


Fig. 13. Specimen focus curves for ‘Loudspeaker’ and ‘Mixer’ image sets in visual spectrum.

Fig. 14. Specimen focus curves for ‘Head’ and ‘Office desk’ image sets in near-infrared spectrum.

Fig. 15. Specimen focus curves for ‘Circuit breakers’ and ‘Circuit’ image sets in thermal spectrum.


Table 1. P and Q factor values of seven sets of images in the visual (VIS) spectrum.

Object      | Metric | Proposed | FH     | FAST   | HL     | EOL    | SML    | SF     | P (subjective)
Guitar      | P      | 5        | 6      | 5      | 8      | 5      | 5      | 5      | 5
            | Q      | 0.3333   | 0.2500 | 0.2500 | 0.2000 | 1.0000 | 0.5000 | 0.5000 | –
Head-Phones | P      | 7        | 5      | 8      | 0      | 7      | 7      | 7      | 7
            | Q      | 0.3333   | 0.1667 | 0.3333 | 0.3333 | 0.5000 | 0.5000 | 0.5000 | –
Keyboard    | P      | 5        | 4      | 3      | 0      | 5      | 5      | 5      | 5
            | Q      | 0.2000   | 0.2000 | 0.1250 | 0.3333 | 0.3333 | 0.2000 | 0.1667 | –
Keys        | P      | 3        | 0      | 2      | 7      | 2      | 2      | 2      | 2
            | Q      | 0.3333   | 0.1667 | 0.2000 | 0.2500 | 0.5000 | 0.5000 | 0.5000 | –
Loudspeaker | P      | 9        | 5      | 0      | 8      | 8      | 8      | 8      | 8
            | Q      | 1.0000   | 0.33   | 0.3333 | 0.1429 | 1.0000 | 1.0000 | 1.0000 | –
Mixer       | P      | 9        | 8      | 5      | 11     | 8      | 8      | 8      | 8
            | Q      | 0.2500   | 0.5000 | 0.33   | 0.1250 | 1.0000 | 1.0000 | 0.3333 | –
Sunglass    | P      | 6        | 5      | 6      | 7      | 5      | 5      | 5      | 5
            | Q      | 0.3333   | 0.5000 | 0.1250 | 0.3333 | 0.5000 | 0.2500 | 0.3333 | –

Table 2. P and Q factor values of seven sets of images in the near-infrared (NIR) spectrum.

Object      | Metric | Proposed | FH     | FAST   | HL     | EOL    | SML    | SF     | P (subjective)
Building    | P      | 7        | 6      | 6      | 20     | 6      | 6      | 6      | 6
            | Q      | 0.2000   | 0.0667 | 0.5000 | 0.2500 | 0.2500 | 0.2500 | 0.1667 | –
Car         | P      | 6        | 6      | 4      | 20     | 4      | 4      | 4      | 7
            | Q      | 0.2500   | 0.0667 | 0.2500 | 0.2500 | 0.5000 | 0.2500 | 0.2500 | –
Corridor    | P      | 8        | 7      | 4      | 20     | 7      | 7      | 7      | 7
            | Q      | 0.2000   | 0.1000 | 0.1000 | 0.2500 | 0.3333 | 0.1429 | 0.1429 | –
Head        | P      | 5        | 4      | 19     | 2      | 4      | 4      | 4      | 4
            | Q      | 1.0000   | 0.3333 | 0.1667 | 0.2000 | 1.0000 | 1.0000 | 1.0000 | –
Keyboard    | P      | 5        | 4      | 16     | 0      | 4      | 3      | 3      | 3
            | Q      | 0.5000   | 0.2500 | 0.1000 | 0.1250 | 0.5000 | 0.0830 | 0.3333 | –
Office desk | P      | 7        | 6      | 7      | 10     | 6      | 6      | 6      | 6
            | Q      | 0.5000   | 0.2000 | 0.3333 | 0.0909 | 1.0000 | 1.0000 | 1.0000 | –
Pens        | P      | 8        | 7      | 18     | 3      | 7      | 7      | 7      | 7
            | Q      | 1.0000   | 0.2500 | 0.2500 | 0.1429 | 1.0000 | 1.0000 | 1.0000 | –

Table 3. P and Q factor values of seven sets of images in the thermal (TH) spectrum.

Object           | Metric | Proposed | FH     | FAST   | HL     | EOL    | SML    | SF     | P (subjective)
Circuit Breakers | P      | 19       | 17     | 20     | 18     | 17     | 17     | 17     | 17
                 | Q      | 1.0000   | 0.5000 | 0.0769 | 0.2500 | 0.5000 | 0.0400 | 0.0476 | –
Building         | P      | 26       | 25     | 3      | 23     | 12     | 12     | 25     | 25
                 | Q      | 1.0000   | 0.5000 | 0.0500 | 0.2000 | 0.1000 | 0.0476 | 1.0000 | –
Circuit          | P      | 7        | 6      | 26     | 5      | 4      | 26     | 5      | 5
                 | Q      | 0.3333   | 0.3333 | 0.0909 | 0.1250 | 0.3333 | 0.0370 | 0.3333 | –
Engine           | P      | 15       | 15     | 0      | 16     | 14     | 14     | 17     | 14
                 | Q      | 0.2500   | 0.5000 | 0.2000 | 0.2000 | 0.3333 | 0.0370 | 0.1429 | –
Printer          | P      | 25       | 17     | 0      | 16     | 0      | 0      | 0      | 18
                 | Q      | 0.2000   | 0.3333 | 0.0909 | 0.2000 | 0.3333 | 0.0435 | 0.0500 | –
Server           | P      | 20       | 21     | 3      | 18     | 20     | 6      | 20     | 20
                 | Q      | 0.3333   | 0.3333 | 0.1000 | 0.1667 | 1.0000 | 0.0435 | 1.0000 | –
Tube             | P      | 6        | 19     | 0      | 20     | 0      | 2      | 0      | 20
                 | Q      | 0.1670   | 0.0500 | 0.0625 | 0.1429 | 0.1667 | 0.0714 | 0.0526 | –

5.2. Selection of threshold and number of orientations

We perform experiments to judiciously select the number of orientations (O) and the threshold (T) in the proposed method.

5.2.1. Selection of threshold

We experimentally obtained the best performing value of the threshold parameter T. For the range of T, we used the [min, max] range of the max-pooled local frequency map. T is set within this range based on the performance of the focus curves $C_s[N]$ [12] in terms of accuracy, width at 50% maximum and number of local maxima. Some sample focus curves $C_s[N]$ obtained for five different threshold values (T1, T2, T3, T4, T5) in this range are shown in Figs. 7–9. The focus measure curves reveal an interesting trend.

The curves are almost comparable across the varying threshold values, indicating the robustness of the proposed focus measure. However, in terms of accuracy and width at 50% maximum, the experimentally selected value T3 (=0.0607) emerges as the optimal choice for the threshold.

5.2.2. Selection of number of orientations

Features in an image can be oriented at any angle θ within the range 0–180° [25]. The choice of the number of orientation intervals influences the detection of oriented features in the input image. Note that too few intervals may fail to capture the finer oriented features present in the image. On the other hand, a large number of orientation intervals can be unreliable (sensitive to noise), in addition to increasing the computational overhead.

Table 4. Fusion results of the proposed method: MI, Q^{AB/f} and Q_0 values with and without consistency verification (CV).

Images                    | MI w/o CV | MI with CV | Q^{AB/f} w/o CV | Q^{AB/f} with CV | Q_0 w/o CV | Q_0 with CV
Multifocus (VIS spectrum)
Clock                     | 8.5045 | 8.5563 | 0.6179 | 0.6701 | 0.9783 | 0.9785
Desk                      | 7.9716 | 8.0246 | 0.5979 | 0.6697 | 0.9585 | 0.9586
Lab                       | 8.3728 | 8.4551 | 0.6160 | 0.6838 | 0.9758 | 0.9759
Pepsi                     | 8.2292 | 8.2621 | 0.6344 | 0.6838 | 0.9810 | 0.9810
Multifocus (NIR spectrum)
Keyboard                  | 7.9217 | 7.9342 | 0.6561 | 0.6853 | 0.9907 | 0.9908
Medical
CT-MR                     | 7.0279 | 7.0278 | 0.6677 | 0.6870 | 0.5010 | 0.5028

Fig. 16. Fused images obtained using the proposed method in the visual spectrum for four datasets performed without consistency verification (FI) and with consistency verification (FI-CV). (a) ‘Clock’ FI, (b) ‘Clock’ FI-CV; (c) ‘Desk’ FI, (d) ‘Desk’ FI-CV; (e) ‘Lab’ FI, (f) ‘Lab’ FI-CV; (g) ‘Pepsi’ FI, and (h) ‘Pepsi’ FI-CV.

Table 5. Multifocus image fusion: performance comparison with (a) the FH IPD based method and (b) the best results of multiresolution based fusion methods.

Spectrum                     | Method          | MI (mean) | Q^{AB/f} (mean) | Q_0 (mean)
Visual (VIS) spectrum        | Proposed method | 8.3245 | 0.6768 | 0.9735
                             | FH              | 8.2493 | 0.5804 | 0.9746
                             | DWT             | 2.4126 | 0.6866 | 0.7206
                             | SWT             | 2.4510 | 0.7140 | 0.7555
                             | DTCWT           | 2.4814 | 0.7231 | 0.7650
                             | CVT             | 2.4387 | 0.7075 | 0.7421
                             | CT              | 2.3978 | 0.6700 | 0.7076
                             | NSCT            | 2.4804 | 0.7219 | 0.7799
Near-infrared (NIR) spectrum | Proposed method | 7.9342 | 0.6853 | 0.9908
                             | FH              | 8.0198 | 0.7105 | 0.9912
                             | DWT             | 5.9485 | 0.5135 | 0.9061
                             | DTCWT           | 7.3575 | 0.7082 | 0.9902

So, as a trade-off, five intermediate choices, O1 (7 orientations in steps of 30°), O2 (10 orientations in steps of 20°), O3 (13 orientations in steps of 15°), O4 (16 orientations in steps of 12°) and O5 (19 orientations in steps of 10°), are used with a fixed threshold (T). The focus measure curves $C_s[N]$ for the different numbers of orientations, for some sample image sets from the visual (VIS), near-infrared (NIR) and thermal (TH) spectra, are shown in Figs. 10–12.

These focus measure curves are evaluated in terms of the accuracy, width at 50% maximum and number of local maxima of the obtained focus curves $C_s[N]$ [12]. In the visual (VIS) spectrum the performance is uniform with respect to accuracy, but in terms of width at 50% maximum O3 yields better performance. As can be seen from the focus curves in the NIR spectrum, O3 also performs better there.


Fig. 17. Fused images obtained using the Fast Hessian (FH) IPD based method in the visual spectrum for four datasets performed without consistency verification (FI) and with consistency verification (FI-CV). (a) ‘Clock’ FI, (b) ‘Clock’ FI-CV; (c) ‘Desk’ FI, (d) ‘Desk’ FI-CV; (e) ‘Lab’ FI, (f) ‘Lab’ FI-CV; (g) ‘Pepsi’ FI, and (h) ‘Pepsi’ FI-CV.

Fig. 18. Fused images obtained using the DWT method in the visual spectrum for four datasets. (a) ‘Clock’, (b) ‘Desk’, (c) ‘Lab’, and (d) ‘Pepsi’.

Fig. 19. Fused images obtained using the DTCWT method in the visual spectrum for four datasets. (a) ‘Clock’, (b) ‘Desk’, (c) ‘Lab’, and (d) ‘Pepsi’.

For example, in the case of the 'Office desk' image set, O3 produces a peak at image index 7, which is nearest to the index of the image having the highest focus according to the subjective assessment test given in [8]. In the TH spectrum there are a number of local maxima for each number of orientations, due to limited resolution. But based on width at 50% maximum and accuracy it is clear that O3 performs better. So, we have used O3 (13 orientations in steps of 15°) in our work.

5.3. Performance analysis of the multispectral focus measure

We compare our proposed focus measure with other such measures from two categories, namely, (i) the standard intensity driven focus measures and (ii) the recently developed IPD based focus

measures. The first category includes Energy of Laplacian (EOL), Sum-Modified Laplacian (SML) and Spatial Frequency (SF). From the second category we compare with Fast Hessian (FH), Harris-Laplace (HL) and Features from Accelerated Segment Test (FAST) [9]. The standard focus measures are known to yield good results. EOL and its modified adaptation SML are high performing derivative based focus measures. On the other hand, IPD based focus measures perform relatively well when applied to multispectral images. We now show focus curves $C_s[N]$ for the proposed focus measure for datasets from the different spectra.


Fig. 20. Fused images obtained in the near-infrared (NIR) spectrum performed without consistency verification (FI) and with consistency verification (FI-CV). (a) Proposed method FI, (b) proposed method FI-CV, (c) FH IPD based method FI, (d) FH IPD based method FI-CV, (e) DWT method and (f) DTCWT method.

Table 6. Thermal (TH) multifocus image fusion with the reduced dataset: RMSE.

Image set    | EOL based pixel level weighted averaging (AL fusion) [19] | FH method | Proposed method
Mobile-RS232 | 0.1803 | 0.0183 | 0.0172
Bulbs set 1  | 0.1999 | 0.0307 | 0.0184
Bulbs set 2  | 0.1342 | 0.0293 | 0.0160
Bulbs set 3  | 0.2648 | 0.0541 | 0.0313
Bulbs set 4  | 0.3307 | 0.0589 | 0.0368

Please see Fig. 13 for the focus curves of the 'Loudspeaker' and the 'Mixer' datasets in the visual spectrum, Fig. 14 for the focus curves of the 'Head' and the 'Office desk' datasets in the near-infrared spectrum, and Fig. 15 for the focus curves of the 'Circuit breakers' and the 'Circuit' datasets in the thermal spectrum. A good focus measure possesses the characteristics of unimodality and monotonicity, and is sensitive to defocus [9]. Our method exhibits all the desirable characteristics warranted of a good focus measure. The focus curves evaluated for the proposed focus measure reach a global maximum and decrease monotonically as the defocus increases on either side. However, a

Table 7. Thermal (TH) multifocus image fusion with the reduced dataset: CC and MAE.

Image set    | FH method CC | FH method MAE | Proposed method CC | Proposed method MAE
Mobile-RS232 | 0.9933 | 0.0077 | 0.9944 | 0.0078
Bulbs set 1  | 0.9942 | 0.0097 | 0.9979 | 0.0075
Bulbs set 2  | 0.9891 | 0.0167 | 0.9969 | 0.0097
Bulbs set 3  | 0.9832 | 0.0250 | 0.9946 | 0.0210
Bulbs set 4  | 0.9821 | 0.0348 | 0.9932 | 0.0309

few false maxima and minima are observed in the focus curves of the thermal images, because of poor resolution due to the limited focal length. It is reported in [9] that the interest point detectors perform dismally in the visual spectrum as compared to the standard focus measures. However, their performance improves substantially in the other spectra (thermal, near-infrared). Our interest point based focus measure shows decent performance across all the spectra.

Comparative results for the visual spectrum are shown in Table 1. EOL, SML and SF perform well for most of the datasets. Compared


Fig. 21. Ground truth (GT) and fused images (FI) obtained using the proposed method in the thermal spectrum for five datasets. (a) Mobile_RS232 GT, (b) Mobile_RS232 FI; (c) Bulbs Set 1 GT, (d) Bulbs Set 1 FI; (e) Bulbs Set 2 GT, (f) Bulbs Set 2 FI; (g) Bulbs Set 3 GT, (h) Bulbs Set 3 FI; (i) Bulbs Set 4 GT, and (j) Bulbs Set 4 FI.


Fig. 22. Fused images obtained using the Fast Hessian (FH) IPD based method in the thermal spectrum for five datasets. (a) Mobile_RS232, (b) Bulbs Set 1, (c) Bulbs Set 2, (d) Bulbs Set 3, and (e) Bulbs Set 4.

to the other interest point detectors, the proposed IPD based focus measure outperforms FAST, FH and HL in most of the cases. The proposed method is also consistent with the subjective analysis in terms of the P metric, as reported in [9]. The results for the near-infrared spectrum in Table 2 show significant improvements in performance over both the other interest point detector based focus measures and the standard focus measures. For the Keyboard dataset in the near-infrared spectrum, the proposed focus measure yields a Q value of 0.5, whereas the reported Q values of FH, FAST and HL are 0.25, 0.10 and 0.1250, respectively. Our Q value is comparable to that of EOL and is better than those of SML and SF, whose reported values are 0.0830 and 0.3333. We outperform SML and SF in most of the cases. We have outdone FH in all the cases and perform much better than FAST and HL. Comparative analysis reveals that the performance of the proposed focus measure is best for the thermal images, as shown in Table 3. Our detector performs better than SML, SF, FAST and HL, and is comparable to EOL and FH. In the 'Circuit breakers' dataset, the best performing interest point detector based focus measure is FH and the best standard focus measure is EOL, each having a Q value of 0.5. The proposed focus

measure, having a Q value of 1.0, easily surpasses both. The superior performance of our focus measure as compared to the standard interest point detector based focus measures is due to the use of phase information at various orientations.

The average execution time for obtaining the focus measure using the proposed method is 2 s on a desktop PC with a 3.4 GHz Intel Core CPU and 8 GB RAM. Our method tends to be slower than some of the other focus measures because we have to compute the local frequency map at thirteen different orientations. However, note that Minhas et al. [13] have reported an average execution time of 3.5 s for their orientation-based focus measure with the same 7 × 7 window size as ours.

5.4. Performance analysis of multifocus fusion results

For multifocus fusion of images in the visual spectrum, we compare our method with the highly accurate multiresolution transform domain methods, such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Curvelet Transform (CVT), Contourlet Transform (CT), Dual Tree Complex Wavelet


Transform (DTCWT) and Non-Subsampled Contourlet Transform (NSCT) [17,18,29]. Since our fusion method is essentially based on interest point detection, we also compare it with the best performing Fast Hessian (FH) based fusion scheme from the same category [9]. For thermal fusion, we choose a recently reported EOL based method [19] for comparison, and in addition we again compare with the FH based fusion scheme. Overall, we provide an extensive comparison with several recent and well-known spatial and transform domain fusion methods.

Table 4 shows that the performance of the proposed method improves with the inclusion of consistency verification (CV). Fig. 16 demonstrates the same results qualitatively. Since only one medical image set is available, we simply report the result of the proposed method on it without any comparison. Next, in Table 5, we show that our method performs better than all the multiresolution transform based methods [17] in terms of a much higher MI and a slightly higher Q0, and is comparable in terms of QAB/f (only marginally lower). Comparison with the FH based fusion scheme reveals an improvement in terms of the MI and QAB/f values. In the VIS spectrum, for perceptual quality evaluation of the fused images, we include fused images obtained using the FH, DWT and DTCWT methods (see Figs. 17–19). DWT is a basic method and performs moderately well, while DTCWT is highly efficient [17]. The quality of the fused images obtained using the FH method is found to be inferior to that of our method. Some artefacts in the form of a halo effect are observed in the case of DWT based fusion (shown using a red square in Fig. 18). So, we can infer that the proposed method surpasses DWT based fusion. The perceptual quality of the fused images obtained by the DTCWT method is, however, comparable with that of the proposed method. But quantitatively, in terms of the metrics MI and Q0, the proposed fusion method surpasses DTCWT, while it is comparable in terms of QAB/f. In the NIR spectrum, the visual quality of the fused images obtained using the proposed method is comparable with that of the FH based method and better than that of the DWT and DTCWT based methods (see Fig. 20). For fusion in the thermal spectrum, the proposed scheme performs significantly better than the pixel-level weighted averaging method [19] and the FH based scheme, yielding a lower RMSE for all five datasets. These results are given in Table 6. In addition, the CC and MAE values shown in Table 7 are quite promising. The fused results presented in Figs. 21 and 22 clearly indicate that the fusion scheme based on our focus measure gives high quality output.

6. Conclusions and future work

In this paper, we present a new focus measure based on steerable local frequency based interest point detection. The proposed focus measure is shown to perform well in different spectra. The better performance of the proposed focus measure is due to the use of orientation selective local frequency in the source images. We further demonstrated that the proposed focus measure improves multispectral fusion. In the visual spectrum, our fusion scheme outperforms some of the robust and efficient multiresolution transform based methods, in addition to some IPD based approaches. In the near-infrared spectrum the proposed fusion method offers a decent performance in comparison with the spatial and transform domain based approaches. In the thermal spectrum, the results show significant improvement over previously reported results.
In the future, we plan to extend pixel based fusion to region or object level fusion for further improvement in performance. Another possible direction for future research is to incorporate importance measures based on intensity/color information to obtain a better focus measure, which in turn would also improve the fusion results.

Appendix A

We include here Tables A1 and A2 from [24], which were used for the computation of the oriented analytic image.

Table A1. X–Y separable basis set and interpolation functions for the fourth derivative of a Gaussian.

$G_{4a} = 1.246\,(0.75 - 3x^2 + x^4)\,e^{-(x^2+y^2)}$ | $K_a(\theta) = \cos^4(\theta)$
$G_{4b} = 1.246\,(-1.5x + x^3)(y)\,e^{-(x^2+y^2)}$ | $K_b(\theta) = -4\cos^3(\theta)\sin(\theta)$
$G_{4c} = 1.246\,(x^2 - 0.5)(y^2 - 0.5)\,e^{-(x^2+y^2)}$ | $K_c(\theta) = 6\cos^2(\theta)\sin^2(\theta)$
$G_{4d} = 1.246\,(-1.5y + y^3)(x)\,e^{-(x^2+y^2)}$ | $K_d(\theta) = -4\cos(\theta)\sin^3(\theta)$
$G_{4e} = 1.246\,(0.75 - 3y^2 + y^4)\,e^{-(x^2+y^2)}$ | $K_e(\theta) = \sin^4(\theta)$

Table A2. X–Y separable basis set and interpolation functions fit to the Hilbert transform of the fourth derivative of a Gaussian.

$H_{4a} = 0.3975\,(7.189x - 7.501x^3 + x^5)\,e^{-(x^2+y^2)}$ | $K_a(\theta) = \cos^5(\theta)$
$H_{4b} = 0.3975\,(1.438 - 4.501x^2 + x^4)(y)\,e^{-(x^2+y^2)}$ | $K_b(\theta) = -5\cos^4(\theta)\sin(\theta)$
$H_{4c} = 0.3975\,(x^3 - 2.225x)(y^2 - 0.6638)\,e^{-(x^2+y^2)}$ | $K_c(\theta) = 10\cos^3(\theta)\sin^2(\theta)$
$H_{4d} = 0.3975\,(y^3 - 2.225y)(x^2 - 0.6638)\,e^{-(x^2+y^2)}$ | $K_d(\theta) = -10\cos^2(\theta)\sin^3(\theta)$
$H_{4e} = 0.3975\,(1.438 - 4.501y^2 + y^4)(x)\,e^{-(x^2+y^2)}$ | $K_e(\theta) = 5\cos(\theta)\sin^4(\theta)$
$H_{4f} = 0.3975\,(7.189y - 7.501y^3 + y^5)\,e^{-(x^2+y^2)}$ | $K_f(\theta) = -\sin^5(\theta)$
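For completeness, the sketch below shows how the 7 × 7 steerable kernel G4^θ of Eq. (6) can be sampled from the Table A1 expressions; the grid extent is our own assumption, since [24] gives the continuous forms only. H4^θ (Eq. (7)) follows the same pattern with the six basis functions and interpolation weights of Table A2:

```python
import numpy as np

def g4_kernel(theta, size=7, extent=2.0):
    """Sample G4^theta (Eq. (6)) on a size x size grid using Table A1."""
    r = np.linspace(-extent, extent, size)
    x, y = np.meshgrid(r, r)
    e = np.exp(-(x**2 + y**2))
    basis = [
        1.246 * (0.75 - 3 * x**2 + x**4) * e,         # G4a
        1.246 * (-1.5 * x + x**3) * y * e,            # G4b
        1.246 * (x**2 - 0.5) * (y**2 - 0.5) * e,      # G4c
        1.246 * (-1.5 * y + y**3) * x * e,            # G4d
        1.246 * (0.75 - 3 * y**2 + y**4) * e,         # G4e
    ]
    c, s = np.cos(theta), np.sin(theta)
    weights = [c**4, -4 * c**3 * s, 6 * c**2 * s**2, -4 * c * s**3, s**4]
    return sum(w * b for w, b in zip(weights, basis))
```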

Acknowledgements

The authors would like to thank M. Zukal, J. Mekyska, P. Cika and Z. Smekal of Brno University of Technology, Czech Republic, for providing access to the multispectral database (http://splab.cz/en/download/databaze/multispec). The authors are also grateful to V. Espinosa-Duró from EUP Mataró, Barcelona, Spain, for granting access to the thermal multifocus image datasets (http://splab.cz/en/download/databaze multi-focus-thermal-image-database).

References

[1] Z. Wang, D. Ziou, C. Armenakis, D. Li, Q. Li, A comparative analysis of image fusion methods, IEEE Trans. Geosci. Rem. Sens. 43 (2005) 1391–1402.
[2] T. Stathaki, Image Fusion: Algorithms and Applications, Academic Press, 2008.
[3] S. Li, J.T. Kwok, Y. Wang, Combination of images with diverse focuses using the spatial frequency, Inform. Fusion 2 (2001) 169–176.
[4] J. Tian, L. Chen, L. Ma, W. Yu, Multi-focus image fusion using a bilateral gradient-based sharpness criterion, Opt. Commun. 284 (2011) 80–87.
[5] S. Li, B. Yang, Multi-focus image fusion by combining curvelet and wavelet transform, Pattern Recogn. Lett. 29 (2008) 1295–1301.
[6] E. Krotkov, Focussing, Comput. Vis. 1 (1987) 223–237.
[7] M. Subbarao, T. Choi, A. Nikzad, Focussing techniques, Opt. Eng. 32 (1993) 2824–2836.
[8] J. Mekyska, M. Zukal, P. Cika, Z. Smekal, Interest points as a focus measure, in: 35th IEEE International Conference on Telecommunications and Signal Processing, 2012, pp. 774–778.
[9] M. Zukal, J. Mekyska, P. Cika, Z. Smekal, Interest points as a focus measure in multi-spectral imaging, Radioengineering 22 (2013) 68–81.
[10] J. Khan, S. Bhuiyan, R. Adhami, Feature point extraction from the local frequency map of an image, Electr. Comput. Eng. 2012 (2012) 1–15.
[11] C. Wu, Q. Wang, A novel approach for interest point detection based on phase congruency, IEEE TENCON Conf. (2005) 1–6.
[12] X.Y. Liu, W.H. Wang, Y. Sun, Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear, Microscopy 227 (2007) 15–23.
[13] R. Minhas, A.A. Mohammed, Q.M.J. Wu, An efficient algorithm for focus measure computation in constant time, IEEE Trans. Circ. Syst. Video Technol. 22 (2012) 152–156.
[14] J. Tian, L. Chen, Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure, Signal Process. 92 (2012) 2137–2146.
[15] H. Zhao, Z. Shang, Y.Y. Tang, B. Fang, Multi-focus image fusion based on the neighbor distance, Pattern Recogn. 46 (2013) 1002–1011.
[16] M. Faundez-Zanuy, J. Mekyska, V. Espinosa-Duró, On the focusing of thermal images, Pattern Recogn. Lett. 32 (2011) 1548–1557.
[17] S. Li, B. Yang, J. Hu, Performance comparison of different multi-resolution transforms for image fusion, Inform. Fusion 12 (2011) 74–84.

[18] S. Das, M.K. Kundu, A neuro-fuzzy approach for medical image fusion, IEEE Trans. Biomed. Eng. 60 (2013) 3347–3353.
[19] R. Benes, P. Dvorak, M. Faundez-Zanuy, V. Espinosa-Duró, J. Mekyska, Multifocus thermal image fusion, Pattern Recogn. Lett. 34 (2013) 536–544.
[20] A.V. Oppenheim, J.S. Lim, The importance of phase in signals, Proc. IEEE 69 (1981) 529–541.
[21] G. Granlund, H. Knutsson, Signal Processing for Computer Vision, Kluwer Academic, Dordrecht, Netherlands, 1995.
[22] P. Danielsson, O. Seger, Rotation invariance in gradient and higher order derivative detectors, Comput. Vision Graph. Image Process. 49 (1990) 198–221.
[23] W.T. Freeman, E.H. Adelson, Steerable Filters, Topical Mtg. Image Understanding Machine Vision, Opt. Soc. Amer., Tech. Digest Series 14, 1989.


[24] W.T. Freeman, E.H. Adelson, The design and use of steerable filters, IEEE Trans. Pattern Anal. Mach. Intell. 13 (1991) 891–906.
[25] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imag. 8 (1989) 263–269.
[26] G. Qu, D. Zhang, P. Yan, Information measure for performance of image fusion, Electron. Lett. 38 (7) (2002) 313–315.
[27] C.S. Xydeas, V. Petrovic, Objective image fusion performance measure, Electron. Lett. 36 (2000) 313–315.
[28] Z. Wang, A. Bovik, A universal image quality index, IEEE Signal Process. Lett. 9 (2002) 81–84.
[29] W. Xiuqing, Z. Rong, X. Yunxiang, A method of wavelet-based edge detection with data fusion for multiple images, in: Proceedings of the 3rd World Congress on Intelligent Control and Automation, China, 2000, pp. 2691–2694.
