Multispectral Multifocus Image Fusion with Guided Steerable Frequency and Improved Saliency

Sudipta Banerjee
Department of ETCE, Jadavpur University, Kolkata, India.

Vijay N. Gangapure
Department of ETCE, Jadavpur University, Kolkata, India.

Ananda S. Chowdhury
Department of ETCE, Jadavpur University, Kolkata, India.
[email protected]

ABSTRACT Multispectral multifocus image fusion remains a challenging problem for the computer vision community. In this paper, we propose a novel solution to this problem using guided filtering, steerable local frequency and an improved saliency model. First, promising fusion results are obtained through guided steerable local frequency maps. An accurate saliency map is then developed to further enhance these fusion results. Extensive experimentation on the visual, near-infrared and thermal spectra clearly demonstrates the superiority of the proposed approach over some recently published works.

Keywords Image fusion, Steerable local frequency, Guided filter, Saliency map

1. INTRODUCTION

Image fusion aims at combining the complementary information from multiple source images of a scene to build a single image of improved quality. Image fusion finds enormous applications in diverse fields such as medical imaging, remote sensing, surveillance and law enforcement, biometry and multimedia processing. Multifocus image fusion integrates source images acquired at varying focal lengths. The majority of multifocus fusion methods are tested in the visual (VIS) spectrum only; very little work is reported in other spectra such as near-infrared (NIR) and thermal (TH). The performance of any multifocus fusion scheme depends on the effectiveness of the underlying focus measure. Once again, the majority of intensity based focus measures are evaluated solely in the VIS spectrum [1]. Zukal M. et al. [2] proposed interest point detection (IPD) based focus measures and evaluated their performance against some standard focus measures such as Energy of Laplacian (EOL), Sum Modified Laplacian (SML), Energy of Gradient (EOG), Spatial Frequency (SF) and Tenengrad (TEN).




However, the performance of these measures is not uniform across the VIS, NIR and TH spectra. To address this problem, a novel focus measure based on steerable local frequency based interest point detection was proposed for multispectral multifocus image fusion (the SLF method) [3]. In this paper, we propose a multispectral multifocus fusion scheme using a focus measure derived from guided steerable local frequency and an improved saliency model. The rest of the paper is organized as follows: in Section 2, we discuss some popular focus measures and multifocus image fusion schemes with their limitations, and highlight our contributions. In Section 3, we describe the proposed method in detail. In Section 4, we compare the performance of the proposed method with several existing approaches.

2. RELATED WORKS

Some well-known simple intensity based focus measures include EOG, SML, EOL and TEN [1]. However, the performance of these measures varies with the spectral content of the source images. Some efficient intensity based focus measures have been introduced to overcome such limitations. In the literature, many pixel-level and region-level multifocus image fusion algorithms have been proposed in the spatial and transform domains. In [4], Li et al. evaluated the performance of multiresolution transforms for multifocus image fusion in the VIS spectrum only. The authors in [5] proposed an image matting technique to derive the focus measure from the source images and then employed it for fusion. Recently, the use of visual saliency detection in combination with other techniques has become popular in multifocus image fusion; for example, see the work of Saha et al. [6]. Benes et al. [7] proposed a multifocus image fusion algorithm in the TH spectrum using pixel level weighted averaging based on a modified EOL activity level. But such a linear combination often fails to preserve the original information in the source images, leading to degradation in the fusion performance. Overall, the existing works clearly show a paucity of multispectral multifocus fusion algorithms. Very recently, the guided filtering technique [8] has gained prominence in computer vision applications like edge-preserving smoothing, dehazing, feathering and image matting. In [9], Li et al. proposed a fast and effective image fusion scheme using the strength of guided filtering. Their experimental results for multispectral, multifocus, multimodal and multiexposure images show better performance over some state-of-the-art methods in the VIS spectrum.

However, this approach still involves a two-scale decomposition of the source images at the initial stage. In this paper, we propose a multispectral multifocus fusion method based on guided steerable local frequency (GSLF) and an improved saliency model. Our first contribution is the judicious application of guided edge-preserving filtering in two phases. In the first phase, the source images to be fused are enhanced using guided filtering, keeping the source images themselves as the guidance images. A steerable local frequency (SLF) map capturing phase/frequency information at different orientations is suggested in [3]. In the second phase, the steerable local frequency maps of the enhanced source images are further refined using guided filtering; in this case, the enhanced source images are used as the guidance images. Our second contribution is the development of an improved saliency model based on graph-based visual saliency (GBVS) [10], spectral residual saliency (SRA) [11] and Laplacian saliency (LS) [9]. This improved saliency map is combined with the guided steerable local frequency map to generate good fusion results across all spectra.

3. PROPOSED METHOD

Fig. 1 shows the block diagram of our multifocus image fusion method. The proposed method starts with guided filtering of the source images, which are further processed to yield improved saliency maps and steerable local frequency (SLF) maps. The SLF map is then enhanced with guided filtering. The composite saliency map is combined with the guided SLF (GSLF) map to yield improved fusion results. The integration aims at incorporating the combined effect of intensity, phase and orientation. The major components governing the above process are discussed in the following subsections.

3.1 Steerable local frequency

The steerable local frequency model uses orientation selective local frequency to represent the features present in an image. The oriented analytic image is used to achieve that goal. The analytic image provides an easy solution for recovering the local phase of a signal in the spatial domain. The analytic image can be expressed as:

$$I_A(x, y) = I(x, y) - jI_H(x, y) \qquad (1)$$

In (1), $I_H(x, y)$ is the Hilbert transform of $I(x, y)$. The argument of $I_A(x, y)$, defined in the spatial domain, is referred to as the local phase of $I(x, y)$. Khan et al. [12] have used the local frequency of an image to capture its dominant regions. The local frequency can be determined easily as it is the spatial derivative of the local phase. A high value of local frequency at a particular pixel indicates the presence of an interest point at that location. The oriented analytic image is further used to obtain the steerable local frequency (SLF) map [3]. The construction of the oriented analytic image requires the use of steerable filters. Steerable filters can be considered a special class of filters in which any arbitrarily oriented filter can be designed using a linear combination of a set of basis filters. The directional derivative of the 2D circularly symmetric Gaussian function defined in Cartesian coordinates is steerable in nature [13]. The steerable quadrature pair $G_4^{\theta}$ and $H_4^{\theta}$ is used to filter the input image $I$ to obtain the oriented analytic image $I_A$ at an arbitrary orientation $\theta$. The oriented analytic image can thus be expressed as:

$$I_{A,\theta}(x, y) = I_{G4,\theta}(x, y) - jI_{H4,\theta}(x, y) \qquad (2)$$

$$I_{G4,\theta}(x, y) = I(x, y) * G_4^{\theta} \qquad (3)$$

$$I_{H4,\theta}(x, y) = I(x, y) * H_4^{\theta} \qquad (4)$$

$I_{G4,\theta}(x, y)$ and $I_{H4,\theta}(x, y)$ together constitute the steerable quadrature filtered response of the original image $I(x, y)$. The steerable local phase $\phi_{\theta}(x, y)$ of the Gaussian filtered image can now be obtained using:

$$\phi_{\theta}(x, y) = \left| \arctan\left( \frac{I_{H4,\theta}(x, y)}{I_{G4,\theta}(x, y)} \right) \right| \qquad (5)$$

To get rid of background noise or distortions, the mean of the steerable local phase map is subtracted from the phase value at each pixel to construct the modified phase map $\phi'_{\theta}(x, y)$:

$$\phi'_{\theta}(x, y) = \phi_{\theta}(x, y) - \bar{\phi}_{\theta} \qquad (6)$$

In the above equation, $\bar{\phi}_{\theta}$ is the mean of the steerable local phase map. The steerable local frequency map is obtained using the gradient of the modified local phase in the following manner:

$$Freq_{I,\theta}(x, y) = \sqrt{\left( \frac{\partial \phi'_{\theta}(x, y)}{\partial x} \right)^2 + \left( \frac{\partial \phi'_{\theta}(x, y)}{\partial y} \right)^2} \qquad (7)$$

The local frequency maps obtained at different orientations are further max-pooled to obtain the resultant steerable local frequency map $Freq_{I,\theta,max}(x, y)$:

$$SLF_I = Freq_{I,\theta,max}(x, y) = \max\left[ Freq_{I,\theta_1}(x, y), \ldots, Freq_{I,\theta_{13}}(x, y) \right] \qquad (8)$$

where $\theta$ ranges from $0^{\circ}$ to $180^{\circ}$ in intervals of $15^{\circ}$.
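To make the construction above concrete, the following Python sketch computes an SLF map. It is a minimal illustration under stated assumptions: an even/odd Gabor pair stands in for the steerable G4/H4 quadrature pair of [13], and the kernel size, scale and spatial frequency are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def quadrature_pair(theta, sigma=2.0, freq=0.25, size=15):
    """Even/odd oriented quadrature pair (a Gabor stand-in for G4/H4)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)            # coordinate along orientation theta
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    even = env * np.cos(2.0 * np.pi * freq * u)          # in-phase (G-like) component
    odd = env * np.sin(2.0 * np.pi * freq * u)           # quadrature (H-like) component
    return even - even.mean(), odd - odd.mean()

def slf_map(img, n_orient=13):
    """Steerable local frequency map, Eqs. (2)-(8)."""
    img = img.astype(np.float64)
    freq_maps = []
    for theta in np.linspace(0.0, np.pi, n_orient):      # 0 to 180 degrees in 15 degree steps
        even, odd = quadrature_pair(theta)
        g = convolve(img, even, mode='nearest')          # Eq. (3)
        h = convolve(img, odd, mode='nearest')           # Eq. (4)
        phase = np.abs(np.arctan2(h, g))                 # Eq. (5), arctan2 handles all quadrants
        phase -= phase.mean()                            # Eq. (6), remove the mean phase
        gy, gx = np.gradient(phase)
        freq_maps.append(np.hypot(gx, gy))               # Eq. (7), gradient magnitude of phase
    return np.max(np.stack(freq_maps), axis=0)           # Eq. (8), max-pooling over orientations
```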

3.2 Saliency model

The saliency model aims to capture visually attentive regions in an image [14]. A single approach is incapable of detecting all the salient regions accurately for all images [15]. However, a synergistic combination of some of the highly performing saliency methods can lead to a highly informative and accurate saliency map. The proposed model uses a linear weighted combination of three saliency maps obtained using graph-based visual saliency (GBVS), Laplacian saliency (LS) and spectral residual saliency (SRA). We now briefly discuss the individual saliency models. GBVS uses the topological structure of graphs to compute saliency values and employs a Markovian approach in the process [10]. This model comprises three stages: extraction of the important feature vectors from the scene, construction of "activation map(s)", and combination of these maps to obtain a single saliency map. The construction of activation maps is based on a linear filtering method [16]. A Markovian approach is employed to construct the activation map. An elegant dissimilarity measure is adopted to identify locations having high variations. The dissimilarity of $M(i, j)$ and $M(p, q)$ can be mathematically expressed as:

$$d\big((i, j) \,\|\, (p, q)\big) \triangleq \left| \log \frac{M(i, j)}{M(p, q)} \right| \qquad (9)$$

Figure 1: Schematic diagram of the proposed method.

The normalized maps are summed over each feature channel to obtain the master saliency map. The implementation of GBVS is available from http://www.klab.caltech.edu/harel/share/gbvs.php.
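As an illustration of the dissimilarity measure of Eq. (9) and the Markov-chain construction, a minimal sketch follows. It works on a small (downsampled) feature map; the Gaussian distance falloff and the power-iteration settings are assumptions in the spirit of [10], not a substitute for the authors' MATLAB implementation linked above.

```python
import numpy as np

def gbvs_activation(feature_map, sigma=0.15, n_iter=100):
    """Activation map in the GBVS spirit: pixels of a small feature map M are
    graph nodes, edge weights combine the dissimilarity of Eq. (9) with a
    Gaussian falloff in distance, and the activation is the equilibrium
    distribution of the resulting Markov chain."""
    M = feature_map.astype(float) + 1e-12
    h, w = M.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = M.ravel()

    d = np.abs(np.log(vals[:, None] / vals[None, :]))          # Eq. (9)
    dist2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    W = d * np.exp(-dist2 / (2.0 * (sigma * max(h, w)) ** 2))  # distance-weighted dissimilarity

    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)             # row-stochastic transition matrix
    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(n_iter):                                    # power iteration to equilibrium
        v = v @ P
    return v.reshape(h, w)
```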

LS uses a Laplacian filter to capture the high frequency (salient) regions in the image [9]. It computes the saliency map $S$ of image $I$ as the local average of the absolute value of the high-pass image $H_I$, obtained by convolving the input image $I$ with a $3 \times 3$ Laplacian mask:

$$S = |H_I| * g_{r_g, \sigma_g} \qquad (10)$$

where $H_I = I * L$ and $g$ is a Gaussian low-pass filter of size $(2r_g + 1) \times (2r_g + 1)$; both $r_g$ and $\sigma_g$ have been set to 5. SRA employs the power of the log spectrum for saliency detection [11]. The log spectrum representation of an image, $L(f)$, can be expressed in terms of the amplitude of the Fourier spectrum $A(f)$ of that image ($f$ denotes frequency) using:

$$L(f) = \log(A(f)) \qquad (11)$$

The algorithm aims at reducing the redundant information in the image. It focuses on the statistical singularities in the spectrum. The statistical singularities, also defined as the spectral residual of an image, can be obtained using:

$$R(f) = L(f) - A(f) \qquad (12)$$

In (12), $R(f)$ represents the spectral residual of the image. The spectral residual is further processed to construct the saliency map in the spatial domain using the inverse Fourier transform. The integration realizes the full potential of the three individual schemes to obtain an improved saliency model. Note that GBVS is a robust and computationally efficient scheme owing to the use of graphs [10]. LS uses the Laplacian operator, whose second order derivative determines the edges in an image. The SRA model offers a general solution for salient region detection [11]. We construct the final saliency map $S_I$ of the input image $I$ by taking a weighted combination of the three saliency maps $S_{GBVS,I}$, $S_{SRA,I}$ and $S_{LS,I}$ in the following manner:

$$S_I = \sum_{m=1}^{3} \omega_m N(S_{I,m}) \qquad (13)$$

In (13), $\sum_{m=1}^{3} \omega_m = 1$, $N(S_{I,m})$ denotes the $m$th normalized saliency map and $\omega_m$ is its corresponding weight [15]. In the proposed method, we use the weights $\omega_{GBVS} = 0.5$, $\omega_{LS} = 0.3$ and $\omega_{SRA} = 0.2$ to construct the final saliency map. These weights are determined experimentally and kept constant throughout. An example of improved saliency detection using the proposed model is illustrated in Fig. 2. The test image is shown in Fig. 2(a). Green polygons in each of Figs. 2(b), (c) and (d) indicate the dominant salient regions. These images clearly show that the obtained saliency maps are complementary in nature, so no single method can detect all the dominant salient regions. As a marked improvement, Fig. 2(e) shows that the map obtained using the proposed saliency model successfully captures all the perceptually salient regions in the test image, including edges.
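The LS and SRA maps and the weighted combination of Eqs. (10)-(13) can be sketched as follows. The min-max normalization used for N(·) is an assumption (the paper follows the normalization of [15]), the SRA sketch subtracts a locally averaged log spectrum as in [11], and the GBVS map is assumed to be computed separately (e.g., with the code linked above).

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, uniform_filter

def laplacian_saliency(img, rg=5, sigma_g=5):
    """Laplacian saliency (LS), Eq. (10): local average of |high-pass image|."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)   # 3x3 Laplacian mask L
    high = convolve(img.astype(float), lap, mode='nearest')           # H_I = I * L
    return gaussian_filter(np.abs(high), sigma=sigma_g,
                           truncate=rg / sigma_g)                     # |H_I| * g_{r_g, sigma_g}

def spectral_residual_saliency(img, avg_size=3, sigma=2.5):
    """Spectral residual saliency (SRA), Eqs. (11)-(12)."""
    F = np.fft.fft2(img.astype(float))
    log_amp = np.log(np.abs(F) + 1e-12)                               # Eq. (11)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, avg_size)            # Eq. (12), vs. averaged log spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2    # back to the spatial domain
    return gaussian_filter(sal, sigma)

def combined_saliency(s_gbvs, s_ls, s_sra, weights=(0.5, 0.3, 0.2)):
    """Improved saliency map, Eq. (13): weighted sum of normalized maps."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)      # N(.), assumed min-max
    return sum(w * norm(s) for w, s in zip(weights, (s_gbvs, s_ls, s_sra)))
```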

3.3 Guided Filtering

The guided filter proposed by He et al. [17, 8] is an edge-preserving smoothing filter derived from a local linear model between the guidance image I and the output q. The guided filter computes the output by taking into account the content of the guidance image, which can be the filtering input itself [18, 19]. Assuming q is a linear transform of I in a window $\omega_k$ centred at pixel k, q can be expressed for a pixel i as:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k \qquad (14)$$

Here, $(a_k, b_k)$ are linear coefficients assumed to be constant in the window $\omega_k$. As $\nabla q = a \nabla I$, the guided filter preserves edges and can be used in applications like image matting, dehazing and feathering [8]. The linear coefficients are determined using constraints between the filtering input p and q, given as:

$$q_i = p_i - n_i \qquad (15)$$

where $n_i$ refers to unwanted noise components. A linear ridge regression model is used to optimize the cost function by minimizing the difference between p and q. The coefficients are given by:

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon} \qquad (16)$$

$$b_k = \bar{p}_k - a_k \mu_k \qquad (17)$$

Here, $\mu_k$ and $\sigma_k^2$ represent the mean and variance of I in the window $\omega_k$, $|\omega|$ is the number of pixels in the window $\omega_k$, $\bar{p}_k$ is the mean of p computed in that window, and $\epsilon$ is a regularization parameter. As a pixel i can be involved in overlapping windows, the final output is obtained by averaging over all possible values of q:

$$q_i = \bar{a}_i I_i + \bar{b}_i \qquad (18)$$

where $\bar{a}_i$ and $\bar{b}_i$ are the averages of $a_k$ and $b_k$ over all windows overlapping i.

Figure 2: Example of improved saliency map: (a) Test image, (b) GBVS saliency map, (c) LS saliency map, (d) SRA saliency map, (e) Proposed method. Green polygons indicate dominant salient regions.

In the proposed method, we employ the guided filter in two stages. In the first stage, the input images are enhanced. Our objective is to obtain a high quality fused image, which is strongly dependent on the quality of the input. So, to obtain improved inputs, we apply the linear model of the guided filter [8]. The guidance image in this case is the input image itself (I). The filtered output $I_G$ at pixel i gives the guided input as shown below:

$$I_{G,i} = a_k I_i + b_k, \quad \forall i \in \omega_k \qquad (19)$$

$$I_{G,i} = I_i - n_i \qquad (20)$$
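For illustration, a minimal box-filter implementation of Eqs. (14)-(18), together with the two-stage usage of Eqs. (19) and (21), is sketched below. The window radius and regularization value are illustrative assumptions, not the parameters used in our experiments, and slf_map refers to the earlier sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Guided filter of He et al. [8], Eqs. (14)-(18).
    I: guidance image, p: filtering input; float arrays scaled to [0, 1]."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1, mode='nearest')
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p      # (1/|w|) sum I_i p_i - mu_k pbar_k
    var_I = box(I * I) - mean_I ** 2           # sigma_k^2
    a = cov_Ip / (var_I + eps)                 # Eq. (16)
    b = mean_p - a * mean_I                    # Eq. (17)
    return box(a) * I + box(b)                 # Eq. (18), averaged over overlapping windows

# Stage 1, Eq. (19): enhance a source image using itself as the guidance image.
#   I_G = guided_filter(I, I)
# Stage 2, Eq. (21): refine the SLF map using the guided input as the guidance image.
#   GSLF = guided_filter(I_G, slf_map(I_G))
```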

The guided input obtained as a result is used for the construction of the SLF and saliency maps. The second stage attempts to modify the steerable local frequency maps using the guided filter. The objective here is to improve the maps, enabling efficient representation of features with increased accuracy. The guided filter accepts the SLF map as the original filtering input and treats the guided input obtained from the first stage as the guidance image. The filtered output then yields a finely tuned feature map. The mathematical representation describing the process for a pixel i is given below:

$$GSLF_{I_G,i} = a_k I_{G,i} + b_k, \quad \forall i \in \omega_k \qquad (21)$$

$$GSLF_{I_G,i} = SLF_{I_G,i} - n_i \qquad (22)$$

Here, $SLF_{I_G}$ is the original steerable local frequency map of the guided input image $I_G$ and $GSLF_{I_G}$ is its improved guided version. The same guided input image $I_G$ acts as the guidance image. The reason behind the improvement in SLF maps using the guided filter is explained next. Steerable local frequency, which primarily uses the Hilbert transform, is linearly related to the guidance image (the Hilbert transformer being a linear time-invariant filter). The guidance image for our work is the guided input, which preserves the gradients. From equation (14) it is apparent that the filtering output is basically a scaled version of the guidance image displaced by an offset. The local linear model of the guided filter supports structure-transfer filtering due to its patch-based model [8]. This unique property enables the transfer of fine structures present in the guided input to $GSLF_{I_G}$, even if the original filtering input is smooth in some regions. Thus, an enhanced steerable local frequency map containing sharp features is obtained. Let $S_{I_G}$ be the saliency map of the improved input. For each source image, we combine $GSLF_{I_G}$ and $S_{I_G}$ by taking their product. The result yields the final map, $MAP_{Final}$, for each of the multifocus source images. So, we can write:

$$MAP_{Final,i} = (GSLF_{I_G,i}) \times (S_{I_G,i}) \qquad (23)$$

For the VIS and NIR spectra, the fused image F contains, at each pixel i, the pixel of the source image possessing the highest corresponding value in $MAP_{Final}$:

$$Q = \arg\max_{k} (MAP_{Final,i,k}) \qquad (24)$$

where $k \in [1, N]$. Here N is the total number of source images to be fused and Q is the index of the source image having the maximum value of $MAP_{Final}$ for pixel i. So, the fused image F is obtained by choosing each pixel i from the most suitable source image with index Q. So, we can write:

$$F = \bigcup_i F_i = \bigcup_i I_{Q,i} \qquad (25)$$

Further, a 3 × 3 majority filter is applied for consistency verification [20] to ensure that a pixel in the fused image does not come from a source image different from that of the majority of its neighbours. In the case of the TH spectrum, we use a pixel-level weighted averaging rule to obtain the final fused result F, as in [7], to enable a fair comparison. Please note that the methods with which we compare our work in the NIR and TH spectra employ different fusion strategies; so, in order to have proper comparisons with these methods from different spectra, we fuse our NIR and TH images accordingly.
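A minimal sketch of the selection rule of Eqs. (23)-(25) with the 3 × 3 majority-filter consistency verification follows. Each entry of final_maps is assumed to be the pixelwise product of the GSLF and saliency maps of the corresponding source image (Eq. (23)); the helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import generic_filter

def fuse_vis_nir(sources, final_maps):
    """Fuse registered source images by per-pixel selection, Eqs. (24)-(25)."""
    maps = np.stack(final_maps)                     # shape (N, H, W)
    Q = np.argmax(maps, axis=0)                     # Eq. (24): best source index per pixel

    def majority(window):                           # 3x3 majority filter (consistency verification)
        vals, counts = np.unique(window.astype(int), return_counts=True)
        return vals[np.argmax(counts)]
    Q = generic_filter(Q, majority, size=3, mode='nearest')

    stacked = np.stack([s.astype(float) for s in sources])
    rows, cols = np.indices(Q.shape)
    return stacked[Q, rows, cols]                   # Eq. (25): assemble the fused image
```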

4. EXPERIMENTAL RESULTS

We evaluate the proposed fusion method in three different spectra, viz. VIS, NIR and TH. In the VIS spectrum, we use four multifocus image sets as shown in Fig. 3; each image set consists of two complementary focused images [4]. In the NIR spectrum, we use one suitable image set from the available dataset [2] (see Fig. 4). To evaluate the performance in the TH spectrum, we use the reduced sets of 10 multifocus thermal images obtained using EOL based activity level measurement (Set1-Set5) [7]. One sample reduced image set (Mobile phone and RS232 interface) is shown in Fig. 5.


Figure 3: VIS multifocus image datasets used for evaluation of proposed image fusion method: (a) ‘Clock’, (b) ‘Desk’, (c) ‘Lab’, (d) ‘Pepsi’

Figure 4: Near-infrared (NIR) multifocus image dataset used for evaluation of proposed image fusion method : ‘Keyboard’

Figure 5: Reduced thermal multifocus image dataset (10 images) used for evaluation of proposed image fusion method: Set 1 (Mobile phone and RS 232 interface).

In the VIS and NIR spectra, we use MI (Mutual Information), QAB/f (Petrovic metric) and Q0 as the quantitative performance measures [4]. RMSE (Root Mean Square Error), MAE (Mean Absolute Error) and CC (Cross Correlation) [7] are employed to evaluate the performance in the thermal spectrum. For the qualitative performance evaluation, the perceptual qualities of the fused images are compared. We have implemented the proposed method in the MATLAB R2010a environment on a desktop PC with a 3.4 GHz Intel Core i7 CPU and 8 GB RAM. We implemented GBVS using the available code from [10] (see: http://www.klab.caltech.edu/harel/share/gbvs.php). In addition, we used the code for the guided filter from [9] (see: http://xudongkang.weebly.com). In the VIS and NIR spectra, we compare our results with some spatial as well as transform domain based approaches. In the multiresolution category, we compare our method with the high performing Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Curvelet Transform (CVT), Contourlet Transform (CT), Dual Tree Complex Wavelet Transform (DTCWT) and Non-Subsampled Contourlet Transform (NSCT) [4] methods. Fast Hessian (FH), being one of the best in the interest point detection (IPD) based category, is also chosen for comparison. We also compare the results with the steerable local frequency (SLF) based method of [3]. Results of the proposed method in the VIS spectrum at an intermediate stage, i.e., after only guided filtering of SLF and without saliency (GSLF), are also included. We use the recently reported EOL based method [7], in addition to the FH and SLF methods, for comparisons in the TH spectrum. Table 1 shows the quantitative results of the proposed method in the VIS and NIR spectra. The corresponding fused results are shown in Figs. 6 and 7. The perceptual quality of the fused images is superior, as they show no inconsistencies or artifacts in the form of halo effects, illumination changes or contrast reduction. Table 2 shows the quantitative comparison of our method with the other methods. In the VIS spectrum, the proposed method performs better than all the multiresolution based methods as well as SLF and GSLF, with significant improvement in MI, QAB/f and Q0. In comparison to the FH IPD based method, the proposed method is superior in terms of MI and QAB/f and marginally loses in terms of Q0. In the NIR spectrum, the proposed method shows consistently good performance over the SLF, FH, DWT and DTCWT methods. The quantitative results in the TH spectrum are shown in Tables 3 and 4. The proposed fusion method outperforms the pixel level weighted averaging with EOL activity level based fusion method [7] by a high margin in terms of RMSE. Our method also shows an improvement over the SLF and FH IPD based methods in terms of RMSE, CC and MAE. The proposed method yields better results than its competitors due to the use of the guided filter, the steerable local frequency maps and the improved saliency maps. Fig. 8 (a-e) shows the fused images in the TH spectrum for the five image sets (Set1-Set5). Visual inspection of the fused images in comparison with the ground truth clearly demonstrates the superior performance of the proposed fusion method.
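For reference, the thermal-spectrum metrics used above can be computed as in the sketch below; the exact scaling conventions of [7] are not restated in this paper, so images are assumed to be float arrays scaled to [0, 1].

```python
import numpy as np

def thermal_metrics(fused, reference):
    """RMSE, MAE and cross correlation (CC) between a fused image and the
    ground-truth reference, both float arrays scaled to [0, 1]."""
    f = fused.astype(float).ravel()
    r = reference.astype(float).ravel()
    rmse = np.sqrt(np.mean((f - r) ** 2))
    mae = np.mean(np.abs(f - r))
    cc = np.corrcoef(f, r)[0, 1]        # Pearson cross correlation
    return rmse, mae, cc
```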

5. CONCLUSIONS

In this paper, we present a novel multifocus fusion scheme for images spanning the visual, near-infrared and thermal spectra.

The highlights of our method are the innovative use of guided filtering and the development of an improved saliency model. Superior fusion results are achieved by combining guided steerable local frequency maps with the saliency maps. We plan to extend our work to region based fusion for further improvement. Another possible direction for future research is to employ a suitable model for achieving guided saliency to further improve the fusion results.

6. REFERENCES

[1] Liu X. Y., Wang W. H., and Sun Y. Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear. Microscopy, 227:15–23, January 2007.
[2] Zukal M., Mekyska J., Cika P., and Smekal Z. Interest points as a focus measure in multi-spectral imaging. Radioengineering, 22(1):68–81, April 2013.
[3] Gangapure V. N., Banerjee S., and Chowdhury A. S. Steerable local frequency based multispectral multifocus image fusion. Information Fusion, 23:99–115, May 2015.
[4] Li S., Yang B., and Hu J. Performance comparison of different multi-resolution transforms for image fusion. Information Fusion, 12:74–84, March 2010.
[5] Li S., Kang X., Hu J., and Yang B. Image matting for fusion of multi-focus images in dynamic scenes. Information Fusion, 14:147–162, July 2011.
[6] Saha A., Bhatnagar G., and Wu Q. M. J. Mutual spectral residual approach for multifocus image fusion. Digital Signal Processing, 23:1121–1135, March 2013.
[7] Benes R., Dvorak P., Faundez-Zanuy M., Espinosa-Duro V., and Mekyska J. Multi-focus thermal image fusion. Pattern Recognition Letters, 34:536–544, November 2012.
[8] He K., Sun J., and Tang X. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6):1397–1409, June 2013.
[9] Li S., Kang S., and Hu J. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22:2864–2875, July 2013.
[10] Harel J., Koch C., and Perona P. Graph-based visual saliency. In Proceedings of Neural Information Processing Systems, 2006.
[11] Hou X. and Zhang L. Saliency detection: A spectral residual approach. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.
[12] Khan J., Bhuiyan S., and Adhami R. Feature point extraction from the local frequency map of an image. Electrical and Computer Engineering, pages 1–15, 2012.
[13] Freeman W. T. and Adelson E. H. The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):891–906, September 1991.
[14] Rutishauser U., Walther D., Koch C., and Perona P. Is bottom-up attention useful for object recognition? In Computer Vision and Pattern Recognition, December 2004.
[15] Li H. and Ngan N. K. A co-saliency model of image pairs. IEEE Transactions on Image Processing, 20(12):3365–3375, December 2011.

Table 1: Fusion results of the proposed method in VIS and NIR spectra: MI, QAB/f and Q0 values with Consistency Verification (CV).

Images      MI       QAB/f    Q0
VIS spectrum
Clock       8.7171   0.7405   0.9786
Desk        8.3108   0.7313   0.9588
Lab         8.5689   0.7437   0.9761
Pepsi       8.8772   0.7849   0.9812
NIR spectrum
Keyboard    8.1360   0.7580   0.9980

Table 2: Multifocus image fusion: Performance comparison in VIS and NIR spectra.

VIS spectrum
Method             MI       QAB/f    Q0
Proposed method    8.6185   0.7501   0.9737
GSLF               8.5930   0.7496   0.9737
SLF [3]            8.3245   0.6768   0.9735
FH                 8.2493   0.5804   0.9746
DWT                2.4126   0.6866   0.7206
SWT                2.4510   0.7140   0.7555
DTCWT              2.4814   0.7231   0.7650
CVT                2.4387   0.7075   0.7421
CT                 2.3978   0.6700   0.7076
NSCT               2.4804   0.7219   0.7799

Near-infrared (NIR) spectrum
Method             MI       QAB/f    Q0
Proposed method    8.1360   0.7580   0.9980
SLF [3]            7.9342   0.6853   0.9908
FH                 8.0198   0.7105   0.9912
DWT                5.9485   0.5135   0.9061
DTCWT              7.3575   0.7082   0.9902

Figure 6: Fused images obtained using the proposed method in the VIS spectrum for four datasets with consistency verification (CV): (a) Clock, (b) Desk, (c) Lab, (d) Pepsi.

Table 3: Thermal (TH) multifocus image fusion with reduced dataset: RMSE.

Image Set       Pixel level weighted averaging   FH       SLF [3]   Proposed method
                based on EOL AL fusion [7]
Mobile-RS232    0.1803                           0.0183   0.0172    0.0159
Bulbs set 1     0.1999                           0.0307   0.0184    0.0150
Bulbs set 2     0.1342                           0.0293   0.0160    0.0118
Bulbs set 3     0.2648                           0.0541   0.0313    0.0210
Bulbs set 4     0.3307                           0.0589   0.0368    0.0265

Table 4: Thermal (TH) multifocus image fusion with reduced dataset: CC and MAE.

                FH                SLF [3]           Proposed method
Image set       CC       MAE      CC       MAE      CC       MAE
Mobile-RS232    0.9933   0.0077   0.9944   0.0078   0.9951   0.0075
Bulbs set 1     0.9942   0.0097   0.9979   0.0075   0.9986   0.0063
Bulbs set 2     0.9891   0.0167   0.9969   0.0097   0.9983   0.0072
Bulbs set 3     0.9832   0.0250   0.9946   0.0210   0.9976   0.0146
Bulbs set 4     0.9821   0.0348   0.9932   0.0309   0.9965   0.0219

Figure 7: Fused images obtained using the proposed method in the NIR spectrum: Keyboard set.


Figure 8: Fused images obtained using the proposed method in the TH spectrum: (a) Mobile phone and RS232 Set; (b) Bulbs Set 1; (c) Bulbs Set 2; (d) Bulbs Set 3; (e) Bulbs Set 4.

[16] Malik J. and Perona P. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7(5):923–932, May 1990.
[17] He K., Sun J., and Tang X. Guided image filtering. In Proceedings of European Conference on Computer Vision, 2010.
[18] Tomasi C. and Manduchi R. Bilateral filtering for gray and color images. In Proceedings of IEEE International Conference on Computer Vision, 1998.
[19] Petschnigg G., Agrawala M., Hoppe H., Szeliski R., Cohen M., and Toyama K. Digital photography with flash and no-flash image pairs. In Proceedings of ACM SIGGRAPH, 2004.
[20] Zhao H., Shang Z., Tang Y. Y., and Fang B. Multi-focus image fusion based on the neighbor distance. Pattern Recognition, 46:1002–1011, September 2012.
