## Abstract

Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion with the traditional curvelet transform to overcome this disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising and significantly improves fusion quality compared with the existing fusion methods.

© 2012 OSA

## 1. Introduction

Image fusion is commonly described as the task of combining information from multiple images of the same scene. The fused image is suitable for human and machine perception or for further processing, such as segmentation and object recognition. Our investigation focuses mainly on multifocus color image fusion, an important branch of image fusion.

It is difficult to obtain an image with all objects in focus, mainly because of the limited depth of field of a camera lens. Multifocus image fusion algorithms can solve this problem and create an image with most of the objects in focus. However, most multifocus image fusion methods [1–20] address grayscale images; only a few studies [21–25] address color images.

Most of the existing multifocus color image fusion algorithms belong to the spatial domain methods [21–23]; that is, the computation of the focus measure and the fusion process are carried out in the spatial domain. Shi *et al.* proposed a method operating in both the spatial and frequency domains [24], in which the focus measure is computed in the frequency domain and the fusion process is carried out in the spatial domain. Besides, Chen *et al.* proposed a bidimensional empirical mode decomposition (BEMD) algorithm for the fusion of color microscopic images [25]. Fusion algorithms based on the discrete wavelet transform (DWT) for grayscale images were reported in [26–28]. For color image fusion, one can apply the DWT-based method to each color channel separately and then combine the three fused channels.

The generic schematic diagram for the spatial domain fusion methods is given in Fig. 1. These methods consist of the following steps. First, decompose the two source images into smaller blocks. Second, compute the focus measure for each block. Then, compare the focus measures of the two corresponding blocks, and select the block with the larger focus measure as the corresponding block of the fused image. In the second step, the energy of image gradient (EOG), energy of Laplacian (EOL), spatial frequency (SF) or sum-modified-Laplacian (SML) can serve as the focus measure. The experimental results in [22] show that the SML with optimized block size performs best among these focus measures. Alternatively, the index of fuzziness can be used as the focus measure [23].
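
The block-based scheme above can be sketched as follows. This is a minimal NumPy illustration, not the implementation of [22]: the block size and the SML step parameter are illustrative choices of ours.

```python
import numpy as np

def sml(block, step=1):
    """Sum-modified-Laplacian focus measure of one grayscale block."""
    p = np.pad(np.asarray(block, dtype=float), step, mode="edge")
    c = p[step:-step, step:-step]
    ml = (np.abs(2 * c - p[2 * step:, step:-step] - p[:-2 * step, step:-step])
          + np.abs(2 * c - p[step:-step, 2 * step:] - p[step:-step, :-2 * step]))
    return float(ml.sum())

def block_fuse(a, b, bs=8):
    """Fuse two same-size grayscale images block by block: for each block
    pair, keep the block whose SML focus measure is larger."""
    fused = np.empty_like(a)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            sa, sb = a[i:i + bs, j:j + bs], b[i:i + bs, j:j + bs]
            fused[i:i + bs, j:j + bs] = sa if sml(sa) >= sml(sb) else sb
    return fused
```

A block containing both distinct and blurred regions is copied whole, which is exactly the source of the blur artifact discussed below.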

The spatial domain fusion methods have the virtue of low computational complexity. However, they suffer from image blur, which is mainly caused by the block decomposition of the source images. In the decomposition process, some blocks inevitably contain both distinct and blurred regions; no matter what fusion rule is adopted, the blurred region will unavoidably appear in the fused image. The situation becomes worse when the border between the distinct and blurred regions of the source images is not a straight line (as in the source multifocus color images of the experiment section).

A fusion algorithm based on the IHS transform integrated with the stationary wavelet transform (SWT) operates in both the spatial and frequency domains [24]. The block diagram for this fusion algorithm is given in Fig. 2.

The detailed procedure is as follows. First, perform the IHS transform on the source images to obtain the I components. Second, perform the stationary wavelet transform on the I components to obtain the multiresolution coefficients. Then, compute the focus measure as the summation of weighted coefficients for the corresponding pixel of each source image. Finding the maximum focus measure over the corresponding pixels yields a decision map, which decides whether each pixel of the fused image comes from source image A or B. Finally, the fused color image is obtained through a consistency verification procedure in the spatial domain.
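
The pixel-wise decision-map idea can be sketched as follows. This is a hedged illustration, not the method of [24]: a simple undecimated high-pass response stands in for the weighted SWT coefficients, and the consistency verification step is omitted.

```python
import numpy as np

def detail_energy(img):
    """Per-pixel high-frequency response: sum of absolute forward and
    backward differences in both directions (edge-padded)."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(p[1:-1, 2:] - c) + np.abs(c - p[1:-1, :-2])
            + np.abs(p[2:, 1:-1] - c) + np.abs(c - p[:-2, 1:-1]))

def pixel_fuse(a, b):
    """Pixel-wise fusion: a decision map picks, for each pixel, the source
    whose local detail response is larger."""
    return np.where(detail_energy(a) >= detail_energy(b), a, b)
```

Because every fused pixel is copied directly from one of the sources, blur in either source can still leak into the result, as noted below.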

The IHS- and SWT-based method processes the source images in a pixel-by-pixel manner, which differs from the region-based spatial domain methods [21–23]. This method offers a new way for multifocus color image fusion. However, as the experimental results show, it cannot overcome the image blur problem completely. This is mainly caused by the spatial domain fusion process, in which the pixels of the fused color image are taken directly from the source images.

As an alternative approach, the BEMD-based method was proposed for color microscopic images [25]. The source color microscopic images are transformed into the YIQ color space. Then the bidimensional empirical mode decomposition is performed on each Y component to obtain the *Residue* and *IMF* components. The local significance principle fusion rule is applied to fuse the *IMF* components, and the principal component analysis rule is applied to fuse the *Residue*, I and Q components separately. The fused Y component is then recovered by the inverse BEMD. Finally, the fused color image is obtained by the inverse YIQ transform.

The BEMD-based method can deal with multifocus microscopic images successfully. However, as the experimental results show, it does not give ideal fusion results for outdoor-scene multifocus color images.

In summary, the existing multifocus color image fusion algorithms face the problem of image blur. In addition, except for the BEMD-based algorithm, the above multifocus color image fusion algorithms share a common drawback: they cannot fuse more than two color images.

The contribution of this paper is a multifocus color image fusion algorithm that overcomes the problems mentioned above. The anisotropic scaling principle of the curvelets and the quaternion-based multiresolution representation procedure can eliminate the blurred regions in the fused color image.

Based on the above motivations, we combine the quaternion with the curvelet and define the quaternion curvelet transform (QCT) for color images. A novel color image fusion algorithm based on the quaternion curvelet transform is proposed. Through the quaternion-curvelet-based multiresolution analysis, the proposed method avoids the drawback of image blur and performs better than the existing methods.

The paper is structured as follows. In section 2 we first give a brief introduction to quaternions and then propose the quaternion curvelet transform for color images. Section 3 introduces the color image fusion algorithm based on the quaternion curvelet transform. Section 4 introduces the objective assessment metrics. The performance of the proposed fusion method is evaluated via experiments in section 5, where the comparison with the existing methods is also discussed. Finally, conclusions are drawn in section 6.

## 2. Preliminaries

#### 2.1. Quaternion

In recent years, quaternions have been used more and more in the color image processing domain [29–36]. The quaternion ℍ, a type of hypercomplex number, was formally introduced by Hamilton in 1843. It is an associative, noncommutative four-dimensional algebra

$$q = q_r + q_i i + q_j j + q_k k, \qquad q_r, q_i, q_j, q_k \in \mathbb{R},$$

where *i*, *j*, *k* are complex operators obeying the following rules

$$i^2 = j^2 = k^2 = ijk = -1, \qquad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j.$$
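
These multiplication rules are easy to verify numerically; a minimal pure-Python sketch of the Hamilton product (the function name is ours, for illustration):

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (r, i, j, k) tuples."""
    pr, pi, pj, pk = p
    qr, qi, qj, qk = q
    return (pr * qr - pi * qi - pj * qj - pk * qk,   # scalar part
            pr * qi + pi * qr + pj * qk - pk * qj,   # i component
            pr * qj - pi * qk + pj * qr + pk * qi,   # j component
            pr * qk + pi * qj - pj * qi + pk * qr)   # k component

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

For instance, `qmul(i, j)` yields *k* while `qmul(j, i)` yields −*k*, exhibiting the noncommutativity.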

A quaternion can be regarded as the composition of a scalar part and a vector part: *q* = *S*(*q*) + *V*(*q*), where *S*(*q*) = *q*_{r} and *V*(*q*) = *q*_{i}*i* + *q*_{j}*j* + *q*_{k}*k*. If a quaternion *q* has a zero scalar part (*q*_{r} = 0), then *q* is called a pure quaternion, and if *q* has a unit norm (||*q*|| = 1), then *q* is called a unit pure quaternion.

The norm of *q* is defined as

$$\|q\| = \sqrt{q_r^2 + q_i^2 + q_j^2 + q_k^2}.$$

In [29], Sangwine proposed to encode the three channel components of an RGB image on the three imaginary parts of a pure quaternion as follows

$$f(m,n) = f_R(m,n)\, i + f_G(m,n)\, j + f_B(m,n)\, k,$$

where *f*_{R}(*m*, *n*), *f*_{G}(*m*, *n*) and *f*_{B}(*m*, *n*) are the red, green and blue components of the pixel, respectively. The advantage of the quaternion-type representation is that a color image can be treated in a holistic manner.
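
This encoding is straightforward to sketch in NumPy; here a color image is stored as an (H, W, 4) array of quaternion components, and the per-pixel quaternion norm treats the color triple holistically (the function names are ours, for illustration):

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an RGB image (H, W, 3) as a pure-quaternion field (H, W, 4):
    zero scalar part, with R/G/B on the i/j/k imaginary parts [29]."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=float)
    q[..., 1:] = img
    return q

def qnorm(q):
    """Per-pixel quaternion norm of a (..., 4) component array."""
    return np.sqrt((q ** 2).sum(axis=-1))
```

For a pure quaternion pixel, the norm is simply the Euclidean length of its RGB vector.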

#### 2.2. Quaternion curvelet transform

One of the primary tasks in computer vision is to obtain a directional representation of the features from an image. In general, those features are the irregular or anisotropic lines and edges. Therefore, a directional multiscale representation method is desirable in this field.

Wavelet transforms and multiresolution ideas permeate many fields, especially signal and image processing. However, the two-dimensional (2-D) discrete wavelet transform (DWT) cannot represent anisotropic features well in 2-D space, mainly because it does not provide good direction selectivity.

During the past decades, multiscale geometric transforms have started a new stage of image processing. One of them is the curvelet transform, introduced by Candès and Donoho [37–40]. The curvelet has the properties of anisotropy and directionality, making it an optimal basis for representing smooth curves in an image, such as edges and region boundaries. A more detailed description of the curvelet transform can be found in [37–40].

In this paper, we generalize the traditional curvelet transform from real and complex numbers to the quaternion algebra and define the quaternion curvelet transform (QCT) for color images. The construction of the quaternion-valued curvelets *ϕ*_{j,k,l}(**x**) is similar to the traditional one, given by

$$\phi_{j,k,l}(\mathbf{x}) = \phi_j\!\left(R_{\theta_l}\!\left(\mathbf{x} - \mathbf{x}_k^{(j,l)}\right)\right), \tag{6}$$

that is, *ϕ*_{j,k,l}(**x**) is obtained by the translation, rotation and parabolic scaling of the mother curvelet *ϕ*(**x**), with **x** = (*x*_{1}, *x*_{2}).

In Eq. (6), the parabolic scaling is given by

$$\phi_j(\mathbf{x}) = 2^{3j/4}\, \phi\!\left(2^{j} x_1,\, 2^{j/2} x_2\right),$$

and *R*_{θ_l}(·) represents the rotation transform by the angle *θ*_{l} = 2*π* · 2^{−⌊*j*/2⌋} · *l*, with *l* = 0, 1, 2,... at scale *j*, given by

$$R_{\theta_l} = \begin{pmatrix} \cos\theta_l & \sin\theta_l \\ -\sin\theta_l & \cos\theta_l \end{pmatrix}.$$

${\mathbf{x}}_{k}^{(j,l)}$ represents the amount of translation at scale *j* and direction *l*, given by

$$\mathbf{x}_k^{(j,l)} = R_{\theta_l}^{-1}\!\left(k_1 \cdot 2^{-j},\, k_2 \cdot 2^{-j/2}\right), \qquad (k_1, k_2) \in \mathbb{Z}^2.$$

The set of quaternion-valued curvelets {*ϕ*_{j,k,l}(**x**)} is a tight frame system. A quaternion-valued square integrable function *f* ∈ *L*^{2}(ℝ^{2}, ℍ) can be expanded as a quaternion curvelet series, given by

$$f = \sum_{j,k,l} \left\langle f, \phi_{j,k,l} \right\rangle \phi_{j,k,l},$$

and the set of coefficients *c*(*j*, *k*, *l*) = ⟨*f*, *ϕ*_{j,k,l}⟩ is called the **quaternion curvelet transform**.

We now give a detailed procedure for the implementation of the quaternion curvelet transform. Figure 3 is the generic schematic diagram for the forward and inverse QCT.

The architecture of the digital implementation of the QCT is similar to that of the traditional curvelet transform [40], where the coefficients are obtained from the frequency domain. As we can see from Fig. 3, the QCT is performed in the following four steps:

- Step 1. Represent the color image in quaternion form and select a unit pure quaternion *μ* for the quaternion Fourier transform (QFT) [31]. In this paper we specify $\mu =\frac{\sqrt{3}}{3}i+\frac{\sqrt{3}}{3}j+\frac{\sqrt{3}}{3}k$, and use the “qfft” and “iqfft” functions in the Quaternion Toolbox [36], respectively, for fast computation of the forward and inverse QFT.
- Step 2. Apply the 2-D quaternion Fourier transform to obtain the quaternion-based frequency samples.
- Step 3. Interpolate the quaternion Fourier coefficients for each scale *j* and angle *θ*_{l}. Multiply the interpolated object by the parabolic window, then wrap the result around the origin. This step is similar to the realization of the traditional curvelet transform; the detailed windowing and wrapping procedures can be found in [40].
- Step 4. Apply the inverse 2-D quaternion Fourier transform on each windowed scale and angle to obtain the final QCT coefficients.

Performing the above steps in reverse, we obtain the inverse quaternion curvelet transform (IQCT). More specifically, we first perform the 2-D quaternion Fourier transform on the QCT coefficients to obtain the frequency domain samples. We then multiply the samples by the corresponding wrapped quaternion curvelet and unwrap the products to obtain the reconstructed color image in the QFT domain. Finally, we perform the inverse 2-D quaternion Fourier transform to recover the color image.
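
As a rough illustration of the QFT step (not the Quaternion Toolbox implementation used in the paper, and with function names of our own): for the simpler transform axis *μ* = *i*, the left-sided 2-D QFT reduces to two ordinary complex FFTs, since *q* = (*q*_{r} + *q*_{i}*i*) + (*q*_{j} + *q*_{k}*i*)*j* and left multiplication by e^{−iθ} acts as complex multiplication on both planes. The paper's axis *μ* = (*i* + *j* + *k*)/√3 would additionally require a change of basis.

```python
import numpy as np

def qfft2(q):
    """Left-sided 2-D quaternion FFT with transform axis mu = i.
    q is an (H, W, 4) array of (r, i, j, k) components; writing
    q = (q_r + q_i*i) + (q_j + q_k*i)*j splits it into two complex planes,
    each transformed by an ordinary complex FFT."""
    simplex = np.fft.fft2(q[..., 0] + 1j * q[..., 1])
    perplex = np.fft.fft2(q[..., 2] + 1j * q[..., 3])
    return np.stack([simplex.real, simplex.imag,
                     perplex.real, perplex.imag], axis=-1)

def iqfft2(Q):
    """Inverse of qfft2, using the inverse FFT on both planes."""
    simplex = np.fft.ifft2(Q[..., 0] + 1j * Q[..., 1])
    perplex = np.fft.ifft2(Q[..., 2] + 1j * Q[..., 3])
    return np.stack([simplex.real, simplex.imag,
                     perplex.real, perplex.imag], axis=-1)
```

The round trip `iqfft2(qfft2(q))` recovers the quaternion field exactly, mirroring the perfect reconstruction of the forward/inverse QCT pipeline.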

The QCT can process not only real and complex signals but also quaternion ones. If we separated a color image into three scalar images and computed their curvelet transforms separately, the inherent color information would be corrupted. In this work, by contrast, we compute a single, holistic curvelet transform that treats the color image as a vector field. The advantage of this approach is that the color information is preserved to the greatest extent.

## 3. QCT based multifocus color image fusion algorithm

This section mainly discusses the proposed QCT based multifocus color image fusion algorithm. The overview schematic diagram of the proposed fusion algorithm is shown in Fig. 4.

As we can see from Fig. 4, the detailed fusion procedure is as follows. First, a multiscale representation is constructed for each source color image by applying the quaternion curvelet transform. Each color image is thus described by a set of quaternion-valued curvelet coefficients, spanning coarse and fine levels. Then, the image fusion rules are applied to each of these levels independently to construct a combined multiresolution representation. Finally, the fused color image is obtained by performing the inverse quaternion curvelet transform.

The architecture in Fig. 4 is similar to that of the DWT-based fusion algorithms in [26–28]. However, the proposed fusion algorithm is not a simple substitution of the QCT for the DWT; rather, it changes the processing manner. More specifically, the DWT cannot process a color image directly and must process the R, G and B channels separately, whereas the QCT processes the color image in a holistic manner. This holistic processing preserves the color information to the greatest extent.

The fusion rules are the most important step in the whole process. Because of their different physical meanings, the coefficients of the coarse and fine levels are treated by different rules in the fusion process. The fine level coefficients represent the detailed information of an image: a larger norm of a fine level coefficient corresponds to sharp intensity changes, such as edges or region boundaries. Therefore, the fusion of the fine level coefficients is based on the maximum selection rule, that is

$$c_F(j,k,l) = \begin{cases} c_A(j,k,l), & \|c_A(j,k,l)\| \geq \|c_B(j,k,l)\|, \\ c_B(j,k,l), & \text{otherwise,} \end{cases}$$

where *c*_{A} and *c*_{B} denote the coefficients of the two source images and *c*_{F} those of the fused image.

The coarse level coefficients are the approximate representation of an image and inherit its properties, such as the mean intensity. To stress the details in the multifocus color image and enhance the image contrast, we use the minimum selection rule to fuse this level, that is

$$c_F(j,k,l) = \begin{cases} c_A(j,k,l), & \|c_A(j,k,l)\| \leq \|c_B(j,k,l)\|, \\ c_B(j,k,l), & \text{otherwise.} \end{cases}$$

The above maximum (minimum) selection rule generalizes directly to the fusion of multiple color images: select the coefficients with the largest (smallest) norm among the *n* source color images as the corresponding coefficients of the fused color image.
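
Both selection rules can be sketched in one routine that also covers the *n*-image case. This is an illustrative NumPy sketch of ours, with quaternion coefficients stored as (..., 4) component arrays:

```python
import numpy as np

def fuse_coeffs(coeff_list, rule="max"):
    """Fuse corresponding quaternion coefficient arrays (..., 4) from n
    source images: keep, at each position, the coefficient whose quaternion
    norm is largest ("max", fine levels) or smallest ("min", coarse level)."""
    stack = np.stack(coeff_list)                 # (n, ..., 4)
    norms = np.sqrt((stack ** 2).sum(axis=-1))   # (n, ...)
    pick = norms.argmax(axis=0) if rule == "max" else norms.argmin(axis=0)
    return np.take_along_axis(stack, pick[None, ..., None], axis=0)[0]
```

Passing a list of two arrays reproduces the two-image rules above; passing *n* arrays gives the multiple-image generalization.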

## 4. Performance evaluation metrics

The assessment of fused color image quality is a necessary procedure. The performance of image fusion algorithms is usually assessed using both subjective and objective measures. The subjective measure relies on human comprehension and provides a qualitative analysis of the fused color image. The objective measure can overcome the influence of mentality, human vision and prior knowledge, and gives a quantitative analysis of the effectiveness of color image fusion.

For the subjective measure, we mainly check by visual observation whether blur occurs in the fused color image. The objective assessment of color fusion image quality, however, is still a challenging issue: as far as we know, there is no generally accepted objective evaluation measure, partly because of the difficulty of defining an ideal fused image. In this paper, we mainly use the subjective measure, with the objective measure as a supplement, for quality assessment of the fused color image.

In this paper, the objective evaluation metric is computed in the CIE *L*^{*}*a*^{*}*b*^{*} (CIE 1976) color space. The reason for this choice is that the *L*^{*}*a*^{*}*b*^{*} color space is designed to approximate human vision: it is perceptually uniform, and its *L*^{*} component closely matches the human perception of lightness. Here, we give a brief introduction to the objective evaluation metric used in this paper.

As is well known, a blurred image exhibits low contrast. We therefore select the image contrast metric (ICM) proposed by Yuan *et al.* [41] to evaluate image blur. The ICM is defined based on both gray and color histogram characteristics. In the computation of the ICM, we first convert the fused color image to a grayscale one and use the following formula to compute the gray contrast metric *C*_{g}

where *P*(*I*_{k}) is the gray histogram, and *α*_{I} is the dynamic range metric of the gray histogram.

We employ the *L*^{*} channel of the CIE *L*^{*}*a*^{*}*b*^{*} color space to evaluate the color contrast *C*_{c}, given by

where *P*(*L*_{k}) is the histogram of the *L*^{*} channel, and *α*_{L} is the dynamic range metric of the color histogram.

Based on Eq. (13) and Eq. (14), the ICM is computed by

A larger ICM means better contrast and less blur in the fused color image, and hence indicates a better fusion result.
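
The exact ICM formula of [41] is not reproduced here; purely as an illustrative stand-in (our assumption, not the published metric), a simple score combining the standard deviation with the occupied fraction of the histogram behaves in the same direction — sharper, higher-contrast images score higher:

```python
import numpy as np

def contrast_score(gray, bins=256):
    """Illustrative contrast proxy (NOT the ICM of Yuan et al. [41]):
    product of the intensity standard deviation and the fraction of
    occupied histogram bins, for intensities in [0, 1]."""
    g = np.asarray(gray, dtype=float)
    hist, _ = np.histogram(g, bins=bins, range=(0.0, 1.0))
    occupied = (hist > 0).mean()   # crude dynamic-range term
    return float(g.std()) * occupied
```

A blurred version of an image has both a narrower histogram and a lower standard deviation, so it scores lower than the sharp original.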

## 5. Experimental results and comparisons

In this section, several experiments are carried out on naturally obtained multifocus color images to evaluate and compare the performance of the proposed QCT based method with the typical multifocus color image fusion schemes mentioned in section 1. The fusion results are shown for each experiment, and the assessment of the fused color image quality using both subjective and objective measures is also presented.

In [21–24], the authors evaluated their algorithms only on artificially generated multifocus color images, for which the border between the distinct and blurred regions is a straight line. This is not sufficient to demonstrate the efficiency of an algorithm. In this paper, three groups of source color images were obtained naturally by defocusing the camera lens; the border between the distinct and blurred regions of such images, as in Fig. 5, is a set of curves.

In the first two experiments, we demonstrate the effectiveness of the proposed QCT-based algorithm in eliminating blur from outdoor-scene color images, and compare it with the IHS-combined-with-SWT-based, fuzzy-based, SML-based, BEMD-based and DWT-based methods. The fusion of multiple multifocus color images with the proposed method is carried out in the last experiment.

In the first experiment, the source multifocus color images, as shown in Fig. 5, are captured from the outdoor scene, and can be downloaded from the website [42]. The right corner part of Fig. 5(a) is distinct, whereas the other part is blurred. On the contrary, as shown in Fig. 5(b), the right corner part is blurred, whereas the other part is distinct.

Figure 6 shows the results from the existing fusion methods and the proposed method, where (a), (b), (c), (d), (e) and (f) are the results from the IHS combined with SWT based method, the fuzzy based method, the SML based method, the BEMD based method, the DWT based method and the proposed method, respectively. Figure 7 is the subimage of the corresponding fused image to display the detailed information for subjective evaluation.

Now we give a subjective assessment of the different fusion methods. The border between the distinct and blurred regions of the source color images is a set of curves. The corresponding parts of the fused images, as we can see from Fig. 7(a) and Fig. 7(c), exhibit blurring. Saw-tooth edges occur in Fig. 7(c), mainly caused by the block decomposition procedure. Blurring occurs over the whole image in Fig. 7(b). As for the BEMD-based method, Fig. 7(d) exhibits a mild blur effect. The DWT-based method, as shown in Fig. 7(e), still faces the blur problem. Thus, the previous multifocus color image fusion algorithms cannot exclude the blurred regions of the source images. In comparison, we can hardly find any blur by visual observation in the image fused by the proposed method.

Figure 8 shows the corresponding objective evaluation results. It clearly indicates that the ICM of the proposed method is the largest. Therefore, the proposed QCT-based method also performs best under objective evaluation.

In the second experiment, the source multifocus color image, as shown in Fig. 9, is also selected from the website [42]. The foreground of Fig. 9(a) is distinct, whereas the background is blurred. On the contrary, as shown in Fig. 9(b), the foreground is blurred, whereas the background is distinct.

The fusion results from the existing methods and the proposed fusion method are shown in Fig. 10, and Fig. 11 shows subimages of the corresponding fused images.

We can see that the background in Fig. 11(b) is blurred. For the other methods, as shown in Figs. 11(a), 11(c), 11(d) and 11(e), the corresponding fused color images are blurred to some extent. In comparison, we can hardly find any blur by visual observation in the image fused by the proposed method.

Figure 12 shows the objective evaluation results of the different fusion methods. It clearly indicates that the proposed QCT-based method performs better than the other five methods under objective evaluation.

The purpose of the last experiment is to demonstrate the fusion of multiple multifocus color images using the proposed QCT-based method. As far as we know, except for the BEMD-based and DWT-based methods, the existing multifocus color image fusion algorithms cannot fuse multiple images. The proposed method can do this easily.

The source multifocus microscopic color images are shown in Fig. 13 and can be downloaded from the website [43]. Figure 14 shows the results of different methods, where (a), (b) and (c) are the results from the BEMD based method, the DWT based method and the proposed method, respectively.

We can see from Fig. 14(b) that, for the DWT-based method, blurring occurs over the whole image. Comparing the BEMD-based method and the proposed method in Figs. 14(a) and 14(c), the image fused by the proposed method is much more distinct than the BEMD-based result. This subjective analysis is also validated by the objective evaluation results shown in Fig. 14(d). To sum up, the proposed method can be a useful tool for multiple multifocus color image fusion tasks.

According to the stage at which the information is combined, color image fusion algorithms can be classified as pixel-level or region-level. The IHS-combined-with-SWT-based, BEMD-based, DWT-based and proposed methods are pixel-level; the SML-based and fuzzy-based methods are region-level. The above experimental results show that the pixel-level methods perform better than the region-level ones.

The above experimental results show that the proposed QCT-based algorithm performs best in fusing both outdoor-scene images and microscopic images. The previous fusion algorithms cannot exclude the blurred regions of the source images well. The proposed fusion method appears best among all the results by visual analysis, and this is also validated by the objective evaluation results discussed above. In summary, we conclude that the proposed QCT-based multifocus color image fusion algorithm is successful in eliminating blur.

## 6. Conclusion

This paper first reviews the typical multifocus color image fusion algorithms and then proposes a novel fusion approach using quaternion-based curvelet multiresolution analysis. In the proposed method, each source color image is described by a set of quaternion-valued coefficients. Different image fusion rules are applied to the coarse and fine level coefficients to construct a final quaternion-based multiresolution representation. The fused color image is obtained by the inverse quaternion curvelet transform.

Comparisons between the competing multifocus color image fusion methods and the proposed method, by both subjective and objective analysis, are carried out. The experimental results indicate that the proposed method effectively solves the image blur problem and outperforms the previous methods. In addition, the proposed method can easily be generalized to the fusion of multiple color images. Therefore, the proposed QCT-based algorithm can be a useful tool for multifocus color image fusion tasks. Our future work will focus on further uses of the quaternion curvelet transform in the color image processing domain.

## Acknowledgments

We thank the reviewers for their helpful comments and suggestions that improved the quality of this manuscript. Thanks are also extended to the editors for their work. This work was supported by the Major State Basic Research Development Program of China (973 Program, No. 2009CB72400102A) and the National Natural Science Foundation of China (No. 61203242).

## References and links

**1. **X. Li, M. He, and M. Roux, “Multifocus image fusion based on redundant wavelet transform,” IET Image Process. **4**(4), 283–293 (2010). [CrossRef]

**2. **S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recogn. Lett. **23**(8), 985–997 (2002). [CrossRef]

**3. **Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recogn. **43**(6), 2003–2016 (2010). [CrossRef]

**4. **Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Process. **89**(7), 1334–1346 (2009). [CrossRef]

**5. **W. Huang and Z. L. Jing, “Multifocus image fusion using pulse coupled neural network,” Pattern Recogn. Lett. **28**(9), 1123–1132 (2007). [CrossRef]

**6. **N. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion algorithm based on shearlets,” Chin. Opt. Lett. **9**(4), 041001 (2011).

**7. **N. Ma, L. Luo, Z. Zhou, and M. Liang, “A multifocus image fusion in nonsubsampled contourlet domain with variational fusion strategy,” Proc. SPIE **8004**, 800411 (2011). [CrossRef]

**8. **W. Yajie and X. Xinhe, “A multifocus image fusion new method based on multidecision,” Proc. SPIE **6357**, 63570G (2006). [CrossRef]

**9. **R. Nava, B. E. Ramírez, and G. Cristóbal, “A novel multi-focus image fusion algorithm based on feature extraction and wavelets,” Proc. SPIE **7000**, 700028 (2008). [CrossRef]

**10. **I. De and B. Chanda, “A simple and efficient algorithm for multifocus image fusion using morphological wavelets,” Signal Process. **86**(5), 924–936 (2006). [CrossRef]

**11. **S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. **26**(7), 971–979 (2008). [CrossRef]

**12. **H. Li, Y. Chai, H. Yin, and G. Liu, “Multifocus image fusion and denoising scheme based on homogeneity similarity,” Opt. Commun. **285**(2), 91–100 (2012). [CrossRef]

**13. **Y. Chai, H. Li, and Z. Li, “Multifocus image fusion scheme using focused region detection and multiresolution,” Opt. Commun. **284**(19), 4376–4389 (2011). [CrossRef]

**14. **S. Gabarda and G. Cristóbal, “Multifocus image fusion through pseudo-Wigner distribution,” Opt. Eng. **44**(4), 047001 (2005). [CrossRef]

**15. **P. L. Lin and P. Y. Huang, “Fusion methods based on dynamic-segmented morphological wavelet or cut and paste for multifocus images,” Signal Process. **88**(6), 1511–1527 (2008). [CrossRef]

**16. **Y. Chai, H. F. Li, and M. Y. Guo, “Multifocus image fusion scheme based on features of multiscale products and PCNN in lifting stationary wavelet domain,” Opt. Commun. **284**(5), 1146–1158 (2011). [CrossRef]

**17. **R. Redondo, F. S̆roubek, S. Fischer, and G. Cristóbal, “Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique,” Inform. Fusion **10**(2), 163–171 (2009). [CrossRef]

**18. **F. Luo, B. Lu, and C. Miao, “Multifocus image fusion with trace-based structure tensor,” Proc. SPIE **8200**, 82001G (2011). [CrossRef]

**19. **A. Baradarani, Q. M. J. Wu, M. Ahmadi, and P. Mendapara, “Tunable halfband-pair wavelet filter banks and application to multifocus image fusion,” Pattern Recogn. **45**(2), 657–671 (2012). [CrossRef]

**20. **Y. Chai, H. Li, and X. Zhang, “Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain,” Optik **123**(7), 569–581 (2012). [CrossRef]

**21. **H. Zhao, Q. Li, and H. Feng, “Multi-focus color image fusion in the HSI space using the sum-modified-laplacian and the coarse edge map,” Image Vis. Comput. **26**(9), 1285–1295 (2008). [CrossRef]

**22. **W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recogn. Lett. **28**(9), 493–500 (2007). [CrossRef]

**23. **R. Maruthi, “Spatial Domain Method for Fusing Multi-Focus Images using Measure of Fuzziness,” Int. J. Comput. Appl. **20**(7), 48–57 (2011).

**24. **H. Shi and M. Fang, “Multi-focus Color Image Fusion Based on SWT and IHS,” in *Proceedings of IEEE Conference on Fuzzy Systems and Knowledge Discovery* (IEEE2007), 461–465. [CrossRef]

**25. **Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Opt. Express **18**(21), 21757–21769 (2010). [CrossRef] [PubMed]

**26. **H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models & Image Process. **57**(3), 235–245 (1995). [CrossRef] [PubMed]

**27. **Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proc. IEEE . **87**(8), 1315–1326 (1999). [CrossRef]

**28. **K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques–An introduction, review and comparison,” Photogramm. Eng. Remote Sens. **62**(1), 249–263 (2007). [CrossRef]

**29. **S. J. Sangwine, “Fourier transforms of colour images using quaternion, or hypercomplex numbers,” Electron. Lett. **32**(1), 1979–1980 (1996). [CrossRef]

**30. **S. C. Pei and C. M. Cheng, “Color image processing by using binary quaternion-moment-preserving thresholding technique,” IEEE Trans. Image Process. **8**(5), 614–628 (1999).

**31. **T. A. Ell and S. J. Sangwine, “Hypercomplex Fourier transforms of color images,” IEEE Trans. Image Process. **16**(1), 22–35 (2007). [CrossRef] [PubMed]

**32. **D. S. Alexiadis and G. D. Sergiadis, “Estimation of motions in color image sequences using hypercomplex Fourier transforms,” IEEE Trans. Image Process. **18**(1), 168–186 (2009).

**33. **S. J. Sangwine, T. A. Ell, and N. L. Bihan, “Fundamental representations and algebraic properties of biquater-nions or complexified quaternions,” Adv. Appl. Clifford Algebras **21**(3), 607–636 (2011). [CrossRef]

**34. **L. Q. Guo and M. Zhu, “Quaternion Fourier-Mellin moments for color images,” Pattern Recogn. **44**(2), 187–195 (2011). [CrossRef]

**35. **B. J. Chen, H. Z. Shu, H. Zhang, G. Chen, C. Toumoulin, J. L. Dillenseger, and L. M. Luo, “Quaternion Zernike moments and their invariants for color image analysis and object recognition,” Signal Process. **92**(2), 308–318 (2012). [CrossRef]

**36. **S. Sangwine and N. L. Bihan, “Quaternion toolbox for Matlab,” http://qtfm.sourceforge.net

**37. **E. J. Candès and D. L. Donoho, “Continuous curvelet transform I. Resolution of the wavefront set,” Appl. Comput. Harmon. Anal. **19**(2), 162–197 (2005). [CrossRef]

**38. **E. J. Candès and D. L. Donoho, “Continuous curvelet transform II. Discretization and frames,” Appl. Comput. Harmon. Anal. **19**(2), 198–222 (2005). [CrossRef]

**39. **E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “The curvelet transform website,” http://www.curvelet.org

**40. **E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “Fast discrete curvelet transforms,” Multiscale Model. Simul. **5**(3), 861–899 (2006). [CrossRef]

**41. **Y. Yuan, J. Zhang, B. Chang, and Y. Han, “Objective quality evaluation of visible and infrared color fusion image,” Opt. Eng. **50**(3), 033202 (2011). [CrossRef]

**42. **M. Douze, “Blur image data,” http://lear.inrialpes.fr/people/vandeweijer/data.html

**43. **Helicon Soft, “Helicon Focus Sample images,” http://www.heliconsoft.com/focus_samples.html