We demonstrate a technique to improve structural data obtained from Optical Projection Tomography (OPT) using Image Fusion (IF) and contrast normalization. This enables the visualization of molecular expression patterns in biological specimens with highly variable contrast values. In the approach, termed IF-OPT, different exposures are fused by assigning weighted contrasts to each. When applied to projection images from mouse organs and digital phantoms our results demonstrate the capability of IF-OPT to reveal high and low signal intensity details in challenging specimens. We further provide measurements to highlight the benefits of the new algorithm in comparison to other similar methods.
© 2013 OSA
OPT is a technique for highly specific three-dimensional (3D) imaging of gene and protein expression patterns in tissue specimens . By virtue of its capacity to generate high resolution images of specimens on a scale that has been difficult to cover by other imaging modalities, the use of the technology is rapidly expanding in diverse research fields such as developmental biology, diabetes research, pathology and many more. However, in the course of 3D rendering of images generated by OPT, structures or objects spanning a wide range of signal intensities commonly represent specific challenges for the process of image generation. Examples of such structures include vascular or epithelial branches. For example, in OPT assessments of the liver, peripheral blood vessels typically display very low signal intensities and may be difficult to distinguish from the surrounding background. In contrast, vessels in other areas of the organ (such as the portal vein) display very high signal intensities. In order to visualize the peripheral vessels, a longer exposure can be acquired. However, this may lead to over-saturation of other areas (which may also occur when attempting to regulate the dynamic range for the reconstruction). Conversely, optimized exposures of high intensity elements may lead to under representation of low signal intensity areas. Hence, in order to reliably arrive at an adequate qualitative 3D rendering encompassing all labeled structures in this type of challenging specimen, scanned projections must in many cases undergo extensive post-acquisition image processing.
Image enhancement is a process applied to digital images that makes otherwise invisible features visible. Generally, the success of image enhancement is subjective, i.e. task dependent, but it can be measured by an objective function such as how well the signal is detected in a scan (our principal focus in this study). Hence, enhancement techniques are most commonly problem oriented . Thus, a functional method to enhance OPT data would be one that yields a better signal-to-noise ratio and a better preservation of morphology. Image fusion is the process of combining two or more registered images acquired under different conditions, or via different detection methods, in order to produce a more descriptive scene. The resulting output is more suitable for the human visual system (HVS) or machine vision, and aids downstream image processing and analysis tasks more than any of the individual input signals .
In this report we propose a method to overcome the shortcomings of OPT for imaging specimens containing widely differing contrast elements by using image fusion of multi-exposure projections, thereby producing projections that are inherently unbiased towards exposure variations. This fusion approach not only facilitates the visualization of structures with both strong and weak signal intensities; it may also aid in tuning the dynamic range during image reconstruction.
For exposure manipulation of certain specimens, there is a trade-off between capturing weak signals and revealing more detail. Commonly, longer exposure times allow for the detection of weak signals, while a shorter exposure may reveal more detail in a high intensity area. The concept of image fusion is widely used in photography, remote sensing, military imaging equipment (including near-infrared night vision), etc . In biomedical imaging, there is today a tendency towards multimodality (multi-sensor) fusion, e.g., MRI with CT scan data , PET with CT , and MRI with PET , which ultimately helps in confirming diagnoses or outlining treatment procedures. In parallel to the work described herein, attempts have been made to implement high dynamic range (HDR) imaging on OPT . However, to the best of our knowledge no research has been done to date on uni-modal signal enhancement of biomedical scans through fusion of multi-exposure OPT projections of specimens on the cm scale. The possibility to simultaneously visualize areas of both high and low intensity, in a single specimen, has the potential to open up several research avenues for OPT imaging.
1.1 Related work
Image fusion techniques, in both the spatial and frequency domains, can be classified into two main groups: pixel based fusion and feature based fusion (operating on segmented blobs). The terms HDR imaging and image fusion are sometimes used incorrectly to describe the same process. This confusion can be resolved by viewing HDR as the process that tries to retrieve a camera response function from a given set of different exposures, such that the resulting intensity values exceed the ranges allowed by standard image formats, hence the name “high dynamic range”. This is the reason why HDR images are usually exported to non-standard image formats and are converted to low dynamic range formats for display. In contrast, image fusion does not require the retrieval of camera response functions and does not involve exposure parameters in the computation of the final fused image. Therefore, it does not violate the image format (i.e. the intensity values remain within the range of the image type).
Vavilin et al. proposed an exposure blending scheme whereby a selective procedure was applied . Their method yielded better results than conventional straightforward averaging. However, this technique is less suitable when the exposure is varied between frames . Moreover, similar techniques can lead to reduced contrast and attenuation of salient features [11, 12]. In a related study, Goshtasby divided images into non-overlapping blocks, measured entropy to select the best representative blocks across all available exposures, and determined the blending function using a gradient-ascent algorithm . Daneshvar and Ghassemian took advantage of the IHS (Intensity, Hue, Saturation) color space of multi-spectral MRI images to fuse the intensity component with grayscale PET images . This technique, however, often suffers from spectral degradation . Another technique applies a pulse coupled neural network (PCNN) for multi-channel fusion [14, 15]. In a method analogous to this, feature-based image fusion using independent component analysis (ICA) and topographic independent component analysis bases was introduced . These bases are trained, and the estimated transform is used for fusion of similar-content images. A problem with ICA and PCNN is that their results and performance depend heavily on the size and diversity of the training data. This impediment for ICA and PCNN has also been observed independently in another study .
Mertens et al. generated normalized weight maps comprising contrast, saturation and well-exposedness weights for each of the exposures, where the averaging was carried out in a pyramid decomposition fashion . Pyramid decomposition has also been described by others (e.g., [18, 19]). In a similar approach, the Discrete Wavelet Transform (DWT) was introduced to image fusion at about the same time [20, 21]. Lewis et al. and Nikolov et al. were also in favor of using the DWT [22, 23]. Nikolov showed that working in the wavelet domain gives a more desirable outcome in terms of fusion quality. Pyramid based fusion, on the other hand, suffers from blockiness, over-completeness of transform coefficients and noise interference compared to the DWT [11, 12]. Empirical mode decomposition (EMD) is another method that has been proposed in the context of image fusion [24, 25]. An advantageous feature of EMD is its ability to adaptively separate spatial frequencies, but since it is a data-driven method there is no guarantee that the decompositions of multiple sources will match in terms of number or frequency .
As for HDR imaging, retrieving camera properties can also be used in the context of image fusion. Camera response functions are normally not disclosed by camera manufacturers . Nevertheless, Debevec and Malik propose recovering the camera response function with which multiple exposures can be fused into a single high dynamic range radiance map . The benefit sought after in recovering the response function is far less of an issue for OPT data as the scanning environment, i.e. within the OPT scanner setup, is well controlled. Hence, rather than delving into the computational complexity of HDR, low dynamic range fusion appears sufficient in the case of OPT. Moreover, the HDR method requires prior knowledge of the exposure durations (scalar variables) of each of the images.
2. Materials and methods
2.1 Specimen preparation and OPT scanning
Mouse liver tissue (lobus sinister lateralis) was obtained from adult C57BL/6 mice (Taconic) and kidneys were derived from neonatal wild-type and Wnt4 null mutant mice, respectively . Both tissue types were immunohistochemically processed and prepared for OPT scanning essentially as described . Smooth muscle expressing vessels of the liver were labeled with Cy3-conjugated anti-smooth muscle α-actin antibodies (Sigma). The ureteric branches of the developing kidney were labeled with antibodies against Troma-I (developed by P. Brulet and R. Kemler and obtained from the Developmental Studies Hybridoma Bank, developed under the auspices of the NICHD and maintained by The University of Iowa, Department of Biology, Iowa City, IA 52242) and visualized with an Alexa594-conjugated secondary antibody (Molecular Probes). All animal experimentation was conducted in accordance with international, national and institutional laws and guidelines at Umeå University and Oulu University, respectively. The specimens were scanned using the Bioptonics 3001 OPT scanner (Bioptonics) and tomographic projections were reconstructed using the NRecon software (SkyScan) as described previously . The data in Fig. 9 were acquired using the original setup for OPT scanning , fitted with an Andor iKon-M DU934N-BV camera (Andor Technology) . Iso-surface and volume renderings of the analyzed specimens were performed using the Imaris (v7.2.3, Bitplane, Zurich, Switzerland) or Drishti (v2.2, Australian National University, ANUSF VizLab, Canberra, ACT, Australia) software packages.
In order to perform IF-OPT, three exposures were acquired in the following order:
- • Low (I0): the signal in high intensity areas never becomes saturated,
- • Medium (I1): normal (see Fig. 1) single exposure time setting, and
- • High (I2): weakest specific signal clearly visible.
To set proper exposure times, a search was made for the most saturated area on the specimen, rotating the sample where necessary. Once located, a profile histogram was plotted over the sample, providing an instant indication of any saturation (commercial Bioptonics 3001 OPT scanners come with a measurement tool to guide the process of exposure tuning). The three exposures (I0, I1, I2) were then acquired, guided by this plot, as shown in Fig. 1. Here it should be noted that the shift drawback caused by mis-registration (see e.g .) is essentially not a problem in the OPT setup, since the approach does not involve processing of data from different modalities. The acquired exposures are of the same mounted specimen and quasi-perfect alignment is guaranteed .
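The exposure-bracketing procedure above can be sketched in code. The helper below is a hypothetical stand-in for the scanner's built-in measurement tool: it computes a per-column maximum-intensity profile over one projection together with the fraction of saturated pixels (the 16-bit full scale and the saturation tolerance are illustrative assumptions, not scanner specifications).

```python
import numpy as np

def saturation_profile(projection, bit_depth=16, tol=0.999):
    """Per-column maximum-intensity profile and fraction of saturated pixels
    for one projection (hypothetical helper standing in for the scanner's
    built-in measurement tool; bit depth and tolerance are assumptions)."""
    full_scale = 2 ** bit_depth - 1
    profile = projection.max(axis=0)                    # brightest value per detector column
    saturated_fraction = (projection >= tol * full_scale).mean()
    return profile, saturated_fraction

# Toy projection with a single saturated site at column 3.
proj = np.zeros((8, 8), dtype=np.uint16)
proj[2, 3] = 65535
profile, frac = saturation_profile(proj)
```

In practice, I0 would be the longest exposure for which the saturated fraction stays at zero, and I2 the shortest exposure that still lifts the weakest specific signal clearly above background.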
Scanner settings applied to all samples in this study, using the Bioptonics 3001 scanner, were: rotation step 0.45°, resolution 1024 x 1024 pixels; pixel sizes were 18 μm for liver lobes and 3.3 μm for the kidney. For Fig. 9 the scanner settings were: rotation step 0.9°, resolution 1024 x 1024 pixels and pixel size 29.87 μm. The listed pixel sizes denote the value that a pixel corresponds to in the respective projection views. Acquired data were processed as described , in addition to the IF-OPT method discussed hereafter.
2.2 Proposed method (IF-OPT)
The current study proposes a contrast limited adaptive histogram equalization (CLAHE)-driven image fusion method since first order statistics manipulation, i.e. histogram equalization, can help in contrast enhancement of the scanned signal structure. The net effect will be contrast-enhanced images in the sense that the intensity levels span a wider range in the intensity scale and exhibit an almost uniform probability density function (PDF).
It is well known that global histogram equalization can give unfavorable results, since weakly stained objects are statistically indistinguishable globally but tend to be more visible locally . This fact suggests using an adaptive histogram equalization approach, which caters for local statistics and thus overcomes the limitations of standard histogram equalization. CLAHE is a very powerful algorithm that encapsulates a local operation based on a user-defined window. It comes as an improvement to Adaptive Histogram Equalization (AHE) and has previously been applied to medical imaging [32–34]. The algorithm starts by dividing the image into non-overlapping blocks; the contrast is then estimated in each block and constrained by limiting the slope of the cumulative distribution function (CDF) in order to avoid amplification of noise in homogeneous areas, hence the name “contrast-limited”. Artificially induced boundaries, i.e. blocking effects, are eliminated by combining neighboring tiles using bilinear interpolation, i.e. deblocking (see Fig. 2).
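As an illustration of the contrast-limiting step just described, the following minimal sketch equalizes a single tile: the histogram is clipped at a limit, the excess is redistributed uniformly, and pixel values are mapped through the normalized CDF. The clip-limit value is an illustrative assumption, and the bilinear blending between neighbouring tiles that full CLAHE performs is omitted here.

```python
import numpy as np

def clahe_tile(tile, clip_limit=0.01, nbins=256):
    """Contrast-limited equalization of a single 8-bit tile (illustrative
    sketch; full CLAHE additionally blends neighbouring tiles bilinearly)."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0, nbins))
    limit = max(1, int(clip_limit * tile.size))        # clip limit in pixel counts
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // nbins   # clip, then redistribute excess
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (nbins - 1)
    return cdf[np.clip(tile.astype(int), 0, nbins - 1)]

# A low-contrast 8x8 tile containing only the values 100..103.
flat_tile = np.tile(np.array([100, 101, 102, 103]), (8, 2))
stretched = clahe_tile(flat_tile)
```

After equalization the four closely spaced input levels are spread across a much wider portion of the intensity scale, which is the effect exploited on weakly stained structures.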
Let I0, I1 and I2 be the underexposed, normal (reference) and overexposed images, respectively (the exposures shown in Fig. 1 prior to applying the IF-OPT). Then a compound CLAHE adaptation is imposed according to

J = w0·Tt/2(I0) + w1·Tt/2(I1) + w2·Tt/2(I2), (1)

IF = Tt(J), (2)

where Ts denotes the CLAHE mapping computed on s x s pixel tiles and derived from the CDF (cumulative distribution function) FX(x) = P(X ≤ x) = Σu≤x f(u), where f is the probability density function approximated by the intensity histogram, X is a variate and x is a number (i.e., the observed intensity value).
In Eq. (1), w0, w1 and w2 are weighting coefficients, and these values are optimized using a standard eigen-decomposition method (eig). Let A, B, C be treated as feature vectors corresponding to the (flattened) 2D input matrices; then we have the decomposition cov[A, B, C]·V = V·λ, and the weights are derived from the column of the eigenvectors (V) corresponding to the dominant eigenvalue λmax in λ, normalized so that w0 + w1 + w2 = 1, where cov[.] is the 3x3 covariance matrix. The obtained weights are then assigned to the corresponding exposure images.
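A minimal sketch of this eigen-decomposition weighting step might look as follows, assuming the weights are taken as the absolute-valued dominant eigenvector of the 3x3 covariance matrix of the flattened exposures, normalized to unit sum (both of which are our assumptions); the random test images are illustrative only.

```python
import numpy as np

def fusion_weights(i0, i1, i2):
    """Fusion weights from the dominant eigenvector of the 3x3 covariance
    matrix of the flattened exposures (a sketch of the eigen-decomposition
    step; absolute values and unit-sum normalization are assumptions)."""
    X = np.stack([i0.ravel(), i1.ravel(), i2.ravel()])
    cov = np.cov(X)                          # 3x3 covariance of the feature vectors A, B, C
    lam, V = np.linalg.eigh(cov)             # eigenvalues in ascending order
    v = np.abs(V[:, np.argmax(lam)])         # eigenvector of the dominant eigenvalue
    return v / v.sum()

# Illustrative exposures and the resulting weighted fusion.
rng = np.random.default_rng(0)
i0, i1, i2 = (rng.random((16, 16)) * s for s in (0.5, 1.0, 2.0))
w = fusion_weights(i0, i1, i2)
fused = w[0] * i0 + w[1] * i1 + w[2] * i2
```

The dominant eigenvector points along the direction of greatest joint variance across the three exposures, so the exposure contributing most structure receives the largest weight.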
The histogram of the fused image, at this stage, approximately matches a histogram specified by a uniform distribution . Hereafter, the resulting distribution needs to be re-modeled in order to squeeze the histogram and shift it toward the mean intensity, i.e. 127.5 for 8-bit images, forming a bell-like shape [35]. See the effect of these histogram manipulations on histograms in Fig. 3, and on scan data in Fig. 6.
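The re-modeling toward a bell-like shape can be sketched as a rank-based mapping through Gaussian quantiles: pixels are ranked, and the uniform ranks are passed through the quantile function of a normal distribution centred on mid-grey. The target standard deviation below is an illustrative assumption, not a value taken from this study.

```python
from statistics import NormalDist
import numpy as np

def remodel_to_bell(img, mean=127.5, std=40.0):
    """Squeeze an approximately uniform histogram into a bell-like shape
    centred on the mid-grey value: rank the pixels, then map the ranks
    through a normal quantile function (std=40 is an illustrative choice)."""
    flat = img.ravel().astype(float)
    ranks = (flat.argsort().argsort() + 0.5) / flat.size   # uniform ranks in (0, 1)
    nd = NormalDist(mean, std)
    out = np.array([nd.inv_cdf(p) for p in ranks])
    return np.clip(out, 0.0, 255.0).reshape(img.shape)

# A perfectly uniform 8-bit histogram, re-modeled toward the mean intensity.
img = np.arange(256, dtype=float).reshape(16, 16)
bell = remodel_to_bell(img)
```

The output preserves the intensity ordering of the input while concentrating the mass of the histogram around 127.5, mirroring the squeeze-and-shift described above.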
In our experiments with raw scan projection images of size 1024 x 1024 pixels, different tile sizes of the form 2^j x 2^j were utilized. For a given j the image is thus decomposed into non-overlapping 2^j x 2^j pixel tiles.
The optimal size of a tile depends on the input image (i.e. how much of the image area is occupied by the investigated sample, and the signal-to-noise ratio) and is likely to be application oriented. Based on our extensive experimentation on OPT data we suggest that, as long as the sample is large enough to occupy most of the FOV, a fixed tile size is optimal for the 1024 x 1024 output of the scanner. The image was therefore blocked into tiles of this size in Eq. (2) (the outer processing layer), and the tile side length was halved in Eq. (1) (the inner processing layer).
If the sample is considerably smaller than the FOV the tile size (t) has to be adjusted according to the physical size of the sample within the projected image. It is observed that choosing higher values for the tiles, e.g., when j = 7, can cause an effect similar to a phenomenon known as “blooming” which occurs when a charge or light at highly saturated sites on the imaging surface spills over and affects values at neighboring sites .
The pixel intensities in a block are then calculated and described by the CDF for that block. To limit the contrast and to avoid saturation and enhancement of background noise, a clip limit is set for the histogram that constrains the permissible contrast,

β = (M/N)·(1 + (α/100)·(smax − 1)), (3)

where M is the number of pixels in a tile, N is the number of histogram bins, α is the clip factor (in percent) and smax is the maximum permitted slope of the CDF.
At present, it is not feasible for this approach to run in real-time, due to the requirement for three separate exposures. Therefore, the process is implemented as an off-line projection enhancement. It would be possible to speed up this process, although not implemented here, by operating in the wavelet decomposition domain. Such an approach must take into account that CLAHE is not designed and optimized to equalize highly compressed coefficients. Furthermore, one should also be aware that it may sacrifice data quality owing to the fact that the wavelet approximates data using square wave functions (Haar) or polynomial functions (Daubechies) .
After applying IF-OPT on the raw scanned data, the conventional filtered back-projection (FBP) can be carried out as normal. In our case, the OPT scan’s unitary step of rotation was set to 0.45°. Note that in the implementation of the filtered back-projection process, the background is automatically estimated and a threshold is set accordingly. As long as the SNR is high, the actual intensity of the background can be well estimated. It is highly unlikely that noise spots overlap in all 360° projections, since this would result in ring artifacts, which are not present in our processed samples.
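For reference, the filtered back-projection step can be sketched in pure NumPy (a sketch of textbook parallel-beam FBP, not the NRecon implementation used in this study): each projection is ramp-filtered in the frequency domain with zero-padding, then smeared back across the image grid with linear interpolation.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back-projection (sketch): ramp-filter
    each projection in frequency space, then back-project it across the
    image grid with linear interpolation along its acquisition angle."""
    n_ang, n_det = sinogram.shape
    pad = 2 * n_det                                        # pad to reduce circular-convolution artifacts
    ramp = 2 * np.abs(np.fft.fftfreq(pad))                 # ideal ramp filter
    spec = np.fft.fft(sinogram, n=pad, axis=1) * ramp
    filtered = np.real(np.fft.ifft(spec, axis=1))[:, :n_det]
    c = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - c
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(ang) + y * np.sin(ang) + c          # detector coordinate of each pixel
        lo = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = np.clip(t - lo, 0.0, 1.0)
        recon += (1 - w) * proj[lo] + w * proj[lo + 1]
    return recon * np.pi / (2 * n_ang)

# Analytic sinogram of a centred disk of radius 16: projection = 2*sqrt(r^2 - t^2).
n, r = 64, 16
t_ax = np.arange(n) - (n - 1) / 2.0
proj = 2 * np.sqrt(np.clip(r ** 2 - t_ax ** 2, 0, None))
angles = np.arange(0, 180, 3)
recon = fbp(np.tile(proj, (len(angles), 1)), angles)
yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
rr = np.hypot(xx, yy)                                      # radius of each pixel
```

The reconstruction of the analytic disk phantom recovers a bright interior and near-zero exterior, which also illustrates why uncorrelated per-projection noise largely cancels in the back-projection sum.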
3.1 Performance of IF-OPT on a digital phantom
Projection images of a 3D digital phantom were synthetically constructed to assess the performance of the algorithm. The digital phantom consisted of a hollow, rotationally symmetric tube structure with a gradient intensity: low intensity at one end and high intensity at the other. By simulating quantization at different exposures, making sure to include over-saturation and simulated (Poisson) noise, three different sets of projection images of the tube were generated, see Figs. 4(a)-4(c). The low exposure was chosen so as to make sure that the low-intensity part of the tube was captured with a reasonable contrast. The high exposure image was generated in the same way but with a focus on the high-intensity part of the tube. In supplementary Figs. 1(d)-1(f), the areas of over-saturation are shown.
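The exposure simulation can be sketched as follows, under an assumed imaging model (linear exposure scaling, Poisson photon noise, 8-bit quantization with saturation); the paper's actual phantom generator and exposure times are not published, so all parameters below are illustrative.

```python
import numpy as np

def simulate_exposures(scene, times=(0.25, 1.0, 4.0), full_scale=255, seed=0):
    """Simulate multi-exposure projections of a synthetic scene under an
    assumed imaging model: linear exposure scaling, Poisson photon noise,
    then quantization with saturation (exposure times are illustrative)."""
    rng = np.random.default_rng(seed)
    exposures = []
    for t in times:
        photons = rng.poisson(scene * t)                   # Poisson-distributed counts
        exposures.append(np.minimum(photons, full_scale).astype(np.uint8))
    return exposures

# A gradient "tube" intensity profile: low at one end, high at the other.
scene = np.linspace(5.0, 200.0, 64)[None, :].repeat(16, axis=0)
i0, i1, i2 = simulate_exposures(scene)
```

With these settings the short exposure never saturates while the long exposure clips at the bright end of the gradient, reproducing the over/under-exposure trade-off the phantom was designed to probe.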
As can be seen in Fig. 4(d), some of the noise is carried through to the fused image but due to the fact that the noise is uncorrelated between projections it will not survive the tomographic reconstruction process. Thus, the reconstructed volume using the IF-OPT approach (Fig. 4(f)) is virtually noise free and results in better preservation of object detail (compare Figs. 4(e) and 4(f)).
3.2 Revealing vascular branching morphology in the liver
As far as OPT is concerned, a mouse liver lobe is regarded as a large specimen to fit into the scanner’s field of view. The vascular tree of the liver (in our case labeled for smooth muscle α-actin) typically displays a high-to-low signal intensity gradient that makes exposure tuning challenging for OPT imaging. The effect of IF-OPT when applied to this problematic specimen is shown in Fig. 6. Three exposures were acquired, with I1 being the best-balanced exposure (hereafter referred to as “normal”) obtained by a trained biologist/OPT operator. Obviously, this comes at the cost of over-saturation of certain areas, i.e. in the venous entry area (arrowhead in Fig. 6(b)). This carries another challenge when performing reconstruction of tomographic sections: adjustments of the dynamic range can be considerably biased towards these regions, affecting the low signal intensity areas (see Fig. 6(a)). Figure 6(d) is the result of applying the proposed algorithm to the three exposures, and Figs. 6(e)-6(f) illustrate representative 3D renderings of the normal and IF-OPT reconstructions.
3.3 Exposing ureteric branching patterns in the kidney
Several organs, including the lungs, pancreas, salivary glands and the kidneys, display complex epithelial branching patterns. Disturbed epithelial branching in a number of organs, which may be caused by genetic factors or exposure to certain drugs during development, is associated with a number of disease conditions. A good example is the kidney, which is also widely used as a model system to study general mechanisms behind epithelial branching and epithelial-mesenchymal interactions in developmental biology. During development of this organ, reciprocal interaction between the nephrogenic mesenchyme and the ureteric bud leads to the formation of the branched ureteric tree, the collecting duct and the ureter of the mature kidney.
Similar to the vascular network of the liver, the kidney imposes challenges on the OPT imaging process, and on 3D imaging in general. To test whether it is possible to better resolve the ureteric tree branching patterns in the kidney with the developed approach, IF-OPT was tested against normal single exposure OPT scans of neonatal kidneys from wild-type (wt) and Wnt4-null mutant mice in which the branched epithelium was labeled with Troma-I antibodies that recognize a cytokeratin antigen. The Wnt4 mutants were chosen in part because they display dramatic differences in signal intensities between the proximal and more distal parts of the branching ureteric tree, which normally makes detailed OPT imaging of the entire epithelium challenging, and in part to compare their ureteric bud branching pattern to that of the wild type. As can be deduced from the resulting image data, IF-OPT considerably facilitates imaging of the entire epithelial plexus in kidneys from Wnt4 null mice (compare Fig. 7(a) with Fig. 7(e) and Figs. 7(b)-7(d) with 7(f), respectively) and wt litter mates (Figs. 7(g) and 7(h)).
3.4 Human visual system scoring
In order to evaluate the effect of the proposed IF-OPT method on the visual output and to measure its performance in comparison to other methods, 21 trained biologists (trained biologists were defined as researchers holding at least a PhD and/or MD in medical and/or biological science) were presented with 10 OPT volume renderings (see Fig. 8) of the specimen shown in Fig. 6. The projection images for each data set had been processed with one of the following fusion methods: PCA (principal component analysis), DWT (discrete wavelet transform), SIDWT (shift invariant discrete wavelet transform), MP (morphology pyramid), Laplacian pyramid, PCNN (pulse coupled neural network), Average (arithmetic mean), HDRMD (MATLAB direct high dynamic range), HDRMN (MATLAB normalized intensity high dynamic range) and IF-OPT (proposed). The participants were asked to rank the top five (from 1 to 5) images that display the most detailed view of the vascular tree in a mouse liver lobe stained for smooth muscle α-actin. For all images, the participants were blinded to the image enhancement method used.
In this survey, 90.5% of the participants scored IF-OPT as the best performing method (average rank 1.48), demonstrating that IF-OPT surpasses the other tested methods for image enhancement using this type of specimen. For details on the HVS score of all methods see Table 1.
Note that for the PCNN, the parameters used were those given in the original paper : level factor σ = 1.0, time constant αT = 0.012, and normalized offset parameter VT = 4000, with the convolution core and the weighting factor of the kth channel likewise taken from that paper (⊗ denotes the convolution operation, and σ is the level factor that adjusts the average level of internal activity).
Additionally, in the image fusion toolbox (ver. 1.0, available from http://www.metapix.de/toolbox.htm) developed by Oliver Rockinger, Metapix, the settings for the wavelet and pyramid based methods were as follows: the 2D shift invariant discrete wavelet transform (SIDWT) used the ‘Haar’ wavelet filters and the discrete wavelet transform (DWT) used the ‘db2’ wavelet filters (Daubechies family). Both transforms were run in symmetric-padding (half-point) DWT extension mode. Parameters of these methods, as well as of the morphology pyramid and Laplacian pyramid algorithms, were as follows: decomposition level 4, coefficient selection rule ‘maximum value’ for the highpass combination and ‘average value’ for the lowpass combination. The MATLAB “makehdr” built-in function was used, which efficiently implements the algorithm of Reinhard et al. .
This work advocates the utility of IF-OPT for the analysis of structures with highly variable signal intensities. By assessing such structures (in our case epithelial or vascular networks) in different organs, we demonstrate the added value of the IF-OPT imaging approach as applied to highly challenging specimens. Obviously, the possibility to assess this type of specimen is dependent on the dynamic range of the detector. In this report, the experiments were carried out on the only OPT scanner that has so far been made commercially available. It should be noted, however, that even when using a high-end camera with comparably superior dynamic range, in our case fitted to the original setup for OPT imaging described previously [1, 30], the IF-OPT approach greatly benefits the results (see Fig. 9). In parallel to this work, an HDR based method, in which nine different exposures were used, was developed for transmission OPT of small specimens in a setup with a low bit depth (8-bit) camera . In comparison, the IF-OPT approach proposed herein greatly reduces the complexity of data processing and does not require retrieval of the camera's response function. Whereas the IF-OPT approach is capable of addressing the issue of highly variable contrast intensities using only three exposures, the higher number of exposures used for each projection view in the HDR approach  increases the demand for data storage capacity (and thereby the potential need for down-sampling) and increases scanning times. Furthermore, for a fluorescent specimen, an increased number of exposures for each projection view also increases the potential for adverse effects due to photo bleaching. It should also be noted that, in comparison with the HDRMN and HDRMD algorithms tested here, IF-OPT delivers improved performance.
The proposed method consists of different techniques applied to the original projection data: Image Fusion (IF), histogram adjustment (CLAHE) and filtered back-projection (FBP). In the case of IF-OPT, IF and CLAHE are applied to the projection data as described in Eq. (1), and the resulting fused images are then reconstructed using FBP. It could be argued that by individually manipulating the projection images prior to reconstruction we violate the basic assumptions for tomographic reconstruction by FBP. However, these assumptions are already violated in the original, and most common, implementation of OPT . That is, by using an implementation of the FBP algorithm, attenuation rather than emission is assumed in the imaged volume. Despite this known theoretical drawback, OPT has proven immensely useful in a large number of biological applications. It is important to note that in imaging based on biomarkers, especially when looking at anatomical data, it is most commonly sufficient to detect the presence or absence of the marker, and it is not always essential to have a linear mapping between the perceived intensity of the marker and its actual concentration. Thus it is not critical that the back-projection yields quantitatively accurate intensity values, and consequently it is not critical to exactly meet the assumptions on which the FBP algorithm is built. In the case of IF-OPT, however, the main reason for not applying the contrast enhancement after tomographic reconstruction is computational complexity. Performing CLAHE in full 3D would increase the complexity by an order of magnitude, from O(n^2) to O(n^3), and since n is typically on the order of 1024 it is not trivial to execute this type of operation, or similar ones, on the reconstructed volume using computers easily accessible to the general lab performing OPT.
The implementation of an efficient 3D CLAHE that could be applied to data on the gigabyte scale is an open research question of its own and goes beyond the scope of this paper. In Fig. 10, the effect of applying CLAHE at different stages in the imaging process is shown. Note that in generating Fig. 10(c), CLAHE was, due to computational complexity, applied to 2D data that was stacked to generate the 3D volume; it is thus only an approximation of applying CLAHE to the reconstructed volume. It is however clear from the blinded HVS scoring (see Table 2), in which expert readers were asked to rank how well anatomical structures could be seen in different volumes, that IF-OPT turned out to be superior to any other order of applying the techniques of fusion, histogram adjustment and tomographic reconstruction. In a recent study , we described a protocol whereby CLAHE was utilized to drastically improve quantitative and morphometric assessments of pancreatic islet and β-cell mass distributions. Whilst both that approach and the method presented here exploit CLAHE, there are, as outlined above, several unique features of the IF-OPT technique that distinguish it from the pure CLAHE method.
Since the suggested IF-OPT approach alters first order statistics, studies concerned with intensity-level based measurements (e.g. protein expression levels) should keep this in mind when dealing with image fusion in general. It should further be emphasized that the IF-OPT approach demands that the different exposure scans are properly aligned, or else a post-registration alignment must be carried out. Post-registration alignment apart, one possibility to deal with imprecisions in the mechanical setup, and thereby imperfect image registration between exposures, would be to take multiple exposures at each rotational position. This approach, however, does not cope with the issue of system imperfections between projections. In our setups, where the different exposures are obtained after a full rotational cycle, we have verified that the typical shift between rotations is in the sub-pixel range. As mentioned above, an issue related to the mode of image acquisition is photo bleaching. As demonstrated by Henkelman and associates, normalizing each image according to the rate of signal decay could contribute to improved results in OPT imaging . In this respect, acquiring each exposure of a projection within each rotational cycle is likely to reduce the difference in signal intensity between the first and the last projection, all exposures included. With these considerations accounted for, IF-OPT provides a robust platform for further exploration in a range of research fields. In addition to the organ systems assessed in this report, such studies may include assessments of high-low signal intensity structures in a range of tissues in normal and genetically modified animals or in pathological biopsies. Examples of such research areas are, but are not limited to, secondary vascular disorders in experimental diabetes research or neuronal pathways in neuronal disorders.
The technique may further be explored for fusion or co-registration with other source modalities (e.g., CT, PET, MRI). Altogether, the described method for image fusion of OPT projection data may open up several new research avenues for the OPT imaging technique.
We are grateful to Drs. J. Gilthorpe and J. Sharpe and members of our laboratory for helpful discussions and comments on the manuscript. This work was supported by the Swedish Research Council, the European Commission (FP-7, Grant agreement no.: CP-IP 228933-2), the Kempe Foundations and by grants from Umeå University to U.A.
References and links
1. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296(5567), 541–545 (2002). [CrossRef] [PubMed]
2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Pearson/Prentice Hall, 2008).
3. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proc. IEEE 87(8), 1315–1326 (1999). [CrossRef]
4. G. Simone, A. Farina, F. C. Morabito, S. B. Serpico, and L. Bruzzone, “Image fusion techniques for remote sensing applications,” Inf. Fusion 3(1), 3–15 (2002). [CrossRef]
5. G. K. Matsopoulos, S. Marshall, and J. N. H. Brunt, “Multiresolution morphological fusion of MR and CT images of the human brain,” IEE Proc., Vis. Image Signal Process. 141(3), 137–142 (1994).
6. S. Cheng, J. He, and Z. Lv, “Medical image of PET/CT weighted fusion based on wavelet transform,” in Proceedings of IEEE Conference on Bioinformatics and Biomedical Engineering, (IEEE, 2008), pp. 2523–2525. [CrossRef]
7. S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models,” Inf. Fusion 11(2), 114–123 (2010). [CrossRef]
9. A. Vavilin, K. Deb, and K. H. Jo, “Fast HDR image generation technique based on exposure blending,” in Trends in Applied Intelligent Systems: 23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, N. García-Pedrajas, ed. (Springer, 2010), pp. 379–388. [CrossRef]
10. A. O. Akyuz and E. Reinhard, “Noise reduction in high dynamic range imaging,” J. Vis. Commun. Image Represent. 18(5), 366–376 (2007). [CrossRef]
11. M. Ghantous, S. Ghosh, and M. Bayoumi, “A gradient-based hybrid image fusion scheme using object extraction,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 2008), pp. 1300–1303. [CrossRef]
12. Y. Yang, D. S. Park, S. Huang, and N. Rao, “Medical image fusion via an effective wavelet-based approach,” EURASIP J. Adv. Signal Process. 2010, 1–14 (2010). [CrossRef]
13. A. A. Goshtasby, “Fusion of multi-exposure images,” Image Vis. Comput. 23(6), 611–618 (2005). [CrossRef]
14. Z. B. Wang and Y. Ma, “Medical image fusion using m-PCNN,” Inf. Fusion 9(2), 176–185 (2008). [CrossRef]
15. Q. Xiao-Bo, Y. Jing-Wen, X. Hong-Zhi, and Z. Zi-Qian, “Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain,” Acta Automatica Sinica 34, 1508–1514 (2008).
16. N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” Inf. Fusion 8(2), 131–142 (2007). [CrossRef]
17. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: a simple and practical alternative to high dynamic range photography,” Comput. Graph. Forum 28(1), 161–171 (2009). [CrossRef]
18. A. Toet, “Hierarchical image fusion,” Mach. Vis. Appl. 3(1), 1–11 (1990). [CrossRef]
19. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of IEEE Conference on Computer Vision, (IEEE, 1993), pp. 173–182.
20. L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 1995), pp. 248–251. [CrossRef]
21. L. Hui, B. S. Manjunath, and S. K. Mitra, “Multi-sensor image fusion using the wavelet transform,” in Proceedings of IEEE Conference on Image Processing, (IEEE, 1994), pp. 51–55. [CrossRef]
22. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Inf. Fusion 8(2), 119–130 (2007). [CrossRef]
23. S. Nikolov, P. Hill, D. Bull, and C. N. Canagarajah, “Wavelets for image fusion,” in Wavelets in Signal and Image Analysis: From Theory to Practice, A. A. Petrosian and F. G. Meyer, eds. (Springer, 2001), pp. 213–244.
24. D. Looney and D. P. Mandic, “Multi-scale image fusion using complex extensions of EMD,” IEEE Trans. Signal Process. 57(4), 1626–1630 (2009). [CrossRef]
25. D. Mandic, M. Golz, A. Kuh, D. Obradovic, T. Tanaka, G. Souretis, W. Leong, D. Looney, and M. Hulle, “Complex Empirical Mode Decomposition for Multichannel Information Fusion,” in Signal Processing Techniques for Knowledge Extraction and Information Fusion (Springer, 2008), pp. 243–260.
26. P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of the 24th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co, 1997) pp. 369–378. [CrossRef]
27. K. Stark, S. Vainio, G. Vassileva, and A. P. McMahon, “Epithelial transformation of metanephric mesenchyme in the developing kidney regulated by Wnt-4,” Nature 372(6507), 679–683 (1994). [CrossRef] [PubMed]
28. T. Alanentalo, A. Asayesh, H. Morrison, C. E. Lorén, D. Holmberg, J. Sharpe, and U. Ahlgren, “Tomographic molecular imaging and 3D quantification within adult mouse organs,” Nat. Methods 4(1), 31–33 (2007). [CrossRef] [PubMed]
29. A. Cheddad, C. Svensson, J. Sharpe, F. Georgsson, and U. Ahlgren, “Image processing assisted algorithms for optical projection tomography,” IEEE Trans. Med. Imaging 31(1), 1–15 (2012). [CrossRef] [PubMed]
30. A. U. Eriksson, C. Svensson, A. Hörnblad, A. Cheddad, E. Kostromina, M. Eriksson, N. Norlin, A. Pileggi, J. Sharpe, F. Georgsson, T. Alanentalo, and U. Ahlgren, “Near infrared optical projection tomography for assessments of β-cell mass distribution in diabetes research,” J. Vis. Exp. (71), e50238 (2013). [PubMed]
31. S. G. Nikolov, D. R. Bull, C. N. Canagarajah, M. Halliwell, and P. N. T. Wells, “Image fusion using a 3-D wavelet transform,” in Proceedings of IEEE Conference on Image Processing and Its Applications, (IEEE, 1999), pp. 235–239. [CrossRef]
32. E. D. Pisano, S. Zong, B. M. Hemminger, M. DeLuca, R. E. Johnston, K. Muller, M. P. Braeuning, and S. M. Pizer, “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” J. Digit. Imaging 11(4), 193–200 (1998). [CrossRef] [PubMed]
33. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process. 39(3), 355–368 (1987). [CrossRef]
34. S. M. Pizer, J. B. Zimmerman, and E. V. Staab, “Adaptive grey level assignment in CT scan display,” J. Comput. Assist. Tomogr. 8(2), 300–305 (1984). [PubMed]
35. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” in Graphic Gems IV, S. H. Paul, ed. (Academic Press Professional, Inc., 1994), pp. 474–485.
36. C.-J. Du and D.-W. Sun, “Retrospective shading correction of confocal laser scanning microscopy beef images for three-dimensional visualization,” Food Bioprocess Technol. 2(2), 167–176 (2009). [CrossRef]
37. L. Choudur, U. Dayal, C. Gupta, and R. Swaminathan, “On wavelet compression and cardinality estimation of enterprise data” (2010), retrieved from http://www.hpl.hp.com/techreports/2010/HPL-2010-132.pdf.
38. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, The Morgan Kaufmann Series in Computer Graphics (Morgan Kaufmann Publishers Inc., 2005).
39. A. Hörnblad, A. Cheddad, and U. Ahlgren, “An improved protocol for optical projection tomography imaging reveals lobular heterogeneities in pancreatic islet and β-cell mass distribution,” Islets 3(4), 204–208 (2011). [CrossRef] [PubMed]