Optica Publishing Group

Global imaging with high resolution region of interest using fusion data based on dual-field of view detection system

Open Access

Abstract

X-ray micro-computed tomography (CT) is an important tool for high-resolution three-dimensional imaging, but one limitation of micro-CT is the compromise between resolution and field of view (FOV). In this paper, we develop an x-ray dual-FOV optical coupling detection (DFOCD) system for global imaging with a high-resolution region of interest (ROI). In the DFOCD system, a beam splitter separates the light into two sub-optical paths, and two objectives with different FOVs and magnifications are used in the two paths for dual-FOV imaging. A data fusion method is then proposed to register and fuse the dual-FOV data, and reconstructed images are obtained from the fused data with a back projection filtration algorithm. Dual-FOV data are collected simultaneously in the DFOCD system, which precludes artifacts in the fused images caused by phantom movement or changes between two acquisitions on a conventional micro-CT, and also saves scanning time. Simulation and experimental results show that the details in the ROI and the global morphology of the phantoms are correctly reconstructed, and the bright ring artifacts around the ROI caused by truncated data are corrected in the reconstructed images. Therefore, global images with a high-resolution ROI can be obtained in a single scan using the DFOCD system and the data fusion method, which is expected to expand the applications of micro-CT.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

X-ray micro-computed tomography (CT) generates high-resolution three-dimensional (3D) data that provide images with rich detail, and it has increasingly been used to study animal morphology, vasculature, bionic scaffolds and plant structures [1–4]. However, one limitation of micro-CT is the compromise between resolution and field of view (FOV). When the entire volume of a sample is imaged on a micro-CT, the resolution range and the observable details are fixed; detecting finer structures requires another micro-CT that provides higher resolution. For example, in global micro-CT images of a plant stem, the boundaries between the epidermis, vascular layer and stem are difficult to determine because of insufficient resolution [3]. In morphological comparison experiments on diverse non-mineralized animal tissues, phantoms were imaged globally with a SkyScan 1174 micro-CT scanner, while details were imaged at higher resolution with an Xradia Micro-CT system [4].

However, when the internal details of a sample are investigated by micro-CT, the sample may extend beyond the volume measurable by the detector, which truncates the sampled data. Bright external rings and central cupping artifacts then appear in images of the region of interest (ROI) reconstructed from the truncated data [5]. Many methods have been proposed for accurate ROI reconstruction from truncated data. Michel Defrise et al. proposed the DBP-POCS (differentiated back projection - projection onto convex sets) algorithm, which iteratively reconstructs ROI images but relies on prior knowledge of the sample [6]. Yangbo Ye, Liang Li and colleagues improved the DBP-POCS algorithm to reduce the required prior knowledge [7–10], but the values at the ROI edges of the reconstructed images remain noticeably larger than the true values. In addition, extrapolation techniques have been widely used for truncation correction in CT imaging [5,11–13]. These techniques predict the missing data by fitting a model to the available data; reconstructed images can then be obtained from the corrected data with standard reconstruction algorithms. However, images obtained by extrapolation are qualitatively appealing but quantitatively imprecise. Beyond truncation artifacts, the information outside the ROI is lost in ROI reconstructions, so the exact position of the ROI within the sample cannot be determined. Therefore, acquiring global reconstruction images with clear, high-resolution detail in the ROI remains essential and challenging in micro-CT.
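As a minimal illustration of the extrapolation idea discussed above (a generic sketch, not the authors' method or any specific technique from [5,11–13]), a truncated projection row can be extended on both sides with a smooth cosine taper from its boundary value down to zero before filtering; the function name and the taper length are illustrative choices.

```python
import numpy as np

def extrapolate_row(row, pad=64):
    """Extend a truncated projection row on both sides with cosine
    tapers that fall smoothly from the boundary value to zero --
    a simple stand-in for model-based extrapolation."""
    t = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, pad)))  # 1 -> 0
    left = row[0] * t[::-1]    # rises 0 -> row[0] toward the data
    right = row[-1] * t        # falls row[-1] -> 0 away from the data
    return np.concatenate([left, row, right])

row = np.full(100, 2.0)        # a flat truncated row with non-zero edges
padded = extrapolate_row(row)
```

The padded row is continuous at both boundaries, which suppresses the sharp truncation edge that produces the bright ring artifact during filtering.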

The x-ray detector is the key component that determines the resolution of a micro-CT. An x-ray optical coupling detector, composed of a scintillator, light microscopy optics and a charge-coupled device (CCD), can collect images with spatial resolution in the micrometer and submicrometer range and is widely used in micro-CT imaging [14–16]. In this paper, we develop a dual-FOV optical coupling detection (DFOCD) system for global imaging with high-resolution ROI images. Unlike a previous dual-FOV system with two x-ray sources and two different detectors [17], the DFOCD system needs only one x-ray source to realize dual-FOV CT imaging. The DFOCD system has two sub-optical paths: a beam splitter placed right after the scintillator separates the visible light into a transmission sub-optical path and a reflection sub-optical path, and two microscope objectives with different FOVs and magnifications are used in the two paths. The small-FOV (SFOV) path is designed for high-resolution ROI imaging, while the large-FOV (LFOV) path provides low-resolution global imaging. SFOV and LFOV are relative terms whose values depend on the performance of the DFOCD components and the experimental parameters. Dual-FOV data are collected simultaneously in the DFOCD system, which precludes artifacts in the reconstructed images caused by phantom movement or changes between two acquisitions on a conventional micro-CT, and saves scanning time. In addition, a relay lens is introduced to expand the space in front of the objectives so that the beam splitter can be installed.

High-resolution SFOV data from the DFOCD system are truncated, and the non-truncated LFOV data have low resolution, so a data fusion method is required to obtain a non-truncated global image with a high-resolution ROI. Transform-domain image fusion methods, such as wavelet-based fusion, can provide high-contrast fused images, but they are computationally complex and the quality of the fused images depends on the fusion parameters [18]. Spatial-domain fusion methods are simple but yield fused images with poor contrast and reduced resolution, such as the weighted-sum method in which the weights are fixed according to image feature values [19]. The existing fusion methods are therefore not applicable to the dual-FOV data in this paper. Hence, a spatial data fusion method that maintains the contrast and high resolution of the data is proposed according to the characteristics of the dual-FOV data.

In the data fusion method, the ROI of the LFOV data is replaced by the corresponding SFOV data, and this replacement is performed on differentiated back projection (DBP) images. A DBP image is the Hilbert transform, along PI-lines, of the weighted image to be reconstructed [20,21]. In principle, the DBP images of a sample for a given x-ray source position are unique, so the dual-FOV DBP images from the DFOCD system have consistent gray characteristics, such as gray values and contrast ranges, which promotes high-quality fusion with the proposed strategy. In addition, weighted superimposition is applied to the ROI edges of the fused images to eliminate the streak artifacts that arise there in real experiments. Reconstruction images of the samples are obtained from the fused data with the back projection filtration (BPF) algorithm [20,21]. We designed a phantom with different line pairs and matrices of circles for simulation experiments to verify the feasibility of the fusion-based reconstruction strategy, and experiments on a cylinder phantom and a botanical branch phantom were implemented to further verify the practicability and effectiveness of the proposed method.

2. Methods

2.1 DFOCD system

The DFOCD system is shown in Fig. 1 (the blue rectangle region). A scintillator located at the front of the system converts x-rays into visible light. A beam splitter then divides the visible light evenly, and two objectives with different magnifications and FOVs collect the reflected and transmitted light respectively, forming the dual-FOV detection. However, the side length of the beam splitter is 25.4 mm or larger, while commercial objectives have short working distances (several millimeters to thirty millimeters) that cannot provide enough space to install it. A relay lens is therefore introduced in front of the objectives to expand the space, so that objectives with short working distances, such as plan achromat and fluorite objectives, can be integrated into the DFOCD system; the larger space also makes adjusting the optical components more convenient. Finally, the transmitted and reflected light pass through the tube lenses and are collected by the CCDs. The two objective-tube-lens systems and CCDs constitute the reflection sub-optical path (optical path 1) and the transmission sub-optical path (optical path 2). Note that the beam splitter no longer sits in the parallel "infinity regime" between an objective and a tube lens, as in common imaging systems [6,16]; the position of the scintillator on the optical axis is adjustable for focusing, and a finite conjugate regime may be formed between the objectives and tube lenses.


Fig. 1. Schematic illustration of the DFOCD system.


In addition, to obtain mechanically registered sampling data, a mechanical adjustment mechanism is developed in the DFOCD system. The adjustment process is similar to the mechanical adjustment technique described in our previous work [22]. The reflector in optical path 2 (see Fig. 1) is finely rotated around the v axis and the ω axis until the center of the image from optical path 2 coincides with that of the image from optical path 1 and the ROIs of the two images also coincide. To facilitate image comparison during this registration adjustment, the LFOV sampling images are bilinearly interpolated so that their pixel size matches that of the SFOV sampling images.

In this paper, a CsI:Tl scintillator with high light yield is used in the DFOCD system. The emission wavelengths of CsI:Tl (maximum emission at 550 nm) match the quantum efficiency of the selected CCD (Princeton Instruments PI2048B) well. A C-Mount Achromatic Pair 1:1 from Edmund Optics is used as the relay lens, with a magnification of −1; the object and image distances are both 60 mm, and its distortion is smaller than 0.0002%. The transmission wavelength range of the relay lens, 425-675 nm, matches the emission wavelengths of the CsI:Tl scintillator well. The beam splitter is a Thorlabs BS013, with reflection and transmission ratios both of about 50%. Objective 1 is a Sigma Koki MPlanApo 5X objective with a real FOV (1/2-inch imaging device) of 0.96 mm × 1.28 mm, used for SFOV high-resolution imaging. Objective 2 is a Sigma Koki MPlanApo 2X objective with a real FOV (1/2-inch imaging device) of 2.4 mm × 3.2 mm, used for LFOV low-resolution imaging.

2.2 Data fusion method

SFOV sampling data from the DFOCD system have high resolution but are truncated, which causes truncation artifacts in directly reconstructed images. These artifacts are generated in the filtering step of the reconstruction, so data fusion can be performed at any step before filtering to avoid them. To obtain high-quality fused images, the data fusion method, comprising registration and fusion, is performed on the DBP images within the BPF reconstruction process, because the gray characteristics of the dual-FOV DBP images, such as gray values and contrast ranges, are consistent. In addition, the BPF algorithm can reconstruct an image within a selected ROI by computing its values on the portions of PI-lines that fill that ROI, so the fusion method can process DBP images containing only the ROI, which reduces the amount of data to be processed.

Data registration is performed first. Mechanically registered dual-FOV sampling data (high-resolution ROI data and low-resolution global data) are obtained with the DFOCD system, and a computational registration algorithm is then applied to the DBP images for fine registration; the popular Lucas-Kanade algorithm is used in this paper [23,24]. The SFOV DBP images serve as the template images and the LFOV DBP images as the input images. Because the DBP images are computed from mechanically registered sampling data, the zero transformation is used as the initial estimate of the affine warp in the iterative calculation, and the affine transformation matrix of each input image is obtained iteratively by the Lucas-Kanade algorithm. In addition, linear interpolation is applied to the LFOV DBP images to make their pixel size the same as that of the SFOV DBP images [25,26].
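The Lucas-Kanade iteration can be sketched as follows. This is a minimal sketch under simplifying assumptions: it estimates only a pure translation rather than the full affine warp used in the paper, and the function name and iteration count are illustrative. As in the text, the zero shift is the initial estimate because the data are already mechanically registered.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def lk_translation(template, image, iters=30):
    """Minimal Lucas-Kanade (Gauss-Newton) estimate of a pure
    translation aligning `image` to `template`."""
    p = np.zeros(2)                                  # (row, col) shift
    for _ in range(iters):
        warped = nd_shift(image, p, order=1, mode='nearest')
        gy, gx = np.gradient(warped)                 # image gradients
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        err = (template - warped).ravel()            # residual image
        dp, *_ = np.linalg.lstsq(J, err, rcond=None)
        p -= dp                                      # Gauss-Newton update
        if np.linalg.norm(dp) < 1e-4:
            break
    return p
```

For smooth images and small initial misalignment, as is the case after mechanical registration, a few iterations suffice for subpixel convergence.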

Data fusion is then implemented on the finely registered DBP images. First, to reduce differences between the two datasets caused by unfavorable factors such as noise in the actual scan, the SFOV DBP images are multiplied by a weighting function equal to the ratio of the ROI of the LFOV DBP image to that of the SFOV DBP image, which is usually close to 1. The weighted SFOV DBP images then replace the corresponding data in the LFOV DBP images to generate global DBP images with a high-resolution ROI. Compared with the weighted-sum method, this fusion method is simple and preserves the high resolution of the ROI well.
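The replacement step can be sketched in a few lines. Here the weighting function is taken as the ratio of mean DBP values inside the ROI (one plausible reading of the ratio described above; the function and argument names are illustrative):

```python
import numpy as np

def fuse_dbp(dbp_lfov, dbp_sfov, roi_mask):
    """Scale the SFOV DBP values by the LFOV/SFOV mean ratio inside
    the ROI, then substitute them into the LFOV DBP image."""
    w = dbp_lfov[roi_mask].mean() / dbp_sfov[roi_mask].mean()  # ~1
    fused = dbp_lfov.copy()
    fused[roi_mask] = w * dbp_sfov[roi_mask]
    return fused
```

Outside the ROI the fused image is identical to the LFOV DBP image; inside, the high-resolution SFOV values are kept, only rescaled to match the LFOV gray level.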

However, fused DBP images obtained by the above steps usually show obvious numerical differences at the ROI edges, producing stitching streak artifacts there. Therefore, the values of $N$ pixels centered on the ROI edges along the PI-lines are weighted and superimposed according to:

$${q_{fusion}}(i) = \left\{ {\begin{array}{ll} {{q_H}(i) \cdot \dfrac{i}{N} + {q_L}(i) \cdot \left(1 - \dfrac{i}{N}\right),}&{i\textrm{-th pixel is not in ROI}}\\ {{q_H}(i) \cdot \left(1 - \dfrac{i}{N}\right) + {q_L}(i) \cdot \dfrac{i}{N},}&{i\textrm{-th pixel is in ROI}} \end{array}} \right.\quad i = 1,2,3,\ldots,N,$$
where ${q_{fusion}}(i)$, ${q_H}(i)$ and ${q_L}(i)$ represent the fused DBP value, the SFOV DBP value and the LFOV DBP value of the $i$-th weighted superimposition pixel centered on one edge point, respectively. The number of weighted superimposed pixels $N$ is experimentally determined to be 10–15. A comparison of results with and without weighted superimposition is given in section 4.2.
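Equation (1) can be implemented literally per pixel (the function name is illustrative; the index convention follows the equation as stated):

```python
def fuse_edge_pixel(q_h, q_l, i, N, in_roi):
    """Blend the SFOV value q_h and LFOV value q_l for the i-th of
    the N weighted pixels centered on an ROI edge point along a
    PI-line, following Eq. (1)."""
    w = i / N
    if in_roi:
        return q_h * (1.0 - w) + q_l * w
    return q_h * w + q_l * (1.0 - w)
```

At each edge pixel the two DBP values are mixed with complementary weights, so the transition across the ROI boundary is gradual rather than an abrupt substitution.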

2.3 Overall workflow

The overall workflow of the reconstruction process based on the data fusion method is shown in Fig. 2. First, SFOV and LFOV sampling data are collected simultaneously with the DFOCD system. Next, according to the Beer-Lambert law [27], projection data (integrals of the attenuation along the ray paths) are obtained by logarithmic transformation of the ratios of the sampling data with the sample to the sampling data without the sample, and DBP images are computed from the projection data. The fused DBP images are then obtained with the data fusion method described in section 2.2. Finally, the fused DBP images are filtered along the PI-lines by the Hilbert transform to complete the BPF reconstruction. Global CT images with a high-resolution ROI and without truncation artifacts are thereby generated from the DFOCD sampling data and the proposed reconstruction process.
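The logarithmic transformation in the first step of the workflow is straightforward; a minimal sketch (the epsilon guard against zero counts is an implementation detail, not from the paper):

```python
import numpy as np

def to_projection(sample_img, flat_img, eps=1e-6):
    """Beer-Lambert log transform: p = -ln(I / I0), where I is the
    sampling image with the sample and I0 the flat image without it."""
    ratio = np.clip(sample_img / np.maximum(flat_img, eps), eps, None)
    return -np.log(ratio)
```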


Fig. 2. Image reconstruction workflow for the proposed DFOCD system.


3. Simulation study

To evaluate the feasibility of the data fusion method, a simulated CT imaging experiment with a simulated resolution board is implemented. The resolution board contains line pairs and matrices of circles with different sizes and gray values for resolution and accuracy investigation, and it is projected using Siddon's algorithm [28]. The sample space is 400 × 400 × 20 voxels with a voxel size of 0.25 mm × 0.25 mm × 0.25 mm. The simulated LFOV low-resolution detector contains 512 × 512 pixels of 0.5 mm × 0.5 mm, and the simulated SFOV high-resolution detector contains 380 × 380 pixels of 0.25 mm × 0.25 mm. The source-to-detector distance (SDD) and source-to-object distance (SOD) are 1100 mm and 500 mm, respectively. The sampling angles are spaced at 1 degree intervals over a total scan angle of 360 degrees. The influence of beam hardening, scattering and other factors is ignored in this simulation.
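For this cone-beam geometry the effective sampling interval at the object plane follows from the geometric magnification M = SDD/SOD. A quick check with the stated distances and pixel sizes:

```python
# Geometric magnification and object-plane sampling for the
# simulated geometry (values taken from the text).
SDD, SOD = 1100.0, 500.0
M = SDD / SOD                       # geometric magnification = 2.2
for name, pix_mm in [("LFOV", 0.5), ("SFOV", 0.25)]:
    print(f"{name}: object-plane sampling = {pix_mm / M:.4f} mm")
```

The 0.25 mm SFOV detector pixels thus sample the object about twice as finely as the 0.5 mm LFOV pixels, consistent with the dual-resolution design.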

A slice of the simulated phantom is shown in Fig. 3(a), and the phantom parameters are listed in Table 1. Reconstruction images are shown in Fig. 3(b)-(d), with the ROI marked by the green circle in Fig. 3(a). In Fig. 3(b), the ROI edges of the image reconstructed from SFOV data show bright external rings, and the region outside the ROI is distorted. The image reconstructed from LFOV data in Fig. 3(c) shows the correct structure of the sample, but the details of the ROI are blurred. Figures 3(d) and (f) show the reconstruction from the fused data: the details of the ROI in Fig. 3(f) are clear, the bright ring at the ROI edges is corrected, and the structures outside the ROI are reconstructed correctly. Specifically, the line pairs with a line width of 0.25 mm cannot be distinguished in the LFOV reconstruction but are distinguishable in the fusion reconstruction, as indicated by the blue arrows in Fig. 3(e) and (f). Moreover, in Fig. 3(e) the matrix of circles indicated by the green arrow shows inaccurate circle shapes due to the low resolution, whereas Fig. 3(f) shows the correct shapes. The reconstruction from the fused data therefore contains more information about the sample.


Fig. 3. (a) Middle layer of the simulated phantom; (b) reconstruction image using SFOV data; (c) reconstruction image using LFOV data; (d) reconstruction image using fusion data; (e) and (f) are the zoom-in images of red rectangle regions in (c) and (d), respectively. Display window is [-0.2 0.8]. Results of the non-middle layers are similar.


Table 1. Simulated phantom geometry and value

Figure 4 shows one-dimensional profiles of the ROI in Fig. 3(a), (b) and (d). The values at the ROI edges of the image reconstructed from SFOV data are much larger than those of the original simulation data, corresponding to the bright ring in Fig. 3(b), and the values at the ROI center are also slightly larger than the original values. In contrast, the values of the reconstruction from the fused data are close to the original simulation values. This profile comparison further indicates that the truncation artifacts in the high-resolution ROI are corrected by using the fused data.


Fig. 4. One-dimensional profiles of row 334 in Fig. 3(a), (b) and (d). The horizontal line of row 334 is shown in the green solid line in Fig. 3(b).


4. Realistic phantom study

4.1 Spatial resolution and MTF

To evaluate the spatial resolution of the DFOCD system, a micro-chart from the Japan Inspection Instruments Manufacturers' Association (JIMA RT RC-2) is imaged with the micro-CT equipped with the DFOCD system. The micro-chart provides slit patterns from 15.0 µm down to 0.4 µm (16 different widths). The x-ray source is operated at a tube voltage of 40 kVp, and the exposure time is 200 s.

Figure 5(a)-(d) shows the images obtained from the LFOV and SFOV optical paths. The corresponding modulation transfer functions (MTF), calculated with the method described by S. Richard et al. [29], are shown in Fig. 6. Line pairs with a line width of 2 µm are clearly visible in the SFOV image, while line pairs with a line width of 4 µm can be distinguished in the LFOV image; the LFOV data also show a lower MTF than the SFOV data. The SFOV optical path thus has higher resolution than the LFOV optical path and is suitable for high-resolution ROI imaging of samples.
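A generic way to obtain an MTF curve is to Fourier-transform a measured line spread function (a simplified sketch; the paper follows the method of S. Richard et al. [29], which this does not reproduce exactly):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized magnitude of the Fourier transform of a
    line spread function sampled at `pixel_mm` spacing; returns
    (spatial frequencies in cycles/mm, MTF values)."""
    lsf = np.asarray(lsf, dtype=float) - np.min(lsf)   # remove baseline
    F = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)      # cycles/mm
    return freqs, F / F[0]
```

A narrower line spread function yields an MTF that stays high out to larger spatial frequencies, which is the behavior seen for the SFOV path in Fig. 6.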


Fig. 5. (a) Image from LFOV optical path; (b) image from SFOV optical path; (c) and (d) are the zoom-in images of red rectangle regions in (a) and (b), respectively.


Fig. 6. The MTF for Fig. 5(c) and (d).


4.2 Cylinder phantom imaging

To further demonstrate the usefulness and feasibility of the proposed method, experiments with a real cylindrical nylon phantom are implemented. The phantom, about 5 mm in diameter and shown in Fig. 7(a), contains three small cylinders filled with ferrous carbonate particles, white lime particles and gray lime particles, respectively; each small cylinder is about 1 mm in diameter. The phantom is scanned by the micro-CT with the DFOCD system. The x-ray source is operated at a tube voltage of 50 kVp, the SOD is 44.153 mm and the SDD is 60.782 mm. Each CCD of the DFOCD system has 1024 × 1024 pixels with a pixel size of 27 µm × 27 µm, and 360 sampling angles are evenly spaced over 0 to 360 degrees.

Figure 7(b) is the reconstruction from the LFOV sampling data, which shows the complete cylindrical phantom, but the details of the particles in the three small circles are blurred. Figure 7(c) shows clear particle structures in the small circles, but the truncated SFOV sampling data lead to a bright ring artifact. Note that the bright ring in Fig. 7(c) is incomplete and asymmetric: there is an offset between the rotation center and the origin of the detector during scanning, and the offset correction applied during reconstruction leads to asymmetric missing data and hence an asymmetric ring.


Fig. 7. (a) Cylinder nylon phantom; (b) reconstruction image using LFOV data, display window is [-0.2 1.53]; (c) reconstruction image using SFOV data. Display window is [-0.35 2.1].


Fused reconstruction images of the phantom are obtained with the data fusion method. Figure 8 displays DBP images computed from the fused data and the LFOV data. DBP images from the LFOV data are shown in Fig. 8(a) and (d), and Fig. 8(b) and (e) are DBP images from the fused data without weighted superimposition at the ROI edges. As shown in regions 1 and 2 of Fig. 8(g), these fused images exhibit stitching streak artifacts along the lines indicated by the blue arrows. The ROI edges of the fused DBP image are therefore processed with the weighted superimposition method described in section 2.2, with the results shown in Fig. 8(c) and (f); in the zoom-in image Fig. 8(g), the stitching streak artifacts in regions 1 and 2 almost disappear in the corresponding regions 3 and 4. Figure 9 gives the profiles along the lines marked in Fig. 8(d), (e) and (f). The values of the fused DBP image are roughly consistent with those of the LFOV DBP image, which is the premise for obtaining correct reconstructed images. Moreover, as shown in the sub-profiles in Fig. 9, the fused data without weighted superimposition show obvious differences at the ROI edges (indicated by blue arrows), caused by the streak artifacts, whereas the sub-profiles of the fused data with weighted superimposition show that these differences are corrected and the ROI edges are well fused.


Fig. 8. (a) DBP image using LFOV data; (b) DBP image using the fusion data without weighted superimposition on the ROI edges; (c) DBP image using the fusion data with weighted superimposition on the ROI edges; (d), (e) and (f) are the zoom-in images of red rectangle regions in (a), (b) and (c), respectively. (g) The zoom-in image of rectangle regions in (e) and (f), labels 1-4 correspond to 1-4 in (e) and (f) respectively.


Fig. 9. Profiles along the lines marked in the detailed zoom-in images of Fig. 8(a), (b) and (c); the right panel shows the sub-profiles of regions 1 and 2 on the left. W-S denotes weighted superimposition.


Figures 10(a) and (b) are the LFOV reconstruction and the fused reconstruction, respectively. Figure 10(a) has insufficient resolution for observing details, as shown in the zoom-in image Fig. 10(c), whereas the zoom-in image Fig. 10(d) of the fused reconstruction shows the details of the ROI clearly. Figure 11 shows profiles of the fused reconstruction, the SFOV reconstruction (Fig. 7(c)) and the LFOV reconstruction, with the LFOV profile serving as the reference. The values at the left edge of the SFOV profile are much larger than those of the LFOV and fused profiles owing to data truncation. The average values of the sub-profiles in Fig. 11 for the fused, LFOV and SFOV reconstructions are 0.704, 0.697 and 0.759, respectively: the SFOV values are larger than the reference values, while the fused values are close to them. Moreover, as indicated by the blue arrows in Fig. 11, the outlines of small particles that are not visible in the LFOV reconstruction appear in the fused reconstruction.


Fig. 10. (a) Reconstruction image using LFOV data; (b) reconstruction image using fusion data; (c) and (d) are the zoom-in images of the red rectangle regions in (a) and (b), respectively. Display window is [-0.2 1.54].


Fig. 11. Profiles along the line marked in Fig. 10(a) of the fusion reconstruction image, LFOV reconstruction image and SFOV reconstruction image. The right panel shows the sub-profiles within the black lines on the left.


4.3 Botanical branch imaging study

A real botanical branch phantom, a sprouted trunk with a maximum diameter of about 5.2 mm cut from a tree, is imaged by the DFOCD system, as shown in Fig. 12(a). The parameter settings for obtaining the dual-FOV sampling images are listed in Table 2. The LFOV reconstruction in Fig. 12(b) shows no observable details, and the SFOV reconstruction in Fig. 12(c) has a bright ring artifact caused by the truncated data; as in Fig. 7(c), the asymmetry of the ring is due to the asymmetric truncation caused by the offset of the detector center.


Fig. 12. (a) Botanical branch phantom; (b) reconstruction image using LFOV data, display window is [-0.14 1.02]; (c) reconstruction image using SFOV data, display window is [-0.74 2.79].


Table 2. Imaging parameter settings

Figure 13 shows the fused reconstruction and the LFOV reconstruction of the botanical branch phantom. In Fig. 13(c), labels 1-4 indicate the heartwood, sapwood, cambium and bark of the trunk, respectively, and label 5 indicates the tree bud. The ROI, marked by the green arcs and straight lines, contains the tree bud and part of the trunk. The x-ray attenuation of the sapwood is larger than that of the heartwood, so the values of the heartwood in the reconstructed image are very small owing to beam hardening and related effects. The ratio of heartwood to sapwood in the trunk can be calculated from Fig. 13(a) and (b). In the ROI of Fig. 13(c), the bright substances indicated by label 6 (the details of interest) have high contrast and clear boundaries with the other structures, so they can easily be extracted and their proportion calculated, whereas most of the bright substances in the ROI of Fig. 13(d) are blurred and have low contrast with the other structures.


Fig. 13. Reconstruction images using fusion data (a) and LFOV data (b); (c) and (d) are the zoom-in images of the red rectangle regions in (a) and (b), respectively. Display window is [-0.14 1.19].


Figure 14 shows 3D renderings (in Avizo) of the botanical branch phantom with different display windows. Figures 14(a) and (b) show the distribution of all structures of the phantom; in Fig. 14(b), the fibrous structures (label 2 in Fig. 13(c)) located in the ROI are resolved more densely than those outside it, confirming the high-resolution ROI details of the fused reconstruction. Figures 14(c)-(f) show the structures containing the bright substances (label 6 in Fig. 13(c)), rendered as the red regions. As shown in Fig. 13, the morphology of the bright substances is small and granular. However, flake-like red regions (indicated by blue arrows) appear in Fig. 14(a), (c) and (e), revealing incorrect segmentation caused by insufficient resolution, whereas Fig. 14(b), (d) and (f) show small granular bright substances that are correctly segmented in the fused reconstruction. Therefore, fused reconstruction images support both global large-scale analyses (the ratio of heartwood to sapwood) and local detailed analyses of the ROI (the proportion of bright substances in the tree bud during the germination of the botanical branch phantom).


Fig. 14. 3D rendering of the botanical branch phantom using the LFOV data (a), (c) and (e); 3D rendering of the botanical branch phantom using the fusion data (b), (d) and (f). Display window of (a) and (b) is [0.05 0.35]. (c), (d), (e) and (f) represent different views of the phantom, and display window of (c), (d), (e) and (f) is [0.47 0.53].


5. Discussion and conclusion

In this paper, we propose a DFOCD system for dual-FOV image acquisition and a data fusion method for CT reconstruction. The DFOCD system contains two sub-optical paths and realizes dual-FOV imaging with two objectives of different magnification and FOV. In particular, a relay lens is introduced to expand the space available for installing the beam splitter. An achromatic doublet pair is used as the relay lens; such a relay commonly consists of two symmetric infinite-conjugate lenses with the same focal length, so that optical aberrations cancel [30]. Because the exit pupil of the achromatic relay is at infinity, the image size remains uniform as the focus varies and off-axis images match the central image, so the image distortion introduced by the relay is small and is ignored in this paper. In addition, the sampling data from the two sub-optical paths are registered by mechanical adjustment, which facilitates subsequent data processing. Meanwhile, a data fusion method, implemented on DBP images to obtain high-quality fused images, is proposed, and images of the phantoms are reconstructed from the fused data with the BPF algorithm.

Simulation results show that the resolution board is correctly reconstructed from the fused data: the details of interest in the ROI are observable, and the bright ring artifacts caused by truncated data are corrected. The proposed data fusion method is simple and effective. In the experiments on the cylindrical phantom, the average reconstruction values of the ROI profiles for the fused, LFOV and SFOV images are 0.704, 0.697 and 0.759, respectively; the values reconstructed from the truncated SFOV data are clearly larger than the reference values (those of the LFOV image), while the values from the fused data are close to them. The errors of the SFOV reconstruction are thus effectively corrected by supplementing the missing data with LFOV data. In the experiments on the botanical branch phantom, the bright substances, which cannot be segmented correctly in the LFOV global reconstruction because of insufficient resolution, are correctly segmented in the high-resolution ROI of the fused reconstruction. Fused reconstruction images therefore provide better knowledge of the samples.

Naturally, the proposed fusion method is also applicable to data obtained through two acquisitions on micro-CT systems with different FOVs. However, the proposed DFOCD system collects the dual-FOV data simultaneously, an irreplaceable advantage over two separate acquisitions on a conventional micro-CT for samples that move or change over time; the simultaneous acquisition also reduces the data acquisition time. On the other hand, the light generated by the scintillator in the DFOCD system is split into two beams that are detected separately, which reduces the detection efficiency and increases the noise in each sampled image, and in high-resolution imaging the SFOV further limits the amount of light entering the detector. Improving the optical components to increase the detection efficiency of the system is therefore necessary, for example by using a scintillator with higher light yield and a CCD with higher detection efficiency. Our future work will address higher-resolution imaging of large samples using other objectives.

Funding

National Natural Science Foundation of China (No. 61771328); National Key Research and Development Program of China (2017YFB1103900).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. T. Ho and D. W. Hutmacher, “A comparison of micro CT with other techniques used in the characterization of scaffolds,” Biomaterials 27(8), 1362–1376 (2006). [CrossRef]  

2. S. Y. Wan and A. P. Kiraly, “Extraction of the hepatic vasculature in rats using 3-D micro-CT images,” IEEE Trans. Med. Imaging 19(9), 964–971 (2000). [CrossRef]  

3. V. C. Paquit, S. S. Gleason, and U. C. Kalluri, “Monitoring plant growth using high resolution micro-CT images,” Proc. SPIE 7877, 78770W (2011). [CrossRef]  

4. B. D. Metscher, “Micro CT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues,” BMC Physiol. 9(1), 11 (2009). [CrossRef]  

5. K. P. Anoop and K. Rajgopal, “Estimation of missing data using windowed linear prediction in laterally truncated projections in cone-beam CT,” in International Conference of the IEEE Engineering in Medicine & Biology Society (2007), pp. 2903–2906.

6. M. Defrise, F. Noo, R. Clackdoyle, and H. Kudo, “Truncated Hilbert transform and image reconstruction from limited tomographic data,” Inverse Probl. 22(3), 1037–1053 (2006). [CrossRef]  

7. Y. Ye, H. Yu, Y. Wei, and G. Wang, “A General Local Reconstruction Approach Based on a Truncated Hilbert Transform,” Int. J. Biomed. Imaging 2007, 1–8 (2007). [CrossRef]

8. Y. Ye, H. Yu, and G. Wang, “Exact Interior Reconstruction from Truncated Limited-Angle Projection Data,” Int. J. Biomed. Imaging 2008, 1–6 (2008). [CrossRef]

9. L. Yu, Y. Zou, E. Y. Sidky, C. A. Pelizzari, P. R. Munro, and X. Pan, “Region of interest reconstruction from truncated data in circular cone-beam CT,” IEEE Trans. Med. Imaging 25(7), 869–881 (2006). [CrossRef]  

10. L. Li, K. Kang, Z. Chen, L. Zhang, and Y. Xing, “A general region-of-interest image reconstruction approach with truncated Hilbert transform,” J. X-Ray Sci. Technol. 17(2), 135–152 (2009). [CrossRef]  

11. B. Ohnesorge, T. Flohr, K. Schwarz, J. P. Heiken, and K. T. Bae, “Efficient correction for CT image artifacts caused by objects extending outside the scan field of view,” Med. Phys. 27(1), 39–46 (2000). [CrossRef]  

12. J. Hsieh, E. Chao, J. Thibault, B. Grekowicz, A. Horst, S. McOlash, and T. J. Myers, “A novel reconstruction algorithm to extend the CT scan field-of-view,” Med. Phys. 31(9), 2385–2391 (2004). [CrossRef]  

13. H. Zenji, T. Sano, and M. Nemoto, “A new form of the extrapolation method for absorption correction in quantitative X-ray microanalysis with the analytical electron microscope,” Ultramicroscopy 35(1), 27–36 (1991). [CrossRef]  

14. T. Martin and A. Koch, “Recent developments in X-ray imaging with micrometer spatial resolution,” J. Synchrotron Radiat. 13(2), 180–194 (2006). [CrossRef]

15. T. Martin, P. A. Douissard, M. Couchaud, A. Cecilia, and A. Rack, “LSO-Based Single Crystal Film Scintillator for Synchrotron-Based Hard X-Ray Micro-Imaging,” IEEE Trans. Nucl. Sci. 56(3), 1412–1418 (2009). [CrossRef]  

16. M. D. Simon, J. Schock, and F. Pfeiffer, “Dual-energy micro-CT with a dual-layer, dual-color, single-crystal scintillator,” Opt. Express 25(6), 6924–6935 (2017). [CrossRef]  

17. Q. Xu, H. Yu, J. Bennett, P. He, R. Zainon, R. Doesburg, A. Opie, M. Walsh, H. Shen, A. Butler, P. Butler, X. Mou, and G. Wang, “Image reconstruction for hybrid true-color micro-CT,” IEEE Trans. Biomed. Eng. 59(6), 1711–1719 (2012). [CrossRef]  

18. P. Ganasala and A. D. Prasad, “Medical image fusion based on laws of texture energy measures in stationary wavelet transform domain,” Int. J. Imaging Syst. Technol. 30(3), 544–557 (2020). [CrossRef]

19. P. J. Alwin, S. Paul, and P. Pushpalatha, “A Comparitive Study of Image Fusion in MRI and CT Images,” Int. J. Mech. Eng. Inform. Technol. 2(11), 875–882 (2014).

20. J. Zou, H. Chen, Q. Zhang, Y. Kang, and D. Xia, “Fast Cone-Beam CT Image Reconstruction Based on BPF Algorithm: Application to ORTHO-CT,” Int. J. Comput. Methods 11(04), 1350067 (2014). [CrossRef]  

21. Z. Yu and X. Pan, “Exact image reconstruction on PI-lines from minimum data in helical cone-beam CT,” Phys. Med. Biol. 49(6), 941–959 (2004). [CrossRef]  

22. X. Xia, X. Hu, Y. Mu, and J. Zou, “Dual-Energy Micro-CT Using GAGG:Ce and LYSO:Ce Scintillators,” IEEE Trans. Nucl. Sci. 68(2), 236–244 (2021). [CrossRef]  

23. B. D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence (1981), pp. 674–679.

24. S. Baker and I. Matthews, “Lucas-Kanade 20 Years On: A Unifying Framework,” Int. J. Comput. Vis. 56(3), 221–255 (2004). [CrossRef]

25. T. Lehnhäuser and M. Schäfer, “Improved linear interpolation practice for finite-volume schemes on complex grids,” Int. J. Numer. Meth. Fluids 38(7), 625–645 (2002). [CrossRef]

26. S. Hill, Trilinear Interpolation, (Betascript Publishing, 1994).

27. K. B. Mathieu, S. C. Kappadath, R. A. White, E. N. Atkinson, and D. D. Cody, “An empirical model of diagnostic x-ray attenuation under narrow-beam geometry,” Med. Phys. 38(8), 4546–4555 (2011). [CrossRef]  

28. R. L. Siddon, “Fast calculation of the exact radiological path for a three-dimensional CT array,” Med. Phys. 12(2), 252–255 (1985). [CrossRef]  

29. S. Richard, D. B. Husarik, G. Yadava, S. N. Murphy, and E. Samei, “Towards task-based assessment of CT performance: System and object MTF across different reconstruction algorithms,” Med. Phys. 39(7Part1), 4115–4122 (2012). [CrossRef]  

30. S. K. Eckhardt and N. L. Johnson, “Apparatus for Measuring the Retroreflectance of Materials,” U.S. patent 8675194 (18 March 2014).



Figures (14)

Fig. 1. The schematic illustration of the DFOCD system.
Fig. 2. Image reconstruction workflow for the proposed DFOCD system.
Fig. 3. (a) Middle layer of the simulated phantom; (b) reconstruction image using SFOV data; (c) reconstruction image using LFOV data; (d) reconstruction image using fusion data; (e) and (f) are the zoom-in images of the red rectangle regions in (c) and (d), respectively. Display window is [-0.2 0.8]. Results of the non-middle layers are similar.
Fig. 4. One-dimensional profiles of row 334 in Fig. 3(a), (b) and (d). The horizontal line of row 334 is shown as the green solid line in Fig. 3(b).
Fig. 5. (a) Image from the LFOV optical path; (b) image from the SFOV optical path; (c) and (d) are the zoom-in images of the red rectangle regions in (a) and (b), respectively.
Fig. 6. The MTF for Fig. 5(c) and (d).
Fig. 7. (a) Cylinder nylon phantom; (b) reconstruction image using LFOV data, display window [-0.2 1.53]; (c) reconstruction image using SFOV data, display window [-0.35 2.1].
Fig. 8. (a) DBP image using LFOV data; (b) DBP image using the fusion data without weighted superimposition on the ROI edges; (c) DBP image using the fusion data with weighted superimposition on the ROI edges; (d), (e) and (f) are the zoom-in images of the red rectangle regions in (a), (b) and (c), respectively; (g) the zoom-in image of the rectangle regions in (e) and (f), where labels 1-4 correspond to 1-4 in (e) and (f), respectively.
Fig. 9. Profiles along the lines marked in the detailed zoom-in images of Fig. 8(a), (b) and (c); the right panel shows the sub-profiles of regions 1 and 2 on the left. W-S denotes weighted superimposition.
Fig. 10. (a) Reconstruction image using LFOV data; (b) reconstruction image using fusion data; (c) and (d) are the zoom-in images of the red rectangle regions in (a) and (b), respectively. Display window is [-0.2 1.54].
Fig. 11. Profiles along the line marked in Fig. 10(a) for the fusion, LFOV and SFOV reconstruction images; the right panel shows the sub-profiles within the black lines on the left.
Fig. 12. (a) Botanical branch phantom; (b) reconstruction image using LFOV data, display window [-0.14 1.02]; (c) reconstruction image using SFOV data, display window [-0.74 2.79].
Fig. 13. Reconstruction images using fusion data (a) and LFOV data (b); (c) and (d) are the zoom-in images of the red rectangle regions in (a) and (b), respectively. Display window is [-0.14 1.19].
Fig. 14. 3D rendering of the botanical branch phantom using the LFOV data (a), (c) and (e) and using the fusion data (b), (d) and (f). Display window of (a) and (b) is [0.05 0.35]; (c)-(f) show different views of the phantom, with display window [0.47 0.53].

Tables (2)

Table 1. Simulated phantom geometry and value
Table 2. Imaging parameter settings

Equations (1)

$$q_{\mathrm{fusion}}(i)=\begin{cases}q_H(i)\,\dfrac{i}{N}+q_L(i)\left(1-\dfrac{i}{N}\right), & i\text{-th pixel is not in ROI}\\[6pt] q_H(i)\left(1-\dfrac{i}{N}\right)+q_L(i)\,\dfrac{i}{N}, & i\text{-th pixel is in ROI}\end{cases}\qquad i=1,2,3,\ldots,N$$
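The piecewise weighted superimposition of Eq. (1) can be sketched directly in code. This is a minimal illustration under the reading that $q_H$ and $q_L$ are the SFOV and LFOV DBP values along an $N$-pixel transition band at the ROI edge; the function and argument names are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_transition(q_H, q_L, in_roi):
    """Weighted superimposition of SFOV (q_H) and LFOV (q_L) DBP values
    along an N-pixel band at the ROI edge, following Eq. (1)."""
    q_H = np.asarray(q_H, dtype=float)
    q_L = np.asarray(q_L, dtype=float)
    N = len(q_H)
    w = np.arange(1, N + 1) / N  # the weight i/N for i = 1..N
    # outside the ROI the weight on q_H grows with i; inside it decays,
    # so the two branches blend smoothly across the ROI boundary
    return np.where(np.asarray(in_roi),
                    q_H * (1 - w) + q_L * w,
                    q_H * w + q_L * (1 - w))
```

The linear ramp in `w` is what suppresses the abrupt step at the ROI edge that would otherwise appear in the DBP image when the two data sets are simply concatenated.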