Optica Publishing Group

Resolution improvement in real-time and video mosaicing for fiber bundle imaging

Open Access

Abstract

A fiber bundle allows easy access to a wide range of human tissue. However, its small diameter limits the effective field of view (FOV), and its large number of cores surrounded by cladding reduces the spatial resolution. In this paper, we develop an algorithm that processes successively captured raw fiber bundle images in an online fashion, tackling the tasks of super-resolution (SR) and video mosaicing jointly. The natural movement of the fiber tip produces offsets between successive frames that are random in the pixel domain, enabling multi-frame SR imaging. Meanwhile, the effective FOV can be extended by mosaicing the reconstructed SR images using the estimated shifts. Our approach has low computational complexity, which allows for real-time processing. Real-time resolution improvement and video mosaicing are demonstrated on a resolution target and biological samples.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fiber bundles have been used as endoscopic imaging probes to obtain live images of tissue microarchitecture and cellular features across various optical imaging modalities, such as laser scanning microscopy [1,2], optical coherence tomography (OCT) [3,4], and fluorescence microscopy [5]. Fiber bundles offer high flexibility, but many challenges remain due to the nature of image acquisition through them. Foremost among these are limited spatial resolution and achievable field of view (FOV).

The effective FOV is inherently limited by the miniaturization of the fiber bundle and is usually smaller than 1 mm². In some cases, this small FOV limits the overall imaging of tissue morphology. To address this limitation, a time series of video frames with partially overlapping regions can be aligned and stitched into a single frame with an extended FOV, a technique referred to as video mosaicing. Over the years, there have been considerable studies on the development and clinical application of video mosaicing. Mosaicing requires the fiber shift between successive frames to be estimated, which can be done in real time [6]. More computationally intensive methods can be carried out offline, accounting for the non-rigid deformation of the tissue [7,8].

Another significant challenge is the limited spatial resolution. Although the resolution can be further improved at the expense of FOV by adding a distal miniature objective lens with a certain magnification, it is still limited by the periodic honeycomb pattern on the acquired images. This pattern arises from the loss of information in the interstices between the optical fibers. For a fiber bundle imaging system, the achievable resolution is limited not by diffraction but by this honeycomb effect. Therefore, the processing of the acquired images, and image reconstruction in particular, is especially important. Typical methods, such as filtering [9–11] or interpolation [12,13], do not improve spatial resolution. Recently, several computational iterative reconstruction methods have been proposed for removing this pattern, such as a Bayesian approximation algorithm [14], l1-norm minimization in the wavelet domain [15], hierarchical Bayesian models [16,17], and an iterative shrinkage thresholding algorithm [18]. However, these methods are computationally expensive and reconstruct only the limited information present in one low-resolution (LR) input image.

Another effective approach fuses multiple LR images to supply the lost information, which allows a high-resolution (HR) image to be reconstructed [19–22]. The LR image sequences are usually obtained by exploiting displacements between the sample and the fiber bundle. If the motion is known or can be estimated with subpixel accuracy, HR image reconstruction is possible. Lee and Han [23] used four shifted LR images with predetermined displacements to remove the honeycomb pattern. Cheon et al. [24] demonstrated that random motions during image acquisition can improve spatial resolution by 1.8 times using 20 LR images; their superposition step takes the average or maximum value of the aligned LR images. Vyas et al. [25] used linear interpolation with Delaunay triangulation to achieve better results, obtaining a 2-fold resolution improvement from four shifted images; the shift of the fiber bundle was controlled by a piezoelectric tube (PZT), and the fiber had to be held steady against the sample. When motion is present, the interpolation method they used would have difficulty meeting real-time requirements. Shao et al. [26] demonstrated the potential of computational imaging, achieving a 2.8-fold resolution improvement with 16 LR images; however, that algorithm is unsuitable for real-time application.

In this paper, we introduce an algorithm that uses video frames captured across multiple shifted frames to produce super-resolution (SR) images in an online fashion. We formulate the SR problem as the reconstruction and interpolation of a higher-resolution continuous signal from a set of sparse images. Our algorithm tackles the tasks of SR and video mosaicing jointly for fiber bundle imaging, and it further increases the effective FOV by stitching a time series of reconstructed SR images as the fiber probe moves. The algorithm requires no special motions: the natural random motion of the fiber probe suffices for multi-frame SR. Our approach is computationally inexpensive, and its effectiveness is demonstrated on video frames obtained from resolution targets and biological samples.

2. Methods

2.1 Fiber bundle endomicroscopy system

A schematic of the custom-built fiber endomicroscopy system for fluorescent imaging is shown in Fig. 1. We use a 475 nm light emitting diode (LED) source for illumination. The incident light is filtered by an excitation filter (ET470/24 m, Chroma), reflected by a dichroic mirror (ZT515rdc, Chroma), and coupled into a fiber bundle by an objective lens (10×, NA=0.25, Olympus). The fiber bundle (FIGH-30-650S, Fujikura) has approximately 30,000 cores with an inter-core spacing of 3.3 μm. A miniature objective lens with 2.5-fold magnification is placed at the distal end of the fiber bundle, achieving high-magnification imaging close to the sample surface. Returning fluorescent light from the sample passes through the same dichroic mirror and an emission filter (ET525/50 m, Chroma). It is then imaged onto a CMOS camera (pixel size 6.5 μm, Dhyana 401DS, Tucsen) by a lens (AC254-150-A-ML, Thorlabs) with a 150 mm focal length. In raw images, the fiber bundle has a circular FOV 240 μm in diameter. Raw fiber bundle images are acquired at a rate of 15 frames per second (FPS). Camera control, image display, and image processing are performed in C++ using the OpenCV libraries.

Fig. 1. Custom-built fiber bundle endomicroscopy system for fluorescent imaging.

2.2 Real-time SR image reconstruction

The core fill factor of our fiber bundle in the obtained images is only about 28%, meaning that the raw images are sparsely sampled. The movement of the fiber tip between successive frames provides additional information for the regions previously masked by the fiber cladding, so multiple under-sampled LR frames from a video sequence can be fused into an HR image by multi-frame SR algorithms. The simplest of these SR methods, both computationally and conceptually, is the interpolation-reconstruction approach [27–29]. A usual interpolation method such as Delaunay triangulation [25] uses the core positions of the registered LR images and the intensity values at those positions to reconstruct the SR image. However, the random movement between successive frames changes over time, so such methods must continuously update the pixel coordinates of the core positions and the interpolation weights of neighboring pixels, which does not meet our efficiency requirements. We introduce here a simple method that creates an SR image in an online fashion. Figure 2 shows the processing sequence designed for this purpose. Our method can be divided into two stages: a preliminary stage, followed by reconstruction of SR images in real time.

Fig. 2. A flowchart of our proposed method for real-time SR image reconstruction.

The preliminary stage builds a binary mask identifying each fiber core center, establishing a correspondence between image pixels and fibers. This correspondence is obtained by processing an image acquired from a homogeneously fluorescent sample; the core centers are then located by a Hough transform [30]. The same homogeneously fluorescent image is also used to correct the core-to-core heterogeneity in transmission efficiency. In addition, background images are recorded to correct for fiber autofluorescence and background noise [31,32]. The results of this preliminary stage are saved and used as inputs to the SR reconstruction in the next stage.
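The core-center mask can be illustrated with a toy detector. The paper locates core centers with a Hough transform; the sketch below instead marks strict local maxima of the flat-field image above a threshold, a simplified stand-in for building the binary mask M(x, y). The function name and threshold are illustrative, not from the paper.

```python
import numpy as np

def detect_core_centers(flat_img, threshold):
    """Toy stand-in for the Hough-based core detection: mark pixels
    that are strict local maxima of the flat-field image above a
    threshold, producing a binary mask M(x, y) of core centers."""
    h, w = flat_img.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = flat_img[y - 1:y + 2, x - 1:x + 2]
            v = flat_img[y, x]
            # Strict maximum: the center value appears exactly once.
            if v > threshold and v == window.max() and (window == v).sum() == 1:
                mask[y, x] = 1
    return mask
```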

In the second stage, a sequence of raw LR images is acquired online by the camera. We first determine a “matching window” for every input frame (i.e., the number and position of matching frames) to be used for SR image reconstruction. For example, a matching window of size 4 is shown in Fig. 3: the previous frames f_{i-3}(x,y), f_{i-2}(x,y), and f_{i-1}(x,y) are used to improve the resolution of the current frame f_i(x,y). Note that the subscript i indexes the temporal axis and (x,y) are the pixel coordinates. The first step for the frames in the matching window is to subtract the autofluorescence contribution and correct for the heterogeneity in fiber transmission efficiency, yielding the calibrated images

$$fc_k(x,y) = \frac{f_k(x,y) - f_b(x,y)}{f_h(x,y) - f_b(x,y)} \times M(x,y),$$
for k=1, 2, …, N. fb(x,y) and fh(x,y) are the background image and homogeneously fluorescent image, respectively. fck(x,y) denotes the calibrated image for the k-th frame fk(x,y) and N is the number of matching frames. M(x,y) is a binary mask where the value 1 is used to mark the presence of fiber central position. We assume that the fiber bundle mask M(x,y) remains fixed in Eq. (1) as long as the proximal end of the fiber bundle is fixed to the imaging system.
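Equation (1) is a per-pixel background subtraction and flat-field normalization. A minimal NumPy sketch (the authors' implementation is in C++/OpenCV; the function name and epsilon guard here are illustrative):

```python
import numpy as np

def calibrate_frame(f_k, f_b, f_h, mask, eps=1e-6):
    """Apply Eq. (1): subtract the background image f_b, normalize by
    the flat-field response (f_h - f_b), and keep only the pixels at
    fiber core centers marked by the binary mask M."""
    denom = f_h - f_b
    # Guard against division by zero outside illuminated cores.
    denom = np.where(np.abs(denom) < eps, eps, denom)
    return (f_k - f_b) / denom * mask
```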

Fig. 3. Four consecutive input frames in a matching window used for SR image reconstruction.

Next, the LR frames in the matching window are registered against the current frame to estimate the relative motion. We focus primarily on translational motion, but our proposed method also accommodates other motion models. Numerous image registration techniques [33–35] exist, with correlation-based methods among the most popular; here, we employ the Lucas-Kanade optical flow described in [36], which reaches the necessary accuracy while keeping the computational cost low. To minimize the influence of the fixed honeycomb pattern on the registration process, we first apply a Gaussian filter to the LR frames, as shown in Fig. 2. The resulting motion parameters are stored in θ_k, for k = 1, 2, …, N. Since the motion of the fiber tip is smooth, we register each frame to the previous frame and then accumulate these pairwise estimates.
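Accumulating the pairwise estimates amounts to summing translations from each frame forward to the current frame. A small sketch of this composition step (function name is illustrative; only pure translations are assumed):

```python
def accumulate_shifts(pairwise):
    """Compose per-pair translations into window-to-current shifts.
    pairwise[k] is the (dx, dy) that maps frame k onto frame k+1;
    the returned list gives, for every frame in the matching window,
    the cumulative shift onto the last (current) frame."""
    cumulative = [(0.0, 0.0)]  # the current frame maps onto itself
    total_dx = total_dy = 0.0
    for dx, dy in reversed(pairwise):
        total_dx += dx
        total_dy += dy
        cumulative.append((total_dx, total_dy))
    cumulative.reverse()
    return cumulative
```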

The final step of our method is an efficient scattered-data interpolation based on the calibrated images fc_k(x,y) and the obtained motion parameters θ_k. We first apply the geometric transformation T_{θ_k} to the calibrated images fc_k(x,y) and the mask M(x,y), and sum the results:

$$I(x,y) = \sum_{k=1}^{N} T_{\theta_k}\{ fc_k(x,y) \},$$
$$w(x,y) = \sum_{k=1}^{N} T_{\theta_k}\{ M(x,y) \}.$$
After this geometric transformation, we obtain a weighted intensity image I(x,y) and its associated weight w(x,y). Let P = {p_1, p_2, …, p_n} be a set of n sample points with associated data I_j = I(p_j) and w_j = w(p_j) sampled from Eqs. (2) and (3), where p_j = (x_j, y_j) and j = 1, 2, …, n. The value I_sr(p) and its weight w_sr(p) at a point in the resulting SR image can then be reconstructed by a weighted average of the neighboring points, similar to Shepard’s interpolation [37,38]. The intensity value I_sr(p) is given by
$$I_{sr}(p) = \sum_j \frac{h_j(p)}{\sum_l h_l(p)} I_j,$$
where h_j(p) is a weight function, usually the inverse of the distance between the point p and p_j. An approximation is obtained if a bounded weighting function h_j(p) is chosen. Here, we use a Gaussian function h_j(p) = G(p − p_j) = exp[−(p − p_j)²/2σ²], so that Eq. (4) can be rewritten as
$$I_{sr}(p) = \frac{\sum_j G(p - p_j) I_j}{\sum_j G(p - p_j)} = \frac{\left[ G \otimes \sum_j \delta_{p_j} I_j \right](p)}{\left[ G \otimes \sum_j \delta_{p_j} \right](p)},$$
where δ_{p_j} is a Dirac delta centered at p_j and ⊗ denotes the convolution operator.

Similarly, the weight w_sr(p) can be formulated as

$$w_{sr}(p) = \frac{\left[ G \otimes \sum_j \delta_{p_j} w_j \right](p)}{\left[ G \otimes \sum_j \delta_{p_j} \right](p)}.$$
Based on Eqs. (5) and (6), the resulting SR image SR(x,y) is given by
$$SR(x,y) = \frac{GF\{ I(x,y) \}}{GF\{ w(x,y) \}},$$
where GF denotes Gaussian filtering.

As shown in Eq. (7), our proposed method reduces the interpolation and reconstruction of the SR image to two Gaussian filtering operations and one division, which is efficient enough for real-time implementation. The smoothness of the reconstructed SR(x,y) is controlled by the Gaussian filtering parameter, i.e., the standard deviation σ of the Gaussian kernel, which usually decreases as the number of sample points increases.
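Equation (7) is a normalized convolution: one Gaussian blur of the accumulated intensity, one of the accumulated weights, and a pixel-wise division. A minimal NumPy sketch, using a separable convolution in place of OpenCV's GaussianBlur (function names are illustrative):

```python
import numpy as np

def _gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter2d(img, sigma):
    """Separable Gaussian blur: filter rows, then columns."""
    k = _gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def reconstruct_sr(I, w, sigma, eps=1e-8):
    """Eq. (7): SR(x, y) = GF{I(x, y)} / GF{w(x, y)}."""
    return gaussian_filter2d(I, sigma) / np.maximum(gaussian_filter2d(w, sigma), eps)
```

Because the numerator and denominator are blurred by the same kernel, a constant scene is reproduced exactly wherever sample weight is present.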

2.3 SR video mosaicing

When the fiber tip is scanned over the sample, our proposed method can also consecutively merge the reconstructed SR images to increase the FOV. Reconstructed SR images are inserted into a large zero-valued image called a canvas, whose size can be selected in advance and usually depends on the size of the scanning area. A time series of SR images is inserted into the canvas at the relative shifts recorded during the real-time SR reconstruction stage. We use the dead-leaves approach: the pixel values of each new SR image completely overwrite any previous pixel values in the mosaic. Note that the mosaics shown in this paper are constructed from the output of the online SR reconstruction stage. The mosaics are saved as audio video interleave (AVI) files, along with the LR images and the real-time reconstructed SR images, which allows for further detailed analysis. The whole algorithm is implemented in C++ using the OpenCV libraries.
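The dead-leaves insertion itself is a simple overwrite of a canvas sub-region at the recorded shift. A minimal sketch (in practice only the circular fiber FOV would be pasted; the function name and (row, col) convention are illustrative):

```python
import numpy as np

def insert_dead_leaves(canvas, sr_image, top_left):
    """Paste an SR frame into the mosaic canvas at the recorded shift,
    letting its pixels completely overwrite earlier values
    (dead-leaves compositing). top_left = (row, col) in the canvas."""
    y, x = top_left
    h, w = sr_image.shape
    canvas[y:y + h, x:x + w] = sr_image
    return canvas
```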

3. Results

3.1 Real-time SR imaging

In this section, we demonstrate the validity of our proposed method using a real video sequence obtained by our fiber endomicroscopy system. The camera continuously acquires raw 1024×1024-pixel fiber bundle images at a rate of 15 FPS. Relative motions are introduced by a translation stage driven by random hand motion. We use a matching window of size 4 for real-time SR image reconstruction.

Figure 4 shows the result for a high-resolution 1951 USAF target with 9 groups of horizontal and vertical line pairs. Because the target is not fluorescent, it is illuminated by a 525 nm LED and imaged in transmission. Figure 4(a) presents a zoomed-in view of the raw LR fiber bundle image; this and the subsequent images in Figs. 4(b)–4(d) share the view indicated by the orange box in Fig. 4(e). For better visualization, an enlarged region of interest (ROI) consisting of elements 5 and 6 of Group 8 (G8E5 and G8E6, marked in red) and all elements of Group 9 (marked in blue) is shown in Figs. 4(a)–4(d). The image in Fig. 4(b) is obtained by Gaussian smoothing with σ = 2.0 pixels. Figures 4(c) and 4(d) show the images reconstructed by our proposed method using 1 and 4 LR images, respectively. Figure 4(e) shows one un-cropped reconstructed SR image for reference; the ROI marked in orange corresponds to Fig. 4(d). Figure 4(f) shows the normalized intensity of the pixels along the line segment on G9E1–E3.

Fig. 4. Imaging results of a 1951 USAF resolution target from a real video sequence. (a–d) Large cropped images show all elements (E) of Groups (G) 8 and 9; smaller zoomed images show all elements of G9 and G8E5–6 from the regions of interest marked in blue and red. (a) Raw LR fiber bundle image. Images obtained by (b) Gaussian smoothing (σ=2.0 pixels) and (c) our proposed method (σ=3.0 pixels) on a single LR image. (d) SR image reconstructed from 4 LR images by our proposed method (σ=1.0 pixels). (e) Un-cropped reconstructed SR image for reference; the full fiber bundle FOV is 240 μm, and the region of interest marked in orange corresponds to (d). A video demonstration of this real-time SR imaging is shown in Visualization 1. (f) Normalized intensity of the pixels along the line segment in (e). The scale bar in (e) is 30 μm.

The fiber bundle has an inter-core spacing of 3.3 μm, and the miniature lens provides a 2.5-fold magnification. Based on Nyquist sampling, we would expect a limit of approximately 378.8 lp/mm, meaning the smallest resolvable line pairs are those of G8E4, as shown in Fig. 4(a); G8E4 corresponds to 362 lp/mm and a bar width of 1.38 μm. Gaussian smoothing does not improve spatial resolution, as shown in Fig. 4(b). For the image reconstructed from one LR image (σ = 3.0 pixels) in Fig. 4(c), our proposed method removes the honeycomb pattern; although image quality is improved, no obvious enhancement in spatial resolution is observed. Figures 4(d) and 4(f) show that the smallest resolvable line pairs are those of G9E3, corresponding to 645 lp/mm and a bar width of 0.78 μm. This is a 1.77-fold resolution improvement by our proposed method using 4 LR images (σ = 1.0 pixels). A video demonstration of this real-time SR imaging is shown in Visualization 1. Note that a circular fiber bundle mask is used in Fig. 4(e) to remove artefacts from the edge of the fiber bundle.
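The quoted sampling limit follows directly from the inter-core spacing and the distal magnification; a quick check of the arithmetic:

```python
# Inter-core spacing referred to the sample plane, via the 2.5x distal lens.
inter_core_um = 3.3
magnification = 2.5
sample_spacing_um = inter_core_um / magnification       # 1.32 um
# Nyquist: one line pair needs two samples.
nyquist_lp_per_mm = 1000.0 / (2.0 * sample_spacing_um)  # ~378.8 lp/mm
# Measured improvement: G9E3 (645 lp/mm) versus G8E4 (362 lp/mm),
# ~1.78x (quoted as a 1.77-fold improvement in the text).
improvement = 645.0 / 362.0
```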

The performance of our method is also tested by imaging spider silk, as shown in Fig. 5. The spider silk is labeled with a fluorescent solution (0.2% fluorescein sodium). The raw LR image is shown in Fig. 5(a), and the images reconstructed by Gaussian smoothing (σ = 2.0 pixels) and by our proposed method using one LR image (σ = 3.0 pixels) are shown in Figs. 5(b) and 5(c), respectively. Figure 5(d) shows the SR image reconstructed from 4 LR images (σ = 1.0 pixels). A video demonstration of this real-time SR imaging is shown in Visualization 2. Highlighted ROIs where the spider silks overlap are enlarged in Fig. 5 for better visualization (see Visualization 3). These results show that our method can clearly identify more neighboring spider silks and enhance image detail.

Fig. 5. Experimental results of spider silk from a real video sequence. (a) Raw LR image and reconstructed images by (b) Gaussian smoothing (σ=2.0 pixels) and (c) our proposed method (σ=3.0 pixels) using a single LR image. SR image is reconstructed in (d) by 4 LR images using our proposed method (σ=1.0 pixels). Video demonstration of this real-time SR imaging is shown in Visualization 2. Three ROIs where the spider silks overlap are enlarged and show that our proposed method can enhance image detail (see Visualization 3). The scale bar represents 30 μm.

The image quality for Fig. 4 and Fig. 5 is compared and evaluated in terms of the contrast-to-noise ratio (CNR) and the β parameter defined in Ref. [23]. The CNR measures the contrast between the object and background signals. The β parameter describes the sharpness of the image; a value close to one indicates that sharpness has been maintained relative to the reference image. As shown in Table 1, our proposed method improves the CNR while preserving more image sharpness.

Table 1. Image Quality Parameters

3.2 SR video mosaicing

Figures 6(a) and 6(b) show mosaicing results for the 1951 USAF target (see Visualization 4) and the spider silk (see Visualization 5), respectively. A single reconstructed SR image used along the path of the mosaic is shown in Fig. 6; the orange dotted circle indicates its inserted position. The ROI in Fig. 6(a) is enlarged to show all elements of G8 and G9, and the ROI in Fig. 6(b), where the spider silks overlap, is enlarged for better visualization. These results show that our proposed method allows us to directly create an SR image with an extended FOV; video mosaicing increases the FOV 7 times in Fig. 6(b). To reduce frame-boundary visibility in Fig. 6, we only insert SR images that have moved at least one ninth of a FOV from the last inserted SR image.
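The one-ninth-of-a-FOV insertion rule reduces to a distance threshold on the estimated tip position. A small sketch of that gating logic (function name and coordinate convention are illustrative):

```python
def should_insert(last_pos, cur_pos, fov_diameter, fraction=1.0 / 9.0):
    """Insert a new SR frame into the mosaic only once the fiber tip
    has moved at least `fraction` of the FOV diameter since the last
    insertion, which reduces visible frame boundaries."""
    dx = cur_pos[0] - last_pos[0]
    dy = cur_pos[1] - last_pos[1]
    return (dx * dx + dy * dy) ** 0.5 >= fraction * fov_diameter
```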

Fig. 6. SR video mosaics from (a) the 1951 USAF target (see Visualization 4) and (b) spider silk (see Visualization 5). The mosaic size is 2000×3000 pixels in (a) and 2000×6000 pixels in (b). The orange dotted circle marks the inserted position of one reconstructed SR image along the mosaic path; the overlapping regions are replaced by this SR image. The ROI of the SR image (marked in blue) is enlarged for better visualization. The SR image is reconstructed in real time from 4 LR images using our proposed method (σ=1.0 pixels).

Real-time SR imaging and video mosaicing are also tested on ex vivo porcine heart stained with fluorescein sodium, as shown in Fig. 7. Compared with a confocal endomicroscope, the fiber bundle imaging system in Fig. 1 lacks optical sectioning, so the ROI of the raw LR fiber bundle image in Fig. 7 is affected by out-of-focus fluorescence. Our proposed method applied to a single LR image improves the image quality; when four images are used, small structures become clearly distinguishable in the reconstructed SR images. A video demonstration is shown in Visualization 6.

Fig. 7. SR video mosaic from ex vivo porcine heart (see Visualization 6). The mosaic size is 2000×3000 pixels. The orange dotted circle marks the inserted position of one reconstructed SR image along the mosaic path. The ROI of the SR image (marked in blue) is enlarged for better visualization. The SR image is reconstructed in real time from 4 LR images using our proposed method (σ=1.0 pixels).

4. Discussion

We have proposed a real-time SR algorithm that works on a time series of raw fiber bundle images. The SR reconstruction problem is formulated as the interpolation of a higher-resolution continuous signal from a set of sparse images, which can be performed by two Gaussian filtering operations and one division. This method is efficient and can be implemented in real time. It also increases the effective FOV using the reconstructed SR images and the obtained motion parameters. Real-time resolution improvement and video mosaicing are demonstrated by the experimental results. In this section, we discuss some advantages and limitations of our proposed method.

4.1 Random motion

Having multiple shifted LR images allows us both to remove the fixed honeycomb pattern in fiber bundle images and to reconstruct an SR image. The motion in this paper results from uncontrolled random motion. Some publications, such as Cheon et al. [24], use randomly offset images for SR imaging; however, to our knowledge, no prior work analyzes whether the subpixel coverage produced by random hand motion is sufficient for multi-frame SR imaging in an online fashion. We show that the natural movement of the fiber bundle can achieve this purpose. The estimated random-motion parameters used to obtain the SR images in Figs. 4(e) and 5(d) are shown in Fig. 8(a). Our proposed method yields a 1.77-fold resolution improvement with 4 LR images; the smallest resolvable line pairs have a bar width of 0.78 μm. The averaged inter-core spacing is 4.24 pixels in the image. If the fiber bundle were shifted by half the inter-core spacing, as in Vyas et al. [25], the resolution might be further improved by our method.

Fig. 8. Estimated (a) translational shifts for the reconstructed SR images in Figs. 4 and 5, and (b) speed of the fiber bundle for the video sequences in Figs. 6 and 7.

When the fiber bundle is scanned over the sample, as in a realistic clinical scenario, we can use the movements during data acquisition to reconstruct an SR image with an extended FOV, as shown in Figs. 6 and 7. The estimated speed of the fiber bundle is shown in Fig. 8(b). Our fiber bundle has a 240 μm diameter FOV, and the acquisition frame rate of the camera is 15 FPS. The maximum speed in Fig. 8(b) is 0.15 mm/s, which corresponds to a maximum shift of 4.2% of the image diameter between consecutive images. With 4 frames used for SR reconstruction, the maximum shift between the images in a matching window is 16.8% of the image diameter. These low translation speeds ensure sufficient overlap between images. An increased acquisition frame rate would allow higher translation speeds and potentially reduce tissue deformation between consecutive frames; meanwhile, the signal-to-noise ratio (SNR) may decrease at high frame rates, which will be studied in future work.
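The overlap figures above follow from simple arithmetic; the 16.8% window value is the per-frame fraction times the window size, as quoted in the text:

```python
fov_diameter_mm = 0.240
frame_rate_fps = 15.0
max_speed_mm_s = 0.15
# Maximum displacement between consecutive frames at the top speed.
shift_per_frame_mm = max_speed_mm_s / frame_rate_fps   # 0.010 mm
shift_fraction = shift_per_frame_mm / fov_diameter_mm  # ~4.2% of the FOV diameter
# Across the 4-frame matching window, 4 x 4.2% ~ 16.8% as quoted.
window_fraction = 4.0 * shift_fraction
```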

4.2 Image registration

Accurate subpixel motion estimation is an important factor in the success of an SR reconstruction algorithm. To address this, we use Lucas-Kanade optical flow to register frames. For computational efficiency, we adopt a progressive registration strategy [39], in which the frames of the video sequence are registered in pairs, one frame in each pair acting as the reference for the other. For example, frame f_i(x,y) is registered with respect to frame f_{i+1}(x,y), frame f_{i+1}(x,y) with respect to frame f_{i+2}(x,y), and so on; the motion between f_i(x,y) and f_{i+2}(x,y) is then the combination of these estimates. This method works efficiently and well for smooth motion. We show that, for global translational motion, the required subpixel accuracy can be achieved in real time. Although we focus on global translational motion in this paper, our SR reconstruction algorithm can also be applied to other motion models, e.g., rotation, affine transformation, and non-global motion [40–42]. With other motion models the computational complexity rises significantly; however, our method lends itself to parallel implementation, so other motion models may be feasible in real time on parallel hardware.

4.3 Computational performance

Our algorithm is implemented on a PC with an Intel i7-8700 CPU and 16 GB RAM. The average execution time for frame registration is 7.6 ms for raw 1024×1024-pixel images. The total time to create an SR image is 48.1 ms, shorter than the frame interval of the camera at 15 FPS. The execution time for video mosaics depends on the mosaic size: the cost per mosaic of 2000×3000 pixels (Fig. 6(a) and Fig. 7) and 2000×6000 pixels (Fig. 6(b)) is 45.2 ms and 59.5 ms, respectively. Real-time SR imaging and video mosaicing will be further accelerated using CUDA in the future.

A matching window of 4 LR frames is currently used to produce an SR image. Theoretically, more frames provide more information and better SNR, and the execution time for creating an SR image does not go up significantly (60.6 ms for 8 LR images). However, a larger window limits the scanning speed of the fiber bundle and is more susceptible to tissue deformation. A solution could be a high-frame-rate camera together with a two-step SR reconstruction stage: real-time SR reconstruction for live image acquisition, as in this paper, plus post-procedural SR reconstruction for higher accuracy. Because the post-procedural reconstruction is not bound by real-time requirements, it can use more LR images and more complex methods, such as a non-rigid motion model to compensate for tissue deformation, or computational iterative reconstruction. In addition, our raw fiber bundle images are affected by out-of-focus signal; applying our method to a confocal microendoscope could yield SR imaging with improved axial response in the future.

5. Conclusion

In this paper we have presented a real-time SR algorithm that processes successively captured raw fiber bundle images. We have demonstrated that, given random motion of the fiber bundle across successive frames, SR is both possible and practical; our approach achieves a 1.77-fold improvement in resolution. Meanwhile, the effective FOV of the fiber bundle can be increased by collecting a time series of reconstructed SR images. Our approach has low computational complexity and lends itself to parallel implementation, and it can be further developed into an effective tool for clinical endomicroscopy imaging.

Funding

Youth Innovation Promotion Association of CAS (2018359); China Postdoctoral Science Foundation (2019M651958); National Natural Science Foundation of China for Young Scholars (61905272); Key Technologies R & D Program of Jiangsu Province (No. BE2018666); Key Technologies Research and Development Program (2017YFC0109900, 2018YFC0114800).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. V. Dubaj, A. Mazzolini, A. Wood, and M. Harris, “Optic fibre bundle contact imaging probe employing a laser scanning confocal microscope,” J. Microsc. 207(2), 108–117 (2002). [CrossRef]  

2. M. Hughes and G.-Z. Yang, “High speed, line-scanning, fiber bundle fluorescence confocal endomicroscopy for improved mosaicking,” Biomed. Opt. Express 6(4), 1241–1252 (2015). [CrossRef]  

3. J.-H. Han, X. Liu, C. Song, and J. Kang, “Common path optical coherence tomography with fibre bundle probe,” Electron. Lett. 45(22), 1110–1112 (2009). [CrossRef]  

4. T. Xie, D. Mukai, S. Guo, M. Brenner, and Z. Chen, “Fiber-optic-bundle-based optical coherence tomography,” Opt. Lett. 30(14), 1803–1805 (2005). [CrossRef]  

5. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. Cheung, and M. J. Schnitzer, “Fiber-optic fluorescence imaging,” Nat. Methods 2(12), 941–950 (2005). [CrossRef]  

6. N. Bedard, T. Quang, K. Schmeler, R. Richards-Kortum, and T. S. Tkaczyk, “Real-time video mosaicing with a high-resolution microendoscope,” Biomed. Opt. Express 3(10), 2428–2435 (2012). [CrossRef]  

7. T. Vercauteren, A. Perchant, G. Malandain, X. Pennec, and N. Ayache, “Robust mosaicing with correction of motion distortions and tissue deformations for in vivo fibered microscopy,” Medical Image Analysis 10(5), 673–692 (2006). [CrossRef]  

8. V. Becker, T. Vercauteren, C. H. von Weyhern, C. Prinz, R. M. Schmid, and A. Meining, “High-resolution miniprobe based confocal microscopy in combination with video mosaicing (with video),” Gastrointestinal Endoscopy 66(5), 1001–1007 (2007). [CrossRef]  

9. C.-Y. Lee and J.-H. Han, “Integrated spatio-spectral method for efficiently suppressing honeycomb pattern artifact in imaging fiber bundle microscopy,” Opt. Commun. 306, 67–73 (2013). [CrossRef]  

10. C. Winter, S. Rupp, M. Elter, C. Munzenmayer, H. Gerhauser, and T. Wittenberg, “Automatic adaptive enhancement for images obtained with fiberscopic endoscopes,” IEEE Trans. Biomed. Eng. 53(10), 2035–2046 (2006). [CrossRef]  

11. J.-H. Han, J. Lee, and J. U. Kang, “Pixelation effect removal from fiber bundle probe based optical coherence tomography imaging,” Opt. Express 18(7), 7427–7439 (2010). [CrossRef]  

12. G. Le Goualher, A. Perchant, M. Genet, C. Cavé, B. Viellerobe, F. Berier, B. Abrat, and N. Ayache, “Towards optical biopsies with an integrated fibered confocal fluorescence microscope,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2004), pp. 761–768.

13. A. Shinde and M. V. Matham, “Pixelate removal in an image fiber probe endoscope incorporating comb structure removal methods,” J. Med. Imaging Health Inform. 4(2), 203–211 (2014). [CrossRef]  

14. J.-H. Han and S. M. Yoon, “Depixelation of coherent fiber bundle endoscopy based on learning patterns of image prior,” Opt. Lett. 36(16), 3212–3214 (2011). [CrossRef]  

15. X. Liu, L. Zhang, M. Kirby, R. Becker, S. Qi, and F. Zhao, “Iterative l1-min algorithm for fixed pattern noise removal in fiber-bundle-based endoscopic imaging,” J. Opt. Soc. Am. A 33(4), 630–636 (2016). [CrossRef]  

16. A. K. Eldaly, Y. Altmann, A. Perperidis, N. Krstajic, T. R. Choudhary, K. Dhaliwal, and S. McLaughlin, “Deconvolution and restoration of optical endomicroscopy images,” IEEE Trans. Comput. Imaging 4(2), 194–205 (2018). [CrossRef]  

17. A. K. Eldaly, Y. Altmann, A. Akram, P. McCool, A. Perperidis, K. Dhaliwal, and S. McLaughlin, “Bayesian bacterial detection using irregularly sampled optical endomicroscopy images,” Medical Image Analysis 57, 18–31 (2019). [CrossRef]  

18. J. Liu, W. Zhou, B. Xu, X. Yang, and D. Xiong, “Honeycomb pattern removal for fiber bundle endomicroscopy based on a two-step iterative shrinkage thresholding algorithm,” AIP Adv. 10(4), 045004 (2020). [CrossRef]  

19. M. Kyrish, R. Kester, R. Richards-Kortum, and T. Tkaczyk, “Improving spatial resolution of a fiber bundle optical biopsy system,” in Endoscopic Microscopy V, vol. 7558 (International Society for Optics and Photonics, 2010), p. 755807. [CrossRef]  

20. S. Rupp, M. Elter, and C. Winter, “Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution,” in 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, (IEEE, 2007), pp. 6565–6571.

21. C. Renteria, J. Suárez, A. Licudine, and S. A. Boppart, “Depixelation and enhancement of fiber bundle images by bundle rotation,” Appl. Opt. 59(2), 536–544 (2020). [CrossRef]  

22. N. C. Momsen, A. R. Rouse, and A. F. Gmitro, “Improvement in optical fiber bundle-based imaging using synchronized fiber motion,” Appl. Opt. 59(22), G249–G254 (2020). [CrossRef]  

23. C.-Y. Lee and J.-H. Han, “Elimination of honeycomb patterns in fiber bundle imaging by a superimposition method,” Opt. Lett. 38(12), 2023–2025 (2013). [CrossRef]  

24. G. W. Cheon, J. Cha, and J. U. Kang, “Random transverse motion-induced spatial compounding for fiber bundle imaging,” Opt. Lett. 39(15), 4368–4371 (2014). [CrossRef]  

25. K. Vyas, M. Hughes, B. G. Rosa, and G.-Z. Yang, “Fiber bundle shifting endomicroscopy for high-resolution imaging,” Biomed. Opt. Express 9(10), 4649–4664 (2018). [CrossRef]  

26. J. Shao, W.-C. Liao, R. Liang, and K. Barnard, “Resolution enhancement for fiber bundle imaging using maximum a posteriori estimation,” Opt. Lett. 43(8), 1906–1909 (2018). [CrossRef]  

27. M. Elad and Y. Hel-Or, “A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur,” IEEE Trans. on Image Process. 10(8), 1187–1193 (2001). [CrossRef]  

28. S. Lertrattanapanich and N. K. Bose, “High resolution image formation from low resolution frames using Delaunay triangulation,” IEEE Trans. on Image Process. 11(12), 1427–1441 (2002). [CrossRef]  

29. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. on Image Process. 13(10), 1327–1344 (2004). [CrossRef]  

30. P. R. Fernandez, J. L. Lazaro, A. Gardel, Ó. Esteban, A. E. Cano, and P. A. Revenga, “Location of optical fibers for the calibration of incoherent optical fiber bundles for image transmission,” IEEE Trans. Instrum. Meas. 58(9), 2996–3003 (2009). [CrossRef]  

31. T. N. Ford, D. Lim, and J. Mertz, “Fast optically sectioned fluorescence hilo endomicroscopy,” J. Biomed. Opt. 17(2), 021105 (2012). [CrossRef]  

32. A. Perperidis, K. Dhaliwal, S. McLaughlin, and T. Vercauteren, “Image computing for fibre-bundle endomicroscopy: A review,” Medical Image Analysis 62, 101620 (2020). [CrossRef]  

33. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]  

34. G. D. Evangelidis and E. Z. Psarakis, “Parametric image alignment using enhanced correlation coefficient maximization,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 1858–1865 (2008). [CrossRef]  

35. Y. Chang, W. Lin, J. Cheng, and S. C. Chen, “Compact high-resolution endomicroscopy based on fiber bundles and image stitching,” Opt. Lett. 43(17), 4168–4171 (2018). [CrossRef]  

36. J.-Y. Bouguet, “Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm,” Intel Corporation 5, 4 (2001).

37. D. Shepard, “A two-dimensional interpolation function for irregularly-spaced data,” in Proceedings of the 1968 23rd ACM national conference, (1968), pp. 517–524.

38. T. Vercauteren, A. Perchant, X. Pennec, and N. Ayache, “Mosaicing of confocal microscopic in vivo soft tissue video sequences,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2005), pp. 753–760.

39. S. Farsiu, D. M. Robinson, M. Elad, and P. Milanfar, “Dynamic demosaicing and color superresolution of video sequences,” in Image Reconstruction from Incomplete Data III, vol. 5562 (International Society for Optics and Photonics, 2004), pp. 169–178.

40. T. R. Tuinstra and R. C. Hardie, “High-resolution image reconstruction from digital video by exploitation of nonglobal motion,” Opt. Eng. 38(5), 806–814 (1999). [CrossRef]  

41. M. S. Erden, B. Rosa, J. Szewczyk, and G. Morel, “Understanding soft-tissue behavior for application to micro laparoscopic surface scan,” IEEE Trans. Biomed. Eng. 60(4), 1059–1068 (2013). [CrossRef]  

42. M. Hu, G. Penney, D. Rueckert, P. Edwards, F. Bello, M. Figl, R. Casula, Y. Cen, J. Liu, Z. Miao, and D. Hawkes, “A robust mosaicing method with super-resolution for optical medical images,” in International Workshop on Medical Imaging and Virtual Reality, (Springer, 2010), pp. 373–382.

Supplementary Material (6)

Name             Description
Visualization 1  Real-time SR imaging for the 1951 USAF resolution target
Visualization 2  Real-time SR imaging for spider silk
Visualization 3  Real-time SR imaging for three enlarged ROIs where the spider silks overlap
Visualization 4  SR video mosaic from the 1951 USAF target
Visualization 5  SR video mosaic for spider silk
Visualization 6  SR video mosaic from ex vivo porcine heart


Figures (8)

Fig. 1. Custom-built fiber bundle endomicroscopy system for fluorescent imaging.
Fig. 2. A flowchart of our proposed method for real-time SR image reconstruction.
Fig. 3. Four consecutive input frames in a matching window used for SR image reconstruction.
Fig. 4. Imaging results of the 1951 USAF resolution target from a real video sequence. (a–d) Cropped images showing all Elements (E) of Groups (G) 8 and 9; the smaller zoomed insets show all elements of G9 and G8E5–6 from the regions of interest marked in blue and red. (a) Raw LR fiber bundle image. Images obtained by (b) Gaussian smoothing (σ=2.0 pixels) and (c) our proposed method (σ=3.0 pixels) on a single LR image. (d) SR image reconstructed from 4 LR images by our proposed method (σ=1.0 pixels). (e) Un-cropped reconstructed SR image for reference; the full FOV of the fiber bundle is 240 μm, and the region of interest marked in orange corresponds to the image in (d). A video demonstration of this real-time SR imaging is shown in Visualization 1. (f) Normalized intensity of the pixels along the line segment in (e). The scale bar in (e) is 30 μm.
Fig. 5. Experimental results of spider silk from a real video sequence. (a) Raw LR image, and images reconstructed by (b) Gaussian smoothing (σ=2.0 pixels) and (c) our proposed method (σ=3.0 pixels) using a single LR image. (d) SR image reconstructed from 4 LR images by our proposed method (σ=1.0 pixels). A video demonstration of this real-time SR imaging is shown in Visualization 2. Three ROIs where the spider silks overlap are enlarged, showing that our proposed method enhances image detail (see Visualization 3). The scale bar represents 30 μm.
Fig. 6. SR video mosaics from (a) the 1951 USAF target (see Visualization 4) and (b) spider silk (see Visualization 5). The mosaic size is 2000×3000 pixels in (a) and 2000×6000 pixels in (b). The orange dotted circle marks the position along the mosaic path where one reconstructed SR image is inserted; the overlapping regions are replaced by this SR image. The ROI of the SR image (marked in blue) is enlarged for better visualization. The SR image is reconstructed in real time from 4 LR images by our proposed method (σ=1.0 pixels).
Fig. 7. SR video mosaic from ex vivo porcine heart (see Visualization 6). The mosaic size is 2000×3000 pixels. The orange dotted circle marks the position along the mosaic path where one reconstructed SR image is inserted. The ROI of the SR image (marked in blue) is enlarged for better visualization. The SR image is reconstructed in real time from 4 LR images by our proposed method (σ=1.0 pixels).
Fig. 8. Estimated (a) translational shifts for the reconstructed SR images in Figs. 4 and 5, and (b) speed of the fiber bundle for the video sequences in Figs. 6 and 7.

Tables (1)

Table 1. Image Quality Parameters

Equations (7)

$$f_c^k(x,y) = \frac{f_k(x,y) - f_b(x,y)}{f_h(x,y) - f_b(x,y)} \times M(x,y),$$

$$I(x,y) = \sum_{k=1}^{N} T_{\theta_k}\!\left\{ f_c^k(x,y) \right\},$$

$$w(x,y) = \sum_{k=1}^{N} T_{\theta_k}\!\left\{ M(x,y) \right\}.$$

$$I_{sr}(p) = \sum_j \frac{h_j(p)}{\sum_l h_l(p)} \, I_j,$$

$$I_{sr}(p) = \frac{\sum_j G(p - p_j)\, I_j}{\sum_j G(p - p_j)} = \frac{\left[ G * \sum_j \delta_{p_j} I_j \right](p)}{\left[ G * \sum_j \delta_{p_j} \right](p)},$$

$$w_{sr}(p) = \frac{\left[ G * \sum_j \delta_{p_j} w_j \right](p)}{\left[ G * \sum_j \delta_{p_j} \right](p)}.$$

$$SR(x,y) = \frac{GF\{ I(x,y) \}}{GF\{ w(x,y) \}},$$
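Read together, the last three equations describe a normalized (shift-and-add) convolution: irregular core samples are scattered onto the HR grid, and a Gaussian-filtered intensity accumulator is divided by the equally filtered weight accumulator. A minimal sketch under that reading (the function name and the sample format are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution_sr(samples, shape, sigma=1.0):
    """Shift-and-add SR by normalized convolution.
    samples: list of ((row, col), intensity, weight) triples whose
    coordinates have already been mapped (and rounded) to the HR grid.
    """
    I = np.zeros(shape)  # intensity accumulator: sum_j delta_{p_j} w_j I_j
    w = np.zeros(shape)  # weight accumulator:    sum_j delta_{p_j} w_j
    for (r, c), val, wt in samples:
        I[r, c] += wt * val
        w[r, c] += wt
    num = gaussian_filter(I, sigma)  # G * (weighted intensities)
    den = gaussian_filter(w, sigma)  # G * (weights)
    # divide only where the filtered weight map has support
    return np.divide(num, den, out=np.zeros(shape), where=den > 1e-8)
```

Dividing by the filtered weight map removes the honeycomb modulation of the core sampling pattern; pixels the cores never reach keep zero weight and are left empty rather than hallucinated.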