
Robust autofocus method based on patterned active illumination and image cross-correlation analysis

Open Access

Abstract

The quality of whole-slide images (WSI) is the foundation of an effective computer-aided diagnosis system, and a reliable autofocus method is essential to ensuring that quality. Existing autofocus methods must trade off focusing speed against focusing accuracy and need to be optimized separately for different samples or scenes. In this paper, a robust autofocus method based on fiber-bundle illumination and image cross-correlation analysis is proposed. Active illumination allows the method to meet autofocusing requirements in various application scenes, such as bright-field imaging and fluorescence imaging, while image analysis preserves autofocusing accuracy across different structures on the sample. The experimental results show that the proposed autofocusing method effectively tracks the changing distance between the sample and the focal plane and significantly improves WSI quality.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Examination of pathological images is the gold standard for diagnosis, but the scarcity and uneven distribution of pathologists limit diagnostic efficiency [1]. Developing intelligent auxiliary diagnosis systems to improve the efficiency of pathological diagnosis therefore addresses pressing scientific and medical needs [2-4]. However, the performance of an intelligent diagnosis system depends strongly on the quality of WSI images [5]. It has been reported that 97% of the errors in AI-aided diagnosis results are caused by poor image quality [6]. One of the main factors degrading image quality is that the sample is out of focus during imaging [7]. Developing high-precision focusing methods for WSI imaging has therefore been a long-standing goal of technical researchers.

Three types of autofocus methods have been reported for this purpose. The first type works by surveying a focus map [8,9]. The second type relies on additional optical systems [10,11]. The third type obtains defocus information from image analysis [12-14]. These focusing methods have greatly promoted the development of WSI technology, but some problems remain. For example, the first type requires multiple axial scans to obtain the axial positions of different fields of view (FOV) before imaging, which slows down acquisition [15]. Researchers therefore turned to more efficient focusing methods based on additional optical systems, such as the perfect focus system (PFS) [16] and rapid autofocusing via pupil-split image phase detection (RAPID) [17]. These methods focus quickly, but they rely on the intensity distribution of reflected light spots or on the fluorescence of the sample to calculate the defocus distance, which limits their application scope. In particular, the intensity distribution of the reflected spot is easily perturbed by scattering from different sample structures, which reduces the accuracy of the defocus estimate. Moreover, although RAPID has been applied successfully in fluorescence imaging, its application to bright-field imaging has not been reported. Beyond these two approaches, researchers have also tried to calculate the defocus distance of images with the assistance of artificial intelligence [18-21]. However, the complexity of different sample structures increases the difficulty of training such models and thus limits the universality of this approach.

To sum up, there is still much room for improvement in autofocusing. This paper proposes a joint autofocusing method based on an additional optical system and image cross-correlation analysis. In the experiment, a hexagonal bundle of optical fibers is projected onto the sample by a conjugate imaging system, and the light reflected from the fiber pattern is detected by a focusing camera. The normalized cross-correlation (NCC) between the reflected pattern and reference images is then used to estimate the defocus distance. The method needs only a single frame to focus, retaining the speed of methods based on an additional optical path. Moreover, the reflected pattern is low-pass filtered in the frequency domain, which removes the dependence of the reflected spot intensity distribution on the sample structure. In short, this paper develops a new autofocus method based on NCC analysis of active illumination patterns that preserves focusing speed while avoiding focusing failures caused by changes in sample structure. It has been applied successfully to whole-slide imaging in both bright-field and fluorescence modes, demonstrating its effectiveness and universality.

2. Method

2.1 Optical path settings

In this paper, a WSI imaging system was built on an Olympus BX53 microscope; it mainly comprises two parts, an imaging unit and an autofocus unit. The schematic diagram of the geometrical optical path of the autofocusing method is presented in Fig. 1. A near-infrared LED is used as the light source of the autofocus unit so that it does not occupy the WSI imaging channels. In the autofocus unit, six fibers arranged in a hexagon serve as the illumination pattern. The end face of the fiber bundle is projected onto the sample through an optical path composed of Obj. 1, L1, L2, and Obj. 2, and the reflected light is collected by the imaging path composed of Obj. 2, L3, and the focusing camera. Once the reflection pattern is obtained, the focusing procedure in the gray dotted box in Fig. 1 estimates the defocus distance of the sample as follows. Firstly, the maxima Cp,i, Cn,i, and Cf,i of the NCC functions between the reflection pattern and the three reference patterns are calculated. Secondly, an intermediate variable ξ is obtained from these three coefficients. Finally, the defocus distance δZ is estimated from the functional relationship between ξ and the axial position Z. Autofocusing is realized by feeding δZ back to the Z-axis stage.


Fig. 1. Schematic diagram of the geometrical optical path of the autofocusing method. The 3D model of the system is shown in the illustration in the lower left corner, and the process of defocus estimation is shown in the gray dotted box. The parameters of optical elements used in the system are shown in Table 1.



Table 1. Optical elements used in the WSI imaging system

In the autofocusing unit, a high-speed complementary metal-oxide-semiconductor (CMOS) camera (HD-G130M-U3, Daheng Optoelectronics) is used as the focusing camera; its frame rate reaches 213 fps, which underpins the focusing speed. The imaging unit was equipped with a color camera (FL20, Tucsen) and an sCMOS camera (Flash 4.0 v3, Hamamatsu Photonics) for bright-field imaging and fluorescence imaging, respectively. To obtain WSI, the system was also equipped with a high-precision linear stage (XY: H101E2F, Z: PS3H122r, Prior Scientific) to move the sample laterally and axially. The defocus distance calculation and image capture in WSI imaging were carried out on a desktop computer (Intel Xeon W-2245, 3.90 GHz, 32 GB RAM, Lenovo). Detailed parameters of the optical elements used in the system are listed in Table 1.

2.2 Image normalized cross-correlation computation

Before WSI imaging, we first randomly selected a FOV and imaged a set of optical fiber patterns at different axial positions. We then selected three images from this stack, one above, one at, and one below the focal plane, as reference images. Finally, the NCC function distribution C(x,y) between each image in the stack and the reference images was calculated by Eq. (1), as shown in Fig. 2.

$$C(x,y) = \frac{\sum\limits_{x^{\prime},y^{\prime}} T^{\prime}(x^{\prime},y^{\prime}) \cdot I^{\prime}(x + x^{\prime},y + y^{\prime})}{\sqrt{\sum\limits_{x^{\prime},y^{\prime}} T^{\prime}(x^{\prime},y^{\prime})^{2} \cdot \sum\limits_{x^{\prime},y^{\prime}} I^{\prime}(x + x^{\prime},y + y^{\prime})^{2}}}$$
where I′ and T′ denote, respectively, an image from the stack and one of the three reference images. By this calculation we obtain three NCC function distributions, as shown in Fig. 2(b). To estimate the defocus distance from a single frame, we take the maxima Cp,i, Cn,i, and Cf,i of the three NCC distributions and construct a new variable ξ via Eq. (2), following the method in Ref. [22].
$$\xi_i = \frac{C_{p,i} - C_{n,i}}{C_{f,i}}$$
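To make Eqs. (1)-(2) concrete, the following is a minimal sketch (not the authors' code) of computing the NCC maxima and ξ for one reflected pattern. It assumes OpenCV and NumPy, which the paper does not mention; cv2.TM_CCORR_NORMED has the same normalized cross-correlation form as Eq. (1), and the reference patterns may be central crops of the recorded reference images so that C(x, y) has spatial extent.

```python
# Minimal sketch of Eqs. (1)-(2), assuming OpenCV + NumPy (not the authors' code).
import cv2
import numpy as np

def ncc_max(image, template):
    """Peak of the normalized cross-correlation C(x, y) between image and template."""
    c = cv2.matchTemplate(image.astype(np.float32), template.astype(np.float32),
                          cv2.TM_CCORR_NORMED)   # same form as Eq. (1)
    return float(c.max())

def xi_value(reflection, ref_p, ref_f, ref_n):
    """xi_i = (C_p,i - C_n,i) / C_f,i for one reflected fiber pattern (Eq. (2))."""
    c_p = ncc_max(reflection, ref_p)   # reference above the focal plane
    c_f = ncc_max(reflection, ref_f)   # reference at the focal plane
    c_n = ncc_max(reflection, ref_n)   # reference below the focal plane
    return (c_p - c_n) / c_f
```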


Fig. 2. Schematic diagram of the NCC calculation process. (a) Reflected images of the fiber end face at different axial positions. (b) NCC calculation process, in which images p, f, and n correspond to the three reference images above, at, and below the focal plane, and image i is any image from the stack in Fig. 2(a). (c) The calculated functional relationship between ξ and Z, where the ξ values in L1-6 are calculated from the same stack of reflected patterns with different reference patterns. (d) The slope and intercept obtained by linear fitting of L1-6 in (c). The red arrow indicates an abrupt change in the intercept.


In Eq. (2), the reference images p, n, and f are fixed, so the variable ξ depends only on the index i. Because i changes with the axial position, the value of ξ also depends on the axial position Z. Among the three reference images, image f is the reflected pattern of the sample at the focal plane, while reference images p and n are the reflected patterns of the sample at a certain distance above and below the focal plane. To find the best reference images p and n, we calculated the functional relationship between ξ and Z using the same set of reflected patterns and six different pairs of reference images; the result is shown in Fig. 2(c). Here, L1-6 are the relationships between ξ and Z calculated with reference images at ±10 µm, ±20 µm, ±30 µm, ±40 µm, ±50 µm, and ±60 µm from the focal plane. The experimental results show that ξ varies linearly with Z within ±40 µm on the Z axis, but no longer follows this trend beyond that range. The fitting parameters of L1-6 are listed in Table 2, and Fig. 2(d) shows the slope and intercept of the linear fits of L1-6. The slope increases with the distance between reference images p and n. A larger slope means that ξ changes more for the same change of Z, which benefits focusing accuracy. However, increasing the distance between reference images p and n also leads to an abrupt change of the intercept, as shown in Fig. 2(d). Weighing the behavior of the slope and intercept against the distance between the two reference images, we chose the images at ±40 µm from the focal plane as reference images p and n.
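The reference-spacing study above can be reproduced with a few lines of array code. The sketch below is illustrative only and rests on stated assumptions (NumPy, a uniformly spaced z-stack that spans the largest candidate spacing, and the hypothetical xi_value helper from the previous sketch): for each candidate spacing it evaluates ξ over the stack and fits a line ξ = aZ + b, mirroring Fig. 2(c)-(d).

```python
# Hypothetical calibration sketch: fit xi(Z) for several reference spacings.
import numpy as np

def calibrate_reference_spacing(stack, z_um, spacings_um, z_step_um):
    """Return {spacing: (slope, intercept)} from linear fits of xi versus Z."""
    i_f = int(np.argmin(np.abs(z_um)))           # frame closest to the focal plane
    ref_f = stack[i_f]
    results = {}
    for d in spacings_um:                        # e.g. (10, 20, 30, 40, 50, 60)
        k = int(round(d / z_step_um))            # index offset corresponding to ±d µm
        ref_p, ref_n = stack[i_f + k], stack[i_f - k]
        xi = [xi_value(img, ref_p, ref_f, ref_n) for img in stack]
        slope, intercept = np.polyfit(z_um, xi, 1)   # linear model xi = a*Z + b
        results[d] = (slope, intercept)
    return results
```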


Table 2. The fitting parameters of L1-6

Furthermore, we used the same method to process image stacks obtained from eight different FOVs. The results showed that the slope of the linear relation between ξ and Z differed slightly between FOVs, implying that differences in sample structure affect the calculation. The reflected patterns at the focal plane in the eight FOVs are shown in Fig. 3(c). Because the sample structure varies at spatial frequencies that are high relative to the illumination pattern, we adopted frequency-domain low-pass filtering to reduce its influence on the estimate. We first set a cut-off frequency, filtered the images in a given FOV to obtain a set of ξ and Z values, and obtained a slope by linear fitting. We then repeated these steps with different cut-off frequencies and FOVs to obtain the relationship between slope and cut-off frequency in each FOV. Figure 3(a) shows the slope of ξ versus axial position Z under different cut-off frequencies. The eight curves projected on the green surface in Fig. 3(a) tend to coincide as the cut-off frequency d0 decreases, which implies that frequency-domain filtering reduces the influence of structural differences between FOVs on the slope. Figure 3(b) shows the relationship between ξ and the axial position obtained from the eight FOVs when the spatial cut-off frequency is set to 10 pixel⁻¹. In Fig. 3(b), the black squares represent the experimental data, the red solid line the fitted curve, and the height of each square the fluctuation range of ξ across the eight FOVs at the same axial position. As can be seen from Fig. 3(b), the slope of ξ versus Z is consistent across FOVs after filtering in the frequency domain.
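As an illustration of this preprocessing step, the sketch below applies an ideal circular low-pass filter in the Fourier domain before the NCC analysis. It is a plain NumPy sketch, not the authors' implementation; the exact mask shape and cut-off convention used in the paper are assumptions here.

```python
# Minimal sketch of frequency-domain low-pass filtering (assumed ideal circular mask).
import numpy as np

def fourier_lowpass(img, d0):
    """Suppress spatial frequencies beyond d0 to remove sample-structure detail."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)   # distance from the DC component
    f[dist > d0] = 0                          # ideal circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```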


Fig. 3. (a) The slope of the variable ξ versus axial position as a function of cut-off frequency d0 in different FOVs. The curves in different colors are the results of different FOVs, and the curves on the light green surface are the projections of the eight curves. (b) The relationship between the variable ξ and the axial position Z when the cut-off frequency is set to 15 pixel⁻¹. The black squares represent the experimental data, the red solid line represents the fitting curve, and the error bars indicate the fluctuation range of the variable ξ across FOVs. (c) Reflected light spots at the focal plane in eight FOVs.


So far, we have shown that the variable ξ is a linear function of the axial position and that the slope is consistent across FOVs after the reflected patterns are preprocessed by Fourier low-pass filtering. On this basis, before WSI imaging we can select an arbitrary position, acquire a set of reflection patterns, and obtain the functional relationship between ξ and the axial position Z. During WSI imaging, each time the imaging FOV is switched, we first take one reflection pattern to calculate the ξ value, estimate the defocus δZ of that FOV from the relationship between ξ and Z, and finally adjust the sample position by δZ to realize automatic focusing. The pattern used for NCC analysis is determined by the end-face shape of the fiber bundle and does not change when the imaging FOV is switched, so the relationship between ξ and Z only needs to be obtained once before WSI imaging. In other words, only one reflection pattern per FOV is needed to realize autofocus. The time required for image processing during autofocus is listed in Table 3. On the hardware side, the stepping motor used in this paper moves at up to 2000 µm/s. Combining image processing and stage motion, the autofocus system completes focusing within 100 ms.
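A hypothetical per-FOV routine combining the pieces above might look as follows; grab_reflection and move_stage_z are placeholder names for the focusing-camera and Z-stage drivers, the calibrated slope a and intercept b come from the pre-imaging calibration, and fourier_lowpass and xi_value are the helpers from the earlier sketches. This is a sketch of the single-frame procedure described in this section, not the authors' code.

```python
# Hypothetical single-frame autofocus for one FOV (helper names are placeholders).
def autofocus_one_fov(grab_reflection, move_stage_z, refs, a, b, d0):
    ref_p, ref_f, ref_n = refs
    pattern = fourier_lowpass(grab_reflection(), d0)   # one focusing-camera frame
    xi = xi_value(pattern, ref_p, ref_f, ref_n)        # Eq. (2)
    delta_z = (xi - b) / a                             # invert the calibrated line xi = a*Z + b
    move_stage_z(delta_z)                              # feed back to the Z stage (sign per setup)
    return delta_z
```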


Table 3. Total time of image processing in the autofocus process

3. Experimental results

Applying the focusing method of Section 2 to WSI imaging, we performed bright-field and fluorescence imaging of mouse kidney slices and compared the imaging results with and without the autofocusing method. The WSI imaging process is summarized in the following steps (a minimal code sketch of this loop is given after the list).

Step 1: Randomly selected a FOV, imaged a set of reflection patterns while the axial position was changed, and calculated the functional relationship between the variable ξ and the axial position Z using the method in Section 2.

Step 2: Switched the FOV to the edge of the slice and imaged a reflection pattern. Calculated the value of the variable ξ from this image and the reference images obtained in step 1.

Step 3: Estimated the defocus distance δZ from the functional relation obtained in step 1 and the value of the variable ξ obtained in step 2, and fed it back to the axial stage to adjust the position and realize automatic focusing.

Step 4: Switched to the next FOV and repeated steps 2 and 3 until WSI imaging was completed.
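The following illustrative loop strings Steps 2-4 together for a grid of FOVs; all helper names (move_stage_xy, grab_image, and the autofocus routine from the earlier sketch) are hypothetical stand-ins for the instrument drivers, not an interface described in the paper.

```python
# Illustrative WSI scan loop for Steps 2-4 (hypothetical driver functions).
def scan_slide(fov_positions, move_stage_xy, grab_reflection, move_stage_z,
               grab_image, refs, a, b, d0):
    tiles = []
    for (x, y) in fov_positions:
        move_stage_xy(x, y)                                              # Step 4: next FOV
        autofocus_one_fov(grab_reflection, move_stage_z, refs, a, b, d0)  # Steps 2-3
        tiles.append(grab_image())                                       # in-focus tile
    return tiles
```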

In this paper, the above autofocus method was used to image HE-stained mouse kidney slices, and the WSI quality with and without autofocus was compared. We used the Brenner function to quantify image sharpness and thereby evaluate the effect of the autofocus method developed in this paper. The Brenner function is calculated as in Eq. (3).

$$f(I) = \mathop \sum \limits_x \mathop \sum \limits_y [I(x + 2,y)-I(x,y)]^2$$

Eq. (3) shows that the Brenner function is essentially a gradient measure, so the larger its value, the sharper the edges of the structures in the image. In other words, among a group of images of the same FOV at different axial positions, the Brenner function value reaches its maximum at the focal plane.
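For reference, a minimal NumPy sketch of the Brenner measure of Eq. (3) on a grayscale image array (this is a generic implementation of the metric, not the authors' code):

```python
# Brenner focus measure of Eq. (3), sketched with NumPy.
import numpy as np

def brenner(img):
    """Sum of squared intensity differences two pixels apart along x (Eq. (3))."""
    img = img.astype(float)
    diff = img[2:, :] - img[:-2, :]   # I(x+2, y) - I(x, y), with x as the first index
    return float(np.sum(diff ** 2))
```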

From the WSI in Fig. 4, it is clear that the quality obtained with and without the autofocus method differs markedly. Scanning started from the same, initially focused FOV in both cases. The two initial images are shown in Fig. 4(a1) and (b1), with Brenner function values of 163 and 157, respectively. When scanning reached the edge of the slice, the Brenner value of the image acquired without autofocus dropped to 38, nearly 7 times smaller than the corresponding value of the image acquired with autofocus. This implies that the autofocus method developed in this paper is effective.


Fig. 4. (a) and (b) WSI of mouse kidney slices obtained with and without autofocus. (a1) and (b1) Initial FOV for WSI imaging with and without autofocus. (a2) and (b2) Images of slice edges with and without autofocus method. The scale bar corresponding to figure (a), (a1), and (a2) in the first row is 1500 µm, 100 µm, and 100 µm, and the scales corresponding to figures (b), (b1), and (b2) in the second row are consistent with figures (a), (a1), and (a2) respectively.


To verify the universality of the autofocus method developed in this paper, we also used it to acquire WSI of DAPI-stained mouse kidney slices. The results in Fig. 5 show that the image at the slice edge is noticeably blurred without the autofocus method, implying that the method remains effective in fluorescence imaging scenes. Specifically, the Brenner function values of the initial-FOV images (a1) and (b1), with and without autofocus, are almost the same, 499 and 508, respectively. However, at the edge of the slice, the Brenner values of the images obtained with and without autofocus differ significantly, 538 and 455, respectively.


Fig. 5. (a) and (b) Fluorescence WSI of mouse kidney slices obtained with and without autofocus. (a1) and (b1) Initial FOV for fluorescence WSI imaging with and without autofocus. (a2) and (b2) Fluorescence images of slice edges with and without autofocus method. The scale bar corresponding to figure (a), (a1), and (a2) in the first row is 500 µm, 30 µm, and 30 µm, and the scales corresponding to figures (b), (b1) and (b2) in the second row are consistent with figures (a), (a1) and (a2) respectively.


To evaluate the performance of the autofocusing method in more detail, we calculated the Brenner function values of the different FOVs in Fig. 4. Because changes in sample structure also change the Brenner value, for ease of comparison we normalized the values of the WSI obtained with and without autofocus by the same Brenner value, as shown in Fig. 6(a)-(b). Compared with Fig. 6(b), the heat map in Fig. 6(a) has more blocks of similar color, indicating that more FOVs in the WSI obtained with the autofocus method have similar Brenner values. Within one FOV, the closer the sample is to the focal plane, the larger the Brenner value, so the comparison in Fig. 6(a)-(b) shows that the autofocus method played its role during WSI imaging. Furthermore, we recorded the focal-plane position and the stage displacement in each FOV to analyze the performance of the method. As shown in Fig. 6(c), the samples in different fields of view were not always on the focal plane, and the difference between the lowest and highest points was close to 20 µm, which may be caused by a tilt between the stage plane and the focal plane. This degree of defocus is much larger than the depth of field of a typical objective lens and is enough to degrade WSI quality.


Fig. 6. Quantitative analysis of WSI quality. (a) and (b) are Brenner function values corresponding to the images in different FOVs obtained with and without the autofocus method in Fig. 4, respectively. (c) The distance between the sample and the focal plane in different FOVs in Fig. 4. (d) Displacement of the stage in different FOVs during WSI imaging in Fig. 4.


Figure 6(d) shows the displacement of the axial stage in each FOV when the autofocus method developed in this paper was used. As can be seen from Fig. 6(d), the displacement is close to 16 µm in the first FOV, while the displacement in the other FOVs is about 1 µm, indicating that the sample drifts away from the focal plane of the objective by small amounts during WSI imaging. This is consistent with the continuous change of the focal plane in Fig. 6(c). This analysis further illustrates the effectiveness of the autofocus method by visualizing the change of the focal plane across FOVs.

4. Conclusion

In WSI imaging, the focusing accuracy of existing autofocus methods is affected by differences in sample structure. To solve this problem, this paper proposes an autofocus method based on patterned active illumination and NCC analysis. In this method, a hexagonal fiber pattern is first projected onto the sample and its reflection is recorded. Fourier low-pass filtering is then used to eliminate the influence of sample structure on the illumination pattern. Finally, the defocus information carried by the reflection pattern is extracted through image NCC analysis and fed back to the stage to adjust the focal plane. The WSI results show that the method achieves a good focusing effect in both bright-field and fluorescence imaging experiments. In short, this paper proposes a robust autofocus method based on patterned active illumination and image NCC analysis that overcomes the difficulties caused by changes in sample structure and achieves fast focusing from a single frame. The WSI imaging experiments demonstrate that the method meets the autofocus requirements of bright-field and fluorescence microscopic WSI imaging.

Funding

Hainan Province Science and Technology Special Fund (ZDYF2022SHFZ079, ZDYF2022SHFZ126); National Natural Science Foundation of China (82260368); National Key Research and Development Program of China (2022YFC3400601); Start-up Fund from Hainan University (KYQD(ZR)-20077).

Acknowledgments

We thank all the other members of Digital Theranostics and Optical Microscopy (DigiTOM) group for their technical support.

Author Contributions. Y. J. Z. and Z. L. H. designed the experiment and led the project. C.W.L., K. H. L., and Y. H. X. performed the measurements. X. G. G. prepared the samples. Y. J. Z. and C.W.L. prepared the manuscript. All authors contributed to the data analyses.

Disclosures

A patent is pending.

Data availability

Codes and data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. A. Hassell and A. Afzal, “Flattening the world of pathology education and training and shortening the curve of pathology learning,” Am. J. Clin. Pathol. 156(2), 176–184 (2021). [CrossRef]  

2. H. Go, “Digital pathology and artificial intelligence applications in pathology,” Brain Tumor Research and Treatment 10(2), 76–82 (2022). [CrossRef]

3. K. Bera, K. A. Schalper, D. L. Rimm, et al., “Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology,” Nat. Rev. Clin. Oncol. 16(11), 703–715 (2019). [CrossRef]  

4. B. Lai, J. Fu, Q. Zhang, et al., “Artificial intelligence in cancer pathology: challenge to meet increasing demands of precision medicine,” International Journal of Oncology 63(3), 107 (2023). [CrossRef]  

5. T. Kohlberger, Y. Liu, M. Moran, et al., “Whole-slide image focus quality: automatic assessment and impact on AI cancer detection,” Journal of Pathology Informatics 10(1), 39 (2019). [CrossRef]

6. R. Brixtel, S. Bougleux, O. Lezoray, et al., “Whole slide image quality in digital pathology: review and perspectives,” IEEE Access 10, 131005–131035 (2022). [CrossRef]  

7. Z. Bian, C. Guo, S. Jiang, et al., “Autofocusing technologies for whole slide imaging and automated microscopy,” J. Biophotonics 13(12), e202000227 (2020). [CrossRef]  

8. J. Hu, B. Zhong, Z. Jin, et al., “Adaptive predictive scanning method based on a high-precision automatic microscopy system,” Appl. Opt. 58(27), 7305–7310 (2019). [CrossRef]  

9. L. Yu, R. Wang, J. Zhou, et al., “Rapid acquisition of high-volume microscopic images using predicted focal plane,” J. Microsc. 263(3), 320–327 (2016). [CrossRef]  

10. Y. Liron, Y. Paran, N. G. Zatorsky, et al., “Laser autofocusing system for high-resolution cell biological imaging,” J. Microsc. 221(2), 145–151 (2006). [CrossRef]  

11. C. S. Liu, R. C. Song, and S. J. Fu, “Design of a laser-based autofocusing microscope for a sample with a transparent boundary layer,” Appl. Phys. B 125(11), 199 (2019). [CrossRef]  

12. M. A. Baig, A. A. Moinuddin, E. Khan, et al., “DFT-based no-reference quality assessment of blurred images,” Multimed Tools Appl 81(6), 7895–7916 (2022). [CrossRef]  

13. J. Song and M. Liu, “A new methodology in constructing no-reference focus quality assessment metrics,” Pattern Recognition 142, 109688 (2023). [CrossRef]  

14. C. Guo, Z. Bian, S. Alhudaithy, et al., “Brightfield, fluorescence, and phase-contrast whole slide imaging via dual-LED autofocusing,” Biomed. Opt. Express 12(8), 4651–4660 (2021). [CrossRef]  

15. J. Liao, Y. Jiang, Z. Bian, et al., “Rapid focus map surveying for whole slide imaging with continuous sample motion,” Opt. Lett. 42(17), 3379–3382 (2017). [CrossRef]  

16. J. S. Silfies, E. G. Lieser, S. A. Schwartz, et al., “The Nikon perfect focus system (PFS),” https://www.microscopyu.com/tutorials/the-nikon-perfect-focus-system-pfs.

17. L. Silvestri, M. C. Müllenbroich, I. Costantini, et al., “Universal autofocus for quantitative volumetric microscopy of whole mouse brains,” Nat. Methods 18(8), 953–958 (2021). [CrossRef]  

18. Q. Li, X. Liu, J. Jiang, et al., “Rapid whole slide imaging via dual-shot deep autofocusing,” IEEE Trans. Comput. Imaging 7, 124–136 (2021). [CrossRef]  

19. J. Lightley, F. Gorlitz, S. Kumar, et al., “Robust deep learning optical autofocus system applied to automated multiwell plate single molecule localization microscopy,” J. Microsc. 288(2), 130–141 (2022). [CrossRef]  

20. Y. Luo, L. Huang, Y. Rivenson, et al., “Single-shot autofocusing of microscopy images using deep learning,” ACS Photonics 8(2), 625–638 (2021). [CrossRef]  

21. K. Xin, S. Jiang, X. Chen, et al., “Low-cost whole slide imaging system with single-shot autofocusing based on color-multiplexed illumination and deep learning,” Biomed. Opt. Express 12(9), 5644–5657 (2021). [CrossRef]  

22. R. McGorty, D. Kamiyama, and B. Huang, “Active microscope stabilization in three dimensions using image correlation,” Optical Nanoscopy 2(1), 3 (2013). [CrossRef]
