Optica Publishing Group

High-speed color three-dimensional measurement based on parallel confocal detection with a focus tunable lens

Open Access

Abstract

Reflectance confocal microscopy is a widely used optical imaging technique for non-destructive three-dimensional (3D) surface measurement. In confocal microscopy, a stack of two-dimensional (2D) images along the axial position is used for 3D reconstruction. This means the speed of 3D volumetric acquisition is limited by the beam scanning and the mechanical axial scanning. To achieve fast volumetric imaging, simultaneous multiple point scanning by parallelizing the beam instead of transverse point scanning can be considered, using a pinhole array. Previously, we developed a direct-view confocal microscope with a focus tunable lens (FTL) to produce a monochrome 3D surface profile of a sample without any mechanical scanning. Here, we report a high-speed color 3D measurement method based on parallel confocal detection. The proposed method produces a color 3D image of an object by acquiring 180 2D color images with an acquisition time of 1 second. We also visualized the color information of the object by overlaying the color obtained with a color area detector and a white LED illumination on top of the 3D surface profile. In addition, we designed an improved optical system to reduce artifacts caused by internal reflections and developed a new algorithm for noise-resistant 3D measurements. The feasibility of the proposed non-contact high-speed color 3D measurement for use in industrial or biomedical fields was demonstrated by imaging the color 3D shapes of various specimens. We anticipate that this technology can be utilized in various fields, where rapid 3D surface profiles with color information are required.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Confocal microscopy is a non-destructive optical imaging technique that provides high spatial resolution and high contrast using a confocal pinhole, which rejects out-of-focus light while passing in-focus light [1]. Because the signal from the focal point of the objective lens can only be detected through the conjugated pinhole, confocal microscopy enables non-contact optical sectioning with high spatial resolution [1–4]. In particular, reflectance confocal microscopy (RCM), which detects the back-reflected signal from the object, has become a powerful imaging tool for industrial applications that require non-contact three-dimensional (3D) measurement, and is widely used in the surface evaluation and inspection of industrial tools and materials [5–10]. Because this non-contact optical 3D measurement can obtain clear in-focus surface information and accurate surface profiles without damaging the object, RCM is used for a wide variety of 3D measurements [2,11].

In general, to reconstruct a 3D image in confocal microscopy, it is necessary to acquire a stack of sequential two-dimensional (2D) images over focal depth [12–15]. In conventional point-scanning RCM, generating a single 2D cross-sectional image requires transverse scanning, which is generally achieved with a galvanometer, resonant, or polygon scanner, or a combination of these. However, the beam scanning speed is limited by these mechanical scanners [4,13,16].

To overcome this speed limitation in 2D imaging, simultaneous multi-point scanning can be considered, as in spinning-disk confocal microscopy [16,17]. In the spinning-disk method, a pinhole array arranged in a spiral pattern on a disk generates multiple foci on the object, and the returning light is detected through the same pinholes. The 2D image is acquired by an area detector instead of a point detector while the spinning disk rotates, so that the multiple foci cover the entire 2D area [4]. Another disk containing a micro-lens array, coaxially aligned with the pinhole array on the spinning disk, can be added to increase light efficiency [18].

Although tandem scanning can be faster than transverse point scanning, the acquisition speed is still limited by the mechanical rotation of the spinning disks. Also, if the synchronization between the spinning disk and the image acquisition is not perfect, the resulting spiral-shaped banding noise pattern degrades image quality [19]. The bulky system and potential vibration are additional disadvantages of the spinning-disk approach.

Direct-view confocal microscopy is another method that can simultaneously acquire multiple points. It uses a stationary pinhole array arranged in a rectangular shape, like an area detector [20]. Because the pinholes in the pinhole array cover the full 2D imaging range, no transverse scanning is required, unlike the spinning-disk method. A micro-lens array can also be used to improve light efficiency [21].

Because direct-view confocal detection uses no mechanical scanning modules, the acquisition speed is no longer limited by the speed of transverse scanning.

3D volumetric imaging speed is also affected by the axial scanning method, not just the transverse scanning. Since mechanical scanning of the objective lens or the sample is conventionally required to change the axial focal position, the 3D volume acquisition rate is limited by the mechanical movement of the axial scanner [22].

To overcome the limit imposed by mechanical axial scanning, a focus tunable lens (FTL) can be utilized [23–25]. The FTL has a deformable elastic surface that gives the lens a variable focal length as its surface shape changes [26]. Axial scanning with an FTL can boost the 3D volume acquisition rate, which is then limited only by the maximum frame rate of the camera. Jeong et al. proposed a 3D measurement approach based on direct-view confocal microscopy using an electrically tunable lens [27]; however, it used a narrow-band light source and a monochrome camera to produce a height map only, and thus could not provide color 3D information of the sample. Furthermore, it suffered from strong back reflection by the pinhole array, resulting in a decreased signal-to-noise ratio (SNR).

In this work, we report a color 3D measurement system based on parallel confocal detection. By using an FTL instead of an axial mechanical scanner, we acquired a series of 2D images through depth without any mechanical scanning, thus eliminating several problems related to mechanical scanning, such as limited scanning speed, bulkiness, and vibration. We achieved high-speed volumetric imaging of up to 1 volume (200 × 140 × 180 discrete voxels) per second using a high-speed color complementary metal-oxide semiconductor (CMOS) camera. One volume covers a truncated pyramid with a top area of 16.1 × 11.3 mm, a bottom area of 18.6 × 13.0 mm, and a height of 17.0 mm. Moreover, the use of a broadband light source enabled color 3D imaging that provides detailed surface information, with the color information overlaid on top of the object surface. With a newly designed optical setup that reduces the artifacts caused by internal reflection at the pinhole array, the proposed method can acquire high-SNR images, with an SNR of around 40 on a linear scale [28,29]. We also developed a new 3D reconstruction algorithm that is more robust to noise by utilizing the CMOS camera as a virtual pinhole array. This virtual pinhole allows us to apply additional post-processing algorithms that improve the 3D reconstruction [30]. By calculating cross-correlation values with the pinhole shape instead of using the raw intensity, we could acquire accurate 3D measurements of the object surface. 3D measurements of various samples are provided to demonstrate the performance capabilities and the potential impact of this technique.

2. Methods

2.1 Working principles

The basic working principle of our parallel confocal 3D measurement system is based on the direct-view scheme proposed in our previous study [27], but with several major advancements. Jeong et al. used one pinhole array placed after the beam splitter, shared by both illumination and detection [27], as shown in Fig. 1(a). While this method ensures perfect alignment between illumination and detection, the strong back-reflection from the pinhole array increased the background noise of the image sensor, which significantly reduced the SNR. Thus, we modified the illumination and detection scheme to avoid the back-reflection from the pinhole array, as shown in Fig. 1(b).

Fig. 1. Schematic diagram of direct-view confocal microscopy which uses (a) common pinhole array for illumination and detection, and (b) two separate pinhole arrays for illumination and detection.

To separate the illumination and the detection pinhole array, we placed the illumination pinhole array in front of the beam splitter. This way, the back-reflection at the surface of the pinhole array no longer affects the image quality. We used the area camera as the image sensor, which works as a virtual detection pinhole array. Because it is placed at the conjugate plane with the illumination pinhole array and the focal plane of the objective lens, an in-focus image signal can be obtained by applying post-processing using the pinhole array information.

Once the illumination beam passes through the illumination pinhole array, multiple beams, one from each pinhole, are created and projected onto the object. The area camera sequentially captures images of the object at varying focal planes. When the object's surface is on the focal plane of the objective lens, the camera captures a bright pinhole image; otherwise, a blurred weak signal is captured. By detecting the strong and clear pinhole pattern, we can determine the height, defined by the peak axial position of the axial response, at the corresponding pinhole position. Since the pinhole pattern covers the 2D area of the object, a 3D surface map can be restored. In particular, by using a color camera, we can reconstruct the color 3D surface of the object.
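The stack-to-height principle above can be sketched as follows. This is a minimal illustration assuming a simple argmax peak pick; the actual reconstruction (Section 2.3) uses cross-correlation and polynomial fitting, and all names and array shapes here are hypothetical:

```python
import numpy as np

def height_map_from_stack(stack, pinhole_rc, z_positions):
    """Estimate heights from a focal stack (simplified sketch).

    stack       : (n_z, H, W) array of grayscale frames, one per focal plane
    pinhole_rc  : (n_pinholes, 2) integer (row, col) pinhole-image centers
    z_positions : (n_z,) focal-plane positions corresponding to each frame
    """
    rows, cols = pinhole_rc[:, 0], pinhole_rc[:, 1]
    # Axial response at every pinhole center: intensity vs. focal position.
    responses = stack[:, rows, cols]          # shape (n_z, n_pinholes)
    # The surface height is taken at the frame with the strongest response.
    peak_idx = np.argmax(responses, axis=0)   # shape (n_pinholes,)
    return z_positions[peak_idx]

# Toy demo: one pinhole whose signal peaks at the 3rd of 5 focal planes.
stack = np.zeros((5, 8, 8))
stack[:, 4, 4] = [0.1, 0.4, 1.0, 0.5, 0.2]
z = np.linspace(0.0, 17.0, 5)
print(height_map_from_stack(stack, np.array([[4, 4]]), z))  # -> [8.5]
```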

2.2 System design

Figure 2 shows a schematic diagram of the proposed parallel confocal detection system. We fabricated a custom pinhole array, which is composed of an etched chrome coating with a pinhole array pattern on a soda-lime glass plate with a thickness of 1.6 mm. The size of the pinhole array was 11 × 7.7 mm. The pinhole array has 200 × 140 pinholes with diameters of 30 µm and a pitch of 55 µm, which was defined as the distance between two adjacent pinhole centers. In direct-view confocal microscopy, the pitch of the pinhole array limits the lateral resolution of 3D imaging. Although a smaller pitch is desirable for better lateral resolution, the SNR and depth resolution degrade as the pinhole pitch decreases due to cross-talk from adjacent pinholes. Therefore, in this study, the pinhole pitch was set slightly larger than the full-width at half-maximum (FWHM) of a single pinhole on the conjugate image plane.

Fig. 2. A schematic diagram of the proposed high-speed color 3D measurement system with a focus tunable lens. WLS: white light source, CL: condenser lens, PA: pinhole array, PBS: polarizing beam splitter, FTLA: focus tunable lens assembly.

A high-power broadband incoherent light source (MCWHLP1, Thorlabs, Newton, NJ) was used for color imaging. The illumination beam passes through the pinhole array to form multiple foci. To reduce artifacts from internal reflections of the optical components, a cube polarizing beam splitter (PBS251, Thorlabs, Newton, NJ) was used along with film linear polarizers (HTA008, Midwest Optical Systems, Palatine, IL) and a polymer quarter-wave plate (APQ10ME-532, Thorlabs, Newton, NJ). With orthogonal polarization states on the illumination and imaging paths, unwanted internal reflections of the optical components were effectively rejected by the detector polarizer, and only the beam reflected from the object, which passed through the quarter-wave plate twice, was detected by the camera. This method improves the SNR by reducing artifacts caused by internal reflections [27]. An image of the pinhole array was formed at the focal plane by the FTL assembly, which was composed of a scan lens (CLS-SL, Thorlabs, Newton, NJ), an FTL (EL-10-30-C-VIS-LD-LP, Optotune Switzerland AG, Dietikon, CH), and a tube lens (TTL100-A, Thorlabs, Newton, NJ). The distance between the scan lens and the FTL was 29.6 mm, and the distance between the FTL and the tube lens was 29.1 mm. The focal length of the system was controlled by changing the shape of the FTL's elastic surface, which was determined by an electrical current. A high-speed color CMOS camera (acA-2040-180kc, Basler AG, Ahrensburg, DE) was used as the area detector. We also used the camera as a virtual imaging pinhole array, because it was located on the conjugate plane of the pinhole array and the focal plane of the objective. The pinhole array pattern was projected onto the surface of the object, which was then imaged on the camera.

A total of 180 sequential images were acquired by the camera at a frame rate of 180 frames/sec, while the focal position was linearly scanned over 17 mm by controlling the FTL. The images consist of 2000 × 1400 pixels, covering a field of view (FOV) of 18.6 × 13.0 mm to 16.1 × 11.3 mm in object space from the farthest to the nearest focal plane, respectively.

2.3 3D reconstruction algorithms

We developed a 3D reconstruction algorithm for the acquired images. In conventional confocal microscopy, the axial position of the object can be determined by finding the peak position of the axial response, i.e., the intensity distribution along the axial direction [31]. Because our system uses a pinhole array and a CMOS camera, a single depth scan acquires multiple axial responses simultaneously, one at each point matching the pinhole arrangement. The 200 × 140 virtual pinholes of the CMOS camera, corresponding to the center points of each pinhole, are used to determine the object height simultaneously. When the object is in focus, not only does the intensity reach its maximum, but a sharp pinhole pattern is also formed in the image. Using these features, we developed the 3D reconstruction algorithm shown in the flow chart below (Fig. 3).

Fig. 3. A flow-chart of the 3D reconstruction algorithm.

First, while the FTL scans the focal length linearly, the CMOS camera captures 180 2D color images of the sample, which are then median filtered with a kernel size of 3 to minimize shot noise. Since the pinhole pattern of the pinhole array is projected on the object surface, clear and bright pinhole patterns are formed on the CMOS camera when the object is in focus, as shown by the red boxes in Fig. 4. In contrast, when the object is out of focus, the pinhole pattern is not visible and the image gets dimmer, as shown by the blue boxes in Fig. 4. Therefore, to determine the axial position of the sample, we can use not only the axial response of the intensity but also the changes in the pinhole array image.

Fig. 4. Images of projected pinhole pattern on the surface of a doll with different focal lengths (a) and (b). Red boxes show the magnified view of the in-focus image, and blue boxes show the magnified view of out-of-focus image.

Since the axial response of the intensity is strongly affected by random noise as well as cross-talk from neighboring pinholes, we decided to use the image of the pinhole array instead.

Figures 5(a) and 5(b) show the in-focus and out-of-focus CMOS images of a flat gray card, respectively, and Fig. 5(c) shows the axial response at the center pinhole position. While we can find the peak intensity position of the axial response, it also contains significant random noise and a background signal. Thus, we calculated the 2D cross-correlation between the acquired images and a 2D Gaussian kernel identical to the image of an in-focus pinhole. The 2D cross-correlation value reaches its maximum when the object is in focus, as shown in Fig. 5(d), and decreases as the object moves out of focus, as shown in Fig. 5(e). Although only a magnified region of interest (ROI) at the center of the image sensor is shown for visualization purposes, similar pinhole patterns were obtained across the entire sensor area. By computing the 2D cross-correlation at the pinhole position along the axial direction, the axial response of the pinhole pattern is acquired with lower noise and higher SNR, as shown in Fig. 5(f); it can then be used to extract the height information. When calculating the axial response, we additionally applied weighted averaging over the 5 × 5 neighboring pixels, using the same weights as the 2D Gaussian kernel. This process further minimizes noise and allowed us to robustly acquire an axial response even when the contrast of the pinhole image was low. Since the pinhole pattern was fixed, we extracted the positions of each pinhole by imaging a flat gray card: we acquired an image of the card, which shows a clear pinhole pattern when in focus, and then found the 200 × 140 local maximum points as the pinhole centers. These pre-determined pinhole positions were then used to acquire the axial responses of the sample, from which we calculated the height of the object at those positions.
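A minimal sketch of this cross-correlation step. The kernel parameters (a 5 × 5 kernel with a 3-pixel FWHM) are illustrative assumptions, and the Gaussian-weighted 5 × 5 averaging is implemented here as a second correlation with the same kernel:

```python
import numpy as np
from scipy.signal import correlate2d

def gaussian_kernel(size=5, fwhm=3.0):
    """2D Gaussian kernel matched to an in-focus pinhole image (assumed FWHM)."""
    sigma = fwhm / 2.355  # FWHM = 2*sqrt(2*ln2)*sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def pinhole_response(frame, centers, kernel):
    """Cross-correlate one frame with the pinhole-shaped kernel, then read
    the value at each pinhole center after Gaussian-weighted neighborhood
    averaging (done here by a second correlation with the same kernel)."""
    corr = correlate2d(frame, kernel, mode="same")
    smoothed = correlate2d(corr, kernel, mode="same")
    return smoothed[centers[:, 0], centers[:, 1]]

# Toy demo: a bright in-focus pinhole spot correlates strongly at its center.
frame = np.zeros((15, 15))
frame[7, 7] = 1.0
k = gaussian_kernel()
centers = np.array([[7, 7], [2, 2]])
r = pinhole_response(frame, centers, k)
print(r[0] > r[1])  # -> True
```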

Fig. 5. (a) In-focus and (b) out-of-focus images of the flat gray card and (c) corresponding axial response of intensity. Cross-correlation maps of (d) in-focus image and (e) out-of-focus image with a pinhole-shaped kernel. (f) Axial response of cross-correlation acquired at the center pixel of the flat gray card.

Since the axial response of the 2D cross-correlation reaches its maximum when the object is in focus, we needed to find the peak position of the axial response curve of the cross-correlation. To further reduce the effect of noise in the axial direction, we filtered the axial response curve with a 1D Gaussian kernel whose FWHM matches that of the experimentally determined axial response curve. To determine the peak axial position, a 2nd-order polynomial regression was applied to the Gaussian-filtered axial response curve at each pinhole point.

After finding the peak point, data points within the FWHM of the axial response were used to fit the regression function that estimates the height of the object. We determined the height as the peak axial position of the fitted polynomial regression function of the filtered axial response. We excluded the height information when the axial response was abnormal, i.e., when it had more than two peaks, too low an intensity, or too large an FWHM. The height map of 200 × 140 points was then created using only the successfully calculated points. The color map was created by combining and averaging the color information along the measured surface and was then overlaid onto the height map.
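The peak-extraction step can be sketched as follows, with the axial FWHM expressed in samples (about 12, assuming the 1.18 mm FWHM and 95 µm axial step reported later; all names and the single rejection test are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def peak_height(z, response, fwhm_samples=12.0):
    """Estimate surface height from one axial response curve (a sketch).

    z            : (n,) focal positions
    response     : (n,) cross-correlation values along the axial direction
    fwhm_samples : assumed FWHM of the system's axial response, in samples
    """
    sigma = fwhm_samples / 2.355
    r = gaussian_filter1d(response, sigma)      # suppress axial noise
    i = int(np.argmax(r))
    half = int(round(fwhm_samples / 2))
    lo, hi = max(0, i - half), min(len(r), i + half + 1)
    # 2nd-order polynomial fit around the peak; its vertex is the height.
    c = np.polyfit(z[lo:hi], r[lo:hi], 2)
    if c[0] >= 0:                               # not a maximum: reject
        return np.nan
    return -c[1] / (2 * c[0])

# Toy demo: a clean Gaussian response peaking at z = 8 mm.
z = np.linspace(0.0, 17.0, 180)
resp = np.exp(-((z - 8.0) ** 2) / 2.0)
h = peak_height(z, resp)
print(abs(h - 8.0) < 0.2)  # -> True
```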

Because of aberrations in the optical system, the height map can be slightly distorted. We measured this distortion map in advance by imaging the flat gray card as a standard specimen. As a calibration process, the distortion map was subtracted from the measured height map of the sample. Missing data points in the reconstructed 3D image, lost to unsuccessful height calculations, were filled by 2D interpolation.
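A minimal sketch of the calibration and gap-filling step. The paper does not specify its interpolation method, so the use of SciPy's linear grid interpolation here is an assumption:

```python
import numpy as np
from scipy.interpolate import griddata

def calibrate_and_fill(height, reference):
    """Subtract a pre-measured flat-card height map (field curvature) and
    fill rejected points (NaN) by 2D interpolation of their neighbors."""
    h = height - reference                     # remove systematic distortion
    rows, cols = np.indices(h.shape)
    valid = ~np.isnan(h)
    return griddata(
        (rows[valid], cols[valid]), h[valid],  # known points
        (rows, cols), method="linear")         # interpolate the full grid

# Toy demo: a tilted plane with one rejected point is restored exactly.
ref = np.zeros((5, 5))
h = np.fromfunction(lambda r, c: r + c, (5, 5), dtype=float)
h[2, 2] = np.nan
out = calibrate_and_fill(h, ref)
print(round(out[2, 2], 6))  # -> 4.0 (linear interpolation restores the plane)
```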

2.4 Preparation of the specimen

We prepared various specimens to verify the imaging performance of our system. The flat gray card was used to extract the center positions of each pinhole. It was also used to measure the distortion map that compensates for the curvature distortion of the optical system, and to measure the axial response and depth resolution.

To experimentally verify and visualize the depth resolution of our system, we fabricated a small step height specimen made of flat aluminum with a gray anodized surface. It has several engraved bars with depths ranging from 20 to 200 µm, to clearly distinguish heights in the 3D measurement. We also fabricated a large step height specimen with a step interval of 1 mm and a total height of 7 mm to show the ability of the proposed system to measure a large 3D volume. A color chart (X-Rite ColorChecker Passport, X-Rite, Inc., Grand Rapids, MI) was also used to qualitatively visualize color representation. To demonstrate color 3D imaging of various objects, we acquired 3D color images of a molded dental plaster, a doll, and an electrical circuit board.

3. Results

3.1 Resolutions and axial response

To predict the lateral resolution and the axial response of our system, we undertook a theoretical simulation of the image formation process using pinhole array images and Gaussian point-spread functions (PSFs) based on Gaussian beam propagation using MATLAB (The MathWorks, Inc., Natick, MA). We assumed that the image of the pinhole array was projected onto the object plane and then imaged by the CMOS camera through the optical system. During this process, the pinhole array image was blurred by convolution operations with an illumination Gaussian PSF and an imaging Gaussian PSF. We repeated this simulation at various focal plane positions. The theoretical lateral resolution and axial response were then calculated.
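The forward simulation can be sketched as follows, under a Gaussian-beam defocus model. The beam waist `w0` and Rayleigh-like range `zr` (both in pixels) are arbitrary illustrative parameters, not the paper's values:

```python
import numpy as np
from scipy.signal import fftconvolve

def defocused_psf(w0, z, zr):
    """Gaussian PSF whose 1/e^2 radius grows with defocus z (Gaussian-beam model)."""
    w = w0 * np.sqrt(1 + (z / zr) ** 2)
    ax = np.arange(-15, 16)
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-2 * (xx**2 + yy**2) / w**2)
    return p / p.sum()

def simulate_frame(pinholes, z, w0=2.0, zr=10.0):
    """Pinhole pattern blurred by the illumination PSF, then the imaging PSF."""
    psf = defocused_psf(w0, z, zr)
    illum = fftconvolve(pinholes, psf, mode="same")   # illumination blur
    return fftconvolve(illum, psf, mode="same")       # imaging blur

# The in-focus frame shows a sharper, brighter peak than a defocused one.
pat = np.zeros((64, 64))
pat[32, 32] = 1.0
in_focus = simulate_frame(pat, z=0.0)
defocus = simulate_frame(pat, z=20.0)
print(in_focus.max() > defocus.max())  # -> True
```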

The optical lateral resolution is defined as the FWHM of the cross-sectional profile of a single pinhole image from a flat object in focus, taking into account that the image is blurred by the PSF of the optical system. Experimentally, it was measured by imaging the flat gray card; the in-focus image of the pinhole array is shown in Fig. 5(a). The measured optical lateral resolution of the system was 47.0 and 40.7 µm at the farthest and nearest focal planes, respectively, in good agreement with the theoretically predicted values of 46.3 and 40.2 µm. However, since the pitch of the pinhole array was larger than the optical lateral resolution, the effective lateral resolution of the system for 3D imaging was limited by the pitch, ranging from 93.0 to 80.5 µm in the object plane from the farthest to the nearest focal plane. The FOV was measured to range from 18.6 × 13.0 mm to 16.1 × 11.3 mm in object space from the farthest to the nearest focal plane.

The axial response and its FWHM allow us to evaluate the confocal microscopy performance, although these parameters do not solely determine the depth resolution [32]. Experimentally, the flat gray card was used as a standard sample to measure the axial response. We acquired 180 2D images while scanning the focal length by 17 mm, resulting in a depth step of 95 µm. Figure 6(a) shows the axial response of the 2D cross-correlation between the acquired images and the 2D Gaussian kernel along the optics axis. The 1D Gaussian filtered axial response is shown in Fig. 6(b), which has much less random noise. The measured FWHM of the axial response curve was 1.18 mm in the experiment, while 0.95 mm was theoretically predicted. The axial response was slightly wider than predicted, in part due to optical aberrations and a chromatic focal shift.

Fig. 6. The axial response of the system acquired at the center of the area detector. (a) Axial response of the 2D cross-correlation, and (b) Gaussian filtered axial response along the axial direction.

The depth resolution of confocal profilometry is defined as the minimum distance at which depth variations of closely spaced sample surfaces can be distinguished, while the axial resolution is generally defined as the FWHM of the axial response curve in conventional confocal fluorescence microscopy and optical coherence tomography (OCT) [33,34]. Therefore, the depth resolution of our system, denoting the precision of the height measurement, can be defined by the standard deviation from the linear fit [32]. Using the data points of the axial response within the FWHM, 2nd-order polynomial fitting was applied to find the peak axial position, i.e., the height of the sample. The fitting function allows the peak axial position to be determined with much less error than the FWHM of the system's axial response, resulting in a depth resolution better by orders of magnitude than the FWHM [32,33]. To evaluate the depth resolution of our system experimentally, we selected an ROI of 67 × 67 pixels at the center of the 3D image, which covered about one-sixth of the overall region. The standard deviation of the measured height in this area, i.e., the depth resolution, was 9.37 µm.
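This evaluation can be sketched as the standard deviation of height residuals about a best-fit plane over a nominally flat ROI. The noise level and tilt used in the demo are illustrative, not measured data:

```python
import numpy as np

def depth_resolution(height_roi):
    """Depth resolution as the std of height residuals (in the same units as
    height_roi) about a least-squares plane over a nominally flat ROI."""
    r, c = np.indices(height_roi.shape)
    A = np.column_stack([r.ravel(), c.ravel(), np.ones(r.size)])
    coeff, *_ = np.linalg.lstsq(A, height_roi.ravel(), rcond=None)
    residual = height_roi.ravel() - A @ coeff
    return residual.std()

# Toy demo: a tilted plane (in mm) plus Gaussian noise with a 9.37 um std.
rng = np.random.default_rng(0)
roi = np.fromfunction(lambda r, c: 0.01 * r, (67, 67)) \
    + rng.normal(0.0, 9.37e-3, (67, 67))
print(depth_resolution(roi) * 1e3)  # residual std in um, near 9.37
```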

3.2 Step height measurement

To experimentally verify the depth resolution and to visualize the ability of the imaging system to discriminate a small height step, we measured the custom-made small step height sample. A photograph and schematic of the specimen are shown in Figs. 7(a) and 7(b), respectively. The width and separation of each engraved bar was 1 mm, and the engraved surfaces had different depths, as indicated in Fig. 7(b). The 3D imaging result of the orange rectangular box in Fig. 7(a) is presented in Fig. 7(c) after applying 3 × 3 median filtering to suppress the noise. The red and black asterisks indicate engraved surfaces with depths of 20 µm and 40 µm, respectively. Figure 7(d) shows a cross-sectional profile along the dotted green line on the color 3D image in Fig. 7(c). Our system clearly distinguished the engraved bars with a depth of 20 µm in both the color 3D image and the cross-sectional profile, confirming that it can discriminate steps as small as 20 µm.

Fig. 7. (a) A photograph and (b) schematic diagram of the specimen with small engraved bars. (c) 3D imaging result of fabricated specimen, and (d) cross-sectional profile.

To verify that our system can acquire a large 3D volume, the large step height specimen was measured. This specimen was made of aluminum with gray anodization. The interval between the stairs was 1 mm, except for the first 3 mm step, and the total height of the specimen was 7 mm from ground level. A photograph and a height map of the indicated area (red box) are shown in Figs. 8(a) and 8(b), respectively. The height map (Fig. 8(b)) represents the reconstructed height of the object on a pseudo-color scale. Our system clearly resolved the different heights of this large-volume step specimen. Figure 8(c) shows the reconstructed color 3D image of the step height specimen, created by overlaying the acquired color information onto the height map.

Fig. 8. (a) A photograph, (b) height map in pseudo-color, and (c) reconstructed color 3D image of the large step height specimen made of gray-color anodized aluminum.

3.3 Color 3D measurement

The system presented here uses the color CMOS camera and the white LED that covers the entire visible spectrum, allowing it to express color information. Color 3D images of the color chart were acquired to verify the capability of the system to express color information. Figure 9(a) shows a photograph of the color chart, which contains red, green and blue squares in the left column and dark gray, light gray and white squares in the right column. The reconstructed color 3D images of the black and orange dashed box regions in Fig. 9(a) are shown in Figs. 9(b) and 9(c), respectively. The acquired colors are able to express the red, green, and blue (RGB) color information of the color chart well. The colors in Fig. 9(a) are slightly different from the colors in Figs. 9(b) and 9(c), as the white-light illumination and the white balance settings differed between the photograph and the images taken with the proposed technology.

Fig. 9. (a) A photograph of the color chart and the reconstructed color 3D images acquired by the presented system in the regions of (b) the black dashed box and (c) the orange dashed box.

To verify the feasibility of applying the presented technique in various fields that need non-contact, rapid 3D measurement, we acquired color 3D images of a dental plaster model made of gypsum, a doll, and an electronic circuit board. A photograph and the reconstructed color 3D image of the indicated area (red box in Fig. 10(a)) of the dental plaster model are shown in Figs. 10(a) and 10(b), respectively. The complicated surface shape of the molded molar was clearly visualized. A photograph and the reconstructed color 3D image of the indicated area (blue box in Fig. 10(c)) of the doll are shown in Figs. 10(c) and 10(d), respectively. The detailed shape of the surface and the color information are well presented. A photograph and the reconstructed color 3D image of the indicated area (orange box in Fig. 10(e)) of the electronic circuit board are shown in Figs. 10(e) and 10(f), respectively. The small electronic components on the printed circuit board and the surface details were successfully represented with color and height information. Movies of the 3D images of the specimens at different viewing angles are shown in Visualizations 1 to 3.

Fig. 10. (a) A photograph and (b) reconstructed 3D image of molded dental plaster (See Visualization 1). (c) A photograph and (d) reconstructed 3D image of doll (See Visualization 2). (e) A photograph and (f) reconstructed 3D image of an electronic circuit board (See Visualization 3).

4. Conclusion and discussion

In summary, we demonstrated a high-speed color 3D measurement system based on parallel confocal detection with an FTL. The parallel confocal detection method with the FTL enables scanner-less full 3D volumetric imaging. The system presented here covers a 3D volume (200 × 140 × 180 discrete voxels) in the form of a truncated pyramid with a top area of 16.1 × 11.3 mm, a bottom area of 18.6 × 13.0 mm, and a height of 17.0 mm. It has a depth resolution of 9.37 µm and a lateral resolution of 93.0 to 80.5 µm at the farthest and nearest focal planes, respectively. We acquired color 3D images of various specimens, such as the molded dental plaster, the doll, and the electronic circuit board, demonstrating the feasibility of non-contact rapid 3D reconstruction for various applications.

In fact, there are many other color 3D measurement methods, such as color OCT, imaging topological radar (ITR), and structured light illumination. Yu et al. [35] reported a full-color 3D wide-field OCT that produces tomographic 3D images using a color charge-coupled device (CCD) camera and red, green, and blue LEDs. It takes a few seconds to acquire the 3D volume, which depends on the axial height of the sample (145 to 980 µm thick) and its axial sampling interval (5 to 10 µm) with the CCD camera at a frame rate of 30 frames/sec. The authors state that the color OCT may be useful for physiological and pathological applications given its ability to monitor tissue structures in natural colors. Ceccarelli et al. [36] reported an RGB-ITR system that acquires color 3D images using a laser scanner and three amplitude-modulated lasers in the visible spectrum. Although this technique requires long acquisition times ranging from a minimum of one hour to a maximum of one week, the RGB-ITR system is suitable for the 3D modeling of medium/large artistic and cultural heritage items, such as chapels and sarcophagi. Lou et al. [37] reported 3D hyperspectral imaging based on structured light illumination and optical triangulation. It uses a pattern projector for structured light illumination and a hyperspectral camera with a liquid-crystal tunable filter that captures 30 wavelengths covering the entire visible spectrum. 3D hyperspectral images of the human face were acquired. This system is applicable to dermatology and cosmetology as it can provide optical skin analyses of oxygen, blood, and melanin in 3D, with an acquisition time of less than 5 seconds. While different color 3D imaging techniques have their own pros and cons, our system has several unique advantages. First, based on the direct-view confocal microscopy with the pinhole array and the FTL, the proposed technique acquires a color 3D image within 1 second without any mechanical scanning. 
Second, the optical system is small and compact enough to be developed into a hand-held device. Finally, our system can express the color information of an object by capturing full-field images with a color CMOS camera under white LED illumination.

The measured FWHM of the axial response of the proposed system was 1.18 mm, about nine times larger than that of our previous work [27]. The main reason for this broadening is the difference in the effective numerical aperture (NA) between the two systems: the FWHM of the axial response in confocal microscopy is inversely proportional to the square of the NA [31]. While the NA of the previous system was 0.1, the NA of the current system is 0.03 for illumination and 0.05 for imaging, as the current system has a longer focal length and a wider FOV. This NA difference accounts for a nearly sevenfold difference in the FWHM. In addition, the broadband white LED used in this study may induce a chromatic focal shift, whereas the monochromatic LED used in the previous study caused only a minimal one. In confocal profilometry, the depth resolution is affected by many factors, including the width of the axial response, the SNR, the axial step size, and the peak detection algorithm used [27,32]. Indeed, despite the much broader axial response, the depth resolution of the current system is similar to that of the previous system, owing to the improved SNR and the improved peak detection algorithm.
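A quick sanity check of the sevenfold figure is possible from the NA values quoted above, assuming the FWHM scales as 1/NA² and taking the effective NA as the geometric mean of the illumination and imaging NAs (the latter is our assumption for illustration, not a statement from the text):

```python
import math

# NA values quoted in the discussion above
na_prev = 0.1                  # previous system
na_ill, na_img = 0.03, 0.05    # current system: illumination / imaging

# Assumption: effective NA as the geometric mean of the two NAs
na_eff = math.sqrt(na_ill * na_img)        # ~0.0387

# FWHM of the confocal axial response scales as 1/NA^2
broadening = (na_prev / na_eff) ** 2       # ~6.7, i.e. nearly sevenfold
print(f"effective NA = {na_eff:.4f}, FWHM broadening = {broadening:.1f}x")
```

Under these assumptions the NA difference alone predicts a ~6.7-fold broadening, consistent with the "nearly sevenfold" figure; the remaining gap to the observed ninefold increase is plausibly the chromatic focal shift of the white LED mentioned above.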

This technique poses some challenges to acquiring accurate 3D measurements. A major challenge is data loss due to specular reflection or shaded regions. To acquire accurate 3D height information as well as undistorted color information, the intensity at each point over the FOV must stay within a particular range so that a suitable axial response is obtained. In the case of specular reflection, saturated data leads to an incorrect height estimate, and in shaded regions, detector noise dominates the weak signal; in both cases, the height determination algorithm cannot be applied reliably. Thus, we excluded these points from the height determination algorithm. For 3D visualization, we interpolated both the height and the color of these undetermined points from neighboring points, which inevitably induced distortion in the 3D color image. A high dynamic range (HDR) imaging technique is one solution for relieving the problems associated with specular reflection. HDR imaging can be achieved in our system by modulating the intensity of the light source: by dynamically controlling its luminosity, both bright areas with specular reflection and dark areas with shading can be brought within a suitable intensity range.
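The exclude-then-interpolate step described above can be sketched as follows. This is an illustrative example, not the authors' code; the threshold values and the 3×3 neighborhood are arbitrary assumptions:

```python
import numpy as np

def fill_invalid_heights(height, peak, sat_level=0.98, noise_floor=0.05):
    """Mask pixels whose peak signal is saturated (specular) or below the
    noise floor (shaded), then replace their heights with the mean of the
    valid 3x3 neighbors. One pass; larger holes would need iteration."""
    valid = (peak < sat_level) & (peak > noise_floor)
    filled = height.copy()
    h, w = height.shape
    for y, x in zip(*np.nonzero(~valid)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        nb = valid[y0:y1, x0:x1]
        if nb.any():
            filled[y, x] = height[y0:y1, x0:x1][nb].mean()
    return filled

# Toy example: one saturated (specular) pixel on a tilted plane
height = np.tile(np.arange(5.0), (5, 1))   # heights 0..4 along x
peak = np.full((5, 5), 0.5)
peak[2, 2] = 1.0                           # saturated point
out = fill_invalid_heights(height, peak)   # out[2, 2] filled from neighbors
```

As noted in the text, such interpolation is only an approximation of the true surface, which is why HDR acquisition is the preferred remedy.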

Our system has different FOVs along the focal plane because it was designed with a non-telecentric configuration. In a telecentric configuration, where the magnification is constant regardless of the focal position, the aperture of the imaging optics must be larger than the FOV. However, because the FOV is large relative to the optics, it was difficult to design the optical system to be telecentric. Since the FOV changes with the focal plane, post-processing is required to compensate for these changes for accurate 3D measurements.
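Using the truncated-pyramid dimensions from the summary above, the compensation amounts to rescaling lateral coordinates per focal plane. A minimal sketch, assuming the FOV grows linearly with depth (the function name and the linear model are our illustrative assumptions):

```python
# Geometry quoted in the summary: near (top) and far (bottom) focal planes
FOV_NEAR = (16.1, 11.3)   # lateral FOV in mm at the nearest focal plane
FOV_FAR = (18.6, 13.0)    # lateral FOV in mm at the farthest focal plane
HEIGHT = 17.0             # axial range in mm
NPIX = (200, 140)         # lateral sampling points

def pixel_to_mm(ix, iy, z):
    """Map sample index (ix, iy) at depth z (0 = near, HEIGHT = far) to
    lateral coordinates in mm, centered on the optical axis, assuming the
    FOV interpolates linearly between the two planes."""
    t = z / HEIGHT
    fov_x = FOV_NEAR[0] + t * (FOV_FAR[0] - FOV_NEAR[0])
    fov_y = FOV_NEAR[1] + t * (FOV_FAR[1] - FOV_NEAR[1])
    x = (ix / (NPIX[0] - 1) - 0.5) * fov_x
    y = (iy / (NPIX[1] - 1) - 0.5) * fov_y
    return x, y

# The same edge sample maps to different physical positions at each depth:
x_near, _ = pixel_to_mm(199, 0, 0.0)    # about +8.05 mm
x_far, _ = pixel_to_mm(199, 0, 17.0)    # about +9.30 mm
```

Without this rescaling, the stacked 2D slices would be misregistered by up to ~1.25 mm at the edges of the volume.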

In direct-view confocal microscopy, the pinhole array forms multiple foci, and the area detector acquires images of the multiple foci simultaneously. While the axial positions of the object can be obtained at as many points as there are pinholes in the array, any object features in the blank areas between the pinholes will be missed, as only the object information on the pinhole array pattern is sampled. This is an inevitable limitation of direct-view confocal microscopy with a pinhole array. In most cases, when there is no abrupt change in the object, the information in the blank areas can be restored by interpolating the information from adjacent pinholes. However, features smaller than the pinhole pitch will be lost, and aliasing may occur. Therefore, care should be taken when applying this imaging technique. This problem can potentially be solved by laterally shifting the pinhole array by half a pitch to cover the blank areas and reacquiring the image, effectively reducing the sampling interval. This can be done using piezoelectric actuators or voice-coil motors.
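The half-pitch reacquisition idea can be sketched as interleaving two height maps sampled on grids offset by half the pinhole pitch. This is a conceptual illustration under the assumption of a diagonal half-pitch shift; the remaining grid sites would still need interpolation or further shifts:

```python
import numpy as np

def interleave(shot_a, shot_b):
    """shot_a: heights sampled at the pinhole positions; shot_b: heights
    after a diagonal half-pitch shift of the array. Returns a map with
    twice the sampling density, unsampled sites left as NaN."""
    h, w = shot_a.shape
    out = np.full((2 * h, 2 * w), np.nan)
    out[0::2, 0::2] = shot_a   # original grid
    out[1::2, 1::2] = shot_b   # shifted grid fills the diagonal gaps
    return out

# Toy example: two 2x2 acquisitions combined into a 4x4 checkerboard
a = np.zeros((2, 2))
b = np.ones((2, 2))
combined = interleave(a, b)
```

Each extra shifted acquisition trades speed for sampling density, which is why the text frames this as an optional extension using piezoelectric actuators or voice-coil motors.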

Because the acquisition speed of the proposed parallel confocal detection with the FTL is limited only by the frame rate of the camera and the power of the illumination, real-time 3D measurement may be achieved with a higher-speed area camera and a high-power white LED. We anticipate that this high-speed non-contact color 3D measurement technology will be applicable in various fields where accurate and fast 3D measurements are necessary. Potential biomedical applications include 3D dental scanning with intraoral scanners [27,38,39], bone scanning for prosthetics [40], plastic surgery [41], and face and skin restoration procedures for patients with damaged skin [41–43]. In addition, owing to its non-destructive color 3D measurement capability, the proposed technique can be useful for inspecting various products, such as electronic boards [43–45]. It may also be used during restoration projects involving works of art, such as paintings, sculptures, and other cultural assets [43,46], and as a 3D scanning tool for 3D printing, where digitizing physical objects is often required.

Funding

National Research Foundation of Korea (NRF-2017R1E1A1A01074822).

References

1. M. Gu, Principles of Three-Dimensional Imaging in Confocal Microscopes (World Scientific, 1996).

2. H. Leeghim, M. Ahn, and K. Kim, “Novel approach to optical profiler with gradient focal point methods,” Opt. Express 20(21), 23061–23073 (2012). [CrossRef]  

3. J. A. Conchello and J. W. Lichtman, “Optical sectioning microscopy,” Nat. Methods 2(12), 920–931 (2005). [CrossRef]  

4. P. Furrer and R. Gurny, “Recent advances in confocal microscopy for studying drug delivery to the eye: concepts and pharmaceutical applications,” Eur. J. Pharm. Biopharm. 74(1), 33–40 (2010). [CrossRef]  

5. J. H. Nurre, S. Cha, P. C. Lin, L. Zhu, E. L. Botvinick, P. C. Sun, Y. Fainman, and B. D. Corner, “3D profilometry using a dynamically configurable confocal microscope,” in Three-Dimensional Image Capture and Applications II, (San Jose, CA, United States, 1999), pp. 246–253.

6. E. Uhlmann, D. Oberschmidt, and G. Kunath-Fandrei, “3D-analysis of microstructures with confocal laser scanning microscopy,” in Machines and Processes for Micro-scale and Meso-scale Fabrication, Metrology and Assembly, (American Society for Precision Engineering, 2003), pp. 93–97.

7. W. Kaplonek and K. Nadolny, “Advanced 3D laser microscopy for measurements and analysis of vitrified bonded abrasive tools,” J. Eng. Sci. Technol. 7(6), 661–678 (2012).

8. H.-J. Jordan, M. Wegner, and H. Tiziani, “Highly accurate non-contact characterization of engineering surfaces using confocal microscopy,” Meas. Sci. Technol. 9(7), 1142–1151 (1998). [CrossRef]  

9. E. Merson, A. V. Kudrya, V. A. Trachenko, D. Merson, V. Danilov, and A. Vinogradov, “The Use of Confocal Laser Scanning Microscopy for the 3D Quantitative Characterization of Fracture Surfaces and Cleavage Facets,” in 21st European Conference on Fracture, (2016), pp. 533–540.

10. C. M. Belcher, S. W. Punyasena, and M. Sivaguru, “Novel application of confocal laser scanning microscopy and 3D volume rendering toward improving the resolution of the fossil record of charcoal,” PLoS One 8(8), e72265 (2013). [CrossRef]  

11. D. K. Hamilton and T. Wilson, “Three-dimensional surface measurement using the confocal scanning microscope,” Appl. Phys. B: Photophys. Laser Chem. 27(4), 211–213 (1982). [CrossRef]  

12. P. Ye, J. L. Paredes, Y. Wu, C. Chen, G. R. Arce, and D. W. Prather, “Compressive Confocal Microscopy: 3D Reconstruction Algorithms,” Proc. SPIE 7210, 72100G (2009). [CrossRef]  

13. K. Carlsson and N. Åslund, “Confocal imaging for 3-D digital microscopy,” Appl. Opt. 26(16), 3232–3238 (1987). [CrossRef]  

14. K. Carlsson, P. E. Danielsson, R. Lenz, A. Liljeborg, L. Majlöf, and N. Åslund, “Three-dimensional microscopy using a confocal laser scanning,” Opt. Lett. 10(2), 53–55 (1985). [CrossRef]  

15. H. Yoo, S. Lee, D. Kang, T. Kim, D. Gweon, S. Lee, and K. Kim, “Confocal Scanning Microscopy: a High-Resolution Nondestructive Surface Profiler,” Int. J. Precis. Eng. Man. 7, 3–7 (2006).

16. B. S. Chun, K. Kim, and D. Gweon, “Three-dimensional surface profile measurement using a beam scanning chromatic confocal microscope,” Rev. Sci. Instrum. 80(7), 073706 (2009). [CrossRef]  

17. J.-A. Conchello and J. W. Lichtman, “Theoretical analysis of a rotating-disk partially confocal scanning microscope,” Appl. Opt. 33(4), 585–596 (1994). [CrossRef]  

18. T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41(22), 4704–4708 (2002). [CrossRef]  

19. S. Stehbens, H. Pemble, L. Murrow, and T. Wittmann, “Imaging intracellular protein dynamics by spinning disk confocal microscopy,” Methods Enzymol. 504, 293–313 (2012). [CrossRef]  

20. D. T. Fewer, S. J. Hewlett, E. M. McCabe, and J. Hegarty, “Direct-view microscopy: experimental investigation of the dependence of the optical sectioning characteristics on pinhole-array configuration,” J. Microsc. 187(1), 54–61 (1997). [CrossRef]  

21. M. Ishihara and H. Sasaki, “High-speed surface measurement using a non-scanning multiple-beam confocal microscope,” Opt. Eng. 38(6), 1035–1040 (1999). [CrossRef]  

22. L. Deck and P. de Groot, “High-speed noncontact profiler based on scanning white-light interferometry,” Appl. Opt. 33(31), 7334–7338 (1994). [CrossRef]  

23. P. Pokorny and A. Miks, “3D optical two-mirror scanner with focus-tunable lens,” Appl. Opt. 54(22), 6955–6960 (2015). [CrossRef]  

24. B. Javidi, J.-Y. Son, A. Doblas, E. Sánchez-Ortiga, G. Saavedra, J. Sola-Pikabea, M. Martínez-Corral, P.-Y. Hsieh, and Y.-P. Huang, “Three-dimensional microscopy through liquid-lens axial scanning,” Proc. SPIE 9495, 949503 (2015). [CrossRef]  

25. M. Martínez-Corral, P. Hsieh, A. Doblas, E. Sánchez-Ortiga, G. Saavedra, and Y. Huang, “Fast Axial-Scanning Widefield Microscopy With Constant Magnification and Resolution,” J. Disp. Technol. 11(11), 913–920 (2015). [CrossRef]  

26. Optotune, “Fast electrically tunable lens EL-10-30 series”, retrieved http://www.optotune.com/.

27. H. J. Jeong, H. Yoo, and D. Gweon, “High-speed 3-D measurement with a large field of view based on direct-view confocal microscope with an electrically tunable lens,” Opt. Express 24(4), 3806–3816 (2016). [CrossRef]  

28. Y. Zhang, A. Roshan, S. Jabari, S. A. Khiabani, F. Fathollahi, and R. K. Mishra, “Understanding the Quality of Pansharpening - A lab study,” Photogramm. Eng. Remote Sens. 82(10), 747–755 (2016). [CrossRef]  

29. Scientific-Volume-Imaging, “Signal-to-Noise ratio”, retrieved https://svi.nl/SignalToNoiseRatio.

30. E. Sánchez-Ortiga, C. J. R. Sheppard, G. Saavedra, M. Martínez-Corral, A. Doblas, and A. Calatayud, “Subtractive imaging in confocal scanning microscopy using a CCD camera as a detector,” Opt. Lett. 37(7), 1280–1282 (2012). [CrossRef]  

31. C. Sheppard and D. Shotton, Confocal Laser Scanning Microscopy (BIOS Scientific Publishers, 1997).

32. S. Cha, P. C. Lin, L. Zhu, P.-C. Sun, and Y. Fainman, “Nontranslational three-dimensional profilometry by chromatic confocal microscopy with dynamically configurable micromirror scanning,” Appl. Opt. 39(16), 2605–2613 (2000). [CrossRef]  

33. S. Lawman and H. Liang, “High precision dynamic multi-interface profilometry with optical coherence tomography,” Appl. Opt. 50(32), 6039–6048 (2011). [CrossRef]  

34. T. Wilson, “Resolution and optical sectioning in the confocal microscope,” J. Microsc. 244(2), 113–121 (2011). [CrossRef]  

35. L. Yu and M. K. Kim, “Full-color three-dimensional microscopy by wide-field optical coherence tomography,” Opt. Express 12(26), 6632–6641 (2004). [CrossRef]  

36. S. Ceccarelli, M. Guarneri, M. F. de Collibus, M. Francucci, M. Ciaffi, and A. Danielis, “Laser Scanners for High-Quality 3D and IR Imaging in Cultural Heritage Monitoring and Documentation,” J. Imaging 4(11), 130 (2018). [CrossRef]  

37. L. Gevaux, C. Adnet, P. Séroul, R. Clerc, A. Trémeau, J. L. Perrot, and M. Hébert, “Three-dimensional hyperspectral imaging: a new method for human face acquisition,” Electronic Imaging 2018(8), 152 (2018). [CrossRef]  

38. P. Hong-Seok and S. Chintal, “Development of High Speed and High Accuracy 3D Dental Intra Oral Scanner,” Procedia Eng. 100, 1174–1181 (2015). [CrossRef]  

39. S. Ting-shu and S. Jian, “Intraoral Digital Impression Technique: A Review,” J. Prosthodont. 24(4), 313–321 (2015). [CrossRef]  

40. E. Stindel, J. L. Briard, P. Merloz, S. Plaweski, F. Dubrana, C. Lefevre, and J. Troccaz, “Bone Morphing: 3D Morphological Data for Total Knee Arthroplasty,” J. Prosthodontics 7(3), 156–168 (2002). [CrossRef]  

41. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

42. U. Jacobi, M. Chen, G. Frankowski, R. Sinkgraven, M. Hund, B. Rzany, W. Sterry, and J. Lademann, “In vivo determination of skin surface topography using an optical 3D device,” Skin Res. Technol. 10(4), 207–214 (2004). [CrossRef]  

43. G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation,” Sensors 9(1), 568–601 (2009). [CrossRef]  

44. Z. Zhang and C. Li, “Defect Inspection for Curved Surface with Highly Specular Reflection,” in Integrated Imaging and Vision Techniques for Industrial Inspection: Advances and Applications, Z. Liu, H. Ukida, P. Ramuhalli, and K. Niel, eds. (Springer, 2015), pp. 251–317.

45. M. Munaro, E. W. Y. So, S. Tonello, and E. Menegatti, “Efficient Completeness Inspection Using Real-Time 3D Color Reconstruction with a Dual-Laser Triangulation System,” in Integrated Imaging and Vision Techniques for Industrial Inspection: Advances and Applications, Z. Liu, H. Ukida, P. Ramuhalli, and K. Niel, eds. (Springer, 2015), pp. 201–225.

46. L. Cournoyer, M. Rioux, F. Blais, J. B. Taylor, M. Picard, C. Lahanier, J. A. Beraldin, G. Godin, and L. Borgeat, “Ultra high-resolution 3D laser color imaging of paintings: The Mona Lisa by Leonardo da Vinci,” in 7th International Conference on Lasers in the Conservation of Artworks, (Taylor & Francis Group, 2007), 435–440.

Supplementary Material (3)

Visualization 1: The 3D movie of the 3D image (Fig. 10(b))
Visualization 2: The 3D movie of the 3D image (Fig. 10(d))
Visualization 3: The 3D movie of the 3D image (Fig. 10(f))

Figures (10)

Fig. 1.
Fig. 1. Schematic diagram of direct-view confocal microscopy using (a) a common pinhole array for illumination and detection, and (b) two separate pinhole arrays for illumination and detection.
Fig. 2.
Fig. 2. A schematic diagram of the proposed high-speed color 3D measurement system with a focus tunable lens. WLS: white light source, CL: condenser lens, PA: pinhole array, PBS: polarizing beam splitter, FTLA: focus tunable lens assembly.
Fig. 3.
Fig. 3. A flow-chart of the 3D reconstruction algorithm.
Fig. 4.
Fig. 4. Images of the projected pinhole pattern on the surface of a doll at two different focal lengths, (a) and (b). Red boxes show the magnified view of the in-focus image, and blue boxes show the magnified view of the out-of-focus image.
Fig. 5.
Fig. 5. (a) In-focus and (b) out-of-focus images of the flat gray card and (c) corresponding axial response of intensity. Cross-correlation maps of (d) in-focus image and (e) out-of-focus image with a pinhole-shaped kernel. (f) Axial response of cross-correlation acquired at the center pixel of the flat gray card.
Fig. 6.
Fig. 6. The axial response of the system acquired at the center of the area detector. (a) Axial response of the 2D cross-correlation, and (b) Gaussian filtered axial response along the axial direction.
Fig. 7.
Fig. 7. (a) A photograph and (b) schematic diagram of the specimen with small engraved bars. (c) 3D imaging result of fabricated specimen, and (d) cross-sectional profile.
Fig. 8.
Fig. 8. (a) A photograph, (b) height map in pseudo-color, and (c) reconstructed color 3D image of the large step height specimen made of gray-color anodized aluminum.
Fig. 9.
Fig. 9. (a) A photograph of the color chart and the reconstructed color 3D images acquired by the presented system in the regions of (b) the black dashed box and (c) the orange dashed box.
Fig. 10.
Fig. 10. (a) A photograph and (b) reconstructed 3D image of molded dental plaster (See Visualization 1). (c) A photograph and (d) reconstructed 3D image of doll (See Visualization 2). (e) A photograph and (f) reconstructed 3D image of an electronic circuit board (See Visualization 3).