
Color three-dimensional imaging based on patterned illumination using a negative pinhole array

Open Access

Abstract

Reflectance confocal microscopy is widely used for non-destructive optical three-dimensional (3D) imaging. In confocal microscopy, a stack of sequential two-dimensional (2D) images with respect to the axial position is typically needed to reconstruct a 3D image. As a result, in conventional confocal microscopy, acquisition speed is often limited by the rate of mechanical scanning in both the transverse and axial directions. We previously reported a high-speed parallel confocal detection method using a pinhole array for color 3D imaging without any mechanical scanners. Here, we report a high-speed color 3D imaging method based on patterned illumination employing a negative pinhole array, whose optical characteristics are the reverse of the conventional pinhole array for transmitting light. The negative pinhole array solves the inherent limitation of a conventional pinhole array, i.e., low transmittance, meaning brighter color images with abundant color information can be acquired. We also propose a 3D image processing algorithm based on the 2D cross-correlation between the acquired image and filtering masks, to produce an axial response. By using four different filtering masks, we were able to increase the number of sampling points used in the height calculation and enhance the lateral resolution of the color acquisition by a factor of four. The feasibility of high-speed non-contact color 3D measurement with the improved lateral resolution and brightness provided by the negative pinhole array was demonstrated by imaging various specimens. We anticipate that this high-speed color 3D measurement technology with a negative pinhole array will be a useful tool in a variety of fields where rapid and accurate non-contact measurements are required, such as industrial inspection and dental scanning.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Reflectance confocal microscopy (RCM) has been widely used for non-destructive optical three-dimensional (3D) imaging. RCM uses a confocal pinhole to provide non-contact optical sectioning capability with high spatial resolution [1–3], and is used where precise surface measurements are required, such as in industrial and biomedical fields [1,4–9]. Based on confocal detection principles, it allows accurate surface profiles to be reconstructed without damaging the object.

In confocal microscopy, a stack of sequential two-dimensional (2D) images with respect to the axial position is typically needed to reconstruct a 3D image [10–12]. However, with a conventional confocal microscopy setup, the rate of mechanical scanning in both the transverse and the axial directions limits overall acquisition speed. A resonant scanner and polygon scanner can provide high-speed acquisition of 2D images of up to 200 frames/sec, but high-speed acquisition with beam scanning approaches can only achieve a limited field of view (FOV) [13,14]. In addition, the bulky high-speed scanners limit RCM applications, due to space constraints [14].

To overcome the speed limitation when acquiring 2D images, approaches have been considered that can scan multiple points simultaneously, such as spinning-disk confocal microscopy, which uses an array of pinholes arranged in a spiral shape on a rotating disk [15]. By simultaneously capturing multiple foci of the object with an area detector, this approach has a higher acquisition speed than the point scanning methods, although the rotation speed of the disk still limits overall acquisition speed. Another drawback of using a rotating disk is the bulkiness of the system, which is impractical for various applications. Also, vibration from the rotating disk can be a potential problem.

Direct-view confocal microscopy (DVCM) is an alternative method for simultaneously detecting multiple points without mechanical scanners [16–18]. A 2D image can be produced within a single capture of the detector because the FOV in the object plane is entirely covered by the stationary array of pinholes, which are arranged in a rectangular shape, conjugated with the image plane. High-speed imaging can be achieved since DVCM requires no mechanical scanning modules to produce the 2D images.

Jeong et al. proposed a 3D measurement system based on DVCM using a focus tunable lens (FTL) with a deformable surface driven by electrical current to control the focal power of the lens [19,20]. The FTL can boost the speed of full 3D volume acquisition by removing the mechanical axial scanning. However, only the height information could be produced, without color information, because a narrow-band light source was used with a monochrome camera. The lack of color information makes the approach unsuitable for evaluating various color objects.

Recently, we reported a color 3D measurement system based on parallel confocal detection using a white light-emitting diode (LED) and a color complementary metal-oxide semiconductor (CMOS) camera [21]. The pixels of the CMOS camera acted as a virtual pinhole array, thus the parallel confocal detection was successfully achieved in the proposed system. We improved the optical scheme to increase the signal-to-noise ratio (SNR), by reducing the strong internal back reflection. Also, new 3D reconstruction algorithms were developed to determine the axial height of each pixel by calculating the cross-correlation value using the shape of the pinhole, instead of directly using the intensity measured at the pinhole position.

Although we successfully demonstrated the reconstruction of a color 3D image of an object, the scheme of the DVCM had inherent limitations, such as low light efficiency and limited lateral resolution caused by the limited number of the sampling points, i.e., the number of pinholes. The pinhole array provided high-speed simultaneous acquisition with optical sectioning, however, high power illumination was required since most of the light is rejected by the pinhole array. Also, the height and color information could only be acquired at the sampling point of each pinhole.

On the other hand, our approach can also be interpreted as a type of patterned illumination, in the sense that illumination patterns are projected onto the sample to determine its surface shape. Patterned or structured illumination is widely used for 3D surface imaging. Structured illumination is an active illumination method that projects structured patterns onto a sample using a projection device such as a spatial light modulator or digital micro-mirror device [22,23]. Typically, structured illumination imaging reconstructs a 3D surface by decoding phase information from recorded images that contain the coded projection pattern. There are various codification methods in structured illumination, such as binary coding [24,25], gray-level pattern coding [26,27], sinusoidal phase encoding [28–32], continuously varying color [33–35], and stripe/grid indexing [36–38]. The 3D surface map can be obtained by decoding/unwrapping the phase map from the recorded image containing the coded phase information. 3D surface measurement using structured illumination has been used in a variety of fields where high speed and high accuracy are required, such as facial imaging, dental imaging, and industrial imaging [22,23].

Here, we propose a novel method for color 3D reconstruction based on patterned illumination using a negative pinhole array, which significantly improves the optical performance of our previous study [21]. The negative pinhole array has the opposite optical characteristics of a conventional pinhole array for transmitting light; that is, light is blocked in the circular area corresponding to the pinhole of a conventional pinhole array, and instead passes through the remaining area, which corresponds to the beam block of the conventional pinhole array. Since the negative pinhole array considerably improves light efficiency, bright 2D images can be captured under the same illumination conditions. As a result, abundant color information can be acquired compared to a conventional pinhole array with the same pinhole diameter and pitch, defined as the distance between the centers of two adjacent pinholes.

The improvement in transmittance also increases the acquisition speed, which is limited by the frame rate of the color camera; when a faster color camera is used, higher 3D volume acquisition rates can be achieved. Alternatively, less illumination power is needed to acquire 3D volumes with the same color camera.

The proposed system’s performance was demonstrated by imaging various samples. Four different masks were used at each pinhole position to calculate the 2D cross-correlation and produce an axial response at the corresponding sampling points. The use of the four different masks increased the number of sampling points for calculating the axial height in both the x- and y- directions, which significantly improved the lateral resolution of the color acquisition by a factor of four. In addition, after applying the algorithms, image mosaicking of the color 3D image was introduced to compensate for the distorted 3D volume, i.e., the truncated pyramid shape of the 3D FOV, based on the affine transform.

2. Methods

2.1 Working principles of the optical system using a negative pinhole array

The basic working principle of the proposed system is based on the scheme of the DVCM [21]. The major advances of this work are the use of a negative pinhole array instead of a conventional pinhole array, as shown in Fig. 1(a), and a novel algorithm for determining the axial height of the sample surface.


Fig. 1. A schematic diagram of (a) color three-dimensional imaging based on patterned illumination which uses a negative pinhole array, (b) a negative pinhole array, and (c) a conventional pinhole array. WLS: white light source, CL: condenser lens, NPA: negative pinhole array, PBS: polarizing beam splitter, SL: scan lens, FTL: focus tunable lens, TL: tube lens, QWP: quarter-wave plate.


The negative pinhole array is used as an illumination pattern which is conjugated with both the object plane and image plane. Figure 1(b) shows a schematic diagram of the negative pinhole array. It has a pattern that is opposite that of a conventional pinhole array for transmitting and blocking light, as shown in Fig. 1(c). The beam is blocked when it passes through the circular areas, indicated by black, in Fig. 1(b), while the beam passes through the remaining area, which is white in Fig. 1(b). Since the illumination pattern is directly projected onto the surface of the object in the proposed system, the area detector captures an image containing the signals from both the surface of the object and the projected pattern of the negative pinhole pattern.

2.2 System design and data acquisition

A custom-built negative pinhole array with 200 × 140 negative pinholes, with 30 µm diameters and a pitch of 55 µm, was fabricated by etching a chrome coating on a soda-lime glass plate. The negative pinhole array served as the illumination pattern array, and was fabricated with an active area of 11 × 7.7 mm and a thickness of 1.6 mm.
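The light-throughput advantage of inverting the pattern can be estimated from these dimensions alone. The following sketch (NumPy; variable names are ours, not from the paper) computes the geometric fill factors of a conventional and a negative pinhole array with a 30 µm pinhole diameter and a 55 µm pitch:

```python
import numpy as np

# Geometric fill factors of one unit cell (pitch x pitch), using the
# dimensions stated in the text: 30 um pinhole diameter, 55 um pitch.
diameter_um = 30.0
pitch_um = 55.0

pinhole_area = np.pi * (diameter_um / 2) ** 2   # open area of one conventional pinhole
cell_area = pitch_um ** 2                       # one unit cell of the array

t_conventional = pinhole_area / cell_area       # conventional array transmits only the pinholes
t_negative = 1.0 - t_conventional               # negative array transmits everything else

print(f"conventional pinhole array: {t_conventional:.1%}")  # 23.4%
print(f"negative pinhole array:     {t_negative:.1%}")      # 76.6%
```

Under this geometry the negative array transmits roughly 77% of the incident light versus roughly 23% for the conventional array, about a 3.3-fold improvement, consistent with the brighter images the text attributes to the negative pinhole array.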

A high-power broadband incoherent light source (MCWHLP1, Thorlabs, Newton, NJ) was used for color illumination. The illuminated beam passed through the negative pinhole array, forming multiple black dots. An image of the negative pinhole array pattern was formed at the focal plane of the optical system on the surface of the object after passing through a polarizing beam splitter (PBS251, Thorlabs, Newton, NJ) and the FTL assembly, which was placed between the polarizing beam splitter and the object. A quarter-wave plate (WPQ10ME-532, Thorlabs, Newton, NJ) was placed in front of the sample to maximize the detection of sample reflection and minimize the effect of internal reflection of the optics.

The FTL was used to achieve axial scanning of the optical system by adjusting the deformable surface, by changing electrical current. The FTL assembly was composed of a scan lens (CLS-SL, Thorlabs, Newton, NJ), an FTL (EL-10-30-C-VIS-LD-LP, Optotune Switzerland AG, Dietikon, CH), and a tube lens (TTL100-A, Thorlabs, Newton, NJ). A high-speed color CMOS camera (acA-2040-180kc, Basler AG, Ahrensburg, DE) captured the illuminated area. The camera acquired in-focus signals from the object plane since it was placed in the conjugate plane of the negative pinhole array and the focal plane of the object.

To acquire the 3D volume, 180 sequential images were captured at a frame rate of 180 frames/sec while the focal plane was linearly scanned over 17 mm with the FTL. The CMOS image sensor recorded a 3D FOV with a truncated pyramidal shape, from 18.6 × 13.0 mm at the farthest focal plane to 16.1 × 11.3 mm at the nearest focal plane in the object space, corresponding to imaging magnifications of 0.59× to 0.68×. The image sensor has the same size as the negative pinhole array, 11 × 7.7 mm, which corresponds to 2000 × 1400 pixels. Since the image sensor and the negative pinhole array are conjugated to each other, the diameter and the pitch of each negative pinhole correspond to 5.5 pixels and 10 pixels, respectively.
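As a quick consistency check of this conjugation geometry (a back-of-the-envelope calculation, not from the paper), the pixel-space pinhole dimensions and the magnification range follow directly from the numbers above:

```python
import numpy as np

# Sanity check of the conjugation geometry reported in the text.
array_width_mm, array_px = 11.0, 2000       # active width of array / sensor width in pixels
px_um = array_width_mm * 1000 / array_px    # effective pixel pitch at the array plane
d_px = 30.0 / px_um                         # pinhole diameter in pixels
pitch_px = 55.0 / px_um                     # pinhole pitch in pixels

# Object-space magnification at the farthest and nearest focal planes
m_far = array_width_mm / 18.6
m_near = array_width_mm / 16.1

print(px_um, d_px, pitch_px)                # 5.5 um/px, ~5.5 px, 10 px
print(round(m_far, 2), round(m_near, 2))    # 0.59, 0.68
```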

After capturing the 2D color images, median filtering with a kernel size of 3 was applied to reduce noise. Sharp patterns from the negative pinhole and the surface of the object are formed on the image plane of the CMOS, shown as red boxes in Fig. 2, while the negative pinhole pattern is not visible and the image is blurred in the out-of-focus plane, shown as blue boxes in Fig. 2.


Fig. 2. Images of the projected negative pinhole pattern on the surface of a molded dental plaster at two different focal lengths, (a) and (b). The red boxes and blue boxes show magnified views of in-focus and out-of-focus regions, respectively.


Although the illumination patterns are clearly projected onto the object surface, the intensity of each pixel may be affected by neighboring negative pinholes, especially when the sample is out of focus. Therefore, not only the intensity along the axial direction but also information about the projected pattern can be utilized to determine the object surface. Since the pattern of the negative pinhole is clearly projected onto the surface of the object when the object is in focus, we can determine the object’s surface by finding the axial height at which the image of the projected pattern matches that of the illumination pattern, which can be used as a filtering mask. Our algorithm for determining the axial height of the object surface is based on calculating the 2D cross-correlation with filtering masks. Use of the 2D cross-correlation values can effectively reduce random noise from the camera and artifacts from defocused neighboring negative pinholes during parallel acquisition.

2.3 3D reconstruction algorithm

Here, we propose a 3D reconstruction algorithm for improving the lateral resolution of the color 3D measurement system using the negative pinhole array. In conventional confocal microscopy, the axial position of the object can be determined by calculating the peak position of the intensity axial response, which is generally defined as the axial distribution of the intensity. To calculate the axial response in our method, we used the 2D cross-correlation values along the axial direction instead of the intensity itself. The 2D cross-correlation values become larger when the acquired image closely matches the filtering mask, and smaller when the acquired image differs from the filtering mask. As a result, in the proposed system we can use the 2D cross-correlation values to determine the sampling points and the axial responses at those sampling points. Multiple axial responses are simultaneously acquired in a single axial scan because our system utilizes the principle of parallel confocal detection, based on DVCM. Using the image processing algorithm described here, 399 × 279 axial responses were simultaneously acquired from the negative pinhole array, which has 200 × 140 negative pinholes.
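The axial response at a single sampling point can be sketched as a normalized 2D cross-correlation between the filtering mask and the image patch around that point, evaluated frame by frame along the axial scan. The following is a minimal NumPy illustration (function names and the coarse peak search are ours; the paper additionally filters the response and refines the peak):

```python
import numpy as np

def axial_response(stack, mask, cx, cy):
    """Normalized 2D cross-correlation between a filtering mask and the
    image patch around a sampling point (cx, cy), evaluated for every
    frame of the axial stack. Hypothetical helper, not the paper's code.

    stack: (n_z, H, W) grayscale images; mask: (h, w) filtering mask.
    Returns n_z correlation values (the axial response at that point)."""
    h, w = mask.shape
    y0, x0 = cy - h // 2, cx - w // 2
    m = mask - mask.mean()
    resp = np.empty(stack.shape[0])
    for k, frame in enumerate(stack):
        patch = frame[y0:y0 + h, x0:x0 + w].astype(float)
        p = patch - patch.mean()
        denom = np.sqrt((p * p).sum() * (m * m).sum()) + 1e-12
        resp[k] = (p * m).sum() / denom
    return resp

def surface_height(resp, z_positions):
    """Coarse height estimate: the z at which the correlation peaks."""
    return z_positions[int(np.argmax(resp))]
```

Repeating `axial_response` over all sampling points in the stack yields the parallel confocal detection described above, with one axial response per sampling point from a single axial scan.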

First, we need to calibrate the negative pinhole array to find the sampling points corresponding to the negative pinhole positions, using a standard flat scattering specimen, i.e., a flat gray card. Figures 3(a) to 3(d) show the binary pattern masks, defined relative to the sampling points of the negative pinhole array, which are used to increase the number of sampling points. We used a filtering mask, defined as the convolution of the proposed binary pattern masks with the point spread function (PSF) of the optical system. The sampling points needed to produce the axial responses were extracted by calculating the 2D cross-correlation values between the filtering mask and the acquired in-focus image of the flat gray card. The determined positions of the sampling points were used for further 3D image processing, because the negative pinhole array is fixed in the imaging system.
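The construction of a filtering mask, i.e., a binary pattern mask convolved with the system PSF, can be sketched as follows. We assume an isotropic Gaussian PSF for illustration; the actual PSF would come from the optical system:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel as a stand-in for the system PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def filtering_mask(binary_pattern, psf):
    """Filtering mask = binary pattern mask convolved with the PSF
    (direct 'same'-size 2D convolution with zero padding)."""
    H, W = binary_pattern.shape
    h, w = psf.shape
    padded = np.pad(binary_pattern.astype(float),
                    ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros((H, W))
    flipped = psf[::-1, ::-1]               # convolution flips the kernel
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i + h, j:j + w] * flipped).sum()
    return out
```

One such mask would be built for each of the four binary patterns of Figs. 3(a)–3(d); correlating them with the in-focus gray-card image yields the sampling-point positions.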


Fig. 3. Schematic diagrams of filtering masks, which extract the center positions of (a) each negative pinhole, (b) the adjacent negative pinholes in the horizontal direction, (c) the adjacent negative pinholes in the vertical direction, and (d) the adjacent negative pinholes in the diagonal direction. (e) The position map extracted using the four filtering masks with the negative pinhole array. (f) The position map extracted from the filtering mask with the conventional pinhole array.


A position map of the sampling points extracted from imaging of the flat gray card with the negative pinhole array is shown in Fig. 3(e). In contrast, the position map of the sampling points extracted from imaging of the flat gray card with a single filtering mask of the conventional pinhole array, which has the same diameter and pitch, is provided in Fig. 3(f). The four different masks were able to extract not only the center position of each negative pinhole (colored orange in Fig. 3(e)), but also the mid-positions between the adjacent negative pinholes in the horizontal (colored red in Fig. 3(e)), vertical (colored green in Fig. 3(e)), and diagonal (colored blue in Fig. 3(e)) directions, respectively. In contrast, the filtering mask of the conventional pinhole array could only extract the center position of each pinhole (colored white in Fig. 3(f)). The four types of filtering masks provide dense sampling points both for calculating the axial height and for acquiring color information. 399 × 279 sampling points were extracted from the 200 × 140 negative pinholes in the proposed system. The sampling points of the negative pinhole array were doubled in both the x- and y- directions compared to those of the conventional pinhole array.

We simulated theoretical image formation to evaluate the axial response of our system using MATLAB (The MathWorks, Inc., MA). The image of the negative pinhole was projected onto the object plane and then imaged by the image sensor. The image acquired at the image sensor can be modeled as the convolution of the negative pinhole array image with the Gaussian PSF of the imaging system. We acquired 180 2D images while changing the focal length over 17 mm. The axial responses can be calculated by repeating the simulation at each sampling point while changing the focal plane.

Figure 4(a) shows the theoretical axial response of intensity acquired at the sampling points matched with the center positions of the negative pinholes. The axial response has an inverted shape because a sharp image of the negative pinhole is formed, making the surface of the in-focus object darker. Thus, the minimum intensity position corresponds to the sample surface. Then, we calculated the 2D cross-correlation values along the axial direction for each sampling point. We acquired the same axial responses regardless of the type of sampling point, as shown in Fig. 4(b). The peak position in the axial response of the 2D cross-correlation corresponds to the axial position of the object, because the value of the 2D cross-correlation reaches its maximum when the object is in focus.


Fig. 4. The axial responses of the image formation simulation, with the negative pinhole acquired at the center area of the image sensor. (a) An axial response of the intensity acquired at the center position of the negative pinhole, (b) an axial response of 2D cross-correlation, and (c) a filtered axial response.


Then, we applied filtering to the axial response of the 2D cross-correlation (Fig. 4(b)) to reduce random noise and sharpen the peak in the axial direction, using a 1D Gaussian-shaped kernel. This filtering kernel is a Gaussian function with the same full width at half maximum (FWHM) as the axial response of the 2D cross-correlation, with a negative offset so that the sum of the kernel values is zero. The axial response filtered by the 1D Gaussian-shaped kernel is shown in Fig. 4(c). Finally, the height of the object, defined by the peak axial position of the filtered axial response, can be accurately calculated using a quadratic regression of the data points within the FWHM of the filtered axial response, which has a higher SNR than the axial response of intensity.
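The filtering and peak-refinement steps can be sketched as follows. The zero-sum Gaussian kernel follows the description above; for the sub-sample peak we use a simple three-point parabola rather than the paper's quadratic regression over the full FWHM (a simplification), and all function names are ours:

```python
import numpy as np

def zero_sum_gaussian(fwhm, length):
    """1D Gaussian kernel with the given FWHM (in samples), offset so
    that the kernel values sum to zero, as described in the text."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    x = np.arange(length) - length // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g - g.mean()                     # negative offset -> zero sum

def subsample_peak(resp, z):
    """Three-point parabola around the discrete peak (a simplification
    of the quadratic regression in the text). Assumes uniform z spacing."""
    i = int(np.argmax(resp))
    i = min(max(i, 1), len(resp) - 2)       # keep the 3-point stencil in range
    a, b, c = resp[i - 1], resp[i], resp[i + 1]
    denom = a - 2 * b + c
    delta = 0.0 if denom == 0 else 0.5 * (a - c) / denom
    return z[i] + delta * (z[1] - z[0])

def filtered_height(resp, z, fwhm_samples=9):
    """Filter the axial response with the zero-sum kernel, then refine."""
    k = zero_sum_gaussian(fwhm_samples, 4 * fwhm_samples + 1)
    filtered = np.convolve(resp, k, mode="same")
    return subsample_peak(filtered, z)
```

Because the kernel sums to zero, a constant background contributes nothing after filtering, which is what suppresses slowly varying noise while sharpening the peak.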

Heights calculated from abnormal axial responses, i.e., those having multiple peaks, too low a peak intensity, or too large an FWHM, were excluded and then interpolated using the adjacent successfully calculated heights. A height map of 399 × 279 points was created, which had double the information in both the x- and y- directions from the 200 × 140 negative pinholes. A color map, defined by the set of RGB color values obtained from each sampling point, was created using the in-focus color information at each sampling point, by averaging the neighboring 3 × 3 pixels, and then overlaid onto the height map. Any distortion in the height map induced by aberrations in the optical system can be compensated through a calibration process based on the reference height map measured from the flat gray card image.
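The rejection-and-interpolation step can be illustrated with a simple neighbor-averaging fill. This is a stand-in for whatever interpolation the authors used; the validity test for abnormal responses is assumed to have been applied already:

```python
import numpy as np

def clean_height_map(heights, valid):
    """Replace heights flagged invalid (abnormal axial responses) with
    the mean of their valid 4-neighbors; pixels with no valid neighbor
    stay NaN. A simple stand-in for the interpolation in the text."""
    out = heights.astype(float).copy()
    out[~valid] = np.nan
    filled = out.copy()
    H, W = out.shape
    for i, j in zip(*np.where(~valid)):
        neigh = [out[i + di, j + dj]
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < H and 0 <= j + dj < W
                 and valid[i + di, j + dj]]
        if neigh:
            filled[i, j] = float(np.mean(neigh))
    return filled
```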

2.4 FOV compensation

The FOVs varied linearly as the focal power of the FTL was changed over the axial scanning range, since the proposed system has different magnifications with respect to focal length. Accordingly, to express physical dimensions accurately, the distortion in the acquired 3D volume induced by the different magnifications has to be compensated. We assumed that the different magnifications of the 2D images could be corrected by scaling and translation in the lateral directions, without considering rotation. We can then compensate for the FOV distortion by matching the information in physical space to the voxel space with one-to-one correspondence, using image registration based on the affine transform.

Both the height and color information of the acquired 3D image can be mapped in the lateral directions using a matrix calculation with the same transform matrix, converting voxel information to physical information. Since the FOV relates to the focal length of the system and the focal lengths change linearly, we can calculate the transform matrix by registering two 3D images of reference patterns at different focal lengths, i.e., axial positions. Treating the height maps and color maps as matrices, each map expressed in pixels was converted to a map expressed in units of mm by the transform matrix. The coordinates (x, y) in the voxel space were then converted to the coordinates (xmm, ymm) in physical space. The relationship between the lateral coordinates can be expressed as:

$$\left( {\begin{array}{c} {x_{mm}}\\ {y_{mm}}\\ 1 \end{array}} \right) = \left( {\begin{array}{ccc} {S_x(z)}&0&{T_x(z)}\\ 0&{S_y(z)}&{T_y(z)}\\ 0&0&1 \end{array}} \right)\left( {\begin{array}{c} x\\ y\\ 1 \end{array}} \right),$$
where Sx(z) and Sy(z) are the scaling factors, and Tx(z) and Ty(z) are the translation factors related to the x- and y- directions, respectively.

The scaling factors and the translation factors are linear functions of the axial height of the object, z, since the magnification varies linearly with the focal length, and can be expressed as:

$$\begin{array}{l} {S_x}(z) = {m_1}z + {m_2},\\ {S_y}(z) = {m_3}z + {m_4},\\ {T_x}(z) = {m_5}z + {m_6},\\ {T_y}(z) = {m_7}z + {m_8}, \end{array}$$
where z is the measured axial height of the object at the coordinates (x, y).

The transform matrix, M, can be obtained by simplifying Eq. (1) using Eq. (2) as:

$$\left( {\begin{array}{c} {x_{mm}}\\ {y_{mm}} \end{array}} \right) = M\left( {\begin{array}{c} x\\ y\\ z\\ {zx}\\ {yz}\\ 1 \end{array}} \right),$$
where
$$M = \left( {\begin{array}{cccccc} {m_2}&0&{m_5}&{m_1}&0&{m_6}\\ 0&{m_4}&{m_7}&0&{m_3}&{m_8} \end{array}} \right).$$

We can calculate the transform matrix by selecting the coordinates (x, y, z) of four arbitrary points in the height map and the corresponding physical coordinates in the two height maps at different focal lengths. Once the transform matrix is found from the height map, the lateral coordinates of the color map are registered using the matrix operation with the same transform matrix. After compensating for the difference in 3D FOV, we can mosaic the 3D images of a larger object to generate a larger volume.
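Given at least four point correspondences between voxel and physical coordinates, the eight parameters m1 through m8 of Eq. (2) can be solved row by row by exploiting the zero pattern of M in Eq. (4). A NumPy sketch (function names are ours):

```python
import numpy as np

def fit_transform(voxel_pts, physical_pts):
    """Solve for the 2x6 matrix M of Eq. (3) from >= 4 point
    correspondences, using the zero pattern of Eq. (4) so each row has
    only four unknowns (exactly determined by four points).

    voxel_pts: (N, 3) array of (x, y, z); physical_pts: (N, 2) array
    of (x_mm, y_mm)."""
    x, y, z = voxel_pts.T
    ones = np.ones_like(x)
    Ax = np.column_stack([x, z, z * x, ones])   # x_mm = m2 x + m5 z + m1 zx + m6
    Ay = np.column_stack([y, z, y * z, ones])   # y_mm = m4 y + m7 z + m3 yz + m8
    m2, m5, m1, m6 = np.linalg.lstsq(Ax, physical_pts[:, 0], rcond=None)[0]
    m4, m7, m3, m8 = np.linalg.lstsq(Ay, physical_pts[:, 1], rcond=None)[0]
    return np.array([[m2, 0, m5, m1, 0, m6],
                     [0, m4, m7, 0, m3, m8]])

def apply_transform(M, voxel_pts):
    """Map voxel coordinates (x, y, z) to physical (x_mm, y_mm) via Eq. (3)."""
    x, y, z = voxel_pts.T
    A = np.column_stack([x, y, z, z * x, y * z, np.ones_like(x)])
    return A @ M.T
```

With more than four correspondences the same code returns the least-squares fit, which would make the calibration more robust to height-measurement noise.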

2.5 Preparation of the specimen

We prepared a variety of specimens to evaluate the imaging performance of the proposed system. The flat gray card was used to extract the sampling points and to compensate the distortion in the height map induced by the curvature of the optical system. It was also used to obtain the axial response and to measure depth resolution.

To evaluate the lateral resolution, we imaged the positive USAF 1951 resolution target (R3L1S4P, Thorlabs, Newton, NJ). We also fabricated a 3D bump specimen using engineering plastic. It had rectangular bumps of 1 × 1 × 1 mm (W × D × H) with a period of 2 mm in both the x- and y- directions, to calculate the transform matrix to match the voxel space to the physical space. For various color 3D imaging, a molded dental plaster and an electrical circuit board were imaged.

3. Results

3.1 Axial response

We experimentally acquired 180 2D images while scanning the focal length over 17 mm. The axial responses with the negative pinhole array are shown in Fig. 5. The flat gray card, used as a standard sample, was imaged to measure the axial response. The axial response of intensity, shown in Fig. 5(a), was acquired at the sampling point of the negative pinhole position. We acquired the axial responses of the 2D cross-correlation, produced from the cross-correlation values between the acquired images and the corresponding filtering masks along the axial direction, as shown in Fig. 5(b). The average pixel values of each RGB channel were used in the calculation of the 2D cross-correlation. The 2D cross-correlations produced nearly the same axial responses regardless of which filtering mask was used. Finally, the filtered axial responses, which reduce the random noise in the axial direction as shown in Fig. 5(c), were produced to determine the axial height of each sampling point. The FWHM values of the filtered axial responses measured in the experiments ranged from 1.15 mm to 1.20 mm, while the FWHM was theoretically predicted to be 0.95 mm. The slight broadening of the axial response was induced by the optical aberrations of the imaging system and a chromatic focal shift.


Fig. 5. The axial responses of the imaging system with the negative pinhole, acquired at the center area of the image sensor. (a) An axial response of the intensity, (b) an axial response of 2D cross-correlation, and (c) a filtered axial response.


3.2 Lateral and axial resolution

The resolutions in both the lateral and axial directions are important parameters for evaluating the imaging system. The optical lateral resolution of our system was measured to be 34.8 µm by deconvolution between the in-focus image of the negative pinhole and the pinhole shape. However, the interval between sampling points determines the effective lateral resolution, because the height and color information calculated from the axial responses is obtained only at the sampling points in our system. Thus, the effective lateral resolution was measured to be 46.5 µm in the object plane, considering the interval of the sampling points.

The lateral resolution was quantitatively measured by imaging the USAF 1951 resolution target. The lateral resolution of the imaging system can be evaluated using the line profiles in the acquired image to resolve the adjacent bars. We imaged the bars in group 3 of the resolution target. The color map of the 3D imaging result of the positive USAF 1951 resolution target with the negative pinhole array and the line profiles are shown in Fig. 6. The vertical and the horizontal line profiles of the dashed lines in the color map shown in Fig. 6(a) are described in Figs. 6(b) and 6(c), respectively.


Fig. 6. (a) Imaging result of group 3 of the USAF 1951 resolution target. Line profiles acquired at (b) the vertical and (c) the horizontal lines.


The line profiles are presented in the same colors as the corresponding dashed lines in Fig. 6(a). The three bars in element 4 of group 3 are clearly distinguishable in both the vertical and horizontal line profiles, while bars smaller than element 4 of group 3 are not clearly distinguished. Thus, the lateral resolution of our system was measured to range between 12.7 line pairs/mm and 11.3 line pairs/mm, which correspond to line widths of 39.4 µm and 44.2 µm. Compared to the conventional pinhole array, the lateral resolution was improved by increasing the number of sampling points using the four different filtering masks.

In confocal profilometry, the depth resolution or axial resolution is defined by the smallest discriminable difference in the height of the surface [39], while the axial resolution of the confocal microscopy is generally defined by the FWHM of the axial response [40]. We measured the depth resolution of our system using the precision of the height measurement, which is defined by the standard deviation of the measured height of the flat sample [39]. To experimentally evaluate the depth resolution, we selected a region of interest (ROI) at the center of the 3D image of the flat gray card consisting of 133 × 133 pixels, which covered about one-sixth of the entire FOV. The depth resolution, i.e., the standard deviation of the height measurement in the ROI, was measured to be 19.5 µm.
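The precision-based depth-resolution metric used here, the standard deviation of measured heights over a flat-sample ROI [39], is straightforward to compute (a minimal sketch; the function name and ROI convention are ours):

```python
import numpy as np

def depth_resolution(height_map, roi):
    """Precision-based depth resolution: the standard deviation of the
    measured heights over a flat-sample region of interest.

    roi: ((row_start, row_stop), (col_start, col_stop))."""
    (r0, r1), (c0, c1) = roi
    return float(np.std(height_map[r0:r1, c0:c1]))
```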

3.3 FOV compensation

The FOVs of the proposed system changed linearly as the focal lengths of the system were linearly scanned over a range of 17 mm. We acquired two different color 3D images of the fabricated rectangular bump specimen, located at different axial positions. Although an identical volume was illuminated, different areas were recorded with respect to the axial position of the specimen, due to the different magnifications, as indicated in Fig. 7. The difference in the axial positions in Figs. 7(a) and 7(b) was 10.9 mm.


Fig. 7. Acquired color maps of the fabricated bump specimen, which were located at different axial positions of (a) 2.5 mm and (b) 13.4 mm from the bottom of the scanning range. Acquired color maps with voxel coordinates with different magnifications in the left column, were converted to color maps in mm with the same magnification in the right column.


We calculated the relationship between the FOV and the focal length, i.e., a transform matrix, using four voxel coordinates and the corresponding physical coordinates. As shown in the left columns of Figs. 7(a) and 7(b), the color maps are expressed with the same pixel size, but the recorded FOVs differ with the focal length; the FOV is smaller when the object is located higher (Fig. 7(b)). After mapping voxel coordinates to physical space using the transform matrix, the transverse coordinates of the color maps were corrected to reflect the physical dimensions in mm, as shown in the right columns of Figs. 7(a) and 7(b). We quantified the position errors before and after applying the FOV compensation to monitor the improvement, randomly selecting and measuring five 10 mm distances (five periods of rectangular bumps) at the two axial positions presented in Fig. 7. Before the FOV compensation, the 10 mm distance was measured to be 9.52 ± 0.035 mm and 10.40 ± 0.032 mm at the focal positions of 2.5 mm and 13.4 mm, respectively, as in the left columns of Figs. 7(a) and 7(b). After the FOV compensation, the 10 mm distance was measured to be 9.96 ± 0.033 mm and 9.95 ± 0.046 mm, as in the right columns of Figs. 7(a) and 7(b). In this way, we were able to achieve precise distance measurements by compensating the position errors caused by the magnification change.
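The voxel-to-mm mapping used for the FOV compensation (scale and translation each linear in the focal position z, per the paper's transform equations) can be sketched minimally as:

```python
import numpy as np

def voxel_to_mm(x, y, z, m):
    """Map voxel coordinates (x, y) at focal position z to physical mm.

    The scale factors Sx, Sy and translations Tx, Ty are each linear in z;
    `m` holds the eight calibration coefficients m1..m8 from the paper's
    transform matrix.
    """
    m1, m2, m3, m4, m5, m6, m7, m8 = m
    x_mm = (m1 * z + m2) * x + (m5 * z + m6)   # Sx(z) * x + Tx(z)
    y_mm = (m3 * z + m4) * y + (m7 * z + m8)   # Sy(z) * y + Ty(z)
    return x_mm, y_mm
```

With z-independent unit scaling (m1 = m3 = 0, m2 = m4 = 1, zero translations), the mapping reduces to the identity, as expected.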

3.4 Color 3D measurement

The system presented here can reconstruct 3D surface information with overlaid color. Figures 8(a) and 8(b) show a photograph of an electronic circuit board and a reconstructed color 3D image of the red box in Fig. 8(a), respectively. To compare the imaging results of the negative and conventional pinhole arrays, we examined magnified views of the yellow box in Fig. 8(b).


Fig. 8. (a) A photograph and (b) a reconstructed color 3D image of the electronic circuit board. (c) A photograph acquired by the stereoscope and color maps acquired (d) with the negative pinhole array and (e) with the conventional pinhole array.


Figure 8(c) shows a photograph captured with a magnifying stereoscope as a reference. The color maps from the reconstructed color 3D images acquired with the negative pinhole array and the conventional pinhole array are shown in Figs. 8(d) and 8(e), respectively. In this area, letters are printed as dot patterns on the surfaces of the electronic components. The letters are rendered more clearly by the negative pinhole array than by the conventional pinhole array, owing to the improved lateral resolution. The surface colors are also reproduced more naturally, because the negative pinhole array collects more abundant color information. These imaging results confirm the feasibility of the system for high-precision non-contact industrial inspection applications.

We also acquired color 3D images of a dental plaster model to verify the feasibility of rapid 3D measurement in the biomedical imaging field. Color 3D images of the gypsum dental plaster model shown in Fig. 9(a) were acquired, and a reconstructed color 3D image of the area in the red box in Fig. 9(a) is presented in Fig. 9(b). The molded molar, which has a complicated surface shape, is clearly visualized. Notably, a volume larger than a single FOV can be imaged by mosaicking the color 3D images. Figure 9(c) presents the full arch of the dental plaster, made by stitching 15 different FOVs with slight overlap.
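The paper does not detail its stitching procedure; one common way to register adjacent, slightly overlapping FOVs before mosaicking is phase correlation, sketched here as an assumption rather than the authors' method:

```python
import numpy as np

def overlap_shift(a, b):
    """Estimate the integer (row, col) shift taking tile `a` onto tile `b`
    by phase correlation: the inverse FFT of the normalized cross-power
    spectrum peaks at the relative translation (circular boundaries)."""
    f = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    r = np.real(np.fft.ifft2(f / (np.abs(f) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Wrap shifts into the symmetric range around zero.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Once the pairwise shifts between neighboring tiles are known, each color 3D tile can be placed in a common coordinate frame and blended in the overlap regions.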


Fig. 9. (a) A photograph of the molded dental plaster, and (b) the reconstructed color 3D image of the indicated area. (c) The color 3D image of the full arch of the dental plaster obtained by image mosaicking.


4. Conclusion

In summary, we have proposed a color imaging system based on patterned illumination using a negative pinhole array for color 3D measurement, and successfully demonstrated high-speed imaging with no mechanical scanner. A four-fold increase in sampling points, to 399 × 279 from 200 × 140 negative pinholes, was achieved using four different filtering masks, which significantly improved the lateral resolution of color imaging and the light efficiency compared to a conventional pinhole array. The system had a lateral resolution of around 40 µm and a depth resolution of 19.5 µm. We successfully produced an image mosaic of a 3D image after compensating the FOV differences with respect to focal length over the full volume. Successful color 3D images of various specimens confirmed the feasibility of the proposed non-contact rapid 3D reconstruction for various applications. We anticipate that this high-speed color 3D measurement technology using the negative pinhole array will be a useful tool in a variety of fields where rapid and accurate non-contact measurements are required, such as industrial inspection and dental scanning.

Funding

National Research Foundation of Korea (NRF-2020R1A2C3006745).

Disclosures

The authors declare no conflicts of interest.

References

1. H. Leeghim, M. Ahn, and K. Kim, “Novel approach to optical profiler with gradient focal point methods,” Opt. Express 20(21), 23061–23073 (2012). [CrossRef]  

2. J. A. Conchello and J. W. Lichtman, “Optical sectioning microscopy,” Nat. Methods 2(12), 920–931 (2005). [CrossRef]  

3. M. Gu, Principles of Three-Dimensional Imaging in Confocal Microscopes (World Scientific, 1996).

4. S. Cha, P. Lin, L. Zhu, E. Botvinick, P. C. Sun, and Y. Fainman, “3D profilometry using a dynamically configurable confocal microscope,” in Electronic Imaging ‘99, (SPIE, 1999).

5. W. Kaplonek and K. Nadolny, “Advanced 3D laser microscopy for measurements and analysis of vitrified bonded abrasive tools,” Journal of Engineering Science and Technology 7, 661–678 (2012).

6. C. M. Belcher, S. W. Punyasena, and M. Sivaguru, “Novel application of confocal laser scanning microscopy and 3D volume rendering toward improving the resolution of the fossil record of charcoal,” PLoS One 8(8), e72265 (2013). [CrossRef]  

7. D. K. Hamilton and T. Wilson, “Three-dimensional surface measurement using the confocal scanning microscope,” Appl. Phys. B 27(4), 211–213 (1982). [CrossRef]  

8. E. Merson, A. V. Kudrya, V. A. Trachenko, D. Merson, V. Danilov, and A. Vinogradov, “The Use of Confocal Laser Scanning Microscopy for the 3D Quantitative Characterization of Fracture Surfaces and Cleavage Facets,” in 21st European Conference on Fracture, (Elsevier, 2016), 533–540.

9. H. Yoo, D. Kang, A. J. Katz, G. Y. Lauwers, N. S. Nishioka, Y. Yagi, P. Tanpowpong, J. Namati, B. E. Bouma, and G. J. Tearney, “Reflectance confocal microscopy for the diagnosis of eosinophilic esophagitis: a pilot study conducted on biopsy specimens,” Gastrointestinal Endoscopy 74(5), 992–1000 (2011). [CrossRef]  

10. K. Carlsson and N. Åslund, “Confocal imaging for 3-D digital microscopy,” Appl. Opt. 26(16), 3232–3238 (1987). [CrossRef]  

11. H. Yoo, S. Lee, D. Kang, T. Kim, D. Gweon, S. Lee, and K. Kim, “Confocal Scanning Microscopy: a High-Resolution Nondestructive Surface Profiler,” International Journal of Precision Engineering and Manufacturing 7, 3–7 (2006).

12. K. Carlsson, P. E. Danielsson, R. Lenz, A. Liljeborg, L. Majlöf, and N. Åslund, “Three-dimensional microscopy using a confocal laser scanning,” Opt. Lett. 10(2), 53–55 (1985). [CrossRef]  

13. S. Choi, P. Kim, R. Boutilier, M. Y. Kim, Y. J. Lee, and H. Lee, “Development of a high speed laser scanning confocal microscope with an acquisition rate up to 200 frames per second,” Opt. Express 21(20), 23611–23618 (2013). [CrossRef]  

14. P. Xi, Y. Liu, and Q. Ren, Scanning and Image Reconstruction Techniques in Confocal Laser Scanning Microscopy, Laser Scanning, Theory and Applications, (InTech, 2011).

15. J.-A. Conchello and J. W. Lichtman, “Theoretical analysis of a rotating-disk partially confocal scanning microscope,” Appl. Opt. 33(4), 585–596 (1994). [CrossRef]  

16. D. T. Fewer, S. J. Hewlett, E. M. Mccabe, and J. Hegarty, “Direct-view microscopy: experimental investigation of the dependence of the optical sectioning characteristics on pinhole-array configuration,” J. Microsc. 187(1), 54–61 (1997). [CrossRef]  

17. M. Ishihara and H. Sasaki, “High-speed surface measurement using a non-scanning multiple-beam confocal microscope,” Opt. Eng. 38(6), 1035–1040 (1999). [CrossRef]  

18. C. J. R. Sheppard and T. Wilson, “The theory of the direct-view confocal microscope,” J. Microsc. 124, 107–117 (1981).

19. Optotune, “Fast electrically tunable lens EL-10-30 series”, retrieved http://www.optotune.com/.

20. H. J. Jeong, H. Yoo, and D. Gweon, “High-speed 3-D measurement with a large field of view based on direct-view confocal microscope with an electrically tunable lens,” Opt. Express 24(4), 3806–3816 (2016). [CrossRef]  

21. C.-S. Kim, W. Kim, K. Lee, and H. Yoo, “High-speed color three-dimensional measurement based on parallel confocal detection with a focus tunable lens,” Opt. Express 27(20), 28466–28479 (2019). [CrossRef]  

22. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

23. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

24. R. J. Valkenburg and A. M. McIvor, “Accurate 3D measurement using a structured light system,” Image Vis. Comput. 16(2), 99–110 (1998). [CrossRef]  

25. I. Ishii, K. Yamamoto, K. Doi, and T. Tsuji, “High-speed 3D image acquisition using coded structured light projection,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IEEE, 2007), 925–930.

26. B. Carrihill and R. Hummel, “Experiments with the intensity ratio depth sensor,” Comput Vis Graph Image Process 32(3), 337–358 (1985). [CrossRef]  

27. J. L. Posdamer and M. D. Altschuler, “Surface measurement by space-encoded projected beam systems,” Comput Vis Graph Image Process 18(1), 1–17 (1982). [CrossRef]  

28. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013). [CrossRef]  

29. Y. Xing, C. Quan, and C. J. Tay, “A modified phase-coding method for absolute phase retrieval,” Opt. Lasers Eng. 87, 97–102 (2016). [CrossRef]  

30. T. Bell, N. Karpinsky, and S. Zhang, “Real-Time 3D Sensing With Structured Light Techniques,” in Interactive Displays (2014), pp. 181–213.

31. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004). [CrossRef]  

32. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

33. X. Chen, C. Lu, M. Ma, X. Mao, and T. Mei, “Color-coding and phase-shift method for absolute phase measurement,” Opt. Commun. 298–299, 54–58 (2013). [CrossRef]  

34. Z. J. Geng, “Rainbow three-dimensional camera: new concept of high-speed three-dimensional vision systems,” Opt. Eng. 35(2), 376 (1996). [CrossRef]  

35. A. Silva, J. L. Flores, A. Muñoz, G. A. Ayubi, and J. A. Ferrari, “Three-dimensional shape profiling by out-of-focus projection of colored pulse width modulation fringe patterns,” Appl. Opt. 56(18), 5198–5203 (2017). [CrossRef]  

36. K. L. Boyer and A. C. Kak, “Color-Encoded Structured Light for Rapid Active Ranging,” IEEE Trans. Pattern Anal. Machine Intell. PAMI-9, 14–28 (1987).

37. C. Han and Z. Jiang, “Indexing Coded Stripe Patterns Based on De Bruijn in Color Structured Light System,” in 2012 National Conference on Information Technology and Computer Science (Atlantis Press, 2012), 928–931.

38. M. Maruyama and S. Abe, “Range sensing by projecting multiple slits with random cuts,” IEEE Trans. Pattern Anal. Machine Intell. 15(6), 647–651 (1993). [CrossRef]  

39. S. Cha, P. C. Lin, L. Zhu, P.-C. Sun, and Y. Fainman, “Nontranslational three-dimensional profilometry by chromatic confocal microscopy with dynamically configurable micromirror scanning,” Appl. Opt. 39(16), 2605–2613 (2000). [CrossRef]  

40. C. Sheppard and D. Shotton, Confocal Laser Scanning Microscopy (BIOS Scientific Publishers, 1997).




Equations (4)

\[
\begin{pmatrix} x_{mm} \\ y_{mm} \\ 1 \end{pmatrix}
=
\begin{pmatrix} S_x(z) & 0 & T_x(z) \\ 0 & S_y(z) & T_y(z) \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\]

\[
S_x(z) = m_1 z + m_2,\quad S_y(z) = m_3 z + m_4,\quad T_x(z) = m_5 z + m_6,\quad T_y(z) = m_7 z + m_8,
\]

\[
\begin{pmatrix} x_{mm} \\ y_{mm} \end{pmatrix}
= M
\begin{pmatrix} x & y & z & zx & yz & 1 \end{pmatrix}^{\mathsf T},
\]

\[
M =
\begin{pmatrix} m_2 & 0 & m_5 & m_1 & 0 & m_6 \\ 0 & m_4 & m_7 & 0 & m_3 & m_8 \end{pmatrix}.
\]
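The coefficients m1–m8 of the transform matrix M above can be estimated by linear least squares from voxel-to-physical calibration correspondences (Section 3.3 uses four point pairs). A minimal sketch, with illustrative variable names and synthetic calibration data assumed:

```python
import numpy as np

def fit_transform(x, y, z, x_mm, y_mm):
    """Least-squares estimate of m1..m8 from calibration correspondences
    between voxel coordinates (x, y, z) and physical coordinates
    (x_mm, y_mm). The x and y equations decouple into two 4-parameter
    linear systems; >= 4 well-spread points (with varying z) suffice."""
    ones = np.ones_like(x, dtype=float)
    ax = np.column_stack([x * z, x, z, ones])   # x_mm = m1*xz + m2*x + m5*z + m6
    ay = np.column_stack([y * z, y, z, ones])   # y_mm = m3*yz + m4*y + m7*z + m8
    m1, m2, m5, m6 = np.linalg.lstsq(ax, x_mm, rcond=None)[0]
    m3, m4, m7, m8 = np.linalg.lstsq(ay, y_mm, rcond=None)[0]
    return m1, m2, m3, m4, m5, m6, m7, m8
```

Because the model is linear in the coefficients, the fit recovers the exact transform when the correspondences are noise-free.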