
Color-image reconstruction for two-wavelength digital holography using a generalized phase-shifting approach

Open Access

Abstract

We propose a color-image reconstruction method for two-wavelength digital holography using generalized phase-shifting digital holography (GPSDH). In this method, color interference fringes are captured by a digital camera with a Bayer array color filter, and phase shifting is performed simultaneously for all wavelengths. The color interference fringes are separated into three monochromatic interference fringes using a color-separation method that suppresses the color-filter crosstalk. The object wave is extracted from each monochromatic interference fringe using GPSDH, which prevents problems caused by the wavelength dependence of the phase shift. Image reconstruction is performed using a shifted Fresnel transform-based method, in which the color reconstructed image is obtained by directly superposing the reconstructed images for all wavelengths. We verify the proposed method through optical experiments with a two-wavelength digital holography system. The results show that the dual-color image can be successfully reconstructed without chromatic aberration.

© 2017 Optical Society of America

1. INTRODUCTION

Digital holography (DH) techniques can record both amplitude and phase information of an object wave and acquire three-dimensional object information without direct contact [1,2]. DH has been studied in various fields such as shape measurement [3], phase imaging [4], microscopy [5], and polarization imaging [6]. In DH, an interference fringe between an object wave and a reference wave is directly recorded by a digital image sensor, such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) camera. The equation I = |O|^2 + |R|^2 + OR^* + O^*R characterizes the interference fringe, where O and R are the complex amplitudes of the object and reference waves on the hologram plane, respectively. The third term contains the object-wave component. Therefore, it is necessary to extract only the object-wave component from the interference fringe to obtain a clear reconstructed image. Phase-shifting DH (PSDH) is an effective technique for retrieving the object wave using numerical computations with three or more interference fringes that are generated by reference waves with a constant phase shift [7]. The phase shift generally takes a value equal to an integral multiple of π/2 or some other constant value [8]. In general, a monochromatic light source is employed in PSDH.

Multiwavelength DH has been developed to measure object shapes accurately or to obtain the color information of an object [9–21]. In this technique, the interference fringe is represented by the sum of the interference fringes recorded using several light sources that have different wavelengths. Assuming that the light sources consist of N different wavelengths, the interference fringe can be expressed as

I = \sum_{n=1}^{N} I_n = \sum_{n=1}^{N} \left( |O_n|^2 + |R_n|^2 + O_n R_n^* + O_n^* R_n \right),    (1)
where On and Rn are the complex amplitudes of the object and reference waves of wavelength λn, respectively. Three wavelengths corresponding to red, green, and blue are generally used. According to Eq. (1), the object-wave components for each wavelength simultaneously exist in the color interference fringe. Obtaining the correct reconstructed image for each wavelength requires separation of the monochromatic interference fringe from the color interference fringe and subsequent extraction of the object wave for each wavelength. Moreover, all reconstructed images for each wavelength should be superposed on one image to obtain a color reconstructed image.

The phase-shifting scheme for multiwavelength DH proposed by Yamaguchi et al. is an effective method for extracting the object wave from the color interference fringe [9,10]. In this method, a phase-shifting scheme is implemented in an inline optical configuration that consists of three lasers corresponding to the three primary colors and a color digital camera equipped with color filters for the three primary colors. The color interference fringe is captured in a single shot using the color digital camera. Three monochromatic interference fringes are then derived from this fringe using a red-green-blue (RGB) color-separation method. The object waves for each wavelength are extracted by the standard PSDH technique. However, color crosstalk often occurs in the color interference fringe because the spectral bandwidth of the color filter used in the color digital camera is not sufficiently narrow. Color crosstalk may be avoided using a method that selects the wavelength of the light source according to the color filter or vice versa. However, the use of special filters and light sources results in an expensive measurement system. Further, the available light sources may be limited to specific combinations. Moreover, even if the color crosstalk is negligible, it is difficult to generate correct phase shifts for multiple wavelengths because the phase depends on the wavelength. Thus, an appropriate phase shift for one wavelength may be inappropriate for other wavelengths.

An off-axis configuration for multiwavelength DH was proposed by Demoli et al. [11]. This method employs a quasi-Fourier off-axis setup with a reference point source. Therefore, an object wave can be reconstructed from the numerical Fourier transform of the digital hologram. Kühn et al. proposed a single-shot multiwavelength DH approach using an off-axis configuration [12]. In this method, the interference fringe is generated from several reference waves that have different wavelengths and incident angles in the off-axis setup, which are then captured by a monochromatic digital camera. A monochromatic interference fringe for each wavelength is obtained by the numerical Fourier-filtering technique. Thus, the object wave for each wavelength is extracted from the corresponding monochromatic interference fringe. However, the optical system for generating the reference waves at different incident angles may be complicated. Strict incident-angle adjustment of the reference waves is required to avoid overlaps between the object wave and other components, even though a high-resolution digital camera is used. Moreover, the reconstructed image is generally influenced by the characteristics of the chosen band-pass filter. As an alternative, multiwavelength DH using an off-axis configuration with a stacked color image sensor was proposed by Tankam et al. [13]. In this method, a two-wavelength digital hologram is simultaneously recorded using a simple experimental setup. However, a special color image sensor consisting of three stacked photodiode layers is required.

Color image synthesis is performed by superposing three monochromatic reconstructed images corresponding to each of the three primary colors. To obtain a color reconstructed image without chromatic aberration, the image for each wavelength should be adjusted to have the same image size. When a convolution-based Fresnel transform is used for image reconstruction, we can obtain the correct color reconstructed image by directly superposing the reconstructed images for each wavelength because the size of the reconstructed image is the same as the image-sensor size, regardless of the wavelength [9,10]. However, this also means that the size of the reconstructed image is limited by the image-sensor size. The zero-padding method was proposed by Ferraro et al. to control the image size of the reconstructed image by padding the hologram data with zeros [14]. A lensless Fourier-transform holography approach using this method allowed precise superposition of the monochromatic reconstructed images of digital holograms with different recording distances [15]. The zero-padding method has also been applied in three-dimensional object reconstruction and fusion [16,17], measurement of optical-phase retardation maps [18], synthetic aperture techniques for resolution improvement, and speckle-noise reduction [19]. Zhang et al. proposed a double Fresnel-transform method to control the image size in color DH [20]. This method employs two-stage reconstruction by Fresnel transform to magnify the size of the reconstructed image. Picart et al. proposed a reconstructed-image-scaling method using a convolution method with a numerical spherical wave [21,22] that was used as a virtual reconstruction wave to extend the spatial bandwidth of the reconstructed image. Here, we refer to this method as the adjustable magnification method. The compensation method proposed by Leclercq et al. for the chromatic aberration induced by optical elements uses the zero-padding method and the adjustable magnification method [23]. These methods successfully generate a color image without chromatic aberration. However, it is necessary to devise additional factors, such as the zero-padding area and the virtual spherical wave.

In this study, we propose a color-image reconstruction method for two-wavelength DH using a generalized phase-shifting approach. In the proposed method, phase shifting is simultaneously performed for all wavelengths using a typical inline phase-shifting optical configuration. A color interference fringe is captured by a color CCD camera with a Bayer array color filter and then separated into monochromatic interference fringes using a color-separation method that suppresses color crosstalk. The object waves for each wavelength are extracted by a statistical generalized PSDH (SGPSDH) approach, the algorithm of which is designed for arbitrary phase shifts from 0 to 2π [24–26]. A monochromatic reconstructed image for each wavelength is obtained through a wavelength-compensated reconstruction method based on a shifted Fresnel transform (SFT) [27]. We present a strategy to adjust the size of the reconstructed image for each wavelength to that of the reference-wavelength image. Thus, a color image without chromatic aberration can be obtained by directly superposing the monochromatic reconstructed images for all wavelengths. We demonstrate the validity of the proposed method through preliminary optical experiments using a two-wavelength PSDH system.

The remainder of this paper is organized as follows. In Section 2, we present the color-image reconstruction method for two-wavelength DH using the generalized phase-shifting approach. First, we describe the color-separation method for obtaining monochromatic interference fringes from the color interference fringe. Next, the statistical generalized phase-shifting approach that gives the object waves for each wavelength is detailed. Finally, the wavelength-compensated image reconstruction method is discussed. We present the results of optical experiments in Section 3, followed by our conclusions in Section 4.

2. METHOD

We consider the optical configuration for two-wavelength PSDH shown in Fig. 1. We assume that the object is placed at a distance d from the hologram plane. Three phase-shifted color interference fringes are captured by a color CCD camera with a Bayer array color filter. Phase shifts are introduced by moving a reference mirror along the optical axis. In this study, we assume that the wavelength of the He–Ne laser is the reference wavelength.

Fig. 1. Optical setup for two-wavelength PSDH: beam splitter (BS), mirror (M), moving mirror (MM), and object (OBJ). The inset shows the color object.

A flowchart of the proposed method is shown in Fig. 2. First, the color interference fringe is separated into two monochromatic interference fringes by the color-separation method detailed in Section 2.A. Thus, we have three phase-shifted interference fringes for each of the He–Ne and Nd–YAG lasers (for a total of six interference fringes). Next, the object wave on the hologram plane for each wavelength is retrieved by the generalized phase-shifting approach detailed in Section 2.B. Subsequently, image reconstruction is performed for each wavelength, where the SFT is used to reconstruct the scaled image, as detailed in Section 2.C. The digital hologram for the red channel, corresponding to the reference wavelength, is reconstructed straightforwardly. The digital hologram for the green channel is reconstructed with an appropriate magnification factor to avoid chromatic aberration. Consequently, we have two reconstructed images of the same size for the different wavelengths. Finally, the color reconstructed image is synthesized by directly superposing the reconstructed images for all wavelengths.

Fig. 2. Flowchart of proposed method using two-wavelength PSDH.

A. Color Separation

The color image captured by the color digital camera can be easily separated into its three primary-color components by a typical RGB color-separation technique. When a digital camera with a Bayer array color filter is used, color separation is generally performed using a demosaicing process based on hardware or software for converting a Bayer format image into a viewable color image. However, the spectral bandwidth of the color filter used in the common color digital camera is not sufficiently narrow, and thus, crosstalk often occurs between adjacent colors. A color-separation method that suppresses color crosstalk has been proposed in fringe projection profilometry [28,29]. This method assumes a linear relation between the intensity of the incident light and the intensity detected by the color image sensor. The linear relation is represented by a 3×3 matrix, and its elements are determined using the contrast factor of a fringe generated by a projector. Therefore, color separation can be easily performed by a simple matrix calculation. This study presents a modified color-separation method using the linear relation strategy to suppress color crosstalk in multiwavelength DH.

First, we assume that a color hologram is recorded using three lasers generating red, green, and blue light. One pixel detects the laser beam, the intensity of which decreases according to the transmittance of the color filter. Suppose that I_R, I_G, and I_B denote the intensities of the monochromatic holograms generated by the red, green, and blue lasers, respectively. Further, suppose that I'_R, I'_G, and I'_B denote the color-filtered intensities detected in the red, green, and blue pixels, respectively. Then, we can assume a linear relation between the hologram intensity and the intensity detected by the color-filtered pixel as follows:

\begin{pmatrix} I'_R \\ I'_G \\ I'_B \end{pmatrix} = \begin{pmatrix} t_{RR} & t_{RG} & t_{RB} \\ t_{GR} & t_{GG} & t_{GB} \\ t_{BR} & t_{BG} & t_{BB} \end{pmatrix} \begin{pmatrix} I_R \\ I_G \\ I_B \end{pmatrix},    (2)
where t_{pq} is the characteristic spectral transmittance of the color-p filter with respect to the primary color q. For example, t_{RG} represents the characteristic spectral transmittance of the red color filter with respect to the green laser. The diagonal elements are assumed to be unity. In this study, we refer to the matrix consisting of the spectral transmittances as the transmittance matrix. According to Eq. (2), the original intensity of the monochromatic hologram can be obtained using the inverse of the transmittance matrix. Therefore, we can separate the color hologram into three monochromatic holograms by a pointwise calculation using the inverse of the transmittance matrix.
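The pointwise correction of Eq. (2) amounts to multiplying each pixel's RGB vector by the inverse of the transmittance matrix. The authors worked in MATLAB; the following is a minimal NumPy sketch, where the function name and the matrix values are illustrative assumptions, not the paper's measured ones:

```python
import numpy as np

def separate_colors(detected, T):
    """Recover monochromatic hologram intensities from color-filtered pixel
    intensities via the inverse transmittance matrix (inverse of Eq. (2)).

    detected: (H, W, 3) array of red/green/blue pixel intensities.
    T: 3x3 transmittance matrix mapping true intensities to detected ones.
    Returns an (H, W, 3) array of crosstalk-corrected intensities.
    """
    T_inv = np.linalg.inv(T)
    # Pointwise correction: each pixel's RGB vector is multiplied by T^-1.
    return detected @ T_inv.T

# Illustrative transmittance values (diagonal unity, small off-diagonal leak):
T = np.array([[1.0,   0.02, 0.0],
              [0.08,  1.0,  0.1],
              [0.002, 0.12, 1.0]])
true = np.random.rand(4, 4, 3)
detected = true @ T.T            # simulate color-filter crosstalk, Eq. (2)
recovered = separate_colors(detected, T)
```

Because the model is strictly linear, the recovered intensities match the true monochromatic holograms up to the accuracy of the estimated matrix.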

The transmittance matrix can be determined by the ratio of the RGB pixel values of the image obtained with the color digital camera. Once the transmittance matrix has been obtained, we can reuse the same transmittance matrix, provided the color hologram is recorded under the same conditions. We consider uniform and sinusoidal patterns as the input images for determining the transmittance matrix and refer to these as uniform and contrast schemes, respectively.

The procedure for implementing the uniform scheme is as follows. First, a uniform beam of a monochromatic laser is captured by the color digital camera. The ratio between RGB pixels is then calculated. By repeating this procedure for the other lasers of different colors, all elements of the transmittance matrix can be determined. This scheme can be easily implemented. However, the estimated values may be directly affected by dark-current noise in the image sensor and fluctuation of the light source. Therefore, it is necessary to estimate the pixel value of the noise in a preliminary experiment and to use the RGB pixel value obtained by subtracting this noise value in the calculation.
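The uniform scheme above can be sketched in a few lines of NumPy. Under the index convention of Eq. (2), a uniform capture of one laser determines one column of the matrix; the function name, the synthetic frame, and the leak magnitudes (taken close to the values reported later in Section 3) are assumptions for illustration:

```python
import numpy as np

def uniform_scheme_column(frame, dark):
    """Estimate the red-laser column (t_RR, t_GR, t_BR) of the transmittance
    matrix from a single uniform red-beam capture: dark-corrected mean RGB
    pixel values, normalized so the matching (red) filter is unity.
    Repeating this with the other lasers fills the remaining columns."""
    means = (frame - dark).mean(axis=(0, 1))   # average (R, G, B) values
    return means / means[0]

# Synthetic example: dark level of ~241.6 counts plus a uniform red beam
# that leaks slightly into the green and blue pixels.
dark = np.full((8, 8, 3), 241.6)
frame = dark + np.array([1000.0, 82.3, 1.9])
column = uniform_scheme_column(frame, dark)    # -> about (1, 0.0823, 0.0019)
```

Without the dark-frame subtraction, the constant noise floor would inflate the estimated off-diagonal transmittances, which is exactly the weakness of this scheme noted above.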

In the contrast scheme, the contrast value of a sinusoidal pattern is used to determine the transmittance matrix. The sinusoidal pattern can be generated by the interference between two plane waves, which permits use of the contrast value of the ideal sinusoidal fringe, unlike in fringe projection profilometry [29]. The procedure is as follows. First, the interference fringe between two plane waves generated by a monochromatic laser is captured by the color digital camera. Then, interference fringes in red, green, and blue pixels are extracted from the captured interference fringe, where a Bayer raw format image is assumed. Thus, the captured image is transformed into three primary-color images using linear interpolation. The contrast factor of the interference fringe for each color is calculated by a fringe analysis technique. In this study, we use the contrast factor given by the magnitude of the first-order Fourier spectrum of the interference fringe. The ratio between the contrast factors of the different colors is then calculated. All elements of the transmittance matrix are determined by following this procedure for all lasers. The advantage of the contrast scheme is that the contrast factor is less sensitive to dark-current noise. Therefore, the contrast scheme may estimate the transmittance matrix more easily and accurately than the uniform scheme.
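A NumPy sketch of the contrast scheme's core step, using the first-order Fourier peak as the contrast measure. The fringe profiles, modulation depths, and the resulting ratio are synthetic assumptions, not measured data:

```python
import numpy as np

def contrast_factor(fringe):
    """Contrast measure used in the contrast scheme: magnitude of the
    first-order peak in the Fourier spectrum of a 1-D fringe profile
    (the DC term is removed first so the peak search is trivial here)."""
    spectrum = np.abs(np.fft.rfft(fringe - fringe.mean()))
    return spectrum.max()

# Simulated red-laser fringe as seen through the red and green Bayer pixels;
# the modulation depths (80 and 8 counts) are purely illustrative.
x = np.arange(512)
red_px   = 100.0 + 80.0 * np.cos(2 * np.pi * x / 16)
green_px = 100.0 +  8.0 * np.cos(2 * np.pi * x / 16)
leak_ratio = contrast_factor(green_px) / contrast_factor(red_px)   # -> 0.1
```

Note that an additive noise floor shifts only the DC term of the spectrum, leaving the first-order peak, and hence the ratio, unchanged; this is why the contrast scheme tolerates dark-current noise.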

B. Object Wave Extraction

Multiwavelength DH, in which uniform phase shifting is performed simultaneously for all wavelengths, presents difficulties in correctly extracting the object waves for all wavelengths by the standard phase-shifting method because the phase shift depends on the wavelength. Therefore, we focus on generalized PSDH. When we consider three interference fringes In (n=0, 1, 2) with phase shift ϕn, the object wave can be expressed, after some algebraic calculations, as

O = \frac{e^{i\phi_0} \left\{ \left( 1 - e^{i\Delta\phi_{20}} \right) \Delta I_{01} + \left( 1 - e^{-i\Delta\phi_{01}} \right) \Delta I_{20} \right\}}{2i|R| \left( \sin\Delta\phi_{01} + \sin\Delta\phi_{12} + \sin\Delta\phi_{20} \right)},    (3)
where \Delta I_{pq} = I_q - I_p and \Delta\phi_{pq} = \phi_q - \phi_p [24].

The generalized phase-shifting approach does not require the phase shift to be a constant value, such as π/2. Thus, the main task of this approach should be to estimate the unknown phase shifts. If the correct phase shift for each wavelength is estimated, we can extract the object wave for each wavelength using Eq. (3). However, phase-shift estimation often encounters problems, such as complex optical system implementations and phase-shift amounts limited to the range 0 to π. In this study, we employ SGPSDH [24–26]. This approach can be implemented using the typical phase-shifting optical system. The phase shift can be estimated using three phase-shifted holograms by a simple computation using the statistical properties of the Fresnel diffraction field of the object wave [30,31].

The procedure of the phase-shift estimation in SGPSDH is as follows. First, we calculate the spatial average of all the square differences between two of the three phase-shifted interference fringes. In the Fresnel diffraction field, the object wave often satisfies the phase-randomness condition, which holds that the statistical properties of the Fresnel diffraction field correspond to those of a fully random field. Alternatively, the phase-randomness condition can be controlled by slightly increasing the incident angle of the reference wave [26]. If the phase randomness is sufficiently developed, the phase shifts can be derived from

\Delta\phi_{pq} = \arccos \left( 1 - \kappa \left\langle |\Delta I_{pq}|^2 \right\rangle \right),    (4)
where ⟨·⟩ is the averaging operator over the entire frame, and κ = [4|R|^2⟨|O|^2⟩]^{-1}. As Eq. (4) involves the arccosine, the phase shift is limited to [0, π]. The range of the phase shift can be expanded to [0, 2π] by determining its sign. Thus, we examine the combination of signs that satisfies the cyclic phase constraint condition given by Δϕ_{01} + Δϕ_{12} + Δϕ_{20} = 2mπ, where m is an integer [24]. The correct sign combination can be estimated by finding the root of the cost function of the parameter κ, given by
f(\kappa) = c_{01} \Delta\phi_{01} + c_{12} \Delta\phi_{12} + c_{20} \Delta\phi_{20},    (5)
where the coefficient corresponds to the sign of the phase shift, i.e., c_{pq} = +1 or −1. If the cost function equals 2mπ, it satisfies the cyclic phase constraint condition, and its coefficients give the correct signs of the phase shifts. Therefore, the optimum cost function and its solution κ_0 can be determined by evaluating the zero-crossing property of the cost function. Consequently, the signed phase shift can be obtained from
\Delta\phi_{pq} = \hat{c}_{pq} \arccos \left( 1 - \kappa_0 \left\langle |\Delta I_{pq}|^2 \right\rangle \right),    (6)
where ĉpq is the estimated sign coefficient. In the proposed method, the phase shifts for each wavelength are estimated with Eq. (6) using three monochromatic interference fringes; then, the object wave for each wavelength in the hologram plane can be obtained from Eq. (3). The object wave in the object plane is calculated using the inverse Fresnel transform of the object wave in the hologram plane.
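The extraction and estimation steps of Eqs. (3) and (4) can be sketched in NumPy. This is not the full SGPSDH algorithm: κ is taken as known (in SGPSDH it follows from the zero crossing of the cost function of Eq. (5), which also fixes the signs), the fringe model I_n = |O|^2 + |R|^2 + 2|R| Re(O e^{iϕ_n}) is our simulation convention, and the speckle-like object field is synthetic:

```python
import numpy as np

def estimate_phase_shifts(I0, I1, I2, kappa):
    """Unsigned phase shifts from Eq. (4); kappa = [4|R|^2 <|O|^2>]^-1 is
    assumed known for this sketch. Requires a phase-random object field."""
    msd = lambda Ip, Iq: np.mean(np.abs(Iq - Ip) ** 2)
    return tuple(np.arccos(1.0 - kappa * msd(Ip, Iq))
                 for Ip, Iq in ((I0, I1), (I1, I2), (I2, I0)))

def extract_object_wave(I0, I1, I2, phi, R_amp):
    """Object wave on the hologram plane from three fringes via Eq. (3);
    phi = (phi0, phi1, phi2) are the signed phase shifts, R_amp = |R|."""
    d01, d12, d20 = phi[1] - phi[0], phi[2] - phi[1], phi[0] - phi[2]
    num = ((1 - np.exp(1j * d20)) * (I1 - I0)
           + (1 - np.exp(-1j * d01)) * (I0 - I2))
    den = 2j * R_amp * (np.sin(d01) + np.sin(d12) + np.sin(d20))
    return np.exp(1j * phi[0]) * num / den

# Simulate a speckle-like object wave satisfying the phase-randomness
# condition, and three phase-shifted fringes (fully synthetic values):
rng = np.random.default_rng(0)
O = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
R_amp, phi = 2.0, (0.0, 0.8, 2.0)
fringes = [np.abs(O)**2 + R_amp**2 + 2*R_amp*np.real(O*np.exp(1j*p))
           for p in phi]
kappa = 1.0 / (4 * R_amp**2 * np.mean(np.abs(O)**2))
shifts = estimate_phase_shifts(*fringes, kappa)   # ~ (0.8, 1.2, 2.0)
O_rec = extract_object_wave(*fringes, phi, R_amp)
```

For the circular-Gaussian field above, ⟨|ΔI_pq|^2⟩ = 4|R|^2⟨|O|^2⟩(1 − cos Δϕ_pq), so Eq. (4) returns the unsigned shifts; note that |Δϕ_20| = 2.0 is recovered without its sign, which is exactly what the sign-determination step of Eqs. (5) and (6) resolves.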

Because an arbitrary phase shift can be used in the generalized phase-shifting approach, there are no problems due to the wavelength dependence of the phase shift. Moreover, implementing the phase shift for multiwavelength DH is not difficult. These properties constitute significant advantages of using the generalized phase-shifting approach in multiwavelength DH.

C. Color-Image Reconstruction

The SFT can reconstruct an image at an arbitrary magnification by changing the sampling interval on both the object and hologram planes [27]. We use the following notation. We assume the coordinates of the hologram and object planes to be (x, y) and (X, Y), respectively, where a square Cartesian grid of N × N pixels is assumed for simplicity. The complex amplitudes in the hologram and object planes are denoted by u(x, y) and U(X, Y), respectively. The shifted coordinates in the hologram plane are defined by x_r = x_o + rΔx and y_s = y_o + sΔy (0 ≤ r, s < N), and the shifted coordinates in the object plane are defined by X_p = X_o + pΔX and Y_q = Y_o + qΔY (0 ≤ p, q < N), where (x_o, y_o) and (X_o, Y_o) are the positions of the corner of the grid shifted from the z axis, Δx and Δy are the sampling intervals in the hologram plane, and ΔX and ΔY are the sampling intervals in the object plane. The discrete SFT for a propagation distance d is represented by

U(p,q) = C(p,q) \sum_{r=0}^{N-1} \sum_{s=0}^{N-1} \tilde{u}(r,s) \exp[-i 2\pi (m_x p r + m_y q s)],    (7)

C(p,q) = \frac{\exp(ikd)}{i\lambda d} \exp\left[ \frac{i\pi}{\lambda d} \left( X_p^2 + Y_q^2 \right) \right] \exp\left[ -\frac{i 2\pi}{\lambda d} \left( p x_o \Delta X + q y_o \Delta Y \right) \right],    (8)

\tilde{u}(r,s) = u(r,s) \exp\left[ \frac{i\pi}{\lambda d} \left( x_r^2 + y_s^2 \right) \right] \exp\left[ -\frac{i 2\pi}{\lambda d} \left( x_r X_o + y_s Y_o \right) \right],    (9)
where m_x = ΔxΔX/(λd) and m_y = ΔyΔY/(λd) are the scaling parameters relative to the x and y axes, respectively. The SFT is practically computed with the fast SFT algorithm using the fast Fourier transform (FFT) and the convolution theorem.
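The double sum in Eq. (7) is separable, so the 2-D SFT reduces to 1-D scaled DFTs along each axis plus the chirp factors of Eqs. (8) and (9). The core operation, a DFT with an arbitrary scale m, can be evaluated with Bluestein's identity pr = [p^2 + r^2 − (p − r)^2]/2, which turns it into a single chirp-modulated linear convolution computed with zero-padded FFTs. A sketch of that core (function name ours; this shows only the sum in Eq. (7), not the prefactors):

```python
import numpy as np

def scaled_dft(u, m):
    """Scaled DFT  U[p] = sum_r u[r] exp(-i 2 pi m p r),  r, p = 0..N-1,
    for an arbitrary real scale m, via Bluestein's algorithm (FFT plus the
    convolution theorem). For m = 1/N it reduces to the ordinary DFT."""
    N = len(u)
    r = np.arange(N)
    chirp = np.exp(1j * np.pi * m * r**2)
    a = u / chirp                                # u[r] * exp(-i pi m r^2)
    L = 1 << int(np.ceil(np.log2(2 * N - 1)))    # FFT size for linear conv.
    b = np.zeros(L, dtype=complex)
    b[:N] = chirp                                # kernel exp(+i pi m k^2), k >= 0
    b[L - N + 1:] = chirp[1:][::-1]              # mirrored kernel for k < 0
    conv = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b))
    return conv[:N] / chirp                      # final chirp exp(-i pi m p^2)

# Example: a random field transformed with a non-integer scale parameter.
rng = np.random.default_rng(1)
u = rng.standard_normal(64) + 1j * rng.standard_normal(64)
U = scaled_dft(u, m=0.013)
```

Because m is a free parameter rather than the fixed 1/N of the FFT, the same routine serves any magnification, which is precisely what the wavelength compensation below exploits.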

Even though image reconstruction is performed by standard SFT, direct superposition for color-image synthesis causes image blurring due to the wavelength dependence of the image size, as shown in Fig. 3(a). In this study, we use the SFT-based image-reconstruction method to develop a color-image synthesis scheme that can adjust the size of the reconstructed images for all wavelengths to the reference image size. This is achieved by employing an appropriate magnification factor for each wavelength, as shown in Fig. 3(b).

Fig. 3. Wavelength-compensated image reconstruction based on SFT: (a) M_R = M_G = M; (b) M_R = M, M_G = M(λ_G/λ_R).

The strategy for determining the magnification factor consists of several steps. Hereafter, we consider the one-dimensional case for convenience. When the magnification is equal to one, the SFT can be regarded as the single-FFT method [9]. In other words, the scaling parameter satisfies the condition m_x = 1/N. Therefore, the sampling interval ΔX in the object plane is given by ΔX = λd/(NΔx). It should be noted that Δx corresponds to the pixel pitch of the image sensor. Next, if the hologram for wavelength λ is reconstructed by the SFT with magnification M, the sampling interval in the object plane becomes ΔX = λd/(MNΔx). Thus, if the holograms for wavelengths λ_R and λ_G are reconstructed by the SFT with magnifications M_R and M_G, respectively, the sampling intervals in the object plane are ΔX_R = λ_R d/(M_R NΔx) and ΔX_G = λ_G d/(M_G NΔx), respectively. Now, we assume the wavelength λ_R to be the reference wavelength. To set the reconstructed images of wavelengths λ_R and λ_i to the same size, the magnification for the latter should be set to

M_i = M \frac{\lambda_i}{\lambda_R},    (10)
to produce the same sampling interval. For instance, for λ_G, we have ΔX_G = λ_G d/[M(λ_G/λ_R)NΔx] = λ_R d/(MNΔx) = ΔX_R. Consequently, the size of the reconstructed image for all wavelengths is adjusted to the reference image size, which allows the direct superposition of reconstructed images obtained at different wavelengths.
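As a worked check of Eq. (10), using the wavelengths and camera parameters of the experimental setup in Section 3 and assuming a reference magnification of M = 1:

```python
# Reference (He-Ne, red) and second (Nd-YAG, green) wavelengths, and the
# camera geometry from the experimental setup described in Section 3.
lam_R, lam_G = 632.8e-9, 532.0e-9        # wavelengths [m]
d, N, dx = 0.36, 2048, 3.45e-6           # distance [m], pixels, pixel pitch [m]

M_R = 1.0
M_G = M_R * lam_G / lam_R                # Eq. (10): about 0.8407

dX_R = lam_R * d / (M_R * N * dx)        # sampling interval, red channel
dX_G = lam_G * d / (M_G * N * dx)        # equals dX_R by construction
```

The factor λ in the numerator of ΔX is cancelled by the λ-proportional magnification, so both channels land on the same object-plane grid and can be superposed pixel by pixel.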

The proposed color-image synthesis scheme has several advantageous features. Image scaling can be performed by specifying the magnification factor. Therefore, the reconstructed image can be continuously magnified. Moreover, the lateral shift of the reconstructed image can be easily performed by specifying the origin of the shifted coordinates. These properties distinguish this approach from the conventional method.

In color DH, chromatic aberration is caused by the mismatch in size of reconstructed images of different wavelengths. To evaluate the degree of coincidence of the size of the reconstructed image between different wavelengths, we employ the two-dimensional correlation coefficient C given by

C = \frac{\sum_p \sum_q \left( I_R(p,q) - \bar{I}_R \right) \left( I_G(p,q) - \bar{I}_G \right)}{\sqrt{\sum_p \sum_q \left( I_R(p,q) - \bar{I}_R \right)^2} \sqrt{\sum_p \sum_q \left( I_G(p,q) - \bar{I}_G \right)^2}},    (11)
where I_R and I_G are the monochromatic reconstructed images for the wavelengths λ_R and λ_G, respectively, and \bar{I}_R and \bar{I}_G are the mean values of I_R and I_G, respectively.
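Equation (11) is the Pearson correlation of the two intensity images; a direct NumPy sketch (function name ours, test images synthetic):

```python
import numpy as np

def correlation_coefficient(I_R, I_G):
    """Two-dimensional correlation coefficient of Eq. (11): the Pearson
    correlation between two monochromatic reconstructed intensity images."""
    a, b = I_R - I_R.mean(), I_G - I_G.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))

# Size-matched images correlate to 1 regardless of overall brightness scale,
# so the metric isolates the geometric (size) mismatch between channels.
img = np.arange(48.0).reshape(6, 8)
c = correlation_coefficient(img, 2.0 * img + 3.0)   # -> 1.0
```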

3. EXPERIMENTS

We conducted optical experiments to verify the feasibility of the proposed method. We used the optical setup shown in Fig. 1. A He–Ne laser (632.8 nm) and a Nd–YAG laser (532 nm) were used as light sources. A phase shift was created using a reference mirror mounted on a translation stage with a stepping motor, which had a minimum motion distance of approximately 1 μm. We did not control the phase shift precisely, because the generalized phase-shifting approach was used. We used a color CCD camera (Sony XCL-5005CR; 2048 × 2048 pixels; 3.45 μm pixel pitch) equipped with a Bayer array color filter. The captured color image was saved in 12-bit Bayer raw format. Numerical processing was performed on a personal computer (Intel Core i7-3770, 3.4 GHz) in MATLAB R2015b.

First, the transmittance matrix was estimated using the uniform and contrast schemes. In the uniform scheme, an image was captured in darkness, and the pixel value was subsequently evaluated. The pixel value of the noise was approximately 241.6 regardless of the color filter. The fluctuation of the light source was negligible compared with the magnitude of the dark-current noise. The RGB pixel values obtained by subtracting the noise value were used for the computation. In the contrast scheme, the RGB pixel values were used directly. We found that (t_RR, t_RG, t_GR, t_GG, t_BR, t_BG) = (1, 0.0210, 0.0823, 1, 0.0019, 0.1174) and (1, 0.0237, 0.0868, 1, 0.0020, 0.1962) for the uniform and contrast schemes, respectively. Here, the blue component was omitted because the experiment was performed using two wavelengths only. As similar results were obtained for both schemes, we adopted the transmittance matrix estimated by the contrast scheme.

Next, we applied the proposed color-separation method to the color interference fringe generated by both the He–Ne and Nd–YAG lasers. The color interference fringe was produced by placing a mirror in place of the target in the optical setup shown in Fig. 1. For comparison, the monochromatic interference fringes generated individually by each laser were also evaluated. Figure 4 shows a part of a color interference fringe and its magnified version, where the 12-bit Bayer raw format image without demosaicing was converted to an 8-bit monochrome image for display. The results show that the color interference fringe includes two interference fringes originating from the He–Ne and Nd–YAG lasers.

Fig. 4. Color interference fringe and its magnified version: (a) color interference fringe generated by the He–Ne and Nd–YAG lasers; (b) magnified image corresponding to the central square area in (a). The images are presented in Bayer raw format.

Figure 5 shows the interference fringes in the red and green channels processed by the proposed color-separation method and the synthesized color interference fringe. The results processed by the conventional color-separation method, i.e., the demosaic function in MATLAB R2015b, are also shown for comparison. We evaluated the performance of the color-separation method by defining the distortion factor DF = v_c^2/(v_i^2 + v_c^2), where v_i and v_c are the magnitudes of the first-order Fourier spectra of the interference fringe of interest and of the crosstalk component, respectively. The DF values for the interference fringes shown in Fig. 5 and for the monochromatic interference fringe are shown in Fig. 6. In this figure, the crosstalk component in the monochromatic interference fringe was assumed to be noise existing in the region where the first-order spectrum of the crosstalk component appeared. In the conventional color-separation method, slight distortion occurred in the interference fringe, and DF was relatively large. This result indicates that crosstalk remained in the interference fringe. In the proposed method, no distortion of the interference fringe was observed, and DF became sufficiently small. Therefore, the suppression of crosstalk is considered successful.
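The distortion factor can be computed directly from the fringe spectrum. In this NumPy sketch the fringe, the two spatial frequencies, and the modulation depths are synthetic assumptions chosen so the signal and crosstalk peaks are known bins:

```python
import numpy as np

def distortion_factor(fringe, k_signal, k_crosstalk):
    """DF = vc^2 / (vi^2 + vc^2) from a 1-D fringe profile, taking vi and vc
    as the Fourier magnitudes at the known signal and crosstalk fringe
    frequencies (bin indices are assumptions of this sketch)."""
    spec = np.abs(np.fft.rfft(fringe))
    vi, vc = spec[k_signal], spec[k_crosstalk]
    return vc**2 / (vi**2 + vc**2)

x = np.arange(1024)
# Desired fringe (32 cycles) plus a weak crosstalk fringe (80 cycles):
f = 100 + 50 * np.cos(2 * np.pi * 32 * x / 1024) \
        + 5 * np.cos(2 * np.pi * 80 * x / 1024)
df = distortion_factor(f, 32, 80)   # -> about 0.01
```

A crosstalk amplitude one tenth of the signal amplitude thus yields DF ≈ 0.01, illustrating how DF scales with the squared amplitude ratio.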

Fig. 5. Interference fringes in red and green channels and synthesized color fringes for (a)–(c) conventional color-separation method and (d)–(f) proposed method (contrast scheme).

Fig. 6. Distortion factors in red (left) and green (right) channels.

We also evaluated the reconstructed image of the color interference fringe. Figure 7 shows the reconstructed images of the interference fringes depicted in Fig. 5 and the synthesized color reconstructed image, where the angular spectrum method with a propagation distance of 0.36 m was used. It is worth noting that the small intensity variation in Fig. 7 was caused by stray light. In the conventional color-separation method, a horizontal striped pattern occurred in the green channel. Consequently, significant color distortion was observed in the color reconstructed image. The influence of crosstalk was noticeable in the reconstructed image because the plane wave can be regarded as an object of uniform amplitude. In the proposed method, the reconstructed image in each channel was almost identical to that of the corresponding monochromatic interference fringe. Thus, the color reconstructed image was synthesized without color distortion.

Fig. 7. Reconstructed images in red and green channels and a synthesized color reconstructed image obtained by (a)–(c) conventional color-separation method and (d)–(f) proposed method (contrast scheme).

We then investigated the influence of the color crosstalk component alone. The color crosstalk component can be obtained by subtracting the reconstructed image of the true monochromatic interference fringe from that of the monochromatic interference fringe extracted from the color interference fringe by the color-separation method. We consider the reconstructed images normalized by their maximum intensity.

Figure 8 shows a cross-sectional view of the color crosstalk component in the vertical direction of the reconstructed image shown in Fig. 7. We evaluated the color crosstalk component using the peak-to-valley value (PV), the average, and the variance, as listed in Table 1, where PV is defined as the difference between the highest and lowest values within the measurement region.
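The three statistics can be computed in a few lines from the normalized difference image. This is a minimal sketch; `crosstalk_stats` is a hypothetical helper name, and the inputs are assumed to be nonnegative intensity images.

```python
import numpy as np

def crosstalk_stats(recon_separated, recon_mono):
    """Crosstalk component: separated-channel reconstruction minus the
    true monochromatic one, each normalized by its maximum intensity.
    Returns (peak-to-valley, average, variance) of the difference."""
    d = recon_separated / recon_separated.max() - recon_mono / recon_mono.max()
    return d.max() - d.min(), d.mean(), d.var()
```

On an 8-bit scale, a peak-to-valley value p corresponds to a pixel value of roughly 255p (for example, PV ≈ 0.204 maps to a pixel value of about 52).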

Fig. 8. Cross-sectional view of crosstalk components in red and green channels for (a) conventional color-separation method and (b) proposed method (contrast scheme).

Table 1. Peak-to-Valley, Average and Variance of Crosstalk Components in Red and Green Channels for the Conventional and Proposed Color-Separation Methods

In the conventional color-separation method, the intensity variation in the red channel was small. In the green channel, however, large periodic intensity fluctuations were observed, as shown in Fig. 8(a): the PV is more than twice that of the other components, and the variance is more than ten times greater. Assuming an 8-bit image, the PV of the color crosstalk component in the green channel corresponds to a pixel value of 52; the periodic color fluctuation in the green channel is therefore noticeable, although its contrast is low. The PV in the red channel corresponds to a pixel value of 20, but the average and the variance show that the magnitude of this crosstalk component is sufficiently small, so the color variation in the red channel is difficult to notice. Consequently, the color reconstructed image is distorted by the color crosstalk in the green channel when the conventional color-separation method is employed in this experimental system.

In the proposed color-separation method, as shown in Fig. 8(b), the color crosstalk components of both red and green channels were sufficiently small. PV and the variance are slightly lower than those of the red channel in the conventional method, as listed in Table 1. Therefore, color fluctuations may not be noticeable for the 8-bit image.

These results indicate that the proposed color-separation method sufficiently separated the color interference fringe into different monochromatic interference fringes. Moreover, we demonstrated that the proposed method effectively suppresses the color distortion caused by color-filter crosstalk.

Next, we verified the object wave extraction and color image reconstruction in the proposed framework. We used a yellow car model as the target object, which was placed 0.34 m from the digital camera. Three two-wavelength phase-shifted holograms were recorded using the He–Ne and Nd–YAG lasers. Then, each two-wavelength hologram was separated into its red and green components using the proposed color-separation method.
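The separation step can be sketched as a per-pixel inversion of the transmittance matrix in Eq. (2): the detected channel intensities are modeled as the matrix product of the transmittance matrix with the true monochromatic fringes, so applying the inverse matrix undoes the crosstalk. The matrix entries below are illustrative values, not the ones estimated in the experiment.

```python
import numpy as np

def separate_channels(I_detected, T):
    """Undo color-filter crosstalk by inverting the transmittance matrix
    of Eq. (2) at every pixel. I_detected is a (3, H, W) stack of detected
    R, G, B fringes; T is the 3x3 transmittance matrix."""
    T_inv = np.linalg.inv(T)
    return np.einsum('ij,jhw->ihw', T_inv, I_detected)
```

Round-tripping a synthetic monochromatic stack through an example matrix and its inverse recovers the original fringes exactly, which is the sense in which the crosstalk is suppressed.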

The object wave for each wavelength was extracted by the SGPSDH approach. In the red channel, the phase shifts Δϕ01, Δϕ12, and Δϕ20 were estimated to be 1.20, 0.55, and 2.15 rad, respectively; in the green channel, they were estimated to be 1.70, 0.85, and 1.52 rad, respectively. The object waves in the hologram plane were then calculated using Eq. (3).
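Given the estimated phase shifts, the object wave follows from a closed-form combination of the three holograms as in Eq. (3). The sketch below is a NumPy version under explicit conventions derived from the interference model I_n = |O|^2 + |R|^2 + 2 Re(O R* e^{-iϕ_n}); the paper's sign/index convention for Δϕ20 may differ, and a uniform reference amplitude is assumed.

```python
import numpy as np

def extract_object_wave(I0, I1, I2, dphi01, dphi12, dphi20, R_amp, phi0=0.0):
    """Recover the object wave O on the hologram plane from three
    phase-shifted holograms I0, I1, I2 (cf. Eq. (3)).

    Conventions assumed here: I_n = |O|^2 + |R|^2 + 2*Re(O R* e^{-i phi_n}),
    dphi_pq = phi_q - phi_p, dI_pq = I_p - I_q, and a uniform reference
    amplitude R_amp; with these choices dphi20 enters with a conjugated
    exponential."""
    dI01 = I0 - I1
    dI20 = I2 - I0
    num = (1 - np.exp(-1j * dphi20)) * dI01 + (1 - np.exp(1j * dphi01)) * dI20
    den = 2j * R_amp * (np.sin(dphi01) + np.sin(dphi12) + np.sin(dphi20))
    return np.exp(1j * phi0) * num / den
```

Synthesizing three holograms from a known complex object wave and feeding them back through this function recovers the object wave, provided the sum of sines in the denominator is nonzero.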

The object wave in the object plane was reconstructed by the standard SFT and by the proposed color-image reconstruction method, where the magnification was set to 3.5; the positions of the grid corner in the hologram and object planes were set to (x_0, y_0) = (0, 0) and (X_0, Y_0) = (250ΔX, 0), respectively; and the wavelength of the He–Ne laser was defined as the reference wavelength. Color-image synthesis was performed by directly superposing the reconstructed images in the red and green channels.

The reconstructed images in the red and green channels, processed by standard SFT, are shown in Figs. 9(a) and 9(b), respectively. The object wave for each wavelength was clearly reconstructed without the zeroth-order and conjugate components, which implies that object wave extraction was successfully performed using SGPSDH. The color image and its magnified version are shown in Figs. 9(c) and 9(d), respectively. A double image was obtained because of chromatic aberration.

Fig. 9. Reconstructed images without wavelength compensation: (a) He–Ne laser image, (b) Nd–YAG laser image, (c) color image obtained by the direct superposition of (a) and (b), and (d) magnified view of the color image.

Figure 10 shows the results obtained by the proposed color-image reconstruction method, where the magnification for the Nd–YAG laser was set to 2.942 (= 3.5λ_YAG/λ_HeNe), following the magnification determination strategy. We refer to this magnification as the optimum magnification M_opt in the following image evaluation. The reconstructed images for both channels are the same size as the reconstructed image shown in Fig. 9(a); because the reconstructed images for all wavelengths are the same size, the color image could be obtained straightforwardly by direct superposition, as shown in Figs. 10(c) and 10(d). Chromatic aberration had no observable influence.
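The wavelength compensation of Eq. (10) reduces to a one-line scaling of the reference magnification. In this sketch, 632.8 nm and 532 nm are the standard He–Ne and frequency-doubled Nd–YAG lines, assumed rather than quoted from the experimental section.

```python
def optimum_magnification(M_ref, wavelength, ref_wavelength):
    """Eq. (10): M_i = M * (lambda_i / lambda_R), which makes the
    reconstructed image the same physical size in every channel."""
    return M_ref * wavelength / ref_wavelength

# Standard He-Ne (632.8 nm) and frequency-doubled Nd-YAG (532 nm) lines:
M_yag = optimum_magnification(3.5, 532e-9, 632.8e-9)
print(round(M_yag, 3))  # 2.942
```

With the He–Ne channel fixed at M = 3.5, the Nd–YAG channel is reconstructed at the reduced magnification 2.942, matching the value used in the experiment.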

Fig. 10. Wavelength-compensated reconstructed images: (a) He–Ne laser image, (b) Nd–YAG laser image, (c) color image obtained by the direct superposition of (a) and (b), and (d) magnified view of the color image.

We examined the validity of the optimum magnification M_opt. First, we defined M_α = αM_opt, where α is a parameter for controlling the magnification. Next, the reconstructed images with magnification M_α were obtained for all wavelengths. Then, the correlation coefficient was calculated using Eq. (11). For the standard SFT (Fig. 9), the correlation coefficient was 0.295; for the proposed color-image reconstruction method (Fig. 10), it was 0.481.
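Equation (11) is a zero-mean normalized cross-correlation between the red- and green-channel intensities, which can be sketched directly in NumPy:

```python
import numpy as np

def correlation_coefficient(IR, IG):
    """Eq. (11): zero-mean normalized cross-correlation between the
    red- and green-channel reconstructed intensities (1 = identical
    up to an affine intensity change)."""
    dR = IR - IR.mean()
    dG = IG - IG.mean()
    return (dR * dG).sum() / np.sqrt((dR**2).sum() * (dG**2).sum())
```

Because the coefficient is invariant to brightness offset and gain, it isolates geometric agreement between the two channels, which is why it is a suitable score for detecting residual chromatic magnification error.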

Figure 11 shows the correlation coefficient as a function of α, where α was varied from 0.8 to 1.2 in increments of 0.01. Note that the correlation coefficient is less than unity because speckle noise is embedded in the reconstructed images. When α = 1, i.e., when the optimum magnification is used, the correlation coefficient takes its maximum value; when the magnification deviated from the optimum value, the correlation coefficient decreased sharply, even when the deviation was only 1%. This result indicates that the optimum magnification is the appropriate choice for obtaining color reconstructed images without chromatic aberration. It should be noted that the influence of speckle noise is much greater than that of the color distortion caused by color-filter crosstalk, so it is difficult to evaluate color distortion in reconstructed images containing speckle noise.

Fig. 11. Correlation coefficient between the reconstructed images in red and green channels.

Thus, we have demonstrated that the proposed method enables the extraction of the object wave for each wavelength without unwanted components, as well as the synthesis of the correct color image.

Other color-image reconstruction methods can be used within the proposed framework. We therefore created and verified color reconstructed images using the zero-padding method and the adjustable magnification method under the same experimental conditions, and obtained color reconstructed images similar to those acquired with the proposed method. The computation times were approximately 11.1, 3.74, and 5.63 s for the zero-padding method, the adjustable magnification method, and the proposed method, respectively. In the zero-padding method, the numbers of padded pixels for the He–Ne and Nd–YAG lasers were 5120 and 3978, respectively; this padding demands large memory resources, which likely explains the longer computation time. The calculation time of the adjustable magnification method was the shortest, but for each of the He–Ne and Nd–YAG lasers it was necessary to design additional complex amplitudes, such as a virtual spherical wave and a linear phase component for the lateral shift. The calculation time of the proposed method was approximately 1.5 times that of the adjustable magnification method; this ratio equals the ratio of the number of FFT calculations used in each method (the convolution and fast SFT algorithms involve two and three FFT calculations, respectively). Note that in the proposed method the image reconstruction is performed by directly specifying the magnification and lateral shift amount, which enables easy adjustment of the position of the reconstructed image.

4. CONCLUSIONS

We proposed a color-image reconstruction method for two-wavelength PSDH using the statistical generalized phase-shifting approach. The proposed method can separate the color interference fringe detected by a color digital camera with a Bayer array color filter into three primary color interference fringes without color crosstalk. Uniform and contrast schemes were presented for estimating the transmittance matrix by following a linear relation strategy. In the proposed method, the object wave for each wavelength can be extracted correctly by SGPSDH because the phase shift can be estimated without any wavelength dependence. A color reconstructed image can be synthesized by direct superposition of the monochromatic reconstructed images for all wavelengths, which are obtained by the wavelength-compensated image reconstruction method based on the SFT. A series of experiments demonstrated the validity of the proposed method, which yields color reconstructed images without chromatic aberration. In this paper, we presented the results of experiments conducted with a two-wavelength optical system using He–Ne and Nd–YAG lasers; however, the proposed method can also be implemented in three-wavelength systems. Therefore, it is possible to obtain full-color reconstructed images by adding a blue laser to the optical system.

Funding

Japan Society for the Promotion of Science (JSPS) (25420400).

REFERENCES

1. J. W. Goodman and R. W. Lawrence, "Digital image formation from electronically detected holograms," Appl. Phys. Lett. 11, 77–79 (1967).

2. U. Schnars and W. Jüptner, "Direct recording of holograms by a CCD target and numerical reconstruction," Appl. Opt. 33, 179–181 (1994).

3. C. Wagner, S. Seebacher, W. Osten, and W. Jüptner, "Digital recording and numerical reconstruction of lensless Fourier holograms in optical metrology," Appl. Opt. 38, 4812–4820 (1999).

4. E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. 38, 6994–7001 (1999).

5. B. Kemper and G. von Bally, "Digital holographic microscopy for live cell applications and technical inspection," Appl. Opt. 47, A52–A61 (2008).

6. T. Nomura, B. Javidi, S. Murata, E. Nitanai, and T. Numata, "Polarization imaging of a 3D object by use of on-axis phase-shifting digital holography," Opt. Lett. 32, 481–483 (2007).

7. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22, 1268–1270 (1997).

8. H. Schreiber and J. H. Bruning, "Phase shifting interferometry," in Optical Shop Testing, D. Malacara, ed. (Wiley, 2007), pp. 547–666.

9. I. Yamaguchi, T. Matsumura, and J. Kato, "Phase-shifting color digital holography," Opt. Lett. 27, 1108–1110 (2002).

10. J. Kato, I. Yamaguchi, and T. Matsumura, "Multicolor digital holography with an achromatic phase shifter," Opt. Lett. 27, 1403–1405 (2002).

11. N. Demoli, D. Vukicevic, and M. Torzynski, "Dynamic digital holographic interferometry with three wavelengths," Opt. Express 11, 767–774 (2003).

12. J. Kühn, T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge, "Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition," Opt. Express 15, 7231–7242 (2007).

13. P. Tankam, P. Picart, D. Mounier, J. M. Desse, and J. C. Li, "Method of digital holographic recording and reconstruction using a stacked color image sensor," Appl. Opt. 49, 320–328 (2010).

14. P. Ferraro, S. De Nicola, G. Coppola, A. Finizio, D. Alfieri, and G. Pierattini, "Controlling image size as a function of distance and wavelength in Fresnel-transform reconstruction of digital holograms," Opt. Lett. 29, 854–856 (2004).

15. J. Zhao, H. Jiang, and J. Di, "Recording and reconstruction of a color holographic image by using digital lensless Fourier transform holography," Opt. Express 16, 2514–2519 (2008).

16. B. Javidi, P. Ferraro, S.-H. Hong, S. De Nicola, A. Finizio, D. Alfieri, and G. Pierattini, "Three-dimensional image fusion by use of multiwavelength digital holography," Opt. Lett. 30, 144–146 (2005).

17. D. Alfieri, G. Coppola, S. De Nicola, P. Ferraro, A. Finizio, G. Pierattini, and B. Javidi, "Method for superposing reconstructed images from digital holograms of the same object recorded at different distance and wavelength," Opt. Commun. 260, 113–116 (2006).

18. S. De Nicola, A. Finizio, G. Pierattini, D. Alfieri, S. Grilli, L. Sansone, and P. Ferraro, "Recovering correct phase information in multiwavelength digital holographic microscopy by compensation for chromatic aberrations," Opt. Lett. 30, 2706–2708 (2005).

19. H. Jiang, J. Zhao, and J. Di, "Digital color holographic recording and reconstruction using synthetic aperture and multiple reference waves," Opt. Commun. 285, 3046–3049 (2012).

20. F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, "Algorithm for reconstruction of digital holograms with adjustable magnification," Opt. Lett. 29, 1668–1670 (2004).

21. P. Picart, P. Tankam, D. Mounier, Z. Peng, and J. Li, "Spatial bandwidth extended reconstruction for digital color Fresnel holograms," Opt. Express 17, 9145–9156 (2009).

22. P. Picart and P. Tankam, "Analysis and adaptation of convolution algorithms to reconstruct extended objects in digital holography," Appl. Opt. 52, A240–A253 (2013).

23. M. Leclercq and P. Picart, "Method for chromatic error compensation in digital color holographic imaging," Opt. Express 21, 26456–26467 (2013).

24. N. Yoshikawa, "Phase determination method in statistical generalized phase-shifting digital holography," Appl. Opt. 52, 1947–1953 (2013).

25. N. Yoshikawa and K. Kajihara, "Statistical generalized phase-shifting digital holography with a continuous fringe-scanning scheme," Opt. Lett. 40, 3149–3152 (2015).

26. N. Yoshikawa, T. Shiratori, and K. Kajihara, "Robust phase-shift estimation method for statistical generalized phase-shifting digital holography," Opt. Express 22, 14155–14165 (2014).

27. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, "Shifted Fresnel diffraction for computational holography," Opt. Express 15, 5631–5640 (2007).

28. D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Trans. Pattern Anal. Mach. Intell. 20, 470–480 (1998).

29. P. S. Huang, Q. Hu, F. Jin, and F.-P. Chiang, "Color-encoded digital fringe projection technique for high-speed three-dimensional surface contouring," Opt. Eng. 38, 1065–1071 (1999).

30. L. Z. Cai, Q. Liu, and X. L. Yang, "Phase-shift extraction and wave-front reconstruction in phase-shifting interferometry with arbitrary phase steps," Opt. Lett. 28, 1808–1810 (2003).

31. L. Z. Cai, Q. Liu, and X. L. Yang, "Generalized phase-shifting interferometry with arbitrary unknown phase steps for diffraction objects," Opt. Lett. 29, 183–185 (2004).


Figures (11)

Fig. 1. Optical setup for two-wavelength PSDH: beam splitter (BS), mirror (M), moving mirror (MM), and object (OBJ). The inset shows the color object.

Fig. 2. Flowchart of proposed method using two-wavelength PSDH.

Fig. 3. Wavelength-compensated image reconstruction based on SFT: (a) M_R = M_G = M; (b) M_R = M, M_G = M(λ_G/λ_R).

Fig. 4. Color interference fringe and its magnified version: (a) color interference fringe generated by the He–Ne and Nd–YAG lasers; (b) magnified image corresponding to the central square area in (a). The images are presented in Bayer raw format.

Fig. 5. Interference fringes in red and green channels and synthesized color fringes for (a)–(c) conventional color-separation method and (d)–(f) proposed method (contrast scheme).

Fig. 6. Distortion factors in red (left) and green (right) channels.

Fig. 7. Reconstructed images in red and green channels and a synthesized color reconstructed image obtained by (a)–(c) conventional color-separation method and (d)–(f) proposed method (contrast scheme).

Fig. 8. Cross-sectional view of crosstalk components in red and green channels for (a) conventional color-separation method and (b) proposed method (contrast scheme).

Fig. 9. Reconstructed images without wavelength compensation: (a) He–Ne laser image, (b) Nd–YAG laser image, (c) color image obtained by the direct superposition of (a) and (b), and (d) magnified view of the color image.

Fig. 10. Wavelength-compensated reconstructed images: (a) He–Ne laser image, (b) Nd–YAG laser image, (c) color image obtained by the direct superposition of (a) and (b), and (d) magnified view of the color image.

Fig. 11. Correlation coefficient between the reconstructed images in red and green channels.

Tables (1)

Table 1. Peak-to-Valley, Average and Variance of Crosstalk Components in Red and Green Channels for the Conventional and Proposed Color-Separation Methods

Equations (11)

$$I=\sum_{n=1}^{N} I_{n}=\sum_{n=1}^{N}\left(|O_{n}|^{2}+|R_{n}|^{2}+O_{n}R_{n}^{*}+O_{n}^{*}R_{n}\right),\tag{1}$$

$$\begin{pmatrix} I_{R} \\ I_{G} \\ I_{B} \end{pmatrix}=\begin{pmatrix} t_{RR} & t_{RG} & t_{RB} \\ t_{GR} & t_{GG} & t_{GB} \\ t_{BR} & t_{BG} & t_{BB} \end{pmatrix}\begin{pmatrix} I_{R} \\ I_{G} \\ I_{B} \end{pmatrix},\tag{2}$$

$$O=\frac{e^{i\phi_{0}}\left\{(1-e^{i\Delta\phi_{20}})\,\Delta I_{01}+(1-e^{i\Delta\phi_{01}})\,\Delta I_{20}\right\}}{2i|R|\,(\sin\Delta\phi_{01}+\sin\Delta\phi_{12}+\sin\Delta\phi_{20})},\tag{3}$$

$$\Delta\phi_{pq}=\arccos\left\{1-\kappa\,|\Delta I_{pq}|^{2}\right\},\tag{4}$$

$$f(\kappa)=c_{01}\,\Delta\phi_{01}+c_{12}\,\Delta\phi_{12}+c_{20}\,\Delta\phi_{20},\tag{5}$$

$$\Delta\phi_{pq}=\hat{c}_{pq}\arccos\left\{1-\kappa_{0}\,|\Delta I_{pq}|^{2}\right\},\tag{6}$$

$$U(p,q)=C(p,q)\sum_{r=0}^{N-1}\sum_{s=0}^{N-1}\tilde{u}(r,s)\exp\left[i2\pi(m_{x}pr+m_{y}qs)\right],\tag{7}$$

$$C(p,q)=\frac{\exp(ikd)}{i\lambda d}\exp\left[\frac{i\pi}{\lambda d}(X_{p}^{2}+Y_{q}^{2})\right]\exp\left[\frac{i2\pi}{\lambda d}(px_{0}\Delta X+qy_{0}\Delta Y)\right],\tag{8}$$

$$\tilde{u}(r,s)=u(r,s)\exp\left[\frac{i\pi}{\lambda d}(x_{r}^{2}+y_{s}^{2})\right]\exp\left[\frac{i2\pi}{\lambda d}(x_{r}X_{0}+y_{s}Y_{0})\right],\tag{9}$$

$$M_{i}=M\,\frac{\lambda_{i}}{\lambda_{R}},\tag{10}$$

$$C=\frac{\sum_{p}\sum_{q}(I_{R}(p,q)-\bar{I}_{R})(I_{G}(p,q)-\bar{I}_{G})}{\sqrt{\sum_{p}\sum_{q}(I_{R}(p,q)-\bar{I}_{R})^{2}}\,\sqrt{\sum_{p}\sum_{q}(I_{G}(p,q)-\bar{I}_{G})^{2}}},\tag{11}$$