Optica Publishing Group

Method for chromatic error compensation in digital color holographic imaging

Open Access

Abstract

This paper proposes an all-numerical robust method to compensate for the chromatic aberrations induced by the optical elements in digital color holographic imaging. It combines a zero-padding algorithm and a convolution approach with adjustable magnification, using a single recording of a reference rectangular grid. Experimental results confirm and validate the proposed approach.

© 2013 Optical Society of America

1. Introduction

An increasing number of applications rely on digital color holography to record and reconstruct colored objects with high precision using a simple optical set-up [1–8]. Different wavelengths can be used to record colored objects, but because of chromatic aberrations, a classical optical system will not only image the same object at different planes: the images will also have different sizes depending on the wavelength. This leads to severe chromatic errors [9]. Furthermore, in a simultaneous 3-color holographic detection scheme [10], the reference beams cannot be perfectly aligned, which results in a lateral shift between the different images. In the usual Fresnel configuration, the Shannon conditions force the recording distance to be increased so far that the study of large objects is ruled out, which considerably reduces the range of applications. In 1996, Schnars et al. proposed using a negative lens to reduce both the spatial frequency spectrum of the object and the recording distance [11]. Unfortunately, the negative lens introduces aberrations that modify both the sensor-to-object distance and the virtual object size for each wavelength. Telecentric imaging systems are an alternative possibility for imaging extended objects. In Digital Holographic Microscopy (DHM), a microscope objective is used to image the object onto or near the sensor, and this can also lead to aberrations. Several methods have been proposed in the past years to correct optical aberrations in digital holography. In 2000, Stadelmaier et al. proposed a technique using a quadratic phase term to remove the spherical aberration [12]. In 2006, Colomb et al. extended this approach and proposed a method for full compensation of geometrical aberrations using a numerical parametric lens, whose parameters were based on a polynomial decomposition [13].
For chromatic aberrations, De Nicola et al. proposed in 2005 a method to equalize the pixel pitch in reconstructed color images, in order to retrieve correctly superposed phase images. This method used a zero-padding approach that depends on the wavelengths used in the experiment [14,15]. However, the literature offers no all-numerical and simple method to fully compensate for the chromatic aberrations in digital color holography, so there is a need for an efficient empirical evaluation of the aberration parameters. In addition, this paper demonstrates that the origins of chromatic aberration are multiple and require a full compensation. The opportunity provided by digital color holography is that chromatic aberrations can be corrected in a purely numerical way. This paper therefore proposes an all-numerical, robust and simple method to compensate for chromatic aberrations in a digital 3-color holographic scheme.

This paper is organized as follows: Section 2 presents the theoretical background of digital holography; Section 3 details the different origins of chromatic aberrations and Section 4 describes a robust method to compensate for these chromatic aberrations. Finally, Section 5 shows an experimental validation of the proposed method and Section 6 draws conclusions.

2. Theoretical background

2.1 Digital color holography scheme

Figure 1 describes the basic scheme for a digital color holographic set-up. Three lasers with wavelengths λ in the red, green and blue domains [10] are used to illuminate the object. The three color beams are combined using dichroic plates, and then the reference and object beams are separated. The object wave illuminates the useful area with a unique or several dissociated illumination directions [10]. The reference beam is expanded and spatially filtered to produce an inclined reference plane wave (off-axis holography) that is combined with the diffracted object wave, using a beam splitter. The sensor records the three colors simultaneously, each image having M × N pixels of size px × py, providing real-time capabilities to the holographic set-up. For the study of large objects, a negative lens is placed just in front of the beam splitter, in the object optical path. Thus, a virtual object is produced in front of the sensor at a smaller distance than the initial one, and this enables the Shannon conditions to be fulfilled [11]. The use of telecentric imaging systems is also an alternative possibility for imaging extended objects. Note that in DHM, the scheme is quite similar except that the microscope objective is used to image the object onto or near the sensor [13]. Although the proposed method is discussed for a pure digital holographic configuration, it can also be applied to DHM or to telecentric imaging systems.


Fig. 1 Basic scheme for digital color holography using 3 wavelengths


2.2 Numerical reconstruction

In digital holography, the complex amplitude Ar of the reconstructed field at a plane (x,y) located at a distance d from the sensor, retrieved from a recorded hologram H, is written according to Eq. (1):

$$A_r(x,y,d)=\frac{id}{\lambda_c}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} H(X,Y)\,w(X,Y,\lambda_c,R_c)\times\frac{\exp\!\left[\frac{2i\pi}{\lambda_c}\sqrt{d^2+(X-x)^2+(Y-y)^2}\right]}{d^2+(X-x)^2+(Y-y)^2}\,\mathrm{d}X\,\mathrm{d}Y \tag{1}$$
In Eq. (1), w(X,Y,λc,Rc) is a spherical reconstruction wave, whose parameters are its wavelength λc and its curvature radius Rc [3].

In most cases, the reconstruction wave w is plane (i.e. w = 1), and the reconstruction wavelength equals the recording one (λc = λ). The numerical focusing on the virtual or real image is obtained at a distance d = ± d0 (d0: distance from the object to the sensor). There are several approaches to the numerical reconstruction of images in digital holography: the Fresnel transform [11], the convolution [16], the convolution with adjustable magnification [17], the Fourier transform [18], the double Fresnel transform [19], or also the Fresnel-Bluestein transform [20].

The next subsections give a few details on the first three approaches, which are adopted in the proposed method to correct the chromatic aberrations.

2.2.1 The Fresnel transform approach

Equation (1) can be expressed as a discrete Fresnel transform, for λc = λ and w = 1:

$$A_r(x,y,d)=\frac{i\exp(2i\pi d/\lambda)}{\lambda d}\exp\!\left[\frac{i\pi}{\lambda d}\left(x^2+y^2\right)\right]\times\sum_{k=-K/2}^{K/2-1}\sum_{l=-L/2}^{L/2-1} H(lp_x,kp_y)\exp\!\left[\frac{i\pi}{\lambda d}\left(l^2p_x^2+k^2p_y^2\right)\right]\exp\!\left[\frac{2i\pi}{\lambda d}\left(lp_x x+kp_y y\right)\right] \tag{2}$$
The algorithm uses (K,L) data points, generally (K,L) ≥ (M,N). The pixel pitches of the reconstructed image in the x and y directions are given respectively by Δηx = λd/Lpx and Δηy = λd/Kpy, and depend on the wavelength. This may be a problem in digital color holography, as the reconstructed horizon (i.e. the calculated area) becomes different for each wavelength. The zero-padding approach [15], which consists in artificially enlarging the hologram by filling the surrounding area with zeros until the required size is reached, is a way of maintaining the same pixel pitch for every wavelength.
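The discrete Fresnel transform with zero-padding can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: a square M × M hologram is assumed, the function name and layout are mine, and the returned pitch simply evaluates λd/(Kpx), showing why padding each color to its own K equalizes the pitch.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, px, K):
    """Discrete Fresnel transform (Eq. (2)) of a square M x M hologram,
    zero-padded to K x K samples before the FFT. The reconstructed
    pixel pitch is lambda*d/(K*px)."""
    M = hologram.shape[0]
    m = np.arange(M) - M // 2
    chirp = np.exp(1j * np.pi / (wavelength * d) * (m * px) ** 2)
    # quadratic phase applied to the data, then zero-padding around it
    padded = np.zeros((K, K), dtype=complex)
    lo = (K - M) // 2
    padded[lo:lo + M, lo:lo + M] = hologram * np.outer(chirp, chirp)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded)))
    pitch = wavelength * d / (K * px)  # reconstructed pixel pitch
    return field, pitch
```

Since the pitch scales as λd/K, two wavelengths reconstructed with K values proportional to λd end up with identical pitches, which is exactly what the modified zero-padding algorithm of Section 4.1 enforces with even integers.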

2.2.2 The convolution approach

Equation (1) can also be expressed as a convolution equation, using two Fourier transforms (FT):

$$A_r=\mathrm{FT}^{-1}\!\left[\mathrm{FT}\left[H\times w\right]\times G\right] \tag{3}$$
where G is the angular spectrum transfer function, and is given by (λc = λ):
$$G(u,v,d)=\begin{cases}\exp\!\left[\dfrac{2i\pi d}{\lambda}\sqrt{1-\lambda^2\left(u-u_0^{\lambda}\right)^2-\lambda^2\left(v-v_0^{\lambda}\right)^2}\right] & \text{if } \left|u-u_0^{\lambda}\right|\leq \dfrac{Lp_x}{2\lambda d} \text{ and } \left|v-v_0^{\lambda}\right|\leq \dfrac{Kp_y}{2\lambda d}\\[1ex] 0 & \text{elsewhere}\end{cases} \tag{4}$$
where (u0λ, v0λ) are the mean spatial frequencies localizing the reconstructed object at wavelength λ [17,21,22].

The pixel pitch in the reconstructed plane is that of the sensor. With w = 1, this algorithm is basically not suitable for the reconstruction of large objects [21]. However, with w ≠ 1, one gets an adjustable magnification: if the convolution approach is used with a spherical reconstruction wave instead of a plane one, the reconstruction plane is located at a distance given by 1/dr = 1/Rc − 1/d0, and the transverse magnification γ of the object becomes [3,17,21]:

$$\gamma=\frac{d_r}{d_0} \tag{5}$$
This property can be quite useful to correct the chromatic aberrations, as the size of the object plays an important part in the quality of the reconstruction.
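The plane-wave case of the convolution approach (Eqs. (3)–(4)) reduces to two FFTs and a transfer function. The sketch below, an illustration of mine rather than the authors' implementation, assumes w = 1 and a centered spectrum (u0, v0) = (0, 0); the spherical-wave variant with adjustable magnification additionally rescales the reconstruction distance as in Eq. (5).

```python
import numpy as np

def angular_spectrum(hologram, wavelength, d, pitch):
    """Convolution reconstruction (Eqs. (3)-(4)) with w = 1 and
    (u0, v0) = (0, 0): FFT, multiply by G(u, v, d), inverse FFT.
    The output keeps the sensor pixel pitch."""
    K, L = hologram.shape
    u = np.fft.fftfreq(L, pitch)  # spatial frequencies (1/m)
    v = np.fft.fftfreq(K, pitch)
    U, V = np.meshgrid(u, v)
    arg = 1.0 - (wavelength * U) ** 2 - (wavelength * V) ** 2
    # evanescent components (arg <= 0) are suppressed
    G = np.where(arg > 0,
                 np.exp(2j * np.pi * d / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(hologram) * G)
```

A quick sanity check is that propagating by d and then by −d returns the original field, since G(d)·G(−d) = 1 within the propagating band.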

2.2.3 Case of extended objects

The Shannon conditions impose a general rule [21,22]: the size of the object must be smaller than the spatial window of the reconstructed image. In the Fresnel approach, the spatial window of the reconstructed field is λd/px, and is thus proportional to the reconstruction distance. So the size of the set-up is directly proportional to the size of the object. For example, to study a 15 × 15 cm2 object at a wavelength of 532 nm with a pixel pitch of 6.45 μm, the distance must be at least 4.849 m [22].
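The quoted 4.849 m can be reproduced with a one-line calculation. The factor 8/3 below is the usual off-axis order-separation criterion applied to the Fresnel window λd/px; that this is the exact rule used in [22] is an assumption on my part, but it matches the quoted figure.

```python
# Minimum recording distance for a 15 cm object (assumed 8/3 off-axis rule)
lam = 532e-9   # wavelength (m)
px = 6.45e-6   # sensor pixel pitch (m)
A = 0.15       # object size (m)

d_min = (8.0 / 3.0) * A * px / lam  # window lam*d/px >= (8/3)*A
print(f"{d_min:.3f} m")
```

The result, about 4.85 m, shows why a lens is needed to bring the set-up down to a practical size.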

In order to respect the Shannon criteria with an acceptable set-up size, and to maintain an acceptable illumination (a 10 m long set-up would require a powerful laser), the use of a negative lens was introduced by Schnars et al. [11], then generalized to multiple lenses by Mundt et al. in 2010 [23]. The negative optical system provides a virtual image of the object much closer to the sensor; for example, a 10 × 10 cm2 object can be located 1.1 m from the sensor, with its virtual image at less than 30 cm, while still respecting the Shannon criteria.

The drawback of reducing the spatial frequency spectrum using a lens assembly is the chromatism induced by the lenses, especially if the system is not achromatic for the considered wavelengths. Section 3 discusses the origin of the chromatic aberrations induced in a color holography set-up.

3. Origin of chromatic aberrations

Figure 2 presents two parts of a basic set-up used to record large objects using digital color holography.


Fig. 2 Origins of chromatism in a digital color holographic set-up, (a) the negative lens in front of the sensor brings a shift in the positions and sizes of the virtual images, (b) a small misalignment of the three color-beams in the reference wave brings a spatial shift in the reconstruction of the images


Figure 2(a) considers the object beam, diffracted from the object and propagating towards the sensor through the lens. The object beam passes through the negative lens, creating a virtual image of the object whose position depends on the wavelength of the illumination. If the position p'R of the red image is taken as the reference, there is a shift in the position of the other images (distance Δd, see Fig. 2(a)), which is given in Eq. (6), where f' is the focal length of the lens for the red light and ν is the Abbe number [24]:

$$\Delta d=\frac{p'^{\,2}_R}{\nu\times f'} \tag{6}$$
As seen in Section 2.2, the reconstruction distance plays a key role in the reconstruction of digital holograms, so it is mandatory to correct the axial shifts of the different images. These axial shifts also induce a size difference Δy' between the red image and the other images, depending on the wavelength. This is expressed in Eq. (7), where y'R is the size of the red image, and where γopt and Δγopt are respectively the optical magnification (provided by the lens or imaging system) and its variation [24]:
$$\Delta y'=y'_R\left(\frac{\Delta\gamma_{opt}}{\gamma_{opt}}-\frac{\Delta d}{p'_R}\right) \tag{7}$$
An important parameter in Eq. (7) is the variation of the magnification Δγopt that must be experimentally measured.
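An order-of-magnitude evaluation of Eq. (6) shows why the axial shift cannot be ignored. The focal length below is the one used in the experiment of Section 5; the Abbe number and the red image distance are assumed values for illustration only.

```python
# Axial chromatic shift, Eq. (6). Only f' comes from the paper; the
# Abbe number and image distance p'_R are assumptions for illustration.
f_prime = -0.250  # focal length of the negative lens for red light (m)
nu = 35.0         # Abbe number of the lens glass (assumed)
p_R = -0.280      # red virtual-image distance p'_R (m, assumed)

delta_d = p_R ** 2 / (nu * f_prime)  # axial shift between color images (m)
```

With these values the shift is close to 9 mm, which is far from negligible compared with virtual-image distances of a few tens of centimeters.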

Figure 2(b) considers the reference beam, and particularly the alignment of the three color beams. In the reconstruction process, the spatial frequencies u0 and v0 for the three reference waves in the x and y directions are respectively given by

$$\begin{cases}u_0^{\lambda}=\dfrac{\sin\theta_x^{\lambda}}{\lambda}\\[1ex] v_0^{\lambda}=\dfrac{\sin\theta_y^{\lambda}}{\lambda}\end{cases} \tag{8}$$
where θx,yλ are the incidence angles at the sensor plane. A small misalignment in the propagation direction of the three reference waves is unavoidable, and it induces a lateral shift between the three reconstructed images. Experience shows that a misalignment of only a few milliradians leads to a spatial shift of approximately 10 pixels at the end of the reconstruction process.
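The "~10 pixels" figure can be checked with a crude model: a tilt δθ of the reference wave shifts the reconstructed image by d·δθ, which is then divided by the reconstructed pixel pitch λd/(K·px). Every number below (distance, sampling, misalignment) is an assumed geometry for illustration, not the paper's exact configuration.

```python
# Order-of-magnitude estimate of the misalignment-induced pixel shift
lam = 532e-9  # wavelength (m)
d = 0.30      # reconstruction distance (m, assumed)
K = 1024      # samples in the Fresnel reconstruction (assumed)
px = 6.45e-6  # sensor pixel pitch (m)
dtheta = 1e-3 # reference-beam misalignment (rad, assumed)

pitch = lam * d / (K * px)          # reconstructed pixel pitch (m)
shift_pixels = d * dtheta / pitch   # image shift d*dtheta, in pixels
```

Under these assumptions a 1 mrad tilt already produces a shift of roughly a dozen pixels, consistent in order of magnitude with the observation above.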

These chromatic aberrations cannot be easily avoided, as they are inherent to every digital color holography set-up, and there is a need for an efficient empirical evaluation of the aberration parameters. In the next section, an all-numerical method is proposed for a full compensation of the chromatic error in digital color holography.

4. Total compensation of chromatic aberrations in digital color holography

The proposed method is based on a fully numerical process and combines a modified version of the zero-padding algorithm with the adjustable magnification approach. The test object is a rectangular reference grid with good contrast, but any object containing parallel lines is also acceptable.

4.1 Modified zero-padding algorithm

The first step is to reconstruct, with the Fresnel transform, the three color images with the same pixel pitch, using the zero-padding algorithm at the different distances given by the chromatic aberration. This step is necessary to compare the sizes and positions of the monochromatic images in the same set of reference axes. The required condition is written in Eq. (9) for the x-direction (a similar relation holds for the y-direction) [25,26]:

$$\Delta\eta(\lambda)=\frac{\lambda\, d_r^{\lambda}}{K_{\lambda}\,p_x}=\text{constant} \tag{9}$$
In Eq. (9), Δη is the pixel pitch depending on the wavelength λ, drλ is the numerical reconstruction distance used in the algorithm, px is the pixel pitch in the x-direction and Kλ is the number of pixels of the discrete Fresnel transform. The zero-padding algorithm modifies the number of pixels Kλ by adding rows and columns of zeros around the matrix formed by the image to be reconstructed, in order to fulfill Eq. (9). This principle can be modified to make the invariance of the pixel pitch as accurate as possible. This adaptation also implies slightly changing the reconstruction distance for every wavelength, and finding a pair of even integers fulfilling this equation:
$$K_{\lambda_1}=\frac{\lambda_1 d_r^{\lambda_1}}{\lambda_2 d_r^{\lambda_2}}K_{\lambda_2},\quad K_{\lambda}\in\mathbb{N} \tag{10}$$
To keep these changes very small, constraints are added to the condition of Eq. (10). These constraints, summarized in Eq. (11), ensure working within the best achievable spatial resolution [26].
$$\begin{cases}\dfrac{d_r^{\lambda_1}}{d_r^{\lambda_2}}=\dfrac{d_0^{\lambda_1}}{d_0^{\lambda_2}}\\[1ex] d_r^{\lambda_1}+d_r^{\lambda_2}=d_0^{\lambda_1}+d_0^{\lambda_2}\end{cases} \tag{11}$$
where d0λ is the physical image distance due to the lens aberration. Equation (10) can now be rewritten as:
$$K_{\lambda_1}=\frac{\lambda_1 d_0^{\lambda_1}}{\lambda_2 d_0^{\lambda_2}}K_{\lambda_2},\quad K_{\lambda}\in\mathbb{N} \tag{12}$$
There is little chance that a pair of even integers satisfies Eq. (12) exactly. So, the even integer pair (K*λ1, K*λ2) closest to an exact solution is chosen. The reconstruction distances are then adjusted and calculated according to:
$$\begin{cases}d_r^{\lambda_1}=\dfrac{d_0^{\lambda_1}+d_0^{\lambda_2}}{1+\dfrac{K_{\lambda_2}^{*}\lambda_1}{K_{\lambda_1}^{*}\lambda_2}}\\[2ex] d_r^{\lambda_2}=d_0^{\lambda_1}+d_0^{\lambda_2}-d_r^{\lambda_1}\end{cases} \tag{13}$$
If more than two wavelengths are used, their associated numbers of pixels and reconstruction distances are found using conditions similar to those detailed in Eq. (12). For example, for a third wavelength:

$$\begin{cases}K_{\lambda_3}^{*}=\dfrac{\lambda_3 d_0^{\lambda_3}}{\lambda_2 d_0^{\lambda_2}}K_{\lambda_2}^{*}\\[1ex] d_r^{\lambda_3}=\dfrac{\lambda_2 K_{\lambda_3}^{*}}{\lambda_3 K_{\lambda_2}^{*}}\,d_r^{\lambda_2}\end{cases} \tag{14}$$
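Steps (12)–(13) reduce to a short routine: round the exact K to the nearest even integer, then rebalance the two distances. The sketch below is my own illustration (function name and example distances are assumed, not taken from Table 1); a useful property of Eq. (13) is that the pitch equality λ1·dr1/K1 = λ2·dr2/K2 then holds exactly, whatever the rounding error on K1.

```python
def padding_parameters(lam1, lam2, d01, d02, K2):
    """Eqs. (12)-(13): pick the even integer K1 closest to the exact
    solution, then adjust both reconstruction distances so the pixel
    pitch lambda*dr/(K*px) is identical for the two wavelengths."""
    K1 = 2 * int(round(lam1 * d01 / (lam2 * d02) * K2 / 2))  # nearest even
    dr1 = (d01 + d02) / (1.0 + K2 * lam1 / (K1 * lam2))      # Eq. (13)
    dr2 = d01 + d02 - dr1
    return K1, dr1, dr2

# hypothetical red/blue image distances, for illustration only
K1, dr1, dr2 = padding_parameters(660e-9, 457e-9, 0.302, 0.300, 2048)
```

Dividing the first line of Eq. (13) through shows dr2 = dr1·K2·λ1/(K1·λ2), i.e. the two pitches are equal by construction while the sum constraint of Eq. (11) is also preserved.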

4.2 Correction parameters

The images reconstructed using the modified zero-padding algorithm now share the same physical reconstruction horizon (i.e. the reconstructed area no longer depends on λ). However, they are slightly shifted laterally and exhibit different sizes. The lateral shift can be estimated by measuring the center of the grid for each wavelength; it is compensated at the next step by the convolution algorithm. The size difference is more difficult to evaluate with good precision. One could simply measure the size of the grid, but the precision would be one pixel at best, which is not sufficient. A better precision is obtained with the Hough transform [27]. In a set of polar coordinates, the Hough transform considers every possible line passing through each pixel of the image, a straight line being parameterized by the angle of its normal (θ) and its algebraic distance from the origin (ρ). If a line actually exists in the image, it appears in the Hough transform as a clear intersection point whose polar coordinates give the angle and position of the line. The Hough transform thus gives access to the equation of any straight line in the image, from which an average value of the distances between consecutive vertical and horizontal lines of the grid is obtained, providing a measurement of the grid period. The blue image is taken as the reference to correct the red and green images. The ratio Γ(λ) of the estimated periods of the red and green images to that of the blue one determines the transverse magnification to apply to the green and red images so that the red, green and blue images have the same size.
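The period measurement can be sketched with a minimal Hough accumulator: each foreground pixel votes for ρ = x·cosθ + y·sinθ, lines show up as peaks in the accumulator, and the mean peak spacing is the grid period. This is a toy illustration of the principle on a synthetic binary grid, not the authors' implementation; the function name and peak threshold are my own choices.

```python
import numpy as np

def grid_period(img, theta):
    """Minimal Hough transform at a fixed angle theta: foreground pixels
    vote for rho = x*cos(theta) + y*sin(theta); line positions appear as
    accumulator peaks, and their mean spacing is the grid period."""
    ys, xs = np.nonzero(img)
    rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
    acc = np.bincount(rho - rho.min())               # vote accumulator
    peaks = np.nonzero(acc >= acc.max() // 2)[0] + rho.min()
    return float(np.mean(np.diff(peaks)))            # mean line spacing

# synthetic binary grid image: vertical lines every 12 pixels
img = np.zeros((120, 120), dtype=bool)
img[:, ::12] = True
period = grid_period(img, theta=0.0)  # theta = 0 -> vertical lines
```

The magnification of the correction step is then the ratio of periods, Γ(λ) = period_blue / period_λ, measured in the same way on each color image.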

4.3 Convolution with adjustable magnification

The last step uses the estimated transverse magnification and lateral shift as inputs to the convolution algorithm with adjustable magnification [17]. From Eq. (5), the magnification Γ(λ) between colors is used to equalize the image sizes obtained from the convolution algorithm, by retrieving the new reconstruction distance d'r:

$$d_r'^{\lambda}=\Gamma(\lambda)\,d_0^{\lambda} \tag{15}$$
Finally, the mean spatial frequency of each color is slightly modified to account for the lateral shift, so that the useful spatial bandwidth is localized in the suitable spectral region [17]. Knowing the spatial shifts (ΔXλ, ΔYλ) for each wavelength, the new values (u0'λ, v0'λ) of the spatial frequencies of the angular spectrum transfer window are given by [17]:

$$\begin{cases}u_0'^{\lambda}=u_0^{\lambda}+\dfrac{\Delta X_{\lambda}}{\lambda d_0^{\lambda}}\\[1ex] v_0'^{\lambda}=v_0^{\lambda}+\dfrac{\Delta Y_{\lambda}}{\lambda d_0^{\lambda}}\end{cases} \tag{16}$$
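Applied together, Eqs. (15) and (16) are a direct parameter update before the final convolution pass. The helper below is a minimal sketch of that update; the function name and the numerical values in the usage line are illustrative assumptions, not values from the experiment.

```python
def corrected_parameters(gamma, d0, u0, v0, dX, dY, lam):
    """Eq. (15): reconstruction distance for the adjustable-magnification
    convolution; Eq. (16): spatial-frequency shift compensating the
    measured lateral offsets (dX, dY)."""
    dr = gamma * d0              # Eq. (15)
    u0p = u0 + dX / (lam * d0)   # Eq. (16), x-direction
    v0p = v0 + dY / (lam * d0)   # Eq. (16), y-direction
    return dr, u0p, v0p

# hypothetical inputs: Gamma from the Hough step, shifts from Section 4.2
dr, u0p, v0p = corrected_parameters(1.002, 0.300, 5.0e4, 4.0e4,
                                    6.45e-5, -6.45e-5, 660e-9)
```

The three corrected reconstructions then share the same size, position and pitch, which is what makes their superposition in Section 5 possible.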

5. Experimental validation

5.1 Retrieving the correction parameters

In order to validate this approach, a 30 × 30 mm2 grid containing clearly marked parallel lines was used; three laser wavelengths in the red, green and blue domains (respectively 660 nm, 532 nm and 457 nm) were used, as well as a 3-CCD recording device (pixel pitch px = py = 6.45 μm). The grid was located 1450 mm from the recording device, and the focal length of the negative lens was −250 mm. Table 1 gives the parameters of the zero-padding algorithm, retrieved using Eqs. (12) and (13).


Table 1. Experimental results for the parameters used in the modified zero-padding algorithm

Figure 3 shows the three binary images reconstructed using these parameters (part of the field of view obtained using the Fresnel transform).


Fig. 3 Three reconstructed images, from left to right: λ = 457, 532 and 660nm respectively


The superposition of the three R-G-B individual images is shown in Fig. 4. A lateral shift clearly exists, but one can also notice that the size of the color images depends on the wavelength, as the interspaces between each line differ from one color to the other.


Fig. 4 Three reconstructed images superposed without any correction


From these reconstructed images and their spatial localization, one can deduce the lateral shift to apply to the convolution algorithm. Figure 5 shows the result of the correction of the lateral shift obtained by a slight modification in the mean spatial frequency for each color, making the center of the grating the new center of the reconstructed images.


Fig. 5 Three reconstructed images superposed using the convolution algorithm with correction of the lateral shift only


Figure 5 shows that a chromatic error remains, since the images differ in size. A precise knowledge of the spacing between two consecutive parallel lines for each color is then required.

Figure 6 shows the Hough transform of the reconstructed image for the green wavelength (in polar coordinates ρ, θ). One can see the intersection points corresponding to the 22 lines visible in the reconstructed image of the grid (11 vertical for θ = 90°, 11 horizontal for θ = 0°), within the white squares at the center and on the left of Fig. 6.


Fig. 6 Hough Transform of the reconstructed green image of the grid in the polar coordinates; the white squares circle the intersection points, indicating all the real lines



Fig. 7 Three images superposed after the adjustable magnification and the lateral shift corrections


By retrieving the coordinates (ρ,θ) of the points located inside the white squares in Fig. 6, corresponding to the real lines in the reconstructed image, one can measure the distance between consecutive lines of the reference object in the two directions of space. The magnification to apply in the convolution algorithm for any wavelength is then deduced by taking the ratio between the average period for the reference wavelength (with magnification Γ = 1) and the average period for the other wavelengths. The blue image was used as the reference in our experiment, as its reconstructed image is the smallest; the retrieved magnifications for the red and green wavelengths, together with the retrieved average periods, are summarized in Table 2.


Table 2. Size and magnification parameters estimated using the Hough transform

Figure 8 shows the results obtained at the final step of the proposed approach, after applying to the convolution algorithm with adjustable magnification not only the lateral shift correction, but also the size correction obtained from the Hough transform. The three images are now perfectly superposed, allowing color images to be reconstructed without chromatic error.


Fig. 8 From left to right: color photography, the superposition of the non-corrected reconstructed images, and the superposition of the corrected reconstructed images of a yellow and pink mask


5.2 Application to three color holographic imaging

In order to illustrate the relevance of the method, it was applied to the reconstruction of colored Chinese porcelain masks (Beijing Opera), around 50 mm high and painted with several colors. The results are shown in Figs. 8 and 9.


Fig. 9 From left to right: color photography, the superposition of the non-corrected reconstructed images, and the superposition of the corrected reconstructed images of a green and purple mask


In Figs. 8 and 9, the left image is a snapshot of the mask taken with a classical color camera and an imaging lens; the middle image is the superposition of the three-color images reconstructed simply with the Fresnel transform; the right image is the superposition of the three-color images corrected from the chromatic aberrations with the proposed method. The results clearly show the improvement obtained with the method, as details combining different colors are faithfully restored. The intensity weights were adjusted so that the red:green:blue ratio is 1:1:1.

The results presented in Figs. 8 and 9 are quite satisfactory and exhibit very faithful three-color image reconstructions.

6. Conclusion

This paper has presented an all-numerical, robust, simple and effective way to compensate for the chromatic aberrations induced in any digital color holographic set-up in which an optical system is used to reduce the dimensions of the arrangement. The method combines a modified version of the wavelength-dependent zero-padding algorithm with the adjustable magnification approach. The reference object used to determine the correction parameters is a rectangular grid with good contrast; an efficient empirical evaluation of the correction parameters is then obtained. The suitability of the proposed method is demonstrated through two examples with differently colored objects. In each case, the correction algorithm allows a faithful three-color image to be reconstructed.

Acknowledgments

This research was funded by the French National Agency for Research (ANR) under grant agreement no. ANR 2010 BLAN 0302.

References and links

1. S. Yeom, B. Javidi, P. Ferraro, D. Alfieri, S. Denicola, and A. Finizio, "Three-dimensional color object visualization and recognition using multi-wavelength computational holography," Opt. Express 15(15), 9394–9402 (2007).

2. C. J. Mann, P. R. Bingham, V. C. Paquit, and K. W. Tobin, "Quantitative phase imaging by three-wavelength digital holography," Opt. Express 16(13), 9753–9764 (2008).

3. P. Picart, P. Tankam, D. Mounier, Z. J. Peng, and J. C. Li, "Spatial bandwidth extended reconstruction for digital color Fresnel holograms," Opt. Express 17(11), 9145–9156 (2009).

4. P. Xia, Y. Shimozato, Y. Ito, T. Tahara, T. Kakue, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, "Improvement of color reproduction in color digital holography by using spectral estimation technique," Appl. Opt. 50(34), H177–H182 (2011).

5. J. Garcia-Sucerquia, "Color lensless digital holographic microscopy with micrometer resolution," Opt. Lett. 37(10), 1724–1726 (2012).

6. Y. Ito, Y. Shimozato, P. Xia, T. Tahara, T. Kakue, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, "Four-wavelength color digital holography," J. Disp. Technol. 8(10), 570–576 (2012).

7. A. Kowalczyk, M. Bieda, M. Makowski, M. Sypek, and A. Kolodziejczyk, "Fiber-based real-time color digital in-line holography," Appl. Opt. 52(19), 4743–4748 (2013).

8. M. K. Kim, "Full color natural light holographic camera," Opt. Express 21(8), 9636–9642 (2013).

9. R. Kingslake, Lens Design Fundamentals (Academic Press, 1978).

10. P. Tankam, Q. Song, M. Karray, J. C. Li, J.-M. Desse, and P. Picart, "Real-time three-sensitivity measurements based on three-color digital Fresnel holographic interferometry," Opt. Lett. 35(12), 2055–2057 (2010).

11. U. Schnars, T. M. Kreis, and W. O. Jüptner, "Digital recording and numerical reconstruction of holograms: reduction of the spatial frequency spectrum," Opt. Eng. 35(4), 977–982 (1996).

12. A. Stadelmaier and J. H. Massig, "Compensation of lens aberrations in digital holography," Opt. Lett. 25(22), 1630–1632 (2000).

13. T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, "Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy," J. Opt. Soc. Am. A 23(12), 3177–3190 (2006).

14. S. De Nicola, A. Finizio, G. Pierattini, D. Alfieri, S. Grilli, L. Sansone, and P. Ferraro, "Recovering correct phase information in multiwavelength digital holographic microscopy by compensation for chromatic aberrations," Opt. Lett. 30(20), 2706–2708 (2005).

15. P. Ferraro, S. Grilli, L. Miccio, D. Alfieri, S. De Nicola, A. Finizio, and B. Javidi, "Full color 3-D imaging by digital holography and removal of chromatic aberrations," J. Disp. Technol. 4(1), 97–100 (2008).

16. T. M. Kreis, "Frequency analysis of digital holography," Opt. Eng. 41(4), 771–778 (2002).

17. J. C. Li, P. Tankam, Z. J. Peng, and P. Picart, "Digital holographic reconstruction of large objects using a convolution approach and adjustable magnification," Opt. Lett. 34(5), 572–574 (2009).

18. S. Seebacher, W. Osten, T. Baumbach, and W. Juptner, "The determination of material parameters of microcomponents using digital holography," Opt. Lasers Eng. 36(2), 103–126 (2001).

19. F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, "Algorithm for reconstruction of digital holograms with adjustable magnification," Opt. Lett. 29(14), 1668–1670 (2004).

20. J. F. Restrepo and J. Garcia-Sucerquia, "Magnified reconstruction of digitally recorded holograms by Fresnel-Bluestein transform," Appl. Opt. 49(33), 6430–6435 (2010).

21. P. Picart and P. Tankam, "Analysis and adaptation of convolution algorithms to reconstruct extended objects in digital holography," Appl. Opt. 52(1), A240–A253 (2013).

22. P. Picart and J. Leval, "General theoretical formulation of image formation in digital Fresnel holography," J. Opt. Soc. Am. A 25(7), 1744–1761 (2008).

23. J. Mundt and T. Kreis, "Digital holographic recording and reconstruction of large scale objects for metrology and display," Opt. Eng. 49(12), 125801 (2010).

24. F. A. Jenkins and H. E. White, Fundamentals of Optics, 4th ed. (McGraw-Hill, 1981).

25. P. Ferraro, S. De Nicola, G. Coppola, A. Finizio, D. Alfieri, and G. Pierattini, "Controlling image size as a function of distance and wavelength in Fresnel-transform reconstruction of digital holograms," Opt. Lett. 29(8), 854–856 (2004).

26. P. Tankam and P. Picart, "Use of digital color holography for crack investigation in electronic components," Opt. Lasers Eng. 49(11), 1335–1342 (2011).

27. P. V. C. Hough, Method and means for recognizing complex patterns, US Patent 3069654 (1960).
