## Abstract

This paper presents a reconstruction algorithm based on the convolution formula of diffraction, which uses the Fresnel impulse response of free-space propagation. The bandwidth of the reconstructing convolution kernel is extended to that of the object in order to allow the direct reconstruction of objects quite larger than the recording area. The spatial bandwidth extension is made possible by the use of a numerical spherical wave as a virtual reconstructing wave, which modifies the virtual reconstruction distance and increases the kernel bandwidth. Experimental results confirm the suitability of the proposed method in the case of the simultaneous recording of two-color digital holograms using a spatial color multiplexing scheme.

©2009 Optical Society of America

## 1. Introduction

Digital holography appeared in the last decade with cheap high-resolution CCD cameras and the increasing power of computers [1]. Digital Fresnel holography is a powerful tool for metrological applications such as biological imaging, polarization imaging, heterogeneous material investigation, surface shape measurement, particle tracking and vibration analysis [2–7]. Digital color holography was established recently by I. Yamaguchi *et al*, who proposed a phase-shifting scheme based on a Bayer mosaic chromatic filter for recording at three wavelengths [8]. In 2003, Demoli *et al* presented the first study on fluids using digital color Fourier holography with a monochrome sensor and a sequential recording of three laser wavelengths [9]. Knowing that the image sampling in the reconstruction plane of the object depends on the wavelength, P. Ferraro *et al* proposed to compute the Fresnel transform with a zero padding depending on the ratio between the wavelengths in order to keep the pixel pitch as constant as possible [10]. Such an approach was applied to three-dimensional image fusion using a sequential color recording at several distances [11,12]. A similar strategy was also considered by J. Zhao *et al* using three laser wavelengths [13]. Note that all these methods use a sequential recording at each wavelength, and generally the off-axis reference waves impact the recording area at a constant angle. The use of multidirectional carriers was proposed by J. Kuhn *et al* for two-color digital holographic microscopy. They proposed a filtering scheme in the Fourier plane of the hologram in order to extract the +1 order for each wavelength [14]. The method was applied to smooth microscopic surfaces such as vibrating MEMS. The general strategy of such a method is similar to the spatial multiplexing of monochrome holograms proposed in [15]. Recently, the use of a stack of photodiodes for simultaneous color recording was proposed [16,17]. The main advantage of this method is that the color recording can be performed simultaneously with off-axis reference waves impacting the recording area at a constant angle. The reconstruction of digital color holograms can be performed using the discrete Fresnel transform [10] or the convolution method with zero padding [8,18]. The direct reconstruction of large objects encoded in digital color holograms using a convolution algorithm is not a resolved problem. In Ref. [16], the reconstruction of the large object (compared to the recording area) was performed with a filter bank and the convolution method based on double Fourier transforms. However, the scanning process takes a very long time. For example, the object of Ref. [16] was 25 mm in diameter and this leads to 20 scans, each of them needing two FFT operations; if the object is 60 mm in diameter, we get 108 scans and the reconstructed field includes 12288×12240 data points. Thus, such an approach is not well adapted to large objects, since the computation time becomes prohibitive. In 2004, F. Zhang *et al* [19] proposed an algorithm based on a double Fresnel transform allowing the adjustment of the side length of the field of view. The algorithm was applied to an object of small size, since the field of view was only 10mm×10mm.

In this paper we propose an alternative reconstruction strategy using the Fresnel impulse response of free-space propagation combined with an extension of the spatial bandwidth of the associated transfer function. The spatial bandwidth extension is made possible by the use of a numerical spherical wave as a virtual reconstructing wave. The algorithm belongs to the family of convolution algorithms and is very well adapted to large objects. Experimental results confirm the suitability of the proposed approach in the case of the simultaneous recording of two-color digital holograms using a spatial color multiplexing scheme. Section 2 presents the basic fundamentals of the algorithm. Section 3 describes the experimental setup and section 4 presents experimental results obtained with the proposed method. Section 5 draws some conclusions about the study.

## 2. Theory

When a rough object is illuminated by a coherent beam, the complex amplitude at its surface can simply be written

$$A\left(x,y\right)={A}_{0}\left(x,y\right)\mathrm{exp}\left[i{\psi }_{0}\left(x,y\right)\right],$$

where *A*_{0} is the amplitude of the object, *ψ*_{0} is a random phase uniformly distributed over [−*π*,+*π*] and *i* = √−1. At any distance *d*_{0}, the diffracted field produced by this wave front is related to the object field by a convolution between the initial field *A*(*x*,*y*) and the convolution kernel, according to Eq. (2) [20] (^{*} denotes convolution):

$$O\left(x,y,{d}_{0}\right)=A\left(x,y\right)*h\left(x,y,{d}_{0}\right),$$
in which the convolution kernel is the impulse response of free space propagation. In the Fresnel approximations, the convolution kernel can be simplified to [1,18,20]:
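Under the sign and normalization conventions of Goodman [20] (an assumption here, since conventions vary between references), this simplified kernel reads:

$$h\left(x,y,{d}_{0}\right)=\frac{\mathrm{exp}\left(2i\pi {d}_{0}/\lambda \right)}{i\lambda {d}_{0}}\mathrm{exp}\left[\frac{i\pi }{\lambda {d}_{0}}\left({x}^{2}+{y}^{2}\right)\right].$$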

In digital Fresnel holography, encoding of the object wave is performed through Fresnel diffraction and interference with a reference wave [1,21,22]. The reference wave is generally chosen plane and smooth and is written *r*(*x*,*y*) = *a*_{0}^{λ}exp[−2*iπ*(*u*_{0}^{λ}*x*+*v*_{0}^{λ}*y*)], where *a*_{0}^{λ} and {*u*_{0}^{λ}, *v*_{0}^{λ}} are the amplitude and the spatial frequencies of the reference wave at wavelength *λ*. Omitting the effect of the pixel surface, which spatially integrates the signal, the recorded hologram is the interferometric mixing of the two waves:

$$H\left(x,y\right)={\left|r\left(x,y\right)\right|}^{2}+{\left|O\left(x,y,{d}_{0}\right)\right|}^{2}+{r}^{*}\left(x,y\right)O\left(x,y,{d}_{0}\right)+r\left(x,y\right){O}^{*}\left(x,y,{d}_{0}\right).$$

In digital Fresnel holography, the object field can be numerically reconstructed from the recorded hologram *H* by computing the diffracted field at the distance −*d*_{0} according to Eqs. (2) and (3) [1,18]. Note that the Fresnel convolution kernel transforms the convolution relation of Eq. (2) into a Fourier transform which can be numerically implemented with FFT algorithms, i.e. the two-dimensional discrete Fresnel transform. However, the pixel pitch in the reconstructed plane then depends on the wavelength, i.e. Δ*η* = *λd*_{0}/*Lp*_{x} and Δ*ξ* = *λd*_{0}/*Kp*_{y} in the *x* and *y* directions respectively ({*p*_{x}, *p*_{y}} are the pixel pitches and {*K*, *L*} the numbers of data points used for the numerical computation). In the case of digital color holography, the color reconstruction requires the superimposition of the reconstructed images, and in digital holographic metrology one needs to compute phase differences between the computed optical phases [8,10,16]. The computation of Eq. (2) as a Fourier transform is therefore not the most appropriate for digital color holography. Its computation as a convolution using a double Fourier transform is, however, quite appropriate, since the pixel pitch in the reconstructed plane remains invariant and equal to that of the detector, {*p*_{x}, *p*_{y}}, whatever the wavelength [8]. However, it is well known that such an approach is not suitable for large objects, i.e. objects with lateral dimensions quite greater than those of the recording area. The reason can be understood by considering the spatial frequency bandwidths of the object and of the convolution kernel: the spatial bandwidth of the kernel must cover at least that of the object. If the object bandwidth is greater than that of the kernel, the numerical reconstruction must be implemented with a scanning of the spatial spectrum. This approach was proposed in [16]; it consists in increasing the spatial bandwidth of the convolution kernel by juxtaposing as many elementary kernel bandwidths as needed to cover that of the object. The elementary bandwidth of the convolution kernel is related to the reconstruction distance *d*_{0}, the spatial horizon on which the kernel is defined and the wavelength of the light, according to

$$\Delta {u}_{\text{kernel}}=\frac{L{p}_{x}}{\lambda {d}_{0}},\phantom{\rule{2em}{0ex}}\Delta {v}_{\text{kernel}}=\frac{K{p}_{y}}{\lambda {d}_{0}}.$$
Equation (5) shows that the best way to increase the kernel bandwidth in order to cover the full object bandwidth is to modify the physical reconstruction distance *d*_{0}. This means that the use of a virtual reconstruction distance smaller than the physical one allows the extension of the kernel bandwidth to the useful bandwidth of the object. The reconstruction distance can be modified by using a virtual illuminating numerical spherical wave front having a curvature radius *R*_{c}. Such a virtual illuminating wave can be chosen according to the Fresnel approximations and can simply be written as:

$$w\left(x,y\right)=\mathrm{exp}\left[\frac{i\pi }{\lambda {R}_{c}}\left({x}^{2}+{y}^{2}\right)\right].$$

The virtual curvature radius and the virtual reconstruction distance *d*_{R} are related to the physical distance by the following equation [20,22,23]:

$$\frac{1}{{d}_{R}}=\frac{1}{{R}_{c}}-\frac{1}{{d}_{0}}.$$

According to [20,23], such a modification of the reconstructing distance also induces a change in the object size, giving a transversal magnification:

$$\gamma =-\frac{{d}_{R}}{{d}_{0}},$$

where *γ* is the ratio between the reconstructed object and the real object dimensions. The choice of the value of *γ* depends on the physical size of the object. The use of the spherical wave as a reconstructing wave induces a change in the object bandwidth according to

$$\Delta {u}_{\text{object}}=\frac{\gamma \Delta {A}_{x}}{\lambda {d}_{R}},\phantom{\rule{2em}{0ex}}\Delta {v}_{\text{object}}=\frac{\gamma \Delta {A}_{y}}{\lambda {d}_{R}},$$
where {Δ*A*_{x}, Δ*A*_{y}} are respectively the object sizes along the *x* and *y* directions. Since the aim is to get a kernel bandwidth greater than that of the object, {Δ*u*_{kernel}, Δ*v*_{kernel}} ≥ {Δ*u*_{object}, Δ*v*_{object}}, the transversal magnification must be chosen such that *γ* ≤ min{*Lp*_{x}/Δ*A*_{x}, *Kp*_{y}/Δ*A*_{y}}, and it fixes the radius of the spherical wave front according to *R*_{c} = *γd*_{0}/(*γ*−1). The optimum transversal magnification *γ* can be chosen according to the following procedure. First, choose the number of data points (*K*, *L*) of the reconstructed horizon; this choice can be made according to the power of the computer. Note that this reconstruction horizon sets the maximum value for *γ*, since the reconstructed object must fully lie in this horizon. Then compute the ratio *Lp*_{x}/Δ*A*_{x} (or *Kp*_{y}/Δ*A*_{y}) and choose *γ*: if *γ* is chosen greater than *Lp*_{x}/Δ*A*_{x} (or *Kp*_{y}/Δ*A*_{y}), the object will not fully lie in the reconstructed area; if *γ* is chosen smaller than *Lp*_{x}/Δ*A*_{x}, the object will be fully included in the field of view (and respectively for *y*). After choosing *γ*, compute *d*_{R} and *R*_{c}. Note that if *γ* << min{*Lp*_{x}/Δ*A*_{x}, *Kp*_{y}/Δ*A*_{y}}, the object will be much smaller than the reconstruction horizon. Note also that *γ* < 0 can be chosen; in this case the reconstructed object will be reversed.
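The selection procedure above can be sketched numerically as follows. This is a minimal sketch in SI units; the function names are ours, not the authors':

```python
def max_magnification(K, L, px, py, dAx, dAy):
    """Upper bound on gamma so that the kernel bandwidth covers the
    object bandwidth: gamma <= min(L*px/dAx, K*py/dAy)."""
    return min(L * px / dAx, K * py / dAy)

def virtual_parameters(gamma, d0):
    """Virtual reconstruction distance d_R and curvature radius R_c of the
    numerical spherical wave, for a chosen magnification gamma and a
    physical recording distance d0."""
    d_R = -gamma * d0                 # virtual reconstruction distance
    R_c = gamma * d0 / (gamma - 1.0)  # radius of the spherical wave front
    return d_R, R_c
```

With the values of the first experiment (*d*_{0} = 1.32 m, *γ* = 0.17), these relations reproduce the quoted *d*_{R} = −224.4 mm and *R*_{c} ≈ −270.36 mm.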

The Fourier transform of Eq. (4) leads to

$$\tilde{H}\left(u,v\right)={\tilde{O}}_{0}\left(u,v\right)+{a}_{0}^{\lambda }\tilde{O}\left(u-{u}_{0}^{\lambda },v-{v}_{0}^{\lambda },{d}_{0}\right)+{a}_{0}^{\lambda }{\tilde{O}}^{*}\left(-u-{u}_{0}^{\lambda },-v-{v}_{0}^{\lambda },{d}_{0}\right),$$

where *Õ*_{0}(*u*,*v*) is the Fourier transform of the zero-order diffraction included in the interferometric mixing and *Õ*(*u*,*v*,*d*_{0}) is the Fourier transform of the object wave defined in Eq. (2). Since the useful spectral content related to the object wave, *Õ*(*u*,*v*,*d*_{0}), is localized at the frequency coordinates {*u*_{0}^{λ}, *v*_{0}^{λ}}, the kernel bandwidth must also be centered at the frequencies {*u*_{0}^{λ}, *v*_{0}^{λ}} in the hologram spectrum. This is necessary in order to reconstruct the +1 order, which corresponds to the virtual image. Note that centering at the frequencies {−*u*_{0}^{λ}, −*v*_{0}^{λ}} leads to the reconstruction of the pseudoscopic image (i.e. the −1 order). The centering can be performed by modulating the convolution kernel with a spatially biased phase, according to

$${h}_{m}\left(x,y,{d}_{R}\right)=h\left(x,y,{d}_{R}\right)\mathrm{exp}\left[2i\pi \left({u}_{0}^{\lambda }x+{v}_{0}^{\lambda }y\right)\right].$$
Thus, the spatial modulation produces a transfer function *h̃*_{m}(*u*,*v*,*d*_{R}) = *h̃*(*u*−*u*_{0}^{λ}, *v*−*v*_{0}^{λ}, *d*_{R}) localized at the frequencies {*u*_{0}^{λ}, *v*_{0}^{λ}}, where *h̃*(*u*,*v*,*d*_{R}) is the Fourier transform of *h*(*x*,*y*,*d*_{R}).
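The centering property (modulating a kernel by the spatially biased phase shifts its transfer function to the carrier frequencies) can be checked numerically. The Gaussian test kernel and the grid below are our own choices, with the carrier placed on exact DFT bins:

```python
import numpy as np

N, dx = 256, 1.0                       # samples and (arbitrary) pixel pitch
n = np.arange(N) - N // 2
Y, X = np.meshgrid(n * dx, n * dx, indexing="ij")

u0, v0 = 16 / (N * dx), 24 / (N * dx)  # carrier frequencies on exact DFT bins

g = np.exp(-(X**2 + Y**2) / (2 * 20.0**2))        # centered test kernel
g_m = g * np.exp(2j * np.pi * (u0 * X + v0 * Y))  # spatially biased phase

spectrum = np.abs(np.fft.fft2(g_m))
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
freqs = np.fft.fftfreq(N, dx)
# the spectrum peak sits exactly at the carrier frequencies {u0, v0}
assert (freqs[kx], freqs[ky]) == (u0, v0)
```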

According to the Shannon theorem, the Fresnel function *h*(*x*,*y*,*d*_{R}) is correctly sampled as long as the number of data points satisfies {*L*, *K*} ≤ {*λ*|*d*_{R}|/*p*_{x}^{2}, *λ*|*d*_{R}|/*p*_{y}^{2}}. Symmetrically, the minimum distance fulfilling the Shannon theorem is given by |*d*_{R}| = sup{*Lp*_{x}^{2}/*λ*, *Kp*_{y}^{2}/*λ*}. For example, if *K* = *L* = 1024, *p*_{x} = *p*_{y} = 4.65 μm and *λ* = 0.6328 μm, |*d*_{R}| must be greater than 35 mm, and |*d*_{R}| > 70 mm if *K* = *L* = 2048. Since the modulated Fresnel function constitutes a band-pass spatial filter in the Fourier spectrum of the hologram, the domain over which its numerical values are computed can be designed according to the contour of the object. Indeed, such a quadratic function offers the opportunity to operate an efficient filtering in the Fourier space in order to eliminate all the parasitic perturbations included in the recorded hologram. This can be done by adjusting the effective bandwidth of the kernel strictly to that of the object. For example, if the object has a circular form (Δ*A*_{x} = Δ*A*_{y} = Δ*A*), the Fresnel function can be computed according to Eq. (12):

If the object has a rectangular form, then *h*(*x*,*y*,*d*_{R}) can be computed according to Eq. (13):

This spatial restriction, associated with the spatial modulation, yields a kernel bandwidth perfectly adjusted to the object bandwidth in the Fourier plane.
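The minimum-distance condition (|*d*_{R}| ≥ sup{*Lp*_{x}²/*λ*, *Kp*_{y}²/*λ*}) can be checked with a one-line helper; the function name is ours:

```python
def min_reconstruction_distance(K, L, px, py, lam):
    """Smallest |d_R| for which h(x, y, d_R) is correctly sampled on a
    K x L grid with pixel pitches px, py at wavelength lam (SI units)."""
    return max(L * px**2 / lam, K * py**2 / lam)
```

This reproduces the figures quoted above: about 35 mm for K = L = 1024 and about 70 mm for K = L = 2048, with 4.65 μm pixels at λ = 0.6328 μm.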

The theoretical analysis presented in this section leads to a convolution algorithm allowing the reconstruction of an object whose size is quite larger than the recording area. A basic synoptic of the proposed algorithm is given in Fig. 1.

The aim of the proposed algorithm is to extend the convolution kernel bandwidth to that of the object in order to perform a “single shot” reconstruction of the encoded object. It is therefore a generalized convolution algorithm, since it consists in a spatial-bandwidth-extension strategy for the reconstruction of large objects encoded in digitally recorded holograms.
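The overall pipeline can be sketched compactly with numpy. This is our reading of the method, not the authors' code: the grid conventions, the rectangular kernel support and every name here are assumptions, and the kernel normalization follows the Goodman-style convention:

```python
import numpy as np

def reconstruct(hologram, lam, d0, gamma, u0, v0, px, py, obj_size):
    """Bandwidth-extended convolution reconstruction (sketch).

    hologram : (K, L) real array, recorded intensity
    lam, d0  : wavelength and recording distance [m]
    gamma    : transversal magnification (sets the virtual distance)
    u0, v0   : carrier spatial frequencies of the reference wave [1/m]
    obj_size : (dAx, dAy) physical object extent [m], rectangular support
    """
    K, L = hologram.shape
    d_R = -gamma * d0                        # virtual reconstruction distance
    y = (np.arange(K) - K / 2) * py
    x = (np.arange(L) - L / 2) * px
    Y, X = np.meshgrid(y, x, indexing="ij")

    # Fresnel impulse response at the virtual distance ...
    h = np.exp(1j * np.pi * (X**2 + Y**2) / (lam * d_R)) / (1j * lam * d_R)
    # ... restricted to the magnified object support (rectangular case) ...
    dAx, dAy = obj_size
    support = (np.abs(X) <= gamma * dAx / 2) & (np.abs(Y) <= gamma * dAy / 2)
    # ... and modulated by the spatially biased phase (carrier centering)
    h_m = h * support * np.exp(2j * np.pi * (u0 * X + v0 * Y))

    # Convolution via a double FFT: the pixel pitch is preserved
    return np.fft.ifft2(np.fft.fft2(hologram) * np.fft.fft2(np.fft.ifftshift(h_m)))
```

The reconstructed horizon keeps the detector pitch {*p*_{x}, *p*_{y}} whatever the wavelength, which is what makes the subsequent color superposition straightforward.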

## 3. Optical set-up

The optical setup is described in Fig. 2. It uses a continuous green laser (*λ*_{G} = 532 nm) and a continuous red laser (*λ*_{R} = 632.8 nm). It is composed of a twin-wavelength Mach–Zehnder interferometer and a monochrome CCD camera (PCO PixelFly). The sensor is a 12-bit digital CCD with (*M*, *N*) = (1024, 1360) pixels with pitches *p*_{x} = *p*_{y} = 4.65 μm. Each laser beam is split into an illuminating beam and a reference beam, which are co-polarized for each wavelength. Each laser beam illuminates the object under interest with illuminating angles *θ*_{R} and *θ*_{G} for the red and green lines respectively. The smooth plane reference waves are produced through the two spatial filters (SF1 and SF2). Since the monochrome sensor is not able to record the two colors simultaneously at each pixel, the spatial frequencies of the reference waves (R and G) are adjusted so that the two-color holograms are spatially multiplexed in the field of view. The off-axis holographic recording is carried out using the two spatial filters, in which each collimating lens is displaced out of the afocal axis by means of two micrometric transducers (not represented in Fig. 2). The amount of translation of each lens fixes the spatial frequencies of each color, i.e. {*u*_{0}^{R}, *v*_{0}^{R}} and {*u*_{0}^{G}, *v*_{0}^{G}} [15]. The spatial frequencies of each reference wave are thus adjusted to fulfill the Shannon theorem of digital Fresnel holography [15,21]. The object is placed at a distance *d*_{0} allowing the non-overlapping of the three diffraction orders included in the digital hologram.

## 4. Experimental results

In order to illustrate the suitability of the proposed setup and algorithm to record and reconstruct digital color holograms of large objects, the object is first chosen to be a plaster Chinese head sized {Δ*A*_{x}, Δ*A*_{y}} ≈ {25 mm, 50 mm}. The reconstruction of such an object using a classical zero-padding convolution algorithm would need {*K*, *L*} = {10752, 5373} data points. The distance for the recording is set to 1320 mm and the spatial frequencies were adjusted with the two spatial filters to {*u*_{0}^{G}, *v*_{0}^{G}} = {65.2, −67.9} mm^{−1} for the green line and {*u*_{0}^{R}, *v*_{0}^{R}} = {−64.4, −71.9} mm^{−1} for the red line. As indicated previously, the magnification depends on the size of the reconstructed area. Choosing *K* = 2048, the magnification according to the size along the vertical direction must be set to *γ* = 0.17. Since the object is larger in the vertical direction than in the horizontal one, the number of data points for the reconstruction along the horizontal direction can be set to *L* = 1360, so that there is no truncation of the recorded hologram and no loss in spatial resolution [21]. Note that since the horizontal pixel pitch is *p*_{x} = 4.65 μm, the reconstructed object will occupy about 900 data points (in the *x* direction). This is why the choice *L* = 2048 is not judicious for these object dimensions. With *γ* = 0.17, the reconstruction distance is *d*_{R} = −224.4 mm and the curvature radius is *R*_{c} = −270.36 mm. Figure 3 illustrates the reconstruction procedure for the red image. Figure 3(a) shows the real part of the impulse response computed according to Eq. (13). Figure 3(b) shows the transfer function of the impulse response modulated by the spatially biased phase (Eq. (11)); it can be seen that the useful bandwidth is localized on the object bandwidth shown in Fig. 3(c). The white rectangular line corresponds to the useful bandwidth of the red object (Eq. (9)) and the white cross is localized at the spectral coordinates {*u*_{0}^{R}, *v*_{0}^{R}}. Note that the useful power spectrum of the green object is also visible in the upper right corner of Fig. 3(c). Figure 3(d) shows the reconstructed object obtained with the algorithm and {*K*, *L*} = {2048, 1360}.

Figure 4 shows the steps of the reconstruction procedure applied to the green object. In Figs. 4(b) and 4(c) the transfer function is now localized at the spatial frequencies {*u*_{0}^{G}, *v*_{0}^{G}}. Figure 4(d) shows the reconstructed green object with {*K*, *L*} = {2048, 1360}.

The horizon of each monochromatic reconstruction obtained with the extended-bandwidth algorithm is always equal to {*K*, *L*} = {2048, 1360} pixels with pitches {*p*_{x}, *p*_{y}}. So the two monochrome images can be superimposed simply by weighting the intensities so as to reproduce the two-color image. In the superposition of the two color images, the intensity weights were adjusted so that the ratio of red to green was 1:1. Figure 5 shows the comparison between the digitally reconstructed color hologram and a photograph of the same object illuminated with the two laser beams. Suitable weighting leads to the color reproduction shown in Fig. 5(a). It is a satisfactorily faithful picture compared with the ordinary picture in Fig. 5(b), taken by the CCD with an imaging lens.
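The final color composition can be sketched as follows; the channel layout and normalization are our assumptions for display purposes:

```python
import numpy as np

def compose_two_color(red_field, green_field, w_red=1.0, w_green=1.0):
    """Superimpose two monochrome reconstructions into an RGB image.

    Both fields must share the same horizon {K, L} and pixel pitches,
    which the extended-bandwidth algorithm guarantees. The weights set
    the red:green intensity ratio (1:1 in the experiments)."""
    r = w_red * np.abs(red_field) ** 2      # intensity of the red channel
    g = w_green * np.abs(green_field) ** 2  # intensity of the green channel
    rgb = np.stack([r, g, np.zeros_like(r)], axis=-1)
    return rgb / rgb.max()                  # normalize for display
```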

As a second illustration, we choose a circular sports medal as the object. Its size is Δ*A*_{x} = Δ*A*_{y} = Δ*A* = 53 mm. The reconstruction of such an object using a classical zero-padding convolution algorithm would need {*K*, *L*} = {11398, 11398} data points. The medal is simultaneously illuminated by the two laser beams according to Fig. 2. The recording distance was adjusted to *d*_{0} = 1250 mm. The spatial frequencies are the same as in the previous example, i.e. {*u*_{0}^{G}, *v*_{0}^{G}} = {65.2, −67.9} mm^{−1} and {*u*_{0}^{R}, *v*_{0}^{R}} = {−64.4, −71.9} mm^{−1}. Since the object is circular, the reconstruction horizon was chosen to be {*K*, *L*} = {2048, 2048}, leading to a theoretical magnification of 0.179. We therefore chose *γ* = 0.17, giving *d*_{R} = −212.5 mm and *R*_{c} = −256.02 mm.
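The quoted data-point counts and the theoretical magnification can be recovered from the object size and detector pitch; this is our reading of where these figures come from, and the helper names are ours:

```python
def zero_padding_points(obj_size, pitch):
    """Grid points a classical zero-padding convolution would need so the
    reconstruction horizon covers the object at the detector pitch."""
    return round(obj_size / pitch)

def theoretical_magnification(n_points, pitch, obj_size):
    """Largest magnification for which the object fits the horizon."""
    return n_points * pitch / obj_size
```

For the 53 mm medal with 4.65 μm pixels this gives 11398 points, and a 2048-point horizon gives a theoretical magnification of about 0.179, matching the values quoted above.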

Figure 6 illustrates the reconstruction procedure for the red image. Figure 6(a) shows the real part of the impulse response computed according to Eq. (12). Figure 6(b) shows the transfer function of the impulse response modulated by the spatially biased phase (Eq. (11)). It can be seen that the useful bandwidth is now circular. Figure 6(c) shows the object bandwidth, in which the white circular line corresponds to the useful bandwidth of the red object (Eq. (9)) and the white cross is localized at the spectral coordinates {*u*_{0}^{R}, *v*_{0}^{R}}. Figure 6(d) shows the reconstructed object obtained with the algorithm. Figure 7 shows the steps of the reconstruction procedure applied to the green channel. In Figs. 7(b) and 7(c) the transfer function is now localized at the spatial frequencies {*u*_{0}^{G}, *v*_{0}^{G}}. Figure 7(d) shows the reconstructed green object with {*K*, *L*} = {2048, 2048}.

For the same reason as previously, the two color images can be superimposed simply by weighting the intensities so as to reproduce the image taken with a classical CCD with an imaging lens. The intensity weights were adjusted so that the ratio of red to green was 1:1. Figure 8 shows the comparison between the digitally reconstructed color hologram and a photograph of the same object illuminated with the two laser beams. The results presented in Fig. 8 are quite satisfactory.

## 5. Conclusion

This paper has presented a strategy for the reconstruction of spatially multiplexed large objects encoded in two-color digital holograms. The method is based on a convolution algorithm in which the spatial frequency bandwidth of the convolution kernel is extended to the useful bandwidth of the object. The bandwidth extension is made possible by the use of a numerical spherical wave instead of a reconstructing plane wave, combined with a spatial modulation of the impulse response of Fresnel propagation. Such a strategy modifies the virtual reconstruction distance and increases the kernel bandwidth. The spatially biased phase modulation allows optimal filtering of the useful information included in the digital holographic recording. The suitability of the proposed method is demonstrated through two examples with different object geometries. In each case the algorithm allows a faithful two-color image reconstruction.

## References and Links

**1. **U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**, 179–181 (1994). [CrossRef]

**2. **P. Ferraro, D. Alferi, S. De Nicola, L. De Petrocellis, A. Finizio, and G. Pierattini, “Quantitative phase-contrast microscopy by a lateral shear approach to digital holographic image reconstruction,” Opt. Lett. **31**, 1405–1407 (2006). [CrossRef] [PubMed]

**3. **T. Nomura, B. Javidi, S. Murata, E. Nitanai, and T. Numata, “Polarization imaging of a 3D object by use of on-axis phase-shifting digital holography,” Opt. Lett. **32**, 481–483 (2007). [CrossRef] [PubMed]

**4. **P. Picart, B. Diouf, E. Lolive, and J.-M. Berthelot, “Investigation of fracture mechanisms in resin concrete using spatially multiplexed digital Fresnel holograms,” Opt. Eng. **43**, 1169–1176 (2004). [CrossRef]

**5. **I. Yamaguchi, J. Kato, and S. Ohta, “Surface shape measurement by phase shifting digital holography,” Opt. Rev. **8**, 85–89 (2001). [CrossRef]

**6. **S. Coetmellec, D. Lebrun, and C. Oskul, “Application of the two-dimensional fractional-order Fourier transformation to particle field digital holography,” J. Opt. Soc. Am. A **19**, 1537–1546 (2002). [CrossRef]

**7. **P. Picart, J. Leval, D. Mounier, and S. Gougeon, “Time averaged digital holography,” Opt. Lett. **28**, 1900–1902 (2003). [CrossRef] [PubMed]

**8. **I. Yamaguchi, T. Matsumura, and J. Kato, “Phase shifting color digital holography,” Opt. Lett. **27**, 1108–1110 (2002). [CrossRef]

**9. **N. Demoli, D. Vukicevic, and M. Torzynski, “Dynamic digital holographic interferometry with three wavelengths,” Opt. Express **11**, 767–774 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEx-11-7-767. [CrossRef] [PubMed]

**10. **P. Ferraro, S. De Nicola, G. Coppola, A. Finizio, D. Alfieri, and G. Pierattini, “Controlling image size as a function of distance and wavelength in Fresnel-transform reconstruction of digital holograms,” Opt. Lett. **29**, 854–856 (2004). [CrossRef] [PubMed]

**11. **B. Javidi, P. Ferraro, S. Hong, S. De Nicola, A. Finizio, D. Alfieri, and G. Pierattini, “Three-dimensional image fusion by use of multiwavelength digital holography,” Opt. Lett. **30**, 144–146 (2005). [CrossRef] [PubMed]

**12. **D. Alfieri, G. Coppola, S. De Nicola, P. Ferraro, A. Finizio, G. Pierattini, and B. Javidi, “Method for superposing reconstructed images from digital holograms of the same object recorded at different distance and wavelength,” Opt. Commun. **260**, 113–116 (2006). [CrossRef]

**13. **J. Zhao, H. Jiang, and J. Di, “Recording and reconstruction of a color holographic image by using digital lensless Fourier transform holography,” Opt. Express **16**, 2514–2519 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-4-2514. [CrossRef] [PubMed]

**14. **J. Kuhn, T. Colomb, F. Montfort, F. Charriere, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge, “Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition,” Opt. Express **15**, 7231–7242 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-12-7231. [CrossRef] [PubMed]

**15. **P. Picart, E. Moisson, and D. Mounier, “Twin-sensitivity measurement by spatial multiplexing of digitally recorded holograms,” Appl. Opt. **42**, 1947–1957 (2003). [CrossRef]

**16. **P. Picart, D. Mounier, and J. M. Desse, “High resolution digital two-color holographic metrology,” Opt. Lett. **33**, 276–278 (2008). [CrossRef] [PubMed]

**17. **J.M. Desse, P. Picart, and P. Tankam, “Digital three-color holographic interferometry for flow analysis,” Opt. Express **16**, 5471–5480 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-8-5471. [CrossRef] [PubMed]

**18. **Th. Kreis, M. Adams, and W. Jüptner, “Methods of digital holography: a comparison,” Proc. SPIE **3098**, 224–233 (1997). [CrossRef]

**19. **F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. **29**, 1668–1670 (2004). [CrossRef] [PubMed]

**20. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, New York, 1996).

**21. **P. Picart and J. Leval, “General theoretical formulation of image formation in digital Fresnel holography,” J. Opt. Soc. Am. A **25**, 1744–1761 (2008). [CrossRef]

**22. **I. Yamaguchi, J. Kato, S. Ohta, and J. Mizuno, “Image formation in phase shifting digital holography and application to microscopy,” Appl. Opt. **40**, 6177–6186 (2001). [CrossRef]

**23. **U. Schnars and W. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. **13**, R85–R101 (2002). [CrossRef]