
Extending the depth of focus for enhanced three-dimensional imaging and profilometry: an overview

Open Access

Abstract

We overview the benefits that extended depth of focus technology may provide for three-dimensional imaging and profilometry. The approaches for which the extended depth of focus benefits are examined include stereoscopy, light coherence, pattern projection, line scanning, speckle projection, and projection of axially varied shapes.

© 2009 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1474. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

Contour extraction, topography analysis, and movement estimation are appealing problems that are currently being addressed by many researchers in the field of computer vision. Estimating those parameters from an image is difficult because the image is two dimensional, while the general description of objects and movements in space is three dimensional (3D). On the other hand, the need to extract this 3D information is driven by the development of various applications in the fields of security, surveillance, military, medicine, and others. In this paper we present several popular topography extraction approaches that can benefit from the addition of extended depth of focus (EDOF) technology.

There are several relevant approaches for extracting the topography of an object. The basic approach is based upon stereoscopy, where the object is viewed from different points of view, and, by computing the relative shift between the various points of view, one may estimate the distance [1, 2, 3].

Other types of techniques involve active projection of patterns, for instance, projection of a grating and computation of the gradients obtained in the image [4, 5, 6]. The main disadvantage is that the gradients are obtained only in locations with height changes, which are usually very spatially limited and may be shadowed. Since the height estimation in this approach is cumulative, missing a certain gradient (height change) accumulates an error. In addition, the technique recovers the height change only in the direction perpendicular to the projected grating. If the height change coincides with the grating direction, no gradient is obtained. Other techniques involve projection of a line on the object and scanning the object with that line. The height may be obtained from the curvature of the projected line [7, 8, 9]. The main problem with that approach is that the object must be static during the scanning process; otherwise, the height estimation is blurred. Such an approach will not work for motion estimation. To avoid scanning, illumination with a 2D periodic pattern is possible. Local shifts can be translated to 3D information. However, the main drawback of this method is phase wrapping: local shifts exceeding one period cannot be separated from shifts that are smaller than one period. To overcome this, one may encode the second spatial dimension by wavelengths [10] or by a special code [11], rather than by using the time domain degree of freedom (as done in the line scanning approach).

The coherence of the light is also an important domain that may be used to encode topography. In an interferometric experiment, interference fringes are generated if the optical path differences (generated by the surface topography of the sample, as described in Ref. [12]) are smaller than a coherence length. By shifting the sample each time, the fringes appear in different transversal locations, and the 3D topography of the sample can be mapped. Another coherence related approach is called optical coherence tomography (OCT) and is widely used for microscopy and for biomedical imaging [13].

There is a large variety of EDOF techniques. Some require digital postprocessing [14, 15], aperture apodization by an absorptive mask [16, 17], or diffractive optical phase elements such as multifocal lenses or spatially dense distributions [18]. Other approaches tailor the modulation transfer function (MTF) to provide high focal depth [19] or use logarithmic asphere lenses [20]. All-optical approaches are available as well; they are based upon attachment of a phase-affecting, binary optical element defining a spatially low frequency phase transition that codes the entrance pupil of the imaging system [21].

Obviously there is a large variety of 3D or ranging approaches used in the scientific community, relating to digital holography [22, 23, 24], imaging through a plate of random pinholes [25, 26], laser-based radar (LIDAR) [27], and various image processing approaches that exploit geometrical transformations of straight lines in the image (e.g., vanishing points, the points toward which lines are drawn in order to induce perspective [28]). Those approaches are less relevant to the benefits that may be obtained when adding the EDOF feature to the imaging system.

2. Stereoscopy Based Techniques

In stereoscopy the range is estimated by examining the parallax generated between the images seen from two points of view [1]. There are extensions of this method that use more points of view, but we will focus on the basic one. Given the distance between the two separated cameras (named left and right cameras), the relative shift in the position of the object seen by both cameras is related to its range.

The 3D reconstruction process from stereo images is usually composed of three phases: camera calibration, image rectification, and disparity map computation. In the calibration step, one determines the mutual camera orientations and certain camera parameters, such as focal length and geometrical aberrations. The latter should be corrected since a pinhole camera model is used. In the rectification phase, one corrects geometrical aberrations due to imperfections of the imaging system and transforms the left image such that the left camera focal plane coincides with the right camera focal plane. Usually the transformation aligns the left and right images row-wise, which makes the third step easier. Finally, one finds the disparity map, i.e., the location shift of features appearing in both images. This is basically a feature matching process, and to make it successful the features should have sufficient contrast.

In Fig. 1 one may see a schematic sketch of the stereoscopic calculation, where (x_l − x_r) is the disparity, x_r designates the location of a given feature in the right camera image, x_l is the location of the same feature in the left camera image, f is the focal length of the cameras, b is the distance between the two cameras, and Z is the estimated distance to the feature:

Z = \frac{b\,f}{x_l - x_r}. \qquad (1)
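To make the disparity-to-depth relation concrete, the following minimal Python sketch (illustrative only, not the code used in this work) matches a feature along one rectified row by normalized correlation and then applies Eq. (1); the patch size, baseline, and focal length in pixels are assumed values.

```python
import numpy as np

def disparity_to_depth(x_left, x_right, baseline_m, focal_px):
    """Range from the stereo disparity: Z = b*f / (x_l - x_r), Eq. (1)."""
    disparity = x_left - x_right          # pixels, for a rectified image pair
    if disparity <= 0:
        raise ValueError("non-positive disparity: feature at infinity or mismatched")
    return baseline_m * focal_px / disparity

def match_feature(row_left, row_right, x_left, half_win=8):
    """Find the column in the right row that best matches a patch of the left row,
    using normalized correlation (a minimal 1D block-matching sketch)."""
    patch = row_left[x_left - half_win:x_left + half_win + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_x, best_score = None, -np.inf
    for x in range(half_win, len(row_right) - half_win):
        cand = row_right[x - half_win:x + half_win + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float(np.mean(patch * cand))
        if score > best_score:
            best_x, best_score = x, score
    return best_x

# Tiny synthetic check: a rectified row pair where the feature is shifted by 20 px.
row_l = np.zeros(200); row_l[95:105] = 1.0
row_r = np.zeros(200); row_r[75:85] = 1.0
x_l = 100
x_r = match_feature(row_l, row_r, x_l)
# Assumed numbers: 5 cm baseline, 1500 px focal length.
print(x_r, disparity_to_depth(x_l, x_r, baseline_m=0.05, focal_px=1500.0))
```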
In order to obtain an accurate estimation of the range, the object in both images must be in focus. For a large range of distances, one usually needs to reduce the aperture size, since the depth of focus is proportional to the square of the f-number, i.e., to the square of the ratio between the focal length and the aperture diameter of the imaging lens:
\Delta z = 2\,F_{\#}\,\mathrm{COC} = \kappa_1 \lambda F_{\#}^2 = \kappa_1 \lambda \frac{f^2}{D^2}, \qquad (2)
where Δz is the depth of focus, COC is called in the literature a circle of confusion, κ1 is a constant, λ is the wavelength, F# is the f-number, f is the focal length, and D is the diameter of the lens aperture.

However, such a reduction degrades the resolution of the imaging system (the smallest feature δ that may be imaged is proportional to the product of the wavelength and the f-number):

\delta = \kappa_2 \lambda F_{\#}, \qquad (3)
where κ_2 is a constant. Reducing the aperture diameter (increasing the f-number) also reduces the energetic efficiency, which is proportional to the area of the aperture. Therefore, an important benefit of EDOF in stereoscopic imaging is obtaining the same depth of focus as an increased f-number would provide, but without the loss of resolution or energetic efficiency that physically decreasing the aperture size would entail.
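The trade-off expressed by Eqs. (2) and (3) can be tabulated directly. The short sketch below assumes κ_1 = κ_2 = 1 and a 0.55 μm wavelength purely for illustration; it shows that stopping down from F/7.2 to F/12.5 deepens the focus while enlarging the smallest resolvable feature and cutting the collected energy by the ratio of the aperture areas.

```python
import numpy as np

wavelength = 0.55e-6   # m, green light (assumed for illustration)
kappa1 = kappa2 = 1.0  # unknown constants of Eqs. (2)-(3); set to 1 here

def depth_of_focus(f_number):
    return kappa1 * wavelength * f_number**2      # Eq. (2)

def smallest_feature(f_number):
    return kappa2 * wavelength * f_number         # Eq. (3)

for f_no in (7.2, 12.5):
    print(f"F/{f_no}: DOF ~ {depth_of_focus(f_no)*1e6:.1f} um, "
          f"resolution limit ~ {smallest_feature(f_no)*1e6:.2f} um")

# Relative energetic efficiency scales with aperture area, i.e., with 1/F#^2.
print("energy ratio F/7.2 vs F/12.5:", (12.5 / 7.2) ** 2)
```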

In Fig. 2 we present a simulation, performed using ZEMAX software, in which the through-focus MTF at a spatial frequency of 40 cycles/mm was computed for a given stereoscopic system with F# = 12.5 and no EDOF [Fig. 2a], F# = 7.2 and no EDOF [Fig. 2b], and F# = 7.2 with the addition of an EDOF element [Fig. 2c]. The EDOF element added to the simulation was designed following the technical description presented in Ref. [21]. It was an annular-like binary phase element that extends the depth of focus by creating proper interference of the light passing through the different regions of the lens aperture. The phase given to the various parts of the aperture creates a desired constructive interference in a “focus channel,” while destructive interference is created around it. The generated “focus channel” is the created EDOF. The phase-only EDOF element used in this simulation was designed along this operation principle; it had a single phase ring with an external diameter of about 200 μm, while the width of the phase ring was approximately 50 μm. The etching depth was less than 600 nm.

The considerations for the proper design of the phase transitions providing this EDOF are described in Ref. [21].

One may see that for a contrast threshold of about 20%, the system with the EDOF addition and F# = 7.2 has the same depth of focus as the same system without the EDOF but with F# = 12.5. This means that a gain of about 300% [(12.5/7.2)^2 ≈ 3] is obtained in the overall energetic efficiency of the imaging system. Such a significant improvement in efficiency is especially important for biomedical applications, where low light conditions are common.
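For readers who wish to reproduce the qualitative behavior, the following Fourier-optics sketch (not the ZEMAX model used for Fig. 2, and with ring radii, phase, and sampling chosen arbitrarily) builds a circular pupil, optionally multiplies it by a binary π-phase annulus, sweeps a quadratic defocus phase, and samples the resulting MTF at one spatial frequency.

```python
import numpy as np

N, pupil_radius = 256, 0.3          # grid size and normalized pupil radius (illustrative)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= pupil_radius**2).astype(complex)   # clear circular aperture

def add_phase_ring(p, r_in=0.18, r_out=0.24, phase=np.pi):
    """Binary annular phase mask; radii and phase are assumptions, not the design of Ref. [21]."""
    ring = (R2 >= r_in**2) & (R2 <= r_out**2)
    return p * np.exp(1j * phase * ring)

def contrast_at(pupil_fn, defocus_rad, freq_index=10):
    """Normalized MTF at one sampled spatial frequency for a given defocus phase (radians at the pupil edge)."""
    aperture = pupil_fn * np.exp(1j * defocus_rad * R2 / pupil_radius**2)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
    otf = np.abs(np.fft.fft2(psf))
    return otf[0, freq_index] / otf[0, 0]

for label, p in (("clear aperture", pupil), ("with phase ring", add_phase_ring(pupil))):
    through_focus = [contrast_at(p, w) for w in np.linspace(-8.0, 8.0, 9)]
    print(label, np.round(through_focus, 2))
```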

In Fig. 3 we present the obtained experimental results. The EDOF element used for the experimental validation was designed using the operation principle of Ref. [21]. The fabricated EDOF element had a single phase ring with an external diameter smaller than 1 mm, while the width of the phase ring was below 0.3 mm. The etching depth of the fabricated profile, which creates the proper phase delay between the various parts of the lens aperture, was below a micrometer.

In our experimental setup the inspected object was a set of letters positioned on a tilted plane, with distance ranging linearly from 20 cm (right side of the letters) to 40 cm (left side of the letters). The focal length of the lenses of the two cameras was 4.5 mm and the f-number was 2.8. The separation distance between the cameras was 5 cm. In Figs. 3a, 3b we present the images captured by the right and the left cameras, respectively, while no EDOF element was added. Due to the lack of an EDOF element, some of the letters are defocused (right side of the image), which yields the wrong range estimation, as one may see in Fig. 3c. The depth map is in millimeters; the x,y axis units are pixels (3 μm per pixel).

In Figs. 3d, 3e we present the images captured by the right and the left cameras after adding the EDOF element. Now the images are all in focus, and therefore the range estimation presented in Fig. 3f coincides with the experimental conditions. In Fig. 3f as well, the depth map is in millimeters and the x,y axis units are pixels (3 μm per pixel). The EDOF element that was fabricated for the experiment follows the specifications of Ref. [21].

3. Light Coherence Based Techniques

Using the coherence of light can assist in ranging. One possible approach was presented in Ref. [29], where the contrast of the secondary speckles generated on the surface of the inspected object when illuminated by a coherent spot of light provided the indication for the range. The contrast can also be extracted by temporal scanning of the object [30]. Obviously prior to operation one needs to map the variation of the contrast versus the axial domain and to construct a lookup table. By comparing the measured local contrast to the lookup table, one may estimate the topography.
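A minimal sketch of this contrast-based ranging is given below, under the assumption of an invented calibration table: the local speckle contrast (standard deviation over mean) is computed in windows and inverted through the pre-measured contrast-versus-range lookup table.

```python
import numpy as np

def local_contrast(img, win=16):
    """Speckle contrast C = std/mean computed over non-overlapping windows."""
    h, w = (np.array(img.shape) // win) * win
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.std(axis=(2, 3)) / (blocks.mean(axis=(2, 3)) + 1e-9)

# Hypothetical calibration: contrast measured beforehand at known ranges (lookup table).
calib_range_mm = np.array([200.0, 250.0, 300.0, 350.0, 400.0])
calib_contrast = np.array([0.9, 0.7, 0.55, 0.45, 0.4])

def contrast_to_range(contrast_map):
    """Invert the (monotonically decreasing) lookup table by interpolation."""
    return np.interp(contrast_map, calib_contrast[::-1], calib_range_mm[::-1])

img = np.random.rand(128, 128)          # stand-in for a captured speckle image
print(contrast_to_range(local_contrast(img)).shape)
```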

For clarity, Eq. (4) gives the definition of the coherence function, where P designates the spatial coordinate, τ is the temporal axis, u is the electric field, and ⟨·⟩ denotes an ensemble average operation:

\Gamma_{11}(\tau(P)) = \left\langle u(P_1, t+\tau(P))^{*}\, u(P_1, t) \right\rangle. \qquad (4)
By proper coherence function shaping, one may generate a desired (and different) function for every lateral position coordinate P [31]. This generation of a desired distribution can benefit from EDOF techniques, not for direct extension of the depth of focus, but rather to obtain better control of, or improved proximity to, the desired (e.g., axial) shaping of the coherence function of the illuminating beam. Shaping the coherence function can assist in range estimation, as it was demonstrated to be helpful in increasing resolution [32] or in field of view enlargement [33]. In Refs. [32, 33], proper coherence shaping allowed illuminating the inspected object in such a way that every spatial region in the object was coded by an orthogonal or uncorrelated coherence distribution. That way, after multiplexing and mixing of the various lateral regions, and after their transmission through a resolution limited imaging system, they could still be separated and used to reconstruct a high resolution image over the full field of view. Since improved imaging capabilities (higher resolution) affect the 3D estimation capabilities of various approaches, we mention this topic as well in this overview.

4. Patterns Projection Based Techniques

The pattern projection 3D approaches include a projector and an imaging camera. The projector is responsible for projecting special patterns upon the object, and the imaging camera extracts the object's topography by comparing the original reference pattern with the modifications generated in the projected pattern on top of the object.

The physical operation principle of all the pattern projection approaches described in this section is similar to triangulation (explained in Section 2). For all the 3D approaches described in this section, the depth of focus aspects are relevant both for the projector and for the imager that needs to estimate the topography from the imaged pattern reflected from the object. For the projector, the EDOF is important in order to generate the desired distribution over a large axial range. For the imager, the EDOF is important in order to observe an adequately focused image with all the spatial details that are relevant for the topography estimation.

Defocusing reduces the resolution at which an imaging system perceives an object, and it also reduces the resolution at which a projected structure reaches the illuminated object.

In the case of incoherent light, the optical transfer function (OTF) is involved. This function determines the transfer of the spatial frequencies of the intensity of the imaged object. The reduction in the transmission of frequencies due to defocusing is formulated as follows. The coefficient W_m determines the severity of the defocusing error, as it is related to the quadratic phase factor multiplying the imaging system coherence transfer function (CTF): \exp[i W_m (x^2 + y^2)], where x and y are the coordinates of the exit pupil plane. The W_m coefficient is defined as follows:

W_m = \frac{\Psi \lambda}{2\pi}, \qquad (5)
where λ is the wavelength and Ψ is a phase factor that represents the severity of the defocus; it is defined as
\Psi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{Z_i} + \frac{1}{Z_o} - \frac{1}{f}\right), \qquad (6)
where Zo is the distance between the imaging lens and the object, Zi is the distance between the imaging lens and the image plane, D is the diameter of the aperture of the imaging lens, and f is the focal length of the lens. When the imaging condition is fulfilled, one has
\frac{1}{Z_i} + \frac{1}{Z_o} = \frac{1}{f}, \qquad (7)
and the defocus factor Ψ equals zero. This quadratic phase factor affects the exit pupil plane, i.e., the CTF, whose autocorrelation generates the following OTF (for a rectangular aperture):
H(\mu) = \left(1 - \frac{|\mu|}{2\mu_{c.o.}}\right)\,\mathrm{sinc}\left\{\frac{8 W_m \pi}{\lambda}\left(\frac{\mu}{2\mu_{c.o.}}\right)\left(1 - \frac{\mu}{2\mu_{c.o.}}\right)\right\}, \qquad (8)
where the coherent cutoff spatial frequency is \mu_{c.o.} = D/(2\lambda Z_i).
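Equation (8) can be evaluated numerically; the sketch below uses illustrative lens parameters (a 4.5 mm, F/2.8 lens focused at 30 cm with the object at 25 cm) and the sinc(x) = sin(x)/x form implied by the equation. None of these numbers is taken from the experiments of this paper.

```python
import numpy as np

def defocused_otf(mu, wavelength, D, Zi, Zo, f):
    """Defocused OTF of Eq. (8) for a rectangular aperture, evaluated for 0 <= mu <= 2*mu_co."""
    psi = np.pi * D**2 / (4 * wavelength) * (1 / Zi + 1 / Zo - 1 / f)   # Eq. (6)
    w_m = psi * wavelength / (2 * np.pi)                                # Eq. (5)
    mu_co = D / (2 * wavelength * Zi)                                   # coherent cutoff
    u = mu / (2 * mu_co)
    arg = 8 * w_m * np.pi / wavelength * u * (1 - u)
    return np.clip(1 - np.abs(u), 0, None) * np.sinc(arg / np.pi)      # np.sinc(x) = sin(pi x)/(pi x)

# Assumed, illustrative numbers: 4.5 mm, F/2.8 lens focused at 30 cm, object at 25 cm.
f, D, wl = 4.5e-3, 4.5e-3 / 2.8, 0.55e-6
Zi = 1 / (1 / f - 1 / 0.30)                  # image distance for an in-focus plane at 30 cm
mu_co = D / (2 * wl * Zi)
mu = np.linspace(0, 2 * mu_co, 5)
print(np.round(defocused_otf(mu, wl, D, Zi, Zo=0.25, f=f), 3))
```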

In the case of using a laser to project patterns (coherent rather than incoherent illumination), one needs to observe the CTF rather than the OTF. The CTF describes the transmission of the spatial frequencies of the field (rather than intensity). As previously mentioned, the defocusing is expressed in the addition of a quadratic phase to the spatial spectral transmission of the relevant field distribution.

We will now briefly review and explain some typical pattern projection based techniques, and then we will focus on a 2D grating projection approach and validate experimentally the added value gained by the addition of EDOF technology into this 3D estimation approach.

4A. Projection of 2D Periodic Patterns

One of the most popular approaches for shape estimation includes projection of 2D periodic patterns [9]. The local shift of each periodic structure may assist in estimating the topography at the spatial coordinate onto which the structure was projected. That is, comparison of the captured image of the pattern reflected from the inspected object with the originally projected pattern (e.g., by a local correlation operation) provides the estimation of the object's topography.

The operation principle of this approach is schematically described in Fig. 4a, where it is assumed that at a certain spatial position, the object has a height change of Δh in comparison to the reference height plane. The illumination pattern is shifted by Δx and Δy (in the transversal plane, which we also refer to as the object plane) in comparison to the pattern obtained when the reference plane is illuminated [see Fig. 4a]:

\Delta h = \Delta x \tan\alpha_x, \qquad \Delta h = \Delta y \tan\alpha_y, \qquad (9)
where αx and αy are the known angles between the pattern’s illumination source and the horizontal and vertical axes in the object plane determined by the camera, respectively. Thus, by measuring the transversal shifts of the projected lines, one may estimate the local height distribution of the imaged object.
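As a simple illustration of Eq. (9), the following sketch converts measured lateral shifts of the projected pattern into a local height estimate; the projection angles and shift values are assumed, not those of the experiments reported here.

```python
import numpy as np

def height_from_shifts(dx_mm, dy_mm, alpha_x_deg=30.0, alpha_y_deg=30.0):
    """Local height from the lateral shifts of the projected pattern, Eq. (9):
    dh = dx*tan(alpha_x) and dh = dy*tan(alpha_y); the two estimates are averaged."""
    dh_x = dx_mm * np.tan(np.radians(alpha_x_deg))
    dh_y = dy_mm * np.tan(np.radians(alpha_y_deg))
    return 0.5 * (dh_x + dh_y)

# Illustrative numbers: a 1.2 mm shift along x and a 1.0 mm shift along y.
print(height_from_shifts(1.2, 1.0))
```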

An experimental example of how the projected lines are shifted due to the elevation of the object can be seen in Fig. 4b, where we have marked by white arrows the shift of the projected lines. The object used for this demonstration may be seen in the upper left corner of Fig. 4b.

The main drawback in this approach is related to phase wrapping. The maximal depth that may be estimated corresponds to local shifts of one period. Larger shifts are wrapped.

4B. Scanning Line

The aforementioned approach provides the topography estimation over the full field of view, but it suffers from the phase wrapping problem [7]. One approach applied to overcome it is to scan the object with a single projected line. The operation principle is exactly the same as in the case of grating projection: the deformation of the line provides the information needed to extract the topography. Here the phase wrapping problem is replaced with time multiplexing, and thus this technique is practical mainly for static objects.

4C. Speckle Projection

Another way to avoid phase wrapping is to project a random rather than a periodic pattern [34, 35]. The simplest choice is to project speckles, i.e., to illuminate the object through a diffuser. The lack of periodicity solves the phase wrapping problem. The local shifts of the random spots are detected by correlating the imaged pattern with the projected pattern, and, as in triangulation, the shifts are translated into 3D topography. In this case the EDOF is important in order to maintain the same random pattern over a large axial range.
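A minimal sketch of this correlation step is shown below: the captured speckle image is divided into patches, and the shift of each patch relative to the projected reference is estimated by FFT-based cross correlation. The patch size and the test data are illustrative; converting the shifts to height then follows the triangulation relation of Subsection 4A.

```python
import numpy as np

def patch_shift(ref_patch, obj_patch):
    """Estimate the (dy, dx) shift between two patches by FFT cross correlation."""
    R = np.fft.fft2(ref_patch - ref_patch.mean())
    O = np.fft.fft2(obj_patch - obj_patch.mean())
    corr = np.fft.ifft2(O * np.conj(R)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices into signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def shift_map(reference, captured, patch=32):
    """Per-patch speckle shifts; each shift is later converted to height by triangulation."""
    H, W = reference.shape
    out = np.zeros((H // patch, W // patch, 2))
    for i in range(H // patch):
        for j in range(W // patch):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = patch_shift(reference[sl], captured[sl])
    return out

ref = np.random.rand(128, 128)                  # stand-in for the reference speckle image
obj = np.roll(ref, shift=(0, 3), axis=(0, 1))   # object that shifts the pattern by 3 px in x
print(shift_map(ref, obj)[0, 0])                # expected roughly (0, 3)
```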

In order to demonstrate the operation principle of this approach, we present the experimental results of Fig. 5. The left image in Fig. 5 shows the projected speckles reflected from the reference surface without the inspected object. In the right part of the figure the inspected object was added. The darker regions of the right part of Fig. 5 correspond to the elevated surface of the object. These regions have a shifted speckle distribution, as one may see by comparing the right image with the reference image presented in the left part of Fig. 5. Note that the white circles designate identical regions in both pictures.

4D. Projection of Z-Varied Patterns

Instead of using a directly related triangulation based technique, where the local shifts of the projected pattern are inspected, one may use known beam shaping algorithms to project an axially varied pattern. From the image captured by the camera, one may easily estimate the range, since the imaged pattern has a direct correlation to the range from which it is reflected [36, 37]. Although the approach is not directly based on triangulation, triangulation considerations remain relevant: the control over the Z variation resolution obtained by the optical beam shaping element is related to the maximal angle of diffraction coming from the element (as in triangulation, the angle determines the 3D resolution or accuracy).

In the case that an axially varying speckle pattern is generated, almost the same height decoding algorithm as in Subsection 4C is applied. The difference is that instead of estimating the relative shifts between the reference pattern and the pattern captured with the object in place, one compares the captured image (with the object in place) with a set of N axially varying reference patterns and finds the reference pattern that is closest to the captured image. Since each reference pattern corresponds to a different axial range, the height of that specific lateral region can be estimated. In Fig. 6 one may see a schematic example of how a Z-varied line distribution can be generated (see Ref. [36]) and used for ranging and profile estimation. In the schematic sketch of Fig. 6, at each axial distance, lines with different tilting angles are projected. The angles of the lines reflected from the object can be used for ranging when compared with the projected reference distribution. Obviously, instead of rotating lines, any other Z-varied distribution may be generated, even a random speckle distribution as described in Ref. [37].
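The matching against the N reference patterns can be sketched as follows, assuming a hypothetical calibration set of patterns recorded at known ranges; the best-correlating reference directly yields the height estimate for that lateral region.

```python
import numpy as np

def best_reference(captured_patch, reference_patches, ranges_mm):
    """Pick the axial reference pattern most correlated with the captured patch;
    its pre-calibrated range is the height estimate for that lateral region."""
    c = (captured_patch - captured_patch.mean()).ravel()
    c /= (np.linalg.norm(c) + 1e-12)
    scores = []
    for ref in reference_patches:
        r = (ref - ref.mean()).ravel()
        r /= (np.linalg.norm(r) + 1e-12)
        scores.append(float(c @ r))
    return ranges_mm[int(np.argmax(scores))], scores

# Hypothetical calibration set: N = 5 patterns recorded at known distances.
ranges = np.array([200.0, 225.0, 250.0, 275.0, 300.0])
refs = [np.random.rand(64, 64) for _ in ranges]
captured = refs[2] + 0.05 * np.random.rand(64, 64)   # object surface near the 250 mm plane
print(best_reference(captured, refs, ranges)[0])
```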

4E. Experimental Validation: Usage of EDOF

In Fig. 7 we present experimental results demonstrating the benefits gained by adding EDOF technology to the pattern projection 3D approaches. In Fig. 7a we present the experimental setup. The setup contains a grating projector located on the left side of the picture and an imaging camera positioned on its right side. The period of the projected grating was about 1.5 mm in the object plane. The distance between the projector and the camera was 15 cm. The distance between the projector and the object was about 17 cm, and the object that we used was a curved edge of a page, seen in the central upper part of Fig. 7a. The maximal curving was on the left side of the object and equaled approximately 2 mm. The imaging camera had a focal length of 4.5 mm and an f-number of 2.8. In Figs. 7b, 7c we present the projected grating as it is imaged by the camera without and with the inspected object, respectively. One may see that the lines of the grating are defocused. The 3D estimation obtained from the images of Figs. 7b, 7c is seen in Fig. 7d.

The same experiment was repeated, but this time after the addition of an EDOF element to the imaging lens of the camera, following the design of Ref. [21]. In Figs. 7e, 7f one may see the projected grating as it is imaged without and with the object, respectively. In Fig. 7g one may see the 3D reconstruction, which clearly visualizes the improvement obtained due to the addition of the EDOF. The gradually varied profile of the tilted page is clearly seen. For both Figs. 7d and 7g, each pixel corresponds to 3 μm, and the reconstruction is presented in the camera plane.

Note that in Fig. 7, the x,y axis units are pixel values. The depth maps (color bar) are in relative units. The EDOF element used in this experiment was the same element that was used for the experiment of Fig. 3.

5. Conclusions

In this paper we overviewed several three-dimensional (3D) imaging and profile extraction approaches that may benefit from the addition of extended depth of focus technology. The paper explained the general benefit that may be gained by each 3D technique, and for several selected 3D approaches, numerical and experimental investigations were presented to illustrate this gain.

Fig. 1. Schematic sketch of the stereoscopic system.

Fig. 2. Through-focus MTF for a spatial frequency of 40 cycles/mm: (a) F# = 12.5, no EDOF; (b) F# = 7.2, no EDOF; (c) F# = 7.2, with EDOF.

Fig. 3. Experimental results for letters positioned on a tilted plane (x and y axis units are pixels, 3 μm per pixel): (a)-(c) without EDOF (View 1); (d)-(f) with EDOF (View 2); (a), (d) images captured by the right camera; (b), (e) images captured by the left camera; (c), (f) the resulting depth maps.

Fig. 4. (a) Schematic description of the operation principle of 3D estimation by illuminating the object with a periodic structure; (b) experimental demonstration of the operation principle.

Fig. 5. Projection of speckles for 3D estimation.

Fig. 6. Schematic sketch of Z-varied pattern projection.

Fig. 7. (a) Experimental setup (x and y axis units are pixels, 3 μm per pixel); (b) imaged grating without EDOF and without the inspected object; (c) imaged grating without EDOF and with the inspected object; (d) 3D reconstruction without EDOF (View 3) (color bar values are relative); (e) imaged grating with EDOF and without the inspected object; (f) imaged grating with EDOF and with the inspected object; (g) 3D reconstruction with EDOF (View 4) (color bar values are relative).

1. T. Sawatari, "Real-time noncontacting distance measurement using optical triangulation," Appl. Opt. 15, 2821–2827 (1976).

2. G. Hausler and D. Ritter, "Parallel three-dimensional sensing by color-coded triangulation," Appl. Opt. 32, 7164–7170 (1993).

3. R. G. Dorsch, G. Hausler, and J. M. Herrmann, "Laser triangulation: fundamental uncertainty in distance measurement," Appl. Opt. 33, 1306–1312 (1994).

4. A. M. Bruckstein, "On shape from shading," Comput. Vis. Graph. Image Process. 44, 139–154 (1988).

5. Y. G. Leclerc and A. F. Bobick, "The direct computation of height from shading," in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 1991), pp. 552–558.

6. R. Zhang and M. Shah, "Shape from intensity gradient," IEEE Trans. Syst. Man Cybern. 29, 318–325 (1999).

7. M. Asada, H. Ichikawa, and S. Tsuji, "Determining of surface properties by projecting a stripe pattern," in Proceedings of the International Pattern Recognition Conference (IEEE, 1986), pp. 1162–1164.

8. M. Asada, H. Ichikawa, and S. Tsuji, "Determining surface orientation by projecting a stripe pattern," IEEE Trans. Pattern Anal. Mach. Intell. 10, 749–754 (1988).

9. R. Kimmel, N. Kiryati, and A. M. Bruckstein, "Analyzing and synthesizing images by evolving curves with the Osher-Sethian method," Int. J. Comput. Vis. 24, 37–56 (1997).

10. L. Zhang, B. Curless, and S. M. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in Proceedings of the 1st International Symposium on 3D Data Processing Visualization and Transmission (3DPVT) (IEEE Computer Society, 2002), pp. 24–37.

11. E. Horn and N. Kiryati, "Toward optimal structured light patterns," in Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling (IEEE Computer Society, 1997), pp. 28–37.

12. J. Rosen and A. Yariv, "General theorem of spatial coherence: application to three-dimensional imaging," J. Opt. Soc. Am. A 13, 2091–2095 (1996).

13. J. M. Schmitt, "Optical coherence tomography (OCT): a review," IEEE J. Sel. Top. Quantum Electron. 5, 1205–1215 (1999).

14. E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859–1866 (1995).

15. J. van der Gracht, E. Dowski, M. Taylor, and D. Deaver, "Broadband behavior of an optical-digital focus-invariant system," Opt. Lett. 21, 919–921 (1996).

16. J. O. Castaneda, E. Tepichin, and A. Diaz, "Arbitrary high focal depth with a quasi optimum real and positive transmittance apodizer," Appl. Opt. 28, 2666–2669 (1989).

17. J. O. Castaneda and L. R. Berriel-Valdos, "Zone plate for arbitrary high focal depth," Appl. Opt. 29, 994–997 (1990).

18. E. Ben Eliezer, Z. Zalevsky, E. Marom, and N. Konforti, "All-optical extended depth of field imaging system," J. Opt. A, Pure Appl. Opt. 5, S164–S169 (2003).

19. A. Sauceda and J. Ojeda-Castaneda, "High focal depth with fractional-power wavefronts," Opt. Lett. 29, 560–562 (2004).

20. W. Chi and N. George, "Electronic imaging using a logarithmic asphere," Opt. Lett. 26, 875–877 (2001).

21. Z. Zalevsky, A. Shemer, A. Zlotnik, E. Ben-Eliezer, and E. Marom, "All-optical axial super resolving imaging using low-frequency binary-phase mask," Opt. Express 14, 2631–2643 (2006).

22. C. Iemmi, A. Moreno, and J. Campos, "Digital holography with a point diffraction interferometer," Opt. Express 13, 1885–1891 (2005).

23. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, "Digital in-line holographic microscopy," Appl. Opt. 45, 836–850 (2006).

24. I. Yamaguchi and T. Zhang, "Phase shifting digital holography," Opt. Lett. 22, 1268–1270 (1997).

25. Z. Zalevsky and A. Zlotnik, "Axially and transversally super resolved imaging and ranging with random aperture coding," J. Opt. A, Pure Appl. Opt. 10, 064014 (2008).

26. A. Stern and B. Javidi, "Random projections imaging with extended space-bandwidth product," J. Display Technol. 3, 315–320 (2007).

27. M. Pfennigbauer, B. Möbius, and J. Pereira do Carmo, "Echo digitizing imaging LIDAR for rendezvous and docking," Proc. SPIE 7323, 732302 (2009).

28. J. A. Shufelt, "Performance evaluation and analysis of vanishing point detection techniques," IEEE Trans. Pattern Anal. Mach. Intell. 21, 282–288 (1999).

29. Z. Zalevsky, O. Margalit, E. Vexberg, R. Pearl, and J. Garcia, "Suppression of phase ambiguity in digital holography by using partial coherence or specimen rotation," Appl. Opt. 47, D154–D163 (2008).

30. T. Dresel, G. Hausler, and H. Venzke, "Three-dimensional sensing of rough surfaces by coherence radar," Appl. Opt. 31, 919–925 (1992).

31. Z. Zalevsky, D. Mendlovic, and H. M. Ozaktas, "Energetic efficient synthesis of mutual intensity distribution," J. Opt. A, Pure Appl. Opt. 2, 83–87 (2000).

32. Z. Zalevsky, J. García, P. García-Martínez, and C. Ferreira, "Spatial information transmission using orthogonal mutual coherence coding," Opt. Lett. 30, 2837–2839 (2005).

33. V. Micó, J. García, C. Ferreira, D. Sylman, and Z. Zalevsky, "Spatial information transmission using axial temporal coherence coding," Opt. Lett. 32, 736–738 (2007).

34. J. Garcia and Z. Zalevsky, "Range mapping using speckle decorrelation," U.S. patent 7,433,024 (October 2008); World Intellectual Property Organization publication WO/2007/096893 (27 February 2007).

35. A. Shpunt and Z. Zalevsky, "Three-dimensional sensing using speckle patterns," World Intellectual Property Organization publication WO/2007/105205 (8 March 2007).

36. D. Sazbon, Z. Zalevsky, and E. Rivlin, "Qualitative real-time range extraction for preplanned scene partitioning using laser beam coding," Pattern Recogn. Lett. 26, 1772–1781 (2005).

37. J. García, Z. Zalevsky, P. García-Martínez, C. Ferreira, M. Teicher, and Y. Beiderman, "Three-dimensional mapping and range measurement by means of projected speckle patterns," Appl. Opt. 47, 3032–3040 (2008).
