Abstract

This paper presents a depth recovery method that gives the depth of any scene from its defocused images. The method combines depth from defocusing and depth from automatic focusing techniques. Blur information in defocused images is utilised to measure depth in a way similar to determining depth from automatic focusing but without searching for sharp images of objects. The proposed method does not need special scene illumination and involves only a single camera. Therefore, there are no correspondence, occlusion and intrusive emissions problems. The paper gives experimental results which demonstrate the accuracy of the method.

©2007 Optical Society of America

1. Introduction

The depth of a visible surface of a scene is the distance between the surface and the sensor. Recovering depth information from two-dimensional images of a scene is an important task in computer vision that can assist numerous applications such as object recognition, scene interpretation, obstacle avoidance, inspection and assembly.

Various passive depth computation techniques have been developed for computer vision applications [1,2]. They can be classified into two groups. The first group operates using just one image. The second group requires more than one image which can be acquired using either multiple cameras or a camera whose parameters and positioning can be changed.

Single-image depth cues such as texture gradients and surface shading require heuristic assumptions. Therefore, they cannot be used to recover absolute depth. Multiple-image depth cues, such as stereo vision and motion parallax, usually require a solution to the correspondence problem of matching features amongst the images and all suffer from the occlusion problem where not everything that can be viewed from one position can be seen from another. These problems are computationally expensive and difficult to solve.

Researchers have developed two different depth computation techniques based on focus blur information: Depth from Automatic Focusing (DFAF) and Depth from Defocusing (DFD). DFAF methods search for the sharpest image position of an object by varying the camera parameters or moving the camera or object with respect to one another. After the sharp image has been obtained, the lens law can be used to compute the depth [3–12].

DFAF techniques may necessitate large changes in the camera parameters or large movements of the camera (or object) to obtain a sharp image. These cause alterations to the image magnification and mean image brightness, which in turn result in feature shifts and edge bleeding. DFAF searches for the sharpest image position of an object by comparing the sharpness values of images. Because there is no prior information on the sharpness value that a sharp image will have, local optima can cause miscalculation of the position of the sharp image. These problems, in turn, affect the accuracy of the computed object distance.

DFD methods do not require an object to be in focus in order to compute its depth. If an image of a scene is acquired by a real lens, points on the surface of the scene at a particular distance from the lens will be in focus whereas points at other distances will be out of focus by varying degrees depending on their distances. DFD methods make use of this information to determine the depth of an object [8, 13–36].

DFD techniques have drawbacks such as restriction on the camera parameters and appearance of objects, restriction on the form of the point-spread function (PSF) of the camera systems, limited range of effectiveness and high noise sensitivity [8]. The main source of depth errors in DFD is inaccurate modelling of the PSF.

The technique developed in this paper, called Depth from Automatic Defocusing (DFAD), is a combination of DFD and DFAF. Unlike DFD techniques, DFAD uses blur information without modelling or assuming the PSF of the camera system. The technique computes depth in a similar manner to DFAF but does not require the sharp image of an object or large alterations in camera settings. In contrast to DFAF, the sharpness value to be found is known in DFAD. Therefore, DFAD is more accurate and reliable than DFAF and DFD techniques.

DFAD does not need special scene illumination and involves only a single camera. Therefore, there are no correspondence and occlusion problems as found in stereo vision and motion parallax or intrusive emissions as with active depth computation techniques.

The remainder of the paper comprises four sections. Section 2 explains the theory underlying the proposed technique. Section 3 analyses several general issues such as the selection of the camera parameters, criterion function and evaluation window size, which should be determined before implementing DFAD. Edge bleeding, which is a problem with DFAF techniques, is also discussed in this section. Section 4 presents the results obtained. Section 5 concludes the paper.

2. Theory of depth from automatic defocusing

2.1 Basics of DFD and DFAF

Figure 1 shows the basic geometry of image formation. All light rays that are radiated by the object O and intercepted by the lens are refracted by the lens to converge at point If on the focal plane. Each point in a scene is projected onto a single point on the focal plane, causing a focused image to be formed on it. For a camera with a thin convex lens of focal length F, the relation between the distance DOL from a point in a scene to the lens and the distance DLF from its focused image to the lens is given by the Gaussian lens law:

$$\frac{1}{D_{OL}} + \frac{1}{D_{LF}} = \frac{1}{F} \tag{1}$$
Fig. 1. Basic image formation geometry

However, if the sensor plane does not coincide with the focal plane, the image Id formed on the sensor plane will be a circular disk known as a “circle of confusion” or “blur circle” with diameter 2R, provided that the aperture of the lens is also circular. By using similar triangles, a formula can be derived to establish the relationship between the radius of the blur circle R and the displacement δ of the sensor plane from the focal plane:

$$R = \frac{L\,\delta}{2 D_{LF}} \tag{2}$$

where L is the diameter of the aperture of the lens. From Fig. 1 which shows the object behind the plane of best focus (PBF), an equation for δ can be derived as:

$$\delta = D_{LS} - D_{LF} \tag{3}$$

where DLS is the distance between the lens and the sensor plane. The quantities DLS, L and F together are referred to as the camera parameters. The aperture diameter L of a lens is often given as [37]:

$$L = \frac{F}{f} \tag{4}$$

where f is the f-number of a given lens system. Substituting Eqs. (3) and (4) into Eq. (2) gives:

$$R = \frac{F D_{LS} - F D_{LF}}{2 f D_{LF}} \tag{5}$$

Then, using Eq. (1), DLF is eliminated from Eq. (5) to give:

$$R = \frac{D_{OL}(D_{LS} - F) - F D_{LS}}{2 f D_{OL}} \tag{6}$$

Figure 2 shows the theoretical blur circle radius R versus the distance DOL of an object for an f/2.8, 50mm (F) lens with the camera focused on an object located 1m in front of the lens [for a focal plane distance value of 52.53 mm].
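For illustration, the short Python sketch below (our own; the paper does not include code) evaluates Eq. (6) for the setup behind Fig. 2, with DLS obtained from the lens law (Eq. (1)) so that an object 1 m from the lens is in focus. The function and variable names are ours.

```python
import numpy as np

def blur_circle_radius(d_ol, d_ls, focal_length, f_number):
    """Theoretical blur circle radius R of Eq. (6) for an object behind the
    plane of best focus; all lengths in millimetres."""
    return (d_ol * (d_ls - focal_length) - focal_length * d_ls) / (2.0 * f_number * d_ol)

# Setup of Fig. 2: F = 50 mm, f/2.8, camera focused on an object 1 m away,
# so D_LS follows from the Gaussian lens law (Eq. (1)).
F, f_num, d_focus = 50.0, 2.8, 1000.0
d_ls = F * d_focus / (d_focus - F)          # lens-to-sensor distance (mm)

for d in np.linspace(1000.0, 5000.0, 5):    # object distances behind the PBF (mm)
    print(f"D_OL = {d:6.0f} mm  ->  R = {blur_circle_radius(d, d_ls, F, f_num):.3f} mm")
```

As expected, R is zero at the focused distance of 1 m and grows as the object moves farther behind the plane of best focus.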

By solving Eq. (6) for DOL, the following equation is obtained:

$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F - 2 f R} \tag{7}$$
Fig. 2. Plot of theoretical blur circle radius versus depth for an f/2.8, 50mm lens [camera focused on an object 1m away from the lens].

When the object is in front of the plane of best focus, Eqs. (3) and (7) become:

$$\delta = D_{LF} - D_{LS} \tag{8}$$
$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F + 2 f R} \tag{9}$$

Equations (7) and (9) relate the object distance DOL to the radius of the blur circle R.

Using Eqs. (7) and (9), the object distance can be calculated in two ways. First, it can be computed by estimating the radius of the blur circle R (as in Depth from Defocusing, DFD, techniques). Second, a sharp image of an object can be obtained by varying some, or all, of the camera parameters or the distance between the camera and the object to reduce R to zero. Then, the above equations become the well-known Gaussian lens law:

$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F} \tag{10}$$

By employing the camera parameters, Eq. (10) can be used to compute the depth. Techniques that work in this way are known as Depth from Automatic Focusing (DFAF) techniques.

2.2 Theory of DFAD

DFD techniques require accurate modelling of the PSF to compute the blur circle radius (R) and relate R to the object distance. Although more and more accurate modelling of the PSF can be expected to produce better estimates of depth, such estimates seem to be quite sensitive to the presence of noise in the image [22]. In DFAD, instead of relating R to depth, the sharpness (quality of focus) of an image is related to depth. Therefore, DFAD does not need to model or evaluate the point-spread function. The sharpness value of an image can be measured using any of the criterion functions given in the literature [3, 7, 38–44].

To explain this further, consider an object placed behind the PBF as in Fig. 1. Also, let I1(x,y) and I2(x,y) be images taken using two different camera parameter settings: F1, f1, DLS1 and F2, f2, DLS2. The blur circle radii R1 and R2, corresponding to I1(x,y) and I2(x,y), respectively, are:

$$R_1 = \frac{D_{OL}(D_{LS1} - F_1) - F_1 D_{LS1}}{2 f_1 D_{OL}} \tag{11}$$
$$R_2 = \frac{(D_{OL} + d)(D_{LS2} - F_2) - F_2 D_{LS2}}{2 f_2 (D_{OL} + d)} \tag{12}$$

where d is the displacement of the camera and the object away from each other between the taking of images I1(x,y) and I2(x,y).

Fig. 3. Cross sections of three edges. (The step edge was placed at a distance of 200mm from the lens. Blurred edge 1 was obtained using camera parameters DLS1=75.0mm, F1=50.0mm and f1=1.4. The camera parameters used for the blurred edge 2 were DLS2=74.0mm, F2=47.49mm and f2=2.0.)

If the measured sharpness values are the same for both images, the blur circle radii R1 and R2 should be equal. (In other words, exactly the same images of an object can be obtained using different camera settings.) Figure 3 shows the blurred versions of a step edge obtained using different camera parameters. In this figure, the blurred edges match each other exactly.

By equating R1 and R2 in Eqs. (11) and (12) and solving for DOL, the following equation is obtained:

$$D_{OL}^2 + \left[ d - \frac{D_{LS1} F_1 f_2 - D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1} \right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1} = 0 \tag{13}$$

Equation (13) is valid for an object behind or in front of the PBF provided that the position of the object relative to that plane remains the same after the camera parameters are changed (that is an object initially in front of the PBF stays in front of it after the change of parameters). However, two identical blurred images of an object, which give the same sharpness values, can be obtained when the object is placed in front of or behind the PBF. Therefore, if one of the sharpness values is measured when the object is located on one side of the PBF and the other is obtained when the object is on the other side, Eq. (13) should be rewritten as:

$$D_{OL}^2 + \left[ d - \frac{D_{LS1} F_1 f_2 + D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1} \right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1} = 0 \tag{14}$$
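The quadratics (13) and (14) are straightforward to solve numerically. The sketch below is a minimal illustration (function and variable names are ours, not from the paper); with the camera settings quoted for Fig. 3 and d = 0 it returns an object distance close to the 200 mm at which the step edge was placed.

```python
import numpy as np

def depth_from_blur_match(d_ls1, F1, f1, d_ls2, F2, f2, d, same_side=True):
    """Solve Eq. (13) (same_side=True) or Eq. (14) (same_side=False) for D_OL.

    The two settings (D_LS1, F1, f1) and (D_LS2, F2, f2) are assumed to give
    equal sharpness; d is the camera/object displacement between the images.
    Returns the positive root of the quadratic.
    """
    sign = -1.0 if same_side else 1.0
    denom = (d_ls1 - F1) * f2 + sign * (d_ls2 - F2) * f1
    b = d - (d_ls1 * F1 * f2 + sign * d_ls2 * F2 * f1) / denom
    c = -d_ls1 * F1 * f2 * d / denom
    roots = np.roots([1.0, b, c])                 # D_OL^2 + b*D_OL + c = 0
    return float(max(r.real for r in roots if abs(r.imag) < 1e-9))

# Camera settings of Fig. 3, object fixed (d = 0):
print(depth_from_blur_match(75.0, 50.0, 1.4, 74.0, 47.49, 2.0, d=0.0))  # ~200 mm
```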

The depth computation process of DFAD is as follows. First, the camera is directed to acquire an image (defocused or focused). The sharpness of this image is measured and recorded. Then, one (or more) of the camera parameters is altered, which causes the sharpness of the image to change. The camera is then used to obtain another image by changing one (or more) of the previously unaltered camera parameters, with the aim of restoring the original sharpness. The actual sharpness of the second image is measured and the difference between this and the previously recorded sharpness value is calculated. The difference between the sharpness values gives the direction for the camera setting adjustment, and the process terminates when the minimum difference is obtained. Then, Eq. (13) or (14) is used to compute the depth by employing the camera parameters giving the minimum difference in sharpness values. Determining which equation to employ is discussed in detail in the next section.
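A minimal sketch of this search loop is given below. It assumes a hypothetical measure_sharpness(d) routine that displaces the camera (or object) by d, acquires an image and returns its criterion value; the paper does not prescribe a particular search strategy, so the simple step-halving used here is only illustrative.

```python
def dfad_search(measure_sharpness, s_ref, step=1.0, tol=1e-3, max_iters=200):
    """Find the displacement d at which the sharpness best matches the value
    s_ref recorded before the first camera parameter was altered."""
    d, best_d, best_diff = 0.0, 0.0, float("inf")
    for _ in range(max_iters):
        diff = abs(measure_sharpness(d) - s_ref)
        if diff < best_diff:
            best_d, best_diff = d, diff      # still approaching the match
        else:
            step = -0.5 * step               # overshot: reverse direction, refine
        if best_diff < tol:
            break
        d += step
    return best_d                            # substitute into Eq. (13) or (14)
```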

3. Selection of camera parameters, criterion function and evaluation window

Before performing experiments with the proposed DFAD technique, several related issues should be considered. These include the selection of the camera parameters, criterion function and evaluation window size. This section addresses these issues. It also considers the problem of edge bleeding which affects DFAF techniques, as mentioned previously.

3.1 Selection of the camera parameters

Equations (13) and (14) suggest that it is possible to vary more than one camera parameter simultaneously. However, this should not be done in a random manner because the effects of one camera parameter may be cancelled by varying another camera parameter. Therefore, it is better to change just one camera parameter after measuring the sharpness of the first image. This sharpness value is subsequently to be obtained again by altering another camera parameter.

As mentioned in the previous section, the same sharpness value can also be obtained when the object is placed behind or in front of the PBF. For example, in Fig. 4, the sharpness values measured at A and C are equal. This causes difficulties in experiments. For example, if the first sharpness value is measured at A and the same sharpness value is again obtained at A but with different camera parameters, Eq. (13) can be used to compute the depth. However, if the first sharpness value is measured at A and the same sharpness value is obtained at C, Eq. (14) needs to be employed.

Fig. 4. Different camera parameters giving the same sharpness value. B is the point of best focus.

Without knowing whether the object is behind or in front of the PBF, there is a problem with deciding which equation to employ. This problem can be solved by focusing the camera at the limit of the viewing distance by adjusting the camera parameters. Having obtained the sharpness value at that distance, the first changes in one of the previously unaltered camera parameters should make the image more defocused. Another solution to the problem is to change the f-number (f) of the camera by a small amount after having computed and recorded the sharpness value of the first image. Then, the recorded sharpness value is searched for by changing one of the other camera parameters (DLS, F, d). These techniques allow the object to remain on one side of the PBF and Eq. (13) to be employed.

If the object is at the PBF, images taken with different f-numbers will have the same sharpness values. Changing the values of the other camera parameters causes images to become more defocused and therefore the developed technique does not allow this to be carried out. Hence, the camera parameters stay the same except the f-numbers (DLS1 = DLS2, F1 = F2, d = 0, f1 ≠ f2). Equations (13) and (14) then both reduce to the well-known Gaussian lens law (Eq. (10)). These observations also confirm the theoretical consistency of the derived equations. The distance of the object can be calculated using either of them.
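This reduction can be checked numerically with the depth_from_blur_match() sketch given after Eq. (14): with DLS1 = DLS2, F1 = F2, d = 0 and f1 ≠ f2, its positive root coincides with the lens-law distance of Eq. (10). The numbers below are illustrative only.

```python
D_LS, F = 75.0, 50.0
lens_law = D_LS * F / (D_LS - F)                           # Eq. (10): 150 mm
dfad = depth_from_blur_match(D_LS, F, 1.4, D_LS, F, 2.8, d=0.0)
print(lens_law, dfad)                                      # both approximately 150 mm
```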

How ambiguity can arise and be resolved, if the above techniques are not adopted, can be explained by considering the following for the sake of the theoretical completeness of the developed technique. Assume that the object is located at point B behind the PBF, which is at point C. After having computed the first sharpness value (S) at point C, one of the camera parameters, for example DLS, is altered. Changes in DLS can make the camera focus at one of four different regions (see regions I to IV in Fig. 5). Assume that S is searched for by moving the camera with respect to the object. If the camera is focused at I, the sharpness value obtained from that distance is less than S. Therefore, the camera should be moved towards the object to obtain S. If the camera is focused at II, the sharpness value obtained from that distance is larger than S. Therefore, the camera should be moved away from the object.

Fig. 5. Possible focusing positions for an object placed in front of the camera. B corresponds to the object location. (Arrows show direction of camera movements.)

Table 1 shows the parameter adjustments for objects behind and in front of the PBF. The first column gives the changes in parameter DLS after the first sharpness value is recorded (S). The second column shows how the sharpness value obtained using the new DLS compares with S. The third column gives the relative changes needed in camera parameter (d or F) to restore S. The last column indicates which equation is to be used for depth computation. As can be observed from the table, ambiguity arises in some cases. For example, the parameter and sharpness changes are identical between the first and the last rows of the table but different equations are required. The same problem also exists between the fourth and fifth rows of the table.

The same ambiguity also exists when F or d is changed first and searching is performed with one of the other camera parameters. There are many ways to solve this problem. For example, when an ambiguous situation is encountered, one of the equations is used to compute the object distance. The camera is focused at that distance and an image is obtained. If the sharpness value of the image is greater than the first sharpness value, the equation used for depth computation was the correct equation. Otherwise, the wrong equation was chosen.

Table 1. Parameter adjustments and depth computation. “+” and “-” indicate that this camera parameter needs to be increased or decreased, respectively

Another technique to avoid ambiguity is to focus the camera slightly further than the initially focused distance after computing the first sharpness value. If the object is behind the PBF, the image acquired from the new position will be sharper and consequently its sharpness value will be higher than the previously obtained value. If the object is in front of the PBF, the image will be more blurred and its sharpness value will be less than the previously obtained value. From this information, the object position can be estimated. Having estimated the place of the object, it is straightforward to know which equation to employ. However, if searching is done by changing the f-number (f) an extra image is always needed to determine which equation to employ.
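The side-of-PBF test described above can be summarised as a small decision rule (our formulation; how the estimate is then combined with the parameter changes of Table 1 to select Eq. (13) or Eq. (14) follows the table, which is not reproduced here):

```python
def object_side_after_refocus(sharpness_before, sharpness_after):
    """Estimate where the object lies relative to the PBF from the sharpness
    change observed after refocusing the camera slightly farther away."""
    if sharpness_after > sharpness_before:
        return "behind PBF"        # image became sharper
    if sharpness_after < sharpness_before:
        return "in front of PBF"   # image became more blurred
    return "at or very near the PBF"
```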

3.2 Selection of the criterion function

The grey level of a pixel alone does not convey any information about the blurring process. Therefore, the sharpness criterion function must operate on a neighbourhood of pixels, called the evaluation window, and utilise information about the relative changes in grey levels within the window for images taken using different camera settings. Surveys and comprehensive comparative studies of criterion functions can be found in [3, 7, 38–44]. The Tenengrad function is recommended by most researchers for the following reasons:

  • 1. It can be used with different window sizes and for a wide variety of scenes.
  • 2. It is relatively insensitive to noise.
  • 3. It is straightforward to compute and the computation can be implemented in parallel and in hardware.

The Tenengrad function estimates the gradient at each image point I(x,y) in an evaluation window by summing all magnitudes greater than a pre-defined threshold value. To enhance the effect of the larger values (i.e. the edges), the gradients are squared. The criterion function is defined as:

$$\max \sum_{x \in N} \sum_{y \in N} Z(x,y)^2 \quad \text{for } Z(x,y)^2 > T \tag{15}$$

where $Z(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$ is the gradient magnitude and the sums are taken over the evaluation window N.

There are many discrete operators which can approximate the values of the gradient components Gx(x,y) and Gy(x,y). The Tenengrad function uses the Sobel convolution operator. The masks required to implement the Sobel operator in the horizontal and vertical directions are given below:

$$\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

It has been argued that it is unnecessary to use a threshold value in the Tenengrad function [43]. Krotkov [3] also disregarded the threshold in his implementation of the Tenengrad function because threshold selection requires heuristic choices. Therefore, in this work, no threshold value was chosen.
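As a concrete illustration, the NumPy sketch below (ours) computes the thresholdless Tenengrad value of a grey-level evaluation window using the Sobel masks given above; border pixels are simply excluded.

```python
import numpy as np

def tenengrad(window):
    """Sum of squared Sobel gradient magnitudes over the window (no threshold)."""
    w = np.asarray(window, dtype=float)
    # Horizontal Sobel response G_x on the interior pixels.
    gx = (w[:-2, 2:] + 2 * w[1:-1, 2:] + w[2:, 2:]
          - w[:-2, :-2] - 2 * w[1:-1, :-2] - w[2:, :-2])
    # Vertical Sobel response G_y on the interior pixels.
    gy = (w[2:, :-2] + 2 * w[2:, 1:-1] + w[2:, 2:]
          - w[:-2, :-2] - 2 * w[:-2, 1:-1] - w[:-2, 2:])
    return float(np.sum(gx ** 2 + gy ** 2))
```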

3.3 Selection of the evaluation window size

Criterion values remain the same for evaluation windows situated in a homogeneous region of an image regardless of the amount of defocus. Therefore, criterion functions must be evaluated in windows that have some kind of visual variations such as edges, lines, textures etc. Also, a window must contain the projection of object points that lie at the same distance from the lens. Otherwise, the criterion function will, in general, give multiple solutions which cause a miscalculation of the object depth.

There is a trade-off associated with choosing the size of the window. Larger windows increase the robustness of the criterion function. However, they reduce the spatial resolution of the resulting depth array and increase the computation time. Smaller windows increase the spatial resolution and decrease the computation time but are more affected by noise.

When computing object distance using an automatic focusing technique, if a large alteration in the camera parameters is required to obtain a sharp image of an object, this causes the magnification of the lens and the image brightness to change. The larger the change in camera settings, the greater the effect on the images. Consequently, many pixels will move in or out of the evaluation window. These effects can be compensated for either optically or computationally. The former requires extra equipment such as another camera or a neutral density filter. The latter increases the execution time of the techniques. These effects are reduced in DFAD since it does not require a large alteration of the camera parameters to compute the distance of an object.

3.4 Noise reduction

The quality of an image is often corrupted by noise from the digitisation process of the particular equipment employed, interference from nearby computers and connecting cables, and ambient light which varies from moment to moment. This can cause a miscalculation of the sought image position of an object and consequently its distance. Therefore, the amount of noise in the image should be reduced as much as possible. There are several methods in computer vision for minimising image noise. One of the most common techniques is spatial averaging [45]. However, this blurs images. Another technique is to threshold the criterion values. As previously mentioned, threshold selection inevitably requires heuristic choices. Therefore, focusing programs should avoid employing a threshold for noise reduction.

In this work, temporal averaging was employed. At each stage of the searching process, temporal averaging is performed by taking more than one image of the same scene at different times. The images are acquired by employing the same camera parameters for each object position. In the resulting image, the grey level of each pixel is the average intensity of the same pixel location in all the acquired images. That is:

$$I(x,y) = \frac{1}{n} \sum_{i=1}^{n} I_i(x,y) \tag{16}$$

where Ii(x,y) is the pixel grey level value at point (x,y) for image i and n is the number of images used for averaging. The larger the value of n, the greater the reduction in noise becomes. However, using a larger n increases the amount of computation time.
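A minimal implementation of Eq. (16) is simply a pixel-wise mean over the n frames (sketch below, ours):

```python
import numpy as np

def temporal_average(frames):
    """Pixel-wise average of n images of the same scene taken with identical
    camera settings (Eq. (16)); larger n reduces noise but costs more time."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.mean(axis=0)
```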

3.5 Edge bleeding effects

Focus-based methods divide images into subimages to compute the depth. This causes an error in depth computation because the intensity outside a window “bleeds” into the window. The effect is larger when the blur increases. To illustrate this, consider the effect on two points A and B which are 16 pixels apart (see Fig. 6(a)) and placed at 200mm and 150mm respectively from the camera. The criterion function will be employed within a window “W” to compute the sharpness value of point A. If the camera is focused at point B (DOL = 150mm, DLF = 75mm, F = 50mm, f = 1.4), the image of point A will be blurred (Fig. 6(b)). If a DFAF technique is used to compute the distance of point A by moving the camera, a 50mm camera movement is needed to obtain the sharp image of point A. As can be seen from Fig. 6(c), the image of point B will be blurred and will bleed into W. This causes miscalculation of the sharp image position of point A and consequently its distance.

With DFAD, having recorded the sharpness value of A at 200mm with the camera focused at B, DLF is changed from 75mm to 74mm. This causes the sharpness of point A to vary slightly. To obtain the recorded sharpness value, the camera is moved. In this case, a camera movement of 8.45mm is required to restore the sharpness of A. Comparing Fig. 6(c) and Fig. 6(d) shows that the bleeding effect is much less for DFAD than for DFAF.

4. Results

Since a computer-controlled system was not available, experiments to test the proposed DFAD method were carried out manually. The tests were conducted with two different lenses having fixed focal lengths of 50mm and 90mm. The method was evaluated on three objects located within 1000mm of the camera. The images of the objects used in the experiments are shown in Fig. 7. The objects were placed at 20 different known positions. At each position, temporal averaging was performed with n = 20. The size of the evaluation window was 80×80. The window was chosen to be large enough so that the object stayed within it regardless of variations in the camera parameters. The depth computation process was as follows. The f-number (f) of the lens was first set to 4 and the camera was directed to obtain an image. The sharpness value of the image was measured and recorded. Then, the f-number was changed from 4 to 2.8 and the camera was redirected to acquire another image. The second image was rescaled to have the same mean grey level as the first image. The sharpness value of the second image was computed and the difference between this and the previously recorded sharpness value was calculated. The object was moved with respect to the camera until the minimum difference was obtained. Movements were made using a slide with an accuracy of 0.1mm.

Fig. 6. (a) Cross section at points A and B (assuming a pin-hole camera with infinite depth of field); (b) the camera is focused at point B; (c) the camera is focused at point A; (d) after the movement required for depth computation by DFAD.

Fig. 7. Images of the objects used in the experiments

The direction of the object movement was determined by the difference between the sharpness values. When the difference increased, it was known that the object was being moved in the wrong direction. Otherwise, it was being moved in the right direction. After having obtained the minimum difference, Eq. (13) was chosen to compute the depth since the objects always stayed behind the PBF. Eq. (13) can be rewritten as:

$$D_{OL}^2 + [d - D_0] D_{OL} - \frac{D_0 f_2\, d}{f_2 - f_1} = 0 \tag{17}$$

where D0 is the distance to the PBF from the lens at the beginning of the experiment. D0 is given by the following lens law:

$$D_0 = \frac{D_{LS} F}{D_{LS} - F} \tag{18}$$

Solving Eq. (17) gives two DOL values, one of which is positive and the other negative. The positive value yields the distance of the object. The results are plotted in Fig. 8. The percentage error in distance was found to be approximately 0.15%.
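For completeness, the sketch below solves Eqs. (17) and (18) and keeps the positive root; the D_LS and d values are illustrative placeholders, since the paper does not list the individual settings and displacements used in the experiments.

```python
import numpy as np

def depth_from_eq17(d_ls, F, f1, f2, d):
    """Solve Eq. (17) for D_OL, with D_0 from Eq. (18), and keep the positive root."""
    d0 = d_ls * F / (d_ls - F)                              # Eq. (18)
    roots = np.roots([1.0, d - d0, -d0 * f2 * d / (f2 - f1)])
    return float(max(r.real for r in roots if abs(r.imag) < 1e-9))

# Illustrative values only (f1 = 4 and f2 = 2.8 as in the experiments); with the
# sign convention of Eq. (12), a negative d means the object moved towards the camera.
print(depth_from_eq17(d_ls=60.0, F=50.0, f1=4.0, f2=2.8, d=-8.0))  # ~325 mm
```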

Instead of computing sharpness values of images, experiments were also carried out by simply subtracting the images. However, the results obtained were not as accurate as employing a criterion function. If fast and less accurate results are required in a specific application, then image subtraction can be performed. It is also possible to employ correlation values between the images.

Fig. 8. (a) Estimated depth vs. real depth; (b) Errors for different depths.

5. Conclusion

Most DFD techniques are based on modelling or evaluating the point-spread function. However, accurate modelling of the PSF is very difficult and computationally expensive. Therefore, it is usually assumed to be either a Gaussian or circularly symmetric function. Alternatively, DFAF techniques search for a sharp image of an object to compute its distance. To obtain a sharp image, large alterations of the camera parameters or movements of the camera (or object) may be needed. These cause image magnification and the mean image brightness to change. Variations in image magnification also result in feature shifts and edge bleeding.

In this paper, a depth calculation method called Depth from Automatic Defocusing has been presented. The method computes depth in a similar way to DFAF techniques. However, it does not require sharp images of an object to determine its distance. The technique uses blur information and does not need to model or evaluate the point-spread function.

The method was implemented to compute the depth of general scenes using their defocused images. Experimental results have shown that the average depth estimation error was approximately 0.15%. Thus, DFAD is an accurate technique for depth computation. Having determined the distance of an object from the camera, it is easy to obtain a sharp image of the object. Therefore, this method can also be used for automatic focusing.

Changes in the f-number (f) of the lens alter the mean image intensity. Therefore, the images were rescaled to have the same mean intensity value. However, rescaling causes errors in depth computation. To prevent this, DFAD can be performed by varying camera parameters other than the f-number of the lens.

With DFAF techniques, there is no information on the sharpness value that a sharp image will have. Therefore, local optima can cause miscalculation of the position of the sharp image. However, with DFAD, the sharpness value to be found is known. Hence, DFAD results are more reliable than those for DFAF.

As with DFD, DFAF and stereo techniques, one of the sources of error in DFAD is the limited spatial resolution of the detector array inherent in any imaging system. The size of the pixels plays an important role in both image sampling and depth of field. These will, in turn, affect the computation of object distance. Therefore, the higher the resolution of the detector array (the smaller the pixel size), the more accurate the results will be.

The proposed DFAD method currently has two main drawbacks. As with DFAF, a weakness of DFAD is that it requires the processing of more images than DFD techniques, which need only a few images to determine the depth of objects. Also, DFAD cannot compute the distance of plain (featureless) objects. This is a common problem with passive depth computation techniques. However, if a random illumination pattern can be projected onto such objects, then DFAD can be applied.

References and links

1. S.F. El-Hakim, J.-A. Beraldin, and F. Blais, “A Comparative Evaluation of the Performance of Passive and Active 3D Vision Systems,” in Digital Photogrammetry and Remote Sensing, Eugeny A. Fedosov, ed., Proc. SPIE 2646, 14–25 (1995).

2. M. Hebert, “Active and passive range sensing for robotics,” in Proceedings of IEEE Conference on Robotics and Automation (Institute of Electrical and Electronics Engineers, San Francisco, CA, 2000), pp. 102–110.

3. E.P. Krotkov, “Focusing,” Int. J. Comput. Vision 1, 223–237 (1987).

4. T. Darell and K. Wohn, “Depth from Focus Using a Pyramid Architecture,” Pattern Recogn. Lett. 11, 787–796 (1990).

5. S.K. Nayar and Y. Nakagawa, “Shape from Focus: An Effective Approach for Rough Surfaces,” in Proceedings of IEEE Conference on Robotics and Automation (Institute of Electrical and Electronics Engineers, Cincinnati, Ohio, 1990), pp. 218–225.

6. H.N. Nair and C.V. Stewart, “Robust Focus Ranging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Illinois, 1992), pp. 309–314.

7. D.T. Pham and V. Aslantas, “Automatic Focusing,” in Birinci Turk Yapay Zeka ve Yapay Sinir Aglari Sempozyumu (Bilkent Universitesi, Ankara, 1992), pp. 295–303.

8. M. Subbarao and T. Wei, “Depth from Defocus and Rapid Autofocusing: A Practical Approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp. 773–776.

9. M. Subbarao and T. Choi, “Accurate Recovery of Three Dimensional Shape from Focus,” IEEE Trans. Pattern Anal. Mach. Intell. 17, 266–274 (1995).

10. M. Subbarao and J.K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 864–870 (1998).

11. N. Asada, H. Fujiwara, and T. Matsuyama, “Edge and depth from focus,” Int. J. Comput. Vision 26, 153–163 (1998).

12. B. Ahmad and T.-S. Choi, “A heuristic approach for finding best focused shape,” IEEE Trans. Circuits Syst. 15, 566–574 (2005).

13. P. Grossmann, “Depth from Focus,” Pattern Recogn. Lett. 5, 63–69 (1987).

14. A.P. Pentland, “A New Sense for Depth of Field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).

15. M. Subbarao and N. Gurumoorthy, “Depth Recovery from Blurred Edges,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Ann Arbor, MI, 1988), pp. 498–503.

16. M. Subbarao, “Efficient Depth Recovery Through Inverse Optics,” in Machine Vision Inspection and Measurement, H. Freeman, ed. (Academic, Boston, 1989).

17. C. Cardillo and M.A. Sid-Ahmed, “3-D Position Sensing Using Passive Monocular Vision System,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 809–813 (1991).

18. R.V. Dantu, N.J. Dimopoulos, R.V. Patel, and A.J. Al-Khalili, “Depth Perception Using Blurring and its Application in VLSI Wafer Probing,” Mach. Vision Appl. 5, 35–45 (1992).

19. S.H. Lai, C.W. Fu, and S. Chang, “A Generalised Depth Estimation Algorithm with a Single Image,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 405–411 (1992).

20. J. Ens and P. Lawrence, “Investigation of Methods for Determining Depth from Focus,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 97–108 (1993).

21. L.F. Holeva, “Range Estimation from Camera Blur by Regularised Adaptive Identification,” Int. J. Pattern Recogn. Artif. Intell. 8, 1273–1300 (1994).

22. A.P. Pentland, S. Scherock, T. Darrell, and B. Girod, “Simple Range Cameras based on Focal Error,” J. Opt. Soc. Am. A 11, 2925–2934 (1994).

23. M. Subbarao and G. Surya, “Depth from Defocus: A Spatial Domain Approach,” Int. J. Comput. Vision 13, 271–294 (1994).

24. S. Xu, D.W. Capson, and T.M. Caelli, “Range Measurement from Defocus Gradient,” Mach. Vision Appl. 8, 179–186 (1995).

25. M. Watanabe and S.K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comput. Vision 27, 203–225 (1998).

26. N. Asada, H. Fujiwara, and T. Matsuyama, “Particle depth measurement based on depth-from-defocus,” Opt. Laser Technol. 31, 95–102 (1999).

27. S. Chaudhuri and A.N. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, New York, 1999).

28. D.T. Pham and V. Aslantas, “Depth from Defocusing Using a Neural Network,” J. Pattern Recogn. 32, 715–727 (1999).

29. M. Asif and T.S. Choi, “Shape from focus using multilayer feedforward neural networks,” IEEE Trans. Image Process. 10, 1670–1675 (2001).

30. J. Rayala, S. Gupta, and S.K. Mullick, “Estimation of depth from defocus as polynomial system identification,” IEE Proceedings, Vision, Image and Signal Processing 148, 356–362 (2001).

31. P. Favaro, A. Mennucci, and S. Soatto, “Observing Shape from Defocused Images,” Int. J. Comput. Vision 52, 25–43 (2003).

32. D. Z. F. Deschenes, “Depth from Defocus Estimation in Spatial Domain,” Computer Vision and Image Understanding 81, 143–165 (2001).

33. P. Favaro and S. Soatto, “Learning Shape from Defocus,” in European Conference on Computer Vision (Copenhagen, Denmark, 2002), pp. 735–745.

34. V. Aslantas and M. Tunckanat, “Depth from Image Sharpness Using A Neural Network,” in International Conference on Signal Processing (Canakkale, Turkey, 2003), pp. 260–265.

35. V. Aslantas, “Estimation of Depth From Defocusing Using A Neural Network,” in International Conference on Signal Processing (Canakkale, Turkey, 2003), pp. 305–309.

36. V. Aslantas and M. Tunckanat, “Depth of General Scenes from Defocused Images Using Multilayer Feedforward Network,” LNCS 3949, 41–48 (2006).

37. B.K.P. Horn, Robot Vision (McGraw-Hill, New York, 1986).

38. R.A. Jarvis, “Focus Optimisation Criteria for Computer Image Processing,” Microscope 24, 163–180 (1976).

39. J.F. Schlag, A.C. Sanderson, C.P. Neuman, and F.C. Wimberly, “Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control,” CMU-RI-TR-83-14 (Robotics Institute, Carnegie Mellon University, 1983).

40. F.C.A. Groen, I.T. Young, and G. Ligthart, “A Comparison of Different Focus Functions for Use in Autofocus Algorithms,” Cytometry 6, 81–91 (1985).

41. L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston, Jr., “Comparison of Autofocus Methods for Automated Microscopy,” Cytometry 12, 195–206 (1991).

42. M. Subbarao, T. Choi, and A. Nikzat, “Focusing Techniques,” Optical Engineering 32, 2824–2836 (1993).

43. T.T.E. Yeo, S.H. Ong, Jayasooriah, and R. Sinniah, “Autofocusing for Tissue Microscopy,” Image and Vision Computing 11, 629–639 (1993).

44. V. Aslantas, “Criterion functions for automatic focusing,” in 10th Turkish Symposium on Artificial Intelligence and Neural Networks (Gazimagusa, Turkish Republic of Northern Cyprus, 2001), pp. 301–311.

45. R.C. Gonzalez and R.E. Woods, Digital Image Processing (Addison-Wesley, Reading, MA, 1992).


D.T. Pham and V. Aslantas, “Automatic Focusing,” in Birinci Turk Yapay Zeka ve Yapay Sinir Aglari Sempozyumu, (Bilkent Universitesi, Ankara, 1992), pp.295–303.

Preston, Jr.K.

L. Firestone, K. Cook, K. Culp, N. Talsania, and Jr.K. Preston, “Comparison of Autofocus Methods for Automated Microscopy,” Cytometry, 12,195–206 (1991).
[Crossref] [PubMed]

Rajagopalan, A.N.

S. Chaudhuri and A.N. Rajagopalan, “Depth from Defocus: A Real Aperture Imaging Approach,” (Springer-Verlag New York, Inc. 1999).

Rayala, J.

J. Rayala, S. Gupta, and S.K. Mullick, “Estimation of depth from defocus as polynomial system identification,” IEE Proceedings, Vision, Image and Signal Processing 148,356–362 (2001).
[Crossref]

Sanderson, A.C.

J.F. Schlag, A.C. Sanderson, C.P. Neuman, and F.C. Wimberly, “Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control,” CMU-RI-TR-83-14, (Robotics Institution, Carnegie Mellon University, 1983).

Scherock, S.

Schlag, J.F.

J.F. Schlag, A.C. Sanderson, C.P. Neuman, and F.C. Wimberly, “Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control,” CMU-RI-TR-83-14, (Robotics Institution, Carnegie Mellon University, 1983).

Sid-Ahmed, M.A.

C. Cardillo and M.A. Sid-Ahmed, “3-D Position Sensing Using Passive Monocular Vision System,” IEEE Trans. Pattern Anal. Mach. Intell. 13,809–813 (1991).
[Crossref]

Sinniah, R.

T.T.E. Yeo, S.H. Ong, Jayasooriah, and R. Sinniah, “Autofocusing for Tissue Microscopy,” J. Image and Vision Computing, 11,629–639 (1993).
[Crossref]

Soatto, S.

P. Favaro, A. Mennucci, and S. Soatto, “Observing Shape from Defocused Images,” Int. J. Comput. Vision 52,25–43 (2003).
[Crossref]

P. Favaro and S. Soatto, “Learning Shape from Defocus,” in European Conference on Computer Vision, (Copenhagen, Denmark, 2002), pp.735–45.

Stewart, C.V.

H.N. Nair and C.V. Stewart, “Robust Focus Ranging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Illinois, 1992), pp.309–314.

Subbarao, M.

M. Subbarao and J.K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20,864–870 (1998).
[Crossref]

M. Subbarao and T. Choi, “Accurate Recovery of Three Dimensional Shape from Focus,” IEEE Trans. Pattern Anal. Mach. Intell. 17,266–274 (1995).
[Crossref]

M. Subbarao and G. Surya, “Depth from Defocus: A Spatial Domain Approach,” Int. J. Comput. Vision 13,271–294 (1994).
[Crossref]

M. Subbarao, T. Choi, and A. Nikzat, “Focusing Techniques,” Optical Engineering, 32,2824–2836 (1993).
[Crossref]

M. Subbarao and N. Gurumoorthy, “Depth Recovery from Blurred Edges,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Ann Arbor, MI, 1988), pp.498–503.

M. Subbarao, “Efficient Depth Recovery Through Inverse Optics,” Machine Vision Inspection and Measurement, H. Freeman ed., (Academic, Boston, 1989).

M. Subbarao and T. Wei, “Depth from Defocus and Rapid Autofocusing: A Practical Approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp.773–776.

Surya, G.

M. Subbarao and G. Surya, “Depth from Defocus: A Spatial Domain Approach,” Int. J. Comput. Vision 13,271–294 (1994).
[Crossref]

Talsania, N.

L. Firestone, K. Cook, K. Culp, N. Talsania, and Jr.K. Preston, “Comparison of Autofocus Methods for Automated Microscopy,” Cytometry, 12,195–206 (1991).
[Crossref] [PubMed]

Tunckanat, M.

V. Aslantas and M. Tunckanat, “Depth of General Scenes from Defocused Images Using Multilayer Feedforward Network,” LNCS 3949,41–48 (2006).

V. Aslantas and M. Tunckanat, “Depth from Image Sharpness Using A Neural Network,” in International Conference on Signal Processing, (Canakkale, Turkey, 2003), pp.260–265.

Tyan, J.K.

M. Subbarao and J.K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20,864–870 (1998).
[Crossref]

Watanabe, M.

M. Watanabe and S.K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comput. Vision 27,203–225 (1998).
[Crossref]

Wei, T.

M. Subbarao and T. Wei, “Depth from Defocus and Rapid Autofocusing: A Practical Approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp.773–776.

Wimberly, F.C.

J.F. Schlag, A.C. Sanderson, C.P. Neuman, and F.C. Wimberly, “Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control,” CMU-RI-TR-83-14, (Robotics Institution, Carnegie Mellon University, 1983).

Wohn, K.

T. Darell and K. Wohn, “Depth from Focus Using a Pyramid Architecture,” Pattern Recogn. Lett. 11,787–796 (1990).
[Crossref]

Woods, R.E.

R.C. Gonzalez and R.E. Woods, “Digital Image Processing,” (Addison-Wesley, Reading, MA1992).

Xu, S.

S. Xu, D.W. Capson, and T.M. Caelli, “Range Measurement from Defocus Gradient,” Mach. Vision Appl. 8,179–186 (1995).
[Crossref]

Yeo, T.T.E.

T.T.E. Yeo, S.H. Ong, Jayasooriah, and R. Sinniah, “Autofocusing for Tissue Microscopy,” J. Image and Vision Computing, 11,629–639 (1993).
[Crossref]

Young, I.T.

F.C.A. Groen, I.T. Young, and G. Ligthart, “A Comparison of Different Focus Functions for Use in Autofocus Algorithms,” Cytometry, 6,81–91 (1985).
[Crossref] [PubMed]

Computer Vision and Image Understanding (1)

D. Z. F. Deschenes, “Depth from Defocus Estimation in Spatial Domain,” Computer Vision and Image Understanding 81,143–165 (2001).
[Crossref]

Cytometry (2)

F.C.A. Groen, I.T. Young, and G. Ligthart, “A Comparison of Different Focus Functions for Use in Autofocus Algorithms,” Cytometry, 6,81–91 (1985).
[Crossref] [PubMed]

L. Firestone, K. Cook, K. Culp, N. Talsania, and Jr.K. Preston, “Comparison of Autofocus Methods for Automated Microscopy,” Cytometry, 12,195–206 (1991).
[Crossref] [PubMed]

IEE Proceedings, Vision, Image and Signal Processing (1)

J. Rayala, S. Gupta, and S.K. Mullick, “Estimation of depth from defocus as polynomial system identification,” IEE Proceedings, Vision, Image and Signal Processing 148,356–362 (2001).
[Crossref]

IEEE Trans. Circuits Syst. (1)

Bilal Ahmad and Tae-Sun Choi, “A heuristic approach for finding best focused shape,” IEEE Trans. Circuits Syst. 15,566–574 (2005).

IEEE Trans. Image Process. (1)

M. Asif and T.S. Choi, “Shape from focus using multilayer feedforward neural networks,” IEEE Trans. Image Process. 10,1670–1675 (2001).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (6)

C. Cardillo and M.A. Sid-Ahmed, “3-D Position Sensing Using Passive Monocular Vision System,” IEEE Trans. Pattern Anal. Mach. Intell. 13,809–813 (1991).
[Crossref]

A.P. Pentland, “A New Sense for Depth of Field,” IEEE Trans. Pattern Anal. Mach. Intell. 9,523–531 (1987).
[Crossref] [PubMed]

S.H. Lai, C.W. Fu, and S. Chang, “A Generalised Depth Estimation Algorithm with a Single Image,” IEEE Trans. Pattern Anal. Mach. Intell. 14,405–411 (1992).
[Crossref]

J. Ens and P. Lawrence, “Investigation of Methods for Determining Depth from Focus,” IEEE Trans. Pattern Anal. Mach. Intell. 15,97–108 (1993).
[Crossref]

M. Subbarao and T. Choi, “Accurate Recovery of Three Dimensional Shape from Focus,” IEEE Trans. Pattern Anal. Mach. Intell. 17,266–274 (1995).
[Crossref]

M. Subbarao and J.K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20,864–870 (1998).
[Crossref]

Int. J. Compt. Vision (1)

E.P. Krotkov, “Focusing,” Int. J. Compt. Vision 1,223–237 (1987).
[Crossref]

Int. J. Comput. Vision (4)

N. Asada, H. Fujiwara, and T. Matsuyama, “Edge and depth from focus,” Int. J. Comput. Vision 26,153–163 (1998).
[Crossref]

M. Subbarao and G. Surya, “Depth from Defocus: A Spatial Domain Approach,” Int. J. Comput. Vision 13,271–294 (1994).
[Crossref]

M. Watanabe and S.K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comput. Vision 27,203–225 (1998).
[Crossref]

P. Favaro, A. Mennucci, and S. Soatto, “Observing Shape from Defocused Images,” Int. J. Comput. Vision 52,25–43 (2003).
[Crossref]

Int. J. Pattern Recogn. Artif. Intell. (1)

L.F. Holeva, “Range Estimation from Camera Blur by Regularised Adaptive Identification,” Int. J. Pattern Recogn. Artif. Intell. 8,1273–1300 (1994).
[Crossref]

J. Image and Vision Computing (1)

T.T.E. Yeo, S.H. Ong, Jayasooriah, and R. Sinniah, “Autofocusing for Tissue Microscopy,” J. Image and Vision Computing, 11,629–639 (1993).
[Crossref]

J. Opt. Soc. Am. A (1)

J. Pattern Recogn. (1)

D.T. Pham and V. Aslantas, “Depth from Defocusing Using a Neural Network,” J. Pattern Recogn. 32,715–727 (1999).
[Crossref]

LNCS (1)

V. Aslantas and M. Tunckanat, “Depth of General Scenes from Defocused Images Using Multilayer Feedforward Network,” LNCS 3949,41–48 (2006).

Mach. Vision Appl. (2)

S. Xu, D.W. Capson, and T.M. Caelli, “Range Measurement from Defocus Gradient,” Mach. Vision Appl. 8,179–186 (1995).
[Crossref]

R.V. Dantu, N.J. Dimopoulos, R.V. Patel, and A.J. Al-Khalili, “Depth Perception Using Blurring and its Application in VLSI Wafer Probing,” Mach. Vision Appl. 5,35–45 (1992).
[Crossref]

Microscope (1)

R.A. Jarvis, “Focus Optimisation Criteria for Computer Image Processing,” Microscope, 24,163–180 (1976).

Opt. Laser Technol. (1)

N. Asada, H. Fujiwara, and T. Matsuyama, “Particle depth measurement based on depth-from-defocus,” Opt. Laser Technol. 31,95–102 (1999).
[Crossref]

Optical Engineering (1)

M. Subbarao, T. Choi, and A. Nikzat, “Focusing Techniques,” Optical Engineering, 32,2824–2836 (1993).
[Crossref]

Pattern Recogn. Lett. (2)

P. Grossmann, “Depth from Focus,” Pattern Recogn. Lett. 5,63–69 (1987).
[Crossref]

T. Darell and K. Wohn, “Depth from Focus Using a Pyramid Architecture,” Pattern Recogn. Lett. 11,787–796 (1990).
[Crossref]

Other (16)

S.K. Nayar and Y. Nakagawa, “Shape from Focus: An Effective Approach for Rough Surfaces,” in Proceedings of IEEE Conference on Robotics and Automation, (Institute of Electrical and Electronics Engineers, Cincinnati, Ohio, 1990), pp.218–225.

H.N. Nair and C.V. Stewart, “Robust Focus Ranging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Illinois, 1992), pp.309–314.

D.T. Pham and V. Aslantas, “Automatic Focusing,” in Birinci Turk Yapay Zeka ve Yapay Sinir Aglari Sempozyumu, (Bilkent Universitesi, Ankara, 1992), pp.295–303.

M. Subbarao and T. Wei, “Depth from Defocus and Rapid Autofocusing: A Practical Approach,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp.773–776.

S.F. El-Hakim, J.-A. Beraldin, and F. Blais, “A Comparative Evaluation of the Performance of Passive and Active 3D Vision Systems,” in Digital Photogrammetry and Remote Sensing, Eugeny A. Fedosov, Ed., Proc. SPIE2646,14–25 (1995).
[Crossref]

M. Hebert, “Active and passive range sensing for robotics,” in Proceedings of IEEE Conference on Robotics and Automation, (Institute of Electrical and Electronics Engineers, San Francisco, CA, 2000), pp.102–110.

M. Subbarao and N. Gurumoorthy, “Depth Recovery from Blurred Edges,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Ann Arbor, MI, 1988), pp.498–503.

M. Subbarao, “Efficient Depth Recovery Through Inverse Optics,” Machine Vision Inspection and Measurement, H. Freeman ed., (Academic, Boston, 1989).

S. Chaudhuri and A.N. Rajagopalan, “Depth from Defocus: A Real Aperture Imaging Approach,” (Springer-Verlag New York, Inc. 1999).

B.K.P. Horn, Robot Vision, (McGraw-Hill, New York, 1986).

P. Favaro and S. Soatto, “Learning Shape from Defocus,” in European Conference on Computer Vision, (Copenhagen, Denmark, 2002), pp.735–45.

V. Aslantas and M. Tunckanat, “Depth from Image Sharpness Using A Neural Network,” in International Conference on Signal Processing, (Canakkale, Turkey, 2003), pp.260–265.

V. Aslantas, “Estimation of Depth From Defocusing Using A Neural Network,” in International Conference on Signal Processing, (Canakkale, Turkey, 2003), pp.305–309.

J.F. Schlag, A.C. Sanderson, C.P. Neuman, and F.C. Wimberly, “Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control,” CMU-RI-TR-83-14, (Robotics Institution, Carnegie Mellon University, 1983).

V. Aslantas, “Criterion functions for automatic focusing,” in 10. Turkish Symposium on Artificial Intelligence and Neural Networks, (Gazimagusa, Turkish Republic of Northern Cyprus2001), pp.301–311.

R.C. Gonzalez and R.E. Woods, “Digital Image Processing,” (Addison-Wesley, Reading, MA1992).



Figures (8)

Fig. 1. Basic image formation geometry.
Fig. 2. Plot of theoretical blur circle radius versus depth for an f/2.8, 50 mm lens (camera focused on an object 1 m away from the lens).
Fig. 3. Cross sections of three edges. (The step edge was placed at a distance of 200 mm from the lens. Blurred edge 1 was obtained using camera parameters DLS1 = 75.0 mm, F1 = 50.0 mm and f1 = 1.4. The camera parameters used for blurred edge 2 were DLS2 = 74.0 mm, F2 = 47.49 mm and f2 = 2.0.)
Fig. 4. Different camera parameters giving the same sharpness value. B is the point of best focus.
Fig. 5. Possible focusing positions for an object placed in front of the camera. B corresponds to the object location. (Arrows show the direction of camera movements.)
Fig. 6. (a) Cross section at points A and B (assuming a pin-hole camera with infinite depth of field); (b) the camera is focused at point B; (c) the camera is focused at point A; (d) after the movement required for depth computation by DFAD.
Fig. 7. Images of the objects used in the experiments.
Fig. 8. (a) Estimated depth vs. real depth; (b) errors for different depths.

Tables (1)

Table 1. Parameter adjustments and depth computation ("+" and "-" indicate that the corresponding camera parameter needs to be increased or decreased, respectively).

Equations (19)


\frac{1}{D_{OL}} + \frac{1}{D_{LF}} = \frac{1}{F}
R = \frac{L\,\delta}{2\,D_{LF}}
\delta = D_{LS} - D_{LF}
L = \frac{F}{f}
R = \frac{F D_{LS} - F D_{LF}}{2 f D_{LF}}
R = \frac{D_{OL}(D_{LS} - F) - F D_{LS}}{2 f D_{OL}}
D_{OL} = \frac{F D_{LS}}{D_{LS} - F - 2 f R}
\delta = D_{LF} - D_{LS}
D_{OL} = \frac{F D_{LS}}{D_{LS} - F + 2 f R}
D_{OL} = \frac{F D_{LS}}{D_{LS} - F}
R_1 = \frac{D_{OL}(D_{LS1} - F_1) - F_1 D_{LS1}}{2 f_1 D_{OL}}
R_2 = \frac{(D_{OL} + d)(D_{LS2} - F_2) - F_2 D_{LS2}}{2 f_2 (D_{OL} + d)}
D_{OL}^2 + \left[ d - \frac{D_{LS1} F_1 f_2 - D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1} \right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1} = 0
D_{OL}^2 + \left[ d - \frac{D_{LS1} F_1 f_2 + D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1} \right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1} = 0
\max \sum_{x \in N} \sum_{y \in N} \nabla Z(x,y)^2 \quad \text{for } \nabla Z(x,y)^2 > T
\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
I(x,y) = \frac{1}{n} \sum_{i=1}^{n} I_i(x,y)
D_{OL}^2 + \left[ d - D_0 \right] D_{OL} - \frac{D_0 f_2\, d}{f_2 - f_1} = 0
D_0 = \frac{D_{LS} F}{D_{LS} - F}
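As an illustration of how depth is obtained from the quadratic in D_OL listed above, the following minimal Python sketch (not the authors' implementation; the function name, the same_side flag and the choice d = 0 in the example are assumptions made here) solves for the positive root given the camera parameters of two defocused images:

```python
import math

def depth_from_defocus_pair(D_LS1, F1, f1, D_LS2, F2, f2, d=0.0, same_side=True):
    """Recover the object-to-lens distance D_OL from two defocused images.

    D_LS1, D_LS2: lens-to-sensor distances; F1, F2: focal lengths;
    f1, f2: f-numbers; d: lens displacement between the two exposures.
    All lengths must use the same units. same_side=True selects the '-' form
    of the quadratic (both sensor planes on the same side of best focus);
    same_side=False selects the '+' form.
    """
    s = -1.0 if same_side else 1.0
    denom = (D_LS1 - F1) * f2 + s * (D_LS2 - F2) * f1
    b = d - (D_LS1 * F1 * f2 + s * D_LS2 * F2 * f1) / denom
    c = -(D_LS1 * F1 * f2 * d) / denom
    # Solve D_OL^2 + b*D_OL + c = 0 and keep the positive root.
    disc = b * b - 4.0 * c
    if disc < 0.0:
        raise ValueError("camera settings admit no real depth solution")
    roots = ((-b + math.sqrt(disc)) / 2.0, (-b - math.sqrt(disc)) / 2.0)
    positive = [r for r in roots if r > 0.0]
    if not positive:
        raise ValueError("no positive depth solution")
    return max(positive)

# With the camera settings quoted in the Fig. 3 caption and assuming no lens
# displacement between the exposures (d = 0), the positive root is ~200 mm.
print(depth_from_defocus_pair(75.0, 50.0, 1.4, 74.0, 47.49, 2.0, d=0.0))
```

Under that d = 0 assumption, the two settings in the Fig. 3 caption yield equal blur circle radii at a depth of about 200 mm, which agrees with the 200 mm object distance stated there.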
