Abstract

This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, enabling fast DfD computation that is independent of scene texture. Two variants of the approach are considered: one using rational operators (ROs) derived from the Gaussian point spread function (PSF), and the other based on the generalized Gaussian PSF. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results on real scenes show that both variants outperform existing RO-based methods.

© 2013 Optical Society of America


References


  1. R. Minhas, A. A. Mohammed, and Q. M. J. Wu, “Shape from focus using fast discrete curvelet transform,” Pattern Recogn. 44, 839–853 (2011).
  2. I. Lee, M. T. Mahmood, S. Shim, and T. Choi, “Optimizing image focus for 3D shape recovery through genetic algorithm,” Multimedia Tools Appl., doi:10.1007/s11042-013-1433-9 (2013).
  3. M. Born and E. Wolf, Principles of Optics (Pergamon, 1965).
  4. S. Chaudhuri and A. N. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer, 1998).
  5. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
  6. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Second International Conference on Computer Vision (IEEE, 1988), pp. 149–155.
  7. J. Ens and P. Lawrence, “An investigation of methods for determining depth from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 97–108 (1993).
  8. Y. Xiong and S. A. Shafer, “Moment and hypergeometric filters for high precision computation of focus, stereo and optical flow,” Int. J. Comput. Vis. 22, 25–59 (1997).
  9. A. N. Rajagopalan and S. Chaudhuri, “An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images,” IEEE Trans. Pattern Anal. Mach. Intell. 21, 577–589 (1999).
  10. P. Favaro and S. Soatto, “Learning shape from defocus,” in Proceedings of 7th European Conference on Computer Vision (Springer, 2002), pp. 735–745.
  11. L. Ma and R. C. Staunton, “Integration of multiresolution image segmentation and neural networks for object depth recovery,” Pattern Recogn. 38, 985–996 (2005).
  12. A. Levin, R. Fergus, and F. Durand, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
  13. C. Zhou, S. Lin, and S. Nayar, “Coded aperture pairs for depth from defocus and defocus deblurring,” Int. J. Comput. Vis. 93, 53–72 (2011).
  14. L. Hong, J. Yu, and C. Hong, “Depth estimation from defocus images based on oriented heat-flows,” in Proceedings of IEEE 2nd International Conference on Machine Vision (IEEE, 2009), pp. 212–215.
  15. H. Wang, F. Cao, S. Fang, Y. Cao, and C. Fang, “Effective improvement for depth estimated based on defocus images,” J. Comput. 8, 888–895 (2013).
  16. Q. F. Wu, K. Q. Wang, and W. M. Zuo, “Depth from defocus using geometric optics regularization,” Advanced Materials Research 709, 511–514 (2013).
  17. M. Watanabe and S. K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comput. Vis. 27, 203–225 (1998).
  18. A. N. J. Raj and R. C. Staunton, “Rational filter design for depth from defocus,” Pattern Recogn. 45, 198–207 (2012).
  19. M. Watanabe and S. K. Nayar, “Telecentric optics for focus analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1360–1365 (1997).
  20. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (CUP Archive, 1999).
  21. M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” Int. J. Comput. Vis. 13, 271–294 (1994).
  22. C. D. Claxton and R. C. Staunton, “Measurement of the point-spread function of a noisy imaging system,” J. Opt. Soc. Am. A 25, 159–170 (2008).
  23. W. Gander and W. Gautschi, “Adaptive quadrature–revisited,” BIT Numer. Math. 40, 84–101 (2000).
  24. IEEE Acoustics, Speech, and Signal Processing Society Digital Signal Processing Committee, Programs for Digital Signal Processing (IEEE, 1979).
  25. J. S. Lim, Two-Dimensional Signal and Image Processing (Prentice-Hall, 1990).
  26. S. F. Ray, Applied Photographic Optics: Imaging Systems for Photography, Film and Video (Focal, 1988).




Figures (5)

Fig. 1.

Telecentric DfD system.

Fig. 2.

(a) Pillbox normalized image ratio (NIR) as it varies with the normalized depth. (b) Gaussian NIR with k=0.4578. (c) Generalized Gaussian NIR with p=4 and k=0.5091. In each plot, the radial frequency of the curves increases in the direction of the arrow. All frequencies are shown, as their ranges lie within [−1, 1].

Fig. 3.

Example DfD correction problem. Gray-coded depth maps of a flat surface: (a) without correction and (b) after correction. (c) The gray-scale bar for (a) and (b). Mesh plots: (d) side view of (a) and (e) side view of (b), where the horizontal units are pixels and the vertical units are millimeters. The horizontal lines in (d) and (e) represent the expected depth.

Fig. 4.

Comparison of the RMSE of Raj’s method and the proposed methods. Key: open diamond, Raj; open circle, GRO uncorrected; *, GGRO uncorrected; open rectangle, GRO corrected; +, GGRO corrected.

Fig. 5.

Wireframe plots of the results of Raj’s method and the proposed methods on the test objects. Row 1, the test scenes. Mesh plots of 3D scene reconstructions using: row 2, Raj’s method; row 3, GRO; and row 4, GGRO. Note that the results in column 3 were generated using 5×5 median postfiltering, whereas the others were obtained using 3×3 median filtering.

Tables (1)


Table 1. Mean RMSE and Mean SD of All Flat Surfaces in Reconstruction Results of Test Scenes in Fig. 5, Before and After Correction (mm)

Equations (23)


$$\frac{1}{F}=\frac{1}{u}+\frac{1}{w},\tag{1}$$

$$M^{P}(f_r,\alpha)=\frac{H_1(f_r,\alpha)-H_2(f_r,\alpha)}{H_1(f_r,\alpha)+H_2(f_r,\alpha)},\tag{2}$$

$$M^{P}(f_r,\alpha)=\frac{G_{p1}(f_r)}{G_{m1}(f_r)}\,\alpha+\frac{G_{p2}(f_r)}{G_{m1}(f_r)}\,\alpha^{3},\tag{3}$$

$$h(x,y)=\frac{1}{2\pi\sigma^{2}}\exp\!\left[-\frac{(x-\bar{x})^{2}+(y-\bar{y})^{2}}{2\sigma^{2}}\right],\tag{4}$$

$$H(u,v)=\frac{1}{2\pi\sigma^{2}}\iint\exp[C_{1}]\exp[-jux-jvy]\,\mathrm{d}x\,\mathrm{d}y,\tag{5}$$

$$H(u,v)=C_{2}\int\exp[C_{3}+C_{4}]\,\mathrm{d}y,\tag{6}$$

$$H(u,v)=\exp\!\left[-\frac{\sigma^{2}}{2}(u^{2}+v^{2})-j(\bar{x}u+\bar{y}v)\right]=\exp\!\left[-\frac{\sigma^{2}}{2}(u^{2}+v^{2})\right]\exp\!\left[-j(\bar{x}u+\bar{y}v)\right].\tag{7}$$

$$H(r,\theta)=r\exp[j\theta],\tag{8}$$

$$H(u,v,\sigma)=\exp\!\left[-\frac{\sigma^{2}}{2}(u^{2}+v^{2})\right].\tag{9}$$

$$\sigma=kR,\tag{10}$$

$$\frac{2R_{1}}{(1+\alpha)e}=\frac{2A}{w},\tag{11}$$

$$\frac{2R_{1}}{(1+\alpha)e}=\frac{1}{F_{e}}\;\Rightarrow\;R_{1}=\frac{(1+\alpha)e}{2F_{e}}.\tag{12}$$

$$H_{1}(u,v,\alpha)=\exp\!\left[-\frac{1}{2}\left(\frac{k(1+\alpha)e}{F_{e}}\right)^{2}(u^{2}+v^{2})\right].\tag{13}$$

$$H_{2}(u,v,\alpha)=\exp\!\left[-\frac{1}{2}\left(\frac{k(1-\alpha)e}{F_{e}}\right)^{2}(u^{2}+v^{2})\right].\tag{14}$$

$$h(x)=\frac{p^{1-\frac{1}{p}}}{2\sigma\,\Gamma\!\left(\frac{1}{p}\right)}\exp\!\left[-\frac{1}{p}\frac{|x-\bar{x}|^{p}}{\sigma^{p}}\right],\tag{15}$$

$$h(x)=\frac{1}{C_{5}}\exp\!\left[-\frac{1}{p}\frac{|x-\bar{x}|^{p}}{\sigma^{p}}\right],\tag{16}$$

$$h(x,y)=\frac{1}{C_{5}}\exp\!\left(-\frac{|x-\bar{x}|^{p}+|y-\bar{y}|^{p}}{p\sigma^{p}}\right).\tag{17}$$

$$M^{P}(u,v,\alpha)=\frac{G_{p1}(u,v)}{G_{m1}(u,v)}\,\alpha+\frac{G_{p2}(u,v)}{G_{m1}(u,v)}\,\alpha^{3},\tag{18}$$

$$\epsilon^{2}=\sum_{u,v,\alpha}\left(M^{P}(u,v,\alpha)\cdot F_{\mathrm{pre}}(u,v)-\hat{M}^{P}(u,v,\alpha)\right)^{2},\tag{19}$$

$$[g_{p1},g_{m1},g_{p2}]=\underset{g_{p1},g_{m1},g_{p2}}{\arg\min}\;\epsilon^{2}.\tag{20}$$

$$\epsilon^{2}=\sum_{u,v}\left(F_{\mathrm{pre}}(u,v)-\hat{F}_{\mathrm{pre}}(u,v)\,\zeta(u,v)\right)^{2},\tag{21}$$

$$\Delta(x,y)=c_{1}(1)+\sum_{i=2}^{4}c_{1}(i)\,x^{\,i-1}+\sum_{i=5}^{7}c_{1}(i)\,y^{\,i-4}+\sum_{i=8}^{10}c_{1}(i)\,\big(u_{\mathrm{raw}}(x,y)\big)^{i-7}.\tag{22}$$

$$u_{c}(x,y)=c_{2}(1)+\sum_{i=2}^{4}c_{2}(i)\,x^{\,i-1}+\sum_{i=5}^{7}c_{2}(i)\,y^{\,i-4}+\sum_{i=8}^{10}c_{2}(i)\,\big(\Delta(x,y)\big)^{i-7},\tag{23}$$
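The normalized image ratio of Eq. (2) with the Gaussian OTFs of Eqs. (13) and (14) is bounded in [−1, 1] and monotonic in the normalized depth α, which is what makes lookup-based depth recovery possible. The sketch below illustrates this numerically; the constants k, e, F_e and the radial frequency u are made-up illustrative values, not the calibrated parameters of the paper.

```python
import numpy as np

# Illustrative constants (NOT the paper's calibrated values).
k, e, Fe = 0.4578, 2.0, 1.4   # blur constant, sensor-plane offset, effective f-number
u = 1.0                        # radial spatial frequency at which the NIR is read

def otf(alpha, sign):
    """Gaussian OTF of the near (+1) / far (-1) defocused image, Eqs. (13)-(14)."""
    sigma = k * (1 + sign * alpha) * e / Fe
    return np.exp(-0.5 * sigma**2 * u**2)

def nir(alpha):
    """Normalized image ratio M^P = (H1 - H2)/(H1 + H2), Eq. (2)."""
    h1, h2 = otf(alpha, +1), otf(alpha, -1)
    return (h1 - h2) / (h1 + h2)

# The NIR stays within [-1, 1] and decreases monotonically with alpha,
# so the normalized depth can be recovered by a lookup-table inversion.
alphas = np.linspace(-0.9, 0.9, 181)
curve = nir(alphas)

alpha_true = 0.3
alpha_est = alphas[np.argmin(np.abs(curve - nir(alpha_true)))]
```

In practice the rational-operator formulation of Eq. (18) replaces this per-frequency lookup with a small set of precomputed filters, which is what makes the method fast and texture-independent.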

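The DfD correction of Eqs. (22) and (23) is linear in the coefficients c1, so they can be recovered by ordinary least squares from a calibration surface of known depth. A minimal sketch of the Eq. (22) fit follows; the coordinate ranges, raw-depth values, and synthetic error surface are fabricated for illustration only.

```python
import numpy as np

# Synthetic calibration data (made up for illustration).
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 640, n)                 # pixel column
y = rng.uniform(0, 480, n)                 # pixel row
u_raw = rng.uniform(500.0, 700.0, n)       # raw depth estimates (mm), hypothetical
# Synthetic depth error that happens to lie in the polynomial model's span.
delta_true = 0.002 * x - 0.001 * y + 1e-5 * (u_raw - 600.0) ** 2

def design(x, y, t):
    """Design matrix [1, x, x^2, x^3, y, y^2, y^3, t, t^2, t^3] of Eq. (22)."""
    return np.column_stack([np.ones_like(x),
                            x, x**2, x**3,
                            y, y**2, y**3,
                            t, t**2, t**3])

# Fit the ten coefficients c1 by linear least squares.
A = design(x, y, u_raw)
c1, *_ = np.linalg.lstsq(A, delta_true, rcond=None)
delta_fit = A @ c1
```

Eq. (23) then applies the same polynomial form a second time, with the fitted error Δ(x, y) replacing the raw depth, to produce the corrected depth map.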