Abstract

Depth from defocus involves estimating the relative blur between a pair of defocused images of a scene captured with different lens settings. When a priori information about the scene is available, the depth can be estimated even from a single image. However, experimental studies indicate that the depth estimate improves with multiple observations. We provide a mathematical underpinning for this evidence by deriving and comparing the theoretical bounds on the error in the estimate of blur for a single image and for a pair of defocused images. A new theorem proves that the Cramér–Rao bound on the variance of the error in the estimate of blur decreases as the number of observations increases. The difference between the bounds turns out to be a function of the relative blurring between the observations. Hence one can indeed obtain better estimates of depth from multiple defocused images than from a single image, provided that these images are differently blurred. Results on synthetic as well as real data further validate the claim.

© 2000 Optical Society of America


References


  1. E. P. Krotkov, Active Computer Vision by Cooperative Focus and Stereo (Springer-Verlag, New York, 1989).
  2. A. P. Pentland, “Depth of scene from depth of field,” in Proceedings of DARPA Image Understanding Workshop (Morgan Kaufmann, San Mateo, Calif., 1982), pp. 253–259.
  3. P. Grossman, “Depth from focus,” Pattern Recogn. Lett. 5, 63–69 (1987).
  4. M. Subbarao, N. Gurumoorthy, “Depth recovery from blurred edges,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Washington, D.C., 1988), pp. 498–503.
  5. S. Lai, C. Fu, S. Chang, “A generalized depth estimation algorithm with a single image,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 405–411 (1992).
  6. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
  7. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE Computer Society Press, Washington, D.C., 1988), pp. 149–155.
  8. J. Ens, P. Lawrence, “An investigation of methods for determining depth from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 97–107 (1993).
  9. Y. Xiong, S. A. Shafer, “Depth from focusing and defocusing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1993), pp. 68–73.
  10. A. Pentland, S. Scherock, T. Darrell, B. Girod, “Simple range cameras based on focal error,” J. Opt. Soc. Am. A 11, 2925–2934 (1994).
  11. Y. Xiong, S. A. Shafer, “Variable window Gabor filters and their use in focus and correspondence,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1994), pp. 668–671.
  12. M. Gökstorp, “Computing depth from out-of-focus blur using a local frequency representation,” in Proceedings of the International Conference on Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1994), pp. 153–158.
  13. M. Watanabe, S. K. Nayar, “Minimal operator set for passive DFD,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1996), pp. 431–438.
  14. A. N. Rajagopalan, S. Chaudhuri, “A variational approach to recovering depth from defocused images,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1158–1165 (1997).
  15. Y. Y. Schechner, N. Kiryati, “Depth from defocus vs stereo: how different really are they?” (Department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa, Israel, 1998).
  16. D. Ziou, “Passive depth from defocus using a spatial domain approach,” in Proceedings of the IEEE International Conference on Computer Vision (Narosa, New Delhi, 1998), pp. 799–804.
  17. G. Surya, M. Subbarao, “Depth from defocus by changing camera aperture: a spatial domain approach,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1993), pp. 61–67.
  18. A. N. Rajagopalan, S. Chaudhuri, “Optimum camera parameter settings for recovery of depth from defocused images,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1997), pp. 219–224.
  19. W. N. Klarquist, W. S. Geisler, A. C. Bovik, “Maximum-likelihood depth-from-defocus for active vision,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IEEE Computer Society Press, Los Alamitos, Calif., 1995), pp. 374–379.
  20. M. Born, E. Wolf, Principles of Optics (Pergamon, London, 1965).
  21. W. F. Schreiber, Fundamentals of Electronic Imaging Systems (Springer-Verlag, Berlin, 1986).
  22. J. M. Mendel, Lessons in Digital Estimation Theory (Prentice-Hall, Englewood Cliffs, N.J., 1987).
  23. A. N. Rajagopalan, S. Chaudhuri, “Performance analysis of maximum likelihood estimator for recovery of depth from defocused images and optimal selection of camera parameters,” Int. J. Comput. Vis. 30, 175–190 (1998).
  24. D. C. Ghiglia, “Space-invariant deblurring given N independently blurred images of a common object,” J. Opt. Soc. Am. A 1, 398–402 (1984).




Figures (4)

Fig. 1. Geometry of the image formation process.

Fig. 2. Illustration of the CRLB of Var(s̃) for the single- and the two-image case. The dashed line and the solid curve correspond to CRLB1 and CRLB2, respectively.

Fig. 3. (a) Synthetically generated focused AR image. (b)–(d) Defocused and noisy versions of the original image for different values of α. (e) Magnitude of the error in the ML estimate of σ1. The dashed line and the solid curve correspond to the single- and the two-image case, respectively.

Fig. 4. (a)–(d) Defocused images corresponding to a real experimental setup for different focusing ranges of the camera. (e) Plot of the percent error in the ML estimate of the depth for the nearest end of the textured planar object. The dashed line and the solid curve correspond to the single- and the two-image case, respectively.

Equations (52)


rb1-δv0-v=r0-v,rb2+δv0-v=r0+v,
δv0-v=v.
$$r_{b1}=r_{b2}=r_b=r_0\left(\frac{v_0}{v}-1\right).$$

$$r_b=r_0 v_0\left(\frac{1}{F_l}-\frac{1}{v_0}-\frac{1}{D}\right).$$

$$h(i,j)=\frac{1}{2\pi\sigma^2}\exp\left[-\frac{(i^2+j^2)}{2\sigma^2}\right],$$

$$\sigma=\rho\, r_0 v_0\left(\frac{1}{F_l}-\frac{1}{v_0}-\frac{1}{D}\right).$$
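As a quick numerical illustration, the last relation can be inverted for the depth D once the camera constants are known. The sketch below uses hypothetical camera settings (all values are illustrative, not taken from the paper):

```python
import math

def blur_sigma(D, Fl, v0, r0, rho):
    # sigma = rho * r0 * v0 * (1/Fl - 1/v0 - 1/D)
    return rho * r0 * v0 * (1.0 / Fl - 1.0 / v0 - 1.0 / D)

def depth_from_sigma(sigma, Fl, v0, r0, rho):
    # invert the relation above for the object depth D
    return 1.0 / (1.0 / Fl - 1.0 / v0 - sigma / (rho * r0 * v0))

# round-trip check with hypothetical settings (all lengths in mm)
Fl, v0, r0, rho = 50.0, 52.0, 5.0, 1.0
D = 2000.0
s = blur_sigma(D, Fl, v0, r0, rho)
assert abs(depth_from_sigma(s, Fl, v0, r0, rho) - D) < 1e-6
```

This inversion is exactly why a single measured blur σ pins down the depth when the lens constants (Fl, v0, r0, ρ) are calibrated.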
$$g(i,j)=\sum_{(m,n)\in S_h} h(m,n)\,f(i-m,\,j-n)+w(i,j),$$

$$\bar{G}=\Lambda_H\bar{F}+\bar{W},$$

$$f(i,j)=\sum_{(m,n)\in S_a} a(m,n)\,f(i-m,\,j-n)+v(i,j),$$

$$\bar{F}=(I-\Lambda_A)^{-1}\bar{V},$$

$$p(G)=\frac{1}{(2\pi)^{N/2}(\det P)^{1/2}}\exp\left[-\frac{1}{2}\,\bar{G}^H P^{-1}\bar{G}\right],$$

$$P=E[\bar{G}\bar{G}^H]=\Lambda_H E[\bar{F}\bar{F}^H]\Lambda_H^H+E[\bar{W}\bar{W}^H]=\Lambda_H(I-\Lambda_A)^{-1}E[\bar{V}\bar{V}^H](I-\Lambda_A)^{-H}\Lambda_H^H+\sigma_w^2 I=\sigma_v^2\,\Lambda_H(I-\Lambda_A)^{-1}(I-\Lambda_A)^{-H}\Lambda_H^H+\sigma_w^2 I.$$

$$P(k,k)=\frac{H(k)H^*(k)}{|1-A(k)|^2}\,\sigma_v^2+\sigma_w^2,\qquad k=0,1,\ldots,N-1,$$

$$P(k,k)=\frac{\exp\left[-\left(\frac{2\pi k}{N}\right)^2 s\right]}{|1-A(k)|^2}\,\sigma_v^2+\sigma_w^2.$$
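The degradation model above (space-invariant blur plus additive noise, driven by an AR focused image) is straightforward to simulate. A minimal 1-D sketch, assuming a Gaussian PSF, circular boundary handling, and illustrative parameter values (the 2-D case is identical in structure):

```python
import math, random

def gaussian_psf(sigma, radius=8):
    # 1-D analogue of h: samples of exp(-i^2 / (2*sigma^2)), normalized to unit sum
    h = [math.exp(-i * i / (2.0 * sigma ** 2)) for i in range(-radius, radius + 1)]
    total = sum(h)
    return [x / total for x in h]

def defocused_observation(f, sigma, noise_sd, seed=0):
    # g(i) = sum_m h(m) f(i-m) + w(i), with circular wraparound at the borders
    rng = random.Random(seed)
    h = gaussian_psf(sigma)
    r = len(h) // 2
    n = len(f)
    return [sum(h[m + r] * f[(i - m) % n] for m in range(-r, r + 1))
            + rng.gauss(0.0, noise_sd) for i in range(n)]

# focused "image": a 1-D AR(1) process f(i) = a f(i-1) + v(i)
rng = random.Random(1)
f = [0.0]
for _ in range(255):
    f.append(0.9 * f[-1] + rng.gauss(0.0, 1.0))

# two differently blurred observations of the same scene (sigma2 = alpha * sigma1, alpha = 2)
g1 = defocused_observation(f, sigma=1.0, noise_sd=0.05, seed=2)
g2 = defocused_observation(f, sigma=2.0, noise_sd=0.05, seed=3)
```

The heavier blur in g2 suppresses more of the high-frequency AR content, which is what makes the pair (g1, g2) informative about s.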
$$\min_\theta\, F_1(\theta),$$

$$F_1(\theta)=\log(\det P)+\bar{G}^H P^{-1}\bar{G}.$$

$$F_1(\theta)=\log\prod_{k=0}^{N-1}P(k,k)+\begin{bmatrix}G^*(0)&\cdots&G^*(N-1)\end{bmatrix}\begin{bmatrix}\frac{1}{P(0,0)}&&\\&\ddots&\\&&\frac{1}{P(N-1,N-1)}\end{bmatrix}\begin{bmatrix}G(0)\\\vdots\\G(N-1)\end{bmatrix}.$$

$$F_1(\theta)=\sum_{k=0}^{N-1}\left[\log P(k,k)+\frac{|G(k)|^2}{P(k,k)}\right].$$

$$E[\tilde{s}^2]\geq\frac{1}{-E\left[\dfrac{\partial^2\log p(G)}{\partial s^2}\right]},$$

$$\log p(G)=-\frac{N}{2}\log 2\pi-\frac{1}{2}\sum_{k=0}^{N-1}\left[\log P(k,k)+\frac{|G(k)|^2}{P(k,k)}\right].$$

$$\frac{\partial^2\log p(G)}{\partial s^2}=-\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{1}{P(k,k)}\frac{\partial^2 P(k,k)}{\partial s^2}-\frac{1}{[P(k,k)]^2}\left(\frac{\partial P(k,k)}{\partial s}\right)^2-|G(k)|^2\frac{1}{[P(k,k)]^2}\frac{\partial^2 P(k,k)}{\partial s^2}+2|G(k)|^2\frac{1}{[P(k,k)]^3}\left(\frac{\partial P(k,k)}{\partial s}\right)^2\right].$$

$$-E\left[\frac{\partial^2\log p(G)}{\partial s^2}\right]=\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{1}{P(k,k)}\frac{\partial^2 P(k,k)}{\partial s^2}-\frac{1}{[P(k,k)]^2}\left(\frac{\partial P(k,k)}{\partial s}\right)^2-P(k,k)\left(\frac{1}{[P(k,k)]^2}\frac{\partial^2 P(k,k)}{\partial s^2}-\frac{2}{[P(k,k)]^3}\left(\frac{\partial P(k,k)}{\partial s}\right)^2\right)\right].$$

$$\frac{1}{E[\tilde{s}^2]}\leq\frac{1}{2}\sum_{k=0}^{N-1}\frac{1}{[P(k,k)]^2}\left(\frac{\partial P(k,k)}{\partial s}\right)^2.$$
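The single-image bound can be evaluated numerically for the Gaussian-blur spectrum above. A sketch assuming A(k) = 0 (white AR innovations) and illustrative noise levels; both the spectrum and its s-derivative are available in closed form:

```python
import math

def P(k, N, s, sv2, sw2):
    # P(k,k) = exp(-(2*pi*k/N)^2 * s) * sv2 / |1 - A(k)|^2 + sw2, with A(k) = 0 assumed
    w = (2.0 * math.pi * k / N) ** 2
    return math.exp(-w * s) * sv2 + sw2

def dP_ds(k, N, s, sv2):
    # closed-form derivative of P(k,k) with respect to s
    w = (2.0 * math.pi * k / N) ** 2
    return -w * math.exp(-w * s) * sv2

def crlb1(N, s, sv2, sw2):
    # E[s~^2] >= 2 / sum_k (dP/ds)^2 / P^2  (the single-image bound)
    total = sum((dP_ds(k, N, s, sv2) / P(k, N, s, sv2, sw2)) ** 2
                for k in range(N))
    return 2.0 / total
```

For example, crlb1(64, 1.0, 1.0, 0.1) gives the bound for one 64-sample observation; raising the signal power sv2 relative to the sensor noise sw2 tightens it.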
$$\sigma_m=\rho\, r_m v_m\left(\frac{1}{F_{l_m}}-\frac{1}{v_m}-\frac{1}{D}\right),\qquad m=1,2.$$

$$\bar{G_i}=\Lambda_{H_i}\bar{F}+\bar{W_i},\qquad i=1,2.$$

$$\min_\theta\, F_2(\theta),$$

$$F_2(\theta)=\sum_{k=0}^{N-1}\{\log P_{1,1}(k,k)-\log C_2(k,k)+|G_1(k)|^2 A_2(k,k)+2\,\mathrm{Re}[G_1^*(k)B_2(k,k)G_2(k)]+|G_2(k)|^2 C_2(k,k)\}.$$

$$A_2(k,k)=\frac{1}{P_{1,1}(k,k)}+\frac{|P_{1,2}(k,k)|^2\,C_2(k,k)}{[P_{1,1}(k,k)]^2},$$

$$B_2(k,k)=-\frac{P_{1,2}(k,k)\,C_2(k,k)}{P_{1,1}(k,k)},$$

$$C_2(k,k)=\frac{P_{1,1}(k,k)}{P_{1,1}(k,k)P_{2,2}(k,k)-|P_{1,2}(k,k)|^2}.$$

$$P_{1,1}(k,k)=\frac{\exp\left[-\left(\frac{2\pi k}{N}\right)^2 s\right]}{|1-A(k)|^2}\,\sigma_v^2+\sigma_w^2,$$

$$P_{1,2}(k,k)=P_{2,1}(k,k)=\frac{\exp\left[-\left(\frac{2\pi k}{N}\right)^2\frac{(1+\alpha^2)}{2}\,s\right]}{|1-A(k)|^2}\,\sigma_v^2,$$

$$P_{2,2}(k,k)=\frac{\exp\left[-\left(\frac{2\pi k}{N}\right)^2\alpha^2 s\right]}{|1-A(k)|^2}\,\sigma_v^2+\sigma_w^2.$$
$$E[\tilde{s}^2]\geq\frac{1}{-E\left[\dfrac{\partial^2\log p(G_2)}{\partial s^2}\right]},$$

$$\log p(G_2)=-N\log 2\pi-\frac{1}{2}F_2(\theta).$$

$$\frac{\partial^2\log p(G_2)}{\partial s^2}=-\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{1}{P_{1,1}(k,k)}\frac{\partial^2 P_{1,1}(k,k)}{\partial s^2}-\frac{1}{[P_{1,1}(k,k)]^2}\left(\frac{\partial P_{1,1}(k,k)}{\partial s}\right)^2-\frac{1}{C_2(k,k)}\frac{\partial^2 C_2(k,k)}{\partial s^2}+\frac{1}{[C_2(k,k)]^2}\left(\frac{\partial C_2(k,k)}{\partial s}\right)^2+|G_1(k)|^2\frac{\partial^2 A_2(k,k)}{\partial s^2}+[G_1^*(k)G_2(k)+G_2^*(k)G_1(k)]\frac{\partial^2 B_2(k,k)}{\partial s^2}+|G_2(k)|^2\frac{\partial^2 C_2(k,k)}{\partial s^2}\right].$$

$$\frac{1}{E[\tilde{s}^2]}\leq\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{1}{P_{1,1}(k,k)}\frac{\partial^2 P_{1,1}(k,k)}{\partial s^2}-\frac{1}{[P_{1,1}(k,k)]^2}\left(\frac{\partial P_{1,1}(k,k)}{\partial s}\right)^2-\frac{1}{C_2(k,k)}\frac{\partial^2 C_2(k,k)}{\partial s^2}+\frac{1}{[C_2(k,k)]^2}\left(\frac{\partial C_2(k,k)}{\partial s}\right)^2+P_{1,1}(k,k)\frac{\partial^2 A_2(k,k)}{\partial s^2}+2P_{1,2}(k,k)\frac{\partial^2 B_2(k,k)}{\partial s^2}+P_{2,2}(k,k)\frac{\partial^2 C_2(k,k)}{\partial s^2}\right].$$

With the shorthand $D_s^1\equiv\partial/\partial s$, $D_s^2\equiv\partial^2/\partial s^2$, $P_k^{i,j}\equiv P_{i,j}(k,k)$, and $C_2^k\equiv C_2(k,k)$, this becomes

$$\frac{1}{E[\tilde{s}^2]}\leq\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{D_s^2 P_k^{1,1}}{P_k^{1,1}}-\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+\frac{(D_s^1 C_2^k)^2}{(C_2^k)^2}+P_k^{1,1}D_s^2 A_2^k+2P_k^{1,2}D_s^2 B_2^k+\frac{(P_k^{1,2})^2}{P_k^{1,1}}D_s^2 C_2^k\right].$$
$$D_s^2 A_2^k=-\frac{D_s^2 P_k^{1,1}}{(P_k^{1,1})^2}+\frac{2(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^3}+\frac{1}{(P_k^{1,1})^2}\left\{2\left[P_k^{1,2}C_2^k D_s^2 P_k^{1,2}+(D_s^1 P_k^{1,2})(P_k^{1,2}D_s^1 C_2^k+C_2^k D_s^1 P_k^{1,2})\right]+(P_k^{1,2})^2 D_s^2 C_2^k+2(D_s^1 C_2^k)P_k^{1,2}D_s^1 P_k^{1,2}\right\}-\frac{1}{(P_k^{1,1})^4}\left\{2\left[2P_k^{1,2}C_2^k D_s^1 P_k^{1,2}+(P_k^{1,2})^2 D_s^1 C_2^k\right]P_k^{1,1}D_s^1 P_k^{1,1}\right\}-\frac{1}{(P_k^{1,1})^3}\left(2\left\{(P_k^{1,2})^2 C_2^k D_s^2 P_k^{1,1}+D_s^1 P_k^{1,1}\left[(P_k^{1,2})^2 D_s^1 C_2^k+2C_2^k P_k^{1,2}D_s^1 P_k^{1,2}\right]\right\}\right)+\frac{6(P_k^{1,2})^2 C_2^k(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^4},$$

$$D_s^2 B_2^k=-\frac{1}{(P_k^{1,1})^2}\left\{P_k^{1,1}P_k^{1,2}D_s^2 C_2^k+D_s^1 C_2^k(P_k^{1,1}D_s^1 P_k^{1,2}+P_k^{1,2}D_s^1 P_k^{1,1})+P_k^{1,1}C_2^k D_s^2 P_k^{1,2}+(D_s^1 P_k^{1,2})(P_k^{1,1}D_s^1 C_2^k+C_2^k D_s^1 P_k^{1,1})-\left[P_k^{1,2}C_2^k D_s^2 P_k^{1,1}+(D_s^1 P_k^{1,1})(P_k^{1,2}D_s^1 C_2^k+C_2^k D_s^1 P_k^{1,2})\right]\right\}+\frac{1}{(P_k^{1,1})^4}\left[2\left(P_k^{1,1}P_k^{1,2}D_s^1 C_2^k+P_k^{1,1}C_2^k D_s^1 P_k^{1,2}-P_k^{1,2}C_2^k D_s^1 P_k^{1,1}\right)P_k^{1,1}D_s^1 P_k^{1,1}\right].$$

$$P_k^{1,1}D_s^2 A_2^k=-\frac{D_s^2 P_k^{1,1}}{P_k^{1,1}}+2\,\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+2\,\frac{P_k^{1,2}}{P_k^{1,1}}\,C_2^k D_s^2 P_k^{1,2}+4\,\frac{P_k^{1,2}}{P_k^{1,1}}\,D_s^1 P_k^{1,2}\,D_s^1 C_2^k+2\,\frac{C_2^k}{P_k^{1,1}}(D_s^1 P_k^{1,2})^2+\frac{(P_k^{1,2})^2}{P_k^{1,1}}D_s^2 C_2^k-8\,\frac{P_k^{1,2}}{(P_k^{1,1})^2}\,C_2^k D_s^1 P_k^{1,2}\,D_s^1 P_k^{1,1}-4\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^2}\,D_s^1 C_2^k\,D_s^1 P_k^{1,1}-2\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^2}\,C_2^k D_s^2 P_k^{1,1}+6\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^3}\,C_2^k(D_s^1 P_k^{1,1})^2,$$

$$2P_k^{1,2}D_s^2 B_2^k=-2\,\frac{(P_k^{1,2})^2}{P_k^{1,1}}D_s^2 C_2^k-4\,\frac{P_k^{1,2}}{P_k^{1,1}}\,D_s^1 C_2^k\,D_s^1 P_k^{1,2}-2\,\frac{P_k^{1,2}}{P_k^{1,1}}\,C_2^k D_s^2 P_k^{1,2}+2\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^2}\,C_2^k D_s^2 P_k^{1,1}+4\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^2}\,D_s^1 C_2^k\,D_s^1 P_k^{1,1}+4\,\frac{P_k^{1,2}}{(P_k^{1,1})^2}\,C_2^k D_s^1 P_k^{1,2}\,D_s^1 P_k^{1,1}-4\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^3}\,C_2^k(D_s^1 P_k^{1,1})^2.$$

$$P_k^{1,1}D_s^2 A_2^k+2P_k^{1,2}D_s^2 B_2^k=-\frac{D_s^2 P_k^{1,1}}{P_k^{1,1}}+2\,\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+2\,\frac{C_2^k}{P_k^{1,1}}(D_s^1 P_k^{1,2})^2-\frac{(P_k^{1,2})^2}{P_k^{1,1}}D_s^2 C_2^k-4\,\frac{P_k^{1,2}}{(P_k^{1,1})^2}\,C_2^k D_s^1 P_k^{1,1}\,D_s^1 P_k^{1,2}+2\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^3}\,C_2^k(D_s^1 P_k^{1,1})^2.$$
$$\frac{1}{E[\tilde{s}^2]}\leq\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+\frac{(D_s^1 C_2^k)^2}{(C_2^k)^2}+2\,\frac{C_2^k}{P_k^{1,1}}(D_s^1 P_k^{1,2})^2-4\,\frac{P_k^{1,2}}{(P_k^{1,1})^2}\,C_2^k D_s^1 P_k^{1,1}\,D_s^1 P_k^{1,2}+2\,\frac{(P_k^{1,2})^2}{(P_k^{1,1})^3}\,C_2^k(D_s^1 P_k^{1,1})^2\right].$$

$$\frac{1}{E[\tilde{s}^2]}\leq\frac{1}{2}\sum_{k=0}^{N-1}\left[\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+\frac{(D_s^1 C_2^k)^2}{(C_2^k)^2}+2\,\frac{C_2^k}{P_k^{1,1}}\left(D_s^1 P_k^{1,2}-\frac{P_k^{1,2}}{P_k^{1,1}}D_s^1 P_k^{1,1}\right)^2\right].$$

$$E[\tilde{s}^2]\geq\frac{2}{\displaystyle\sum_{k=0}^{N-1}\left[\frac{(D_s^1 P_k^{1,1})^2}{(P_k^{1,1})^2}+A\right]},$$

$$A=\frac{(D_s^1 C_2^k)^2}{(C_2^k)^2}+2\,\frac{C_2^k}{P_k^{1,1}}\left(D_s^1 P_k^{1,2}-\frac{P_k^{1,2}}{P_k^{1,1}}D_s^1 P_k^{1,1}\right)^2.$$
$$E[\tilde{s}^2]\geq\frac{2}{\displaystyle\sum_{k=0}^{N-1}\frac{1}{[P(k,k)]^2}\left(\frac{\partial P(k,k)}{\partial s}\right)^2},$$

$$P_2=\begin{bmatrix}P_{1,1}&P_{1,2}\\P_{2,1}&P_{2,2}\end{bmatrix},$$

$$\mathrm{CRLB}_2\leq\mathrm{CRLB}_1,$$

$$\mathrm{CRLB}_M\leq\mathrm{CRLB}_{M-1}\qquad\text{for } M\geq 2.$$
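The ordering CRLB2 ≤ CRLB1 can be checked numerically from the spectral entries above, since the two-image Fisher sum is the single-image sum plus the nonnegative term A. A sketch assuming A(k) = 0 and with ∂C2/∂s approximated by central finite differences (a numerical shortcut, not the paper's closed form):

```python
import math

def P_entries(k, N, s, alpha, sv2, sw2):
    # spectral covariance entries P11, P12, P22 for two defocused observations,
    # with |1 - A(k)|^2 taken as 1 (white AR innovations assumed)
    w = (2.0 * math.pi * k / N) ** 2
    p11 = math.exp(-w * s) * sv2 + sw2
    p12 = math.exp(-w * (1.0 + alpha ** 2) / 2.0 * s) * sv2
    p22 = math.exp(-w * alpha ** 2 * s) * sv2 + sw2
    return p11, p12, p22

def C2(k, N, t, alpha, sv2, sw2):
    # C2 = P11 / (P11*P22 - P12^2)
    p11, p12, p22 = P_entries(k, N, t, alpha, sv2, sw2)
    return p11 / (p11 * p22 - p12 * p12)

def bounds(N, s, alpha, sv2, sw2, h=1e-6):
    # J1: single-image Fisher sum; J2 adds the nonnegative two-image terms
    J1 = J2 = 0.0
    for k in range(N):
        w = (2.0 * math.pi * k / N) ** 2
        p11, p12, _ = P_entries(k, N, s, alpha, sv2, sw2)
        d11 = -w * math.exp(-w * s) * sv2                        # dP11/ds
        d12 = (-w * (1.0 + alpha ** 2) / 2.0
               * math.exp(-w * (1.0 + alpha ** 2) / 2.0 * s) * sv2)  # dP12/ds
        c2 = C2(k, N, s, alpha, sv2, sw2)
        dc2 = (C2(k, N, s + h, alpha, sv2, sw2)
               - C2(k, N, s - h, alpha, sv2, sw2)) / (2.0 * h)   # dC2/ds (numeric)
        J1 += (d11 / p11) ** 2
        J2 += (d11 / p11) ** 2 + (dc2 / c2) ** 2 \
              + 2.0 * c2 / p11 * (d12 - p12 / p11 * d11) ** 2
    return 2.0 / J1, 2.0 / J2    # (CRLB1, CRLB2)

crlb1, crlb2 = bounds(N=32, s=1.0, alpha=2.0, sv2=1.0, sw2=0.01)
assert crlb2 < crlb1   # a second, differently blurred image tightens the bound
```

With α = 1 (identically blurred observations) the square in the extra term vanishes at every frequency, which is exactly why the theorem requires the images to be differently blurred.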
