Abstract

Range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras operate by measuring the time difference of arrival between a transmitted pulse and the scene reflection. We introduce the design of a range acquisition system for acquiring depth maps of piecewise-planar scenes with high spatial resolution using a single, omnidirectional, time-resolved photodetector and no scanning components. In our experiment, we reconstructed 64 × 64-pixel depth maps of scenes comprising two to four planar shapes using only 205 spatially-patterned, femtosecond illuminations of the scene. The reconstruction uses parametric signal modeling to recover a set of depths present in the scene. Then, a convex optimization that exploits sparsity of the Laplacian of the depth map of a typical scene determines correspondences between spatial positions and depths. In contrast with 2D laser scanning used in LIDAR systems and low-resolution 2D sensor arrays used in TOF cameras, our experiment demonstrates that it is possible to build a non-scanning range acquisition system with high spatial resolution using only a standard, low-cost photodetector and a spatial light modulator.
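The Laplacian-sparsity prior mentioned above can be illustrated with a short numerical sketch (a toy two-facet scene of my own construction, not the paper's data): the discrete Laplacian of a piecewise-planar depth map vanishes on the planar interiors and is nonzero only along facet boundaries, which is what makes an ℓ1-penalized convex recovery effective.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy 64 x 64 depth map: two planar facets (depth linear in x and y on each).
N = 64
x, y = np.meshgrid(np.arange(N), np.arange(N))
depth = np.where(x < N // 2,
                 2.0 + 0.010 * x + 0.02 * y,   # facet 1
                 3.5 - 0.015 * x + 0.01 * y)   # facet 2

# Discrete Laplacian (5-point stencil); second differences of a linear
# function are zero, so planar interiors contribute nothing.
stencil = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
lap = convolve2d(depth, stencil, mode='valid')

# Only the pixels adjacent to the facet boundary are nonzero.
nonzero_frac = np.mean(np.abs(lap) > 1e-9)
print(nonzero_frac)  # small fraction: boundary pixels only
```

In the paper's reconstruction this sparsity is exploited inside a convex program (solved with CVX, Refs. 29–30); the sketch above only demonstrates why the ℓ1 norm of the Laplacian is a sensible objective for piecewise-planar scenes.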

© 2011 OSA


References


  1. K. Carlsson, P. E. Danielsson, R. Lenz, A. Liljeborg, L. Majlöf, and N. Åslund, “Three-dimensional microscopy using a confocal laser scanning microscope,” Opt. Lett. 10, 53–55 (1985).
  2. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296, 541–545 (2002).
  3. A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. Remote Sens. 54, 68–82 (1999).
  4. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice-Hall, 2002).
  5. S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2006), pp. 519–528.
  6. S. Hussmann, T. Ringbeck, and B. Hagebeuker, “A performance review of 3D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision, A. Bhatti, ed. (InTech, 2008), pp. 103–120.
  7. E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—a survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
  8. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
  9. B. Schwarz, “LIDAR: mapping the world in 3D,” Nat. Photonics 4, 429–430 (2010).
  10. S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor—system description, issues and solutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (2004), p. 35.
  11. S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J. 11, 1917–1926 (2011).
  12. A. P. Cracknell and L. W. B. Hayes, Introduction to Remote Sensing (Taylor & Francis, 1991).
  13. F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13, 231–240 (2004).
  14. R. Lamb and G. Buller, “Single-pixel imaging using 3D scanning time-of-flight photon counting,” SPIE Newsroom (2010).
  15. A. Medina, F. Gayá, and F. del Pozo, “Compact laser radar and three-dimensional camera,” J. Opt. Soc. Am. A 23, 800–805 (2006).
  16. S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops (2008).
  17. M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
  18. T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
  19. M. Sarkis and K. Diepold, “Depth map compression via compressed sensing,” in Proceedings of IEEE International Conference on Image Processing (2009), pp. 737–740.
  20. I. Tošić, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
  21. E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
  22. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
  23. M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing (2006), pp. 1273–1276.
  24. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
  25. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, 2010).
  26. G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO: 2011—Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.
  27. P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix,” IEEE Trans. Signal Process. 55, 1741–1757 (2007).
  28. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed. (Prentice-Hall, 2009).
  29. M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 1.21,” http://cvxr.com/cvx.
  30. M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, eds. (Springer-Verlag, 2008), pp. 95–110.
  31. G. C. M. R. de Prony, “Essai expérimental et analytique: sur les lois de la dilatabilité de fluides élastiques et sur celles de la force expansive de la vapeur de l’alkool, à différentes températures,” J. de l’École Polytechnique 1, 24–76 (1795).

2011 (1)

I. Toši?, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
[CrossRef]

2010 (2)

B. Schwarz, “LIDAR: mapping the world in 3D,” Nat. Photonics 4, 429–430 (2010).
[CrossRef]

R. Lamb and G. Buller, “Single-pixel imaging using 3d scanning time-of-flight photon counting,” SPIE News-room (2010). .
[CrossRef]

2008 (1)

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

2007 (1)

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

2006 (2)

A. Medina, F. Gayá, and F. del Pozo, “Compact laser radar and three-dimensional camera,” J. Opt. Soc. Am. A 23, 800–805 (2006).
[CrossRef]

E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
[CrossRef]

2004 (1)

F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13, 231–240 (2004).
[CrossRef]

2002 (3)

M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
[CrossRef]

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
[CrossRef]

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

1999 (1)

A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. Remote Sens. 54, 68–82 (1999).
[CrossRef]

1985 (1)

Ahlgren, U.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Alatan, A. A.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Alenyà, G.

S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J.11, 1917–1926 (2011).
[CrossRef]

Åslund, N.

Baldock, R.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Bamji, C.

S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor — system description, issues and solutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2004), p. 35.

Baraniuk, R. G.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Baron, D.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

Benzie, P.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Blais, F.

F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13, 231–240 (2004).
[CrossRef]

Blu, T.

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
[CrossRef]

P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Trans. Signal Process.55, 1741–1757 (2007).
[CrossRef]

Boyd, R. W.

G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.

Boyd, S.

M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, eds. (Springer-Verlag Limited, 2008), pp. 95–110.
[CrossRef]

Buller, G.

R. Lamb and G. Buller, “Single-pixel imaging using 3d scanning time-of-flight photon counting,” SPIE News-room (2010). .
[CrossRef]

Candès, E. J.

E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
[CrossRef]

Carlsson, K.

Coulot, L.

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

Cracknell, A. P.

A. P. Cracknell and L. W. B. Hayes, Introduction to Remote Sensing (Taylor & Francis, 1991).

Culpepper, B. J.

I. Toši?, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
[CrossRef]

Curless, B.

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

Danielsson, P. E.

Davenport, M. A.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Davidson, D.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Davis, J.

S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2008).
[CrossRef]

de Prony, G. C. M. R.

G. C. M. R. de Prony, “Essai éxperimental et analytique: Sur les lois de la dilatabilité de fluides élastique et sur celles de la force expansive de la vapeur de l’alkool, à différentes températures,” J. de l’ École Polytechnique 1, 24–76 (1795).

del Pozo, F.

Diebel, J.

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

Diepold, K.

M. Sarkis and K. Diepold, “Depth map compression via compressed sensing,” in Proceedings of IEEE International Conference on Image Processing, (2009), pp. 737–740.

Donoho, D. L.

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory52, 1289–1306 (2006).
[CrossRef]

Dragotti, P. L.

P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Trans. Signal Process.55, 1741–1757 (2007).
[CrossRef]

Dragotti, P.-L.

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

Duarte, M. F.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Elad, M.

M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, 2010).
[PubMed]

Foix, S.

S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J.11, 1917–1926 (2011).
[CrossRef]

Forsyth, D. A.

D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice-Hall, 2002).

Gayá, F.

Gokturk, S. B.

S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor — system description, issues and solutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2004), p. 35.

Grammalidis, N.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Grant, M.

M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, eds. (Springer-Verlag Limited, 2008), pp. 95–110.
[CrossRef]

Hagebeuker, B.

S. Hussmann, T. Ringbeck, and B. Hagebeuker, “A performance review of 3D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision, A. Bhatti, ed. (InTech, 2008), pp. 103–120.

Hayes, L. W. B.

A. P. Cracknell and L. W. B. Hayes, Introduction to Remote Sensing (Taylor & Francis, 1991).

Hecksher-Sørensen, J.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Hill, B.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Howell, J. C.

G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.

Howland, G.

G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.

Hussmann, S.

S. Hussmann, T. Ringbeck, and B. Hagebeuker, “A performance review of 3D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision, A. Bhatti, ed. (InTech, 2008), pp. 103–120.

Kelly, K.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Kelly, K. F.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

Lamb, R.

R. Lamb and G. Buller, “Single-pixel imaging using 3d scanning time-of-flight photon counting,” SPIE News-room (2010). .
[CrossRef]

Laska, J. N.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Lenz, R.

Liljeborg, A.

Lohr, U.

A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. Remote Sens. 54, 68–82 (1999).
[CrossRef]

Majlöf, L.

Malassiotis, S.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Marziliano, P.

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
[CrossRef]

Medina, A.

Olshausen, B. A.

I. Toši?, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
[CrossRef]

Oppenheim, A. V.

A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed. (Prentice-Hall, 2009).

Ostermann, J.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Perry, P.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Piekh, S.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Ponce, J.

D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice-Hall, 2002).

Ringbeck, T.

S. Hussmann, T. Ringbeck, and B. Hagebeuker, “A performance review of 3D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision, A. Bhatti, ed. (InTech, 2008), pp. 103–120.

Ross, A.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Sainov, V.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Sarkis, M.

M. Sarkis and K. Diepold, “Depth map compression via compressed sensing,” in Proceedings of IEEE International Conference on Image Processing, (2009), pp. 737–740.

Sarvotham, S.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

Schafer, R. W.

A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed. (Prentice-Hall, 2009).

Scharstein, D.

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
[CrossRef]

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

Schuon, S.

S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2008).
[CrossRef]

Schwarz, B.

B. Schwarz, “LIDAR: mapping the world in 3D,” Nat. Photonics 4, 429–430 (2010).
[CrossRef]

Seitz, S. M.

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

Sharpe, J.

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

Stoykova, E.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Sun, T.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Szeliski, R.

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
[CrossRef]

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

Takhar, D.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag.25, 83–91 (2008).
[CrossRef]

Tao, T.

E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
[CrossRef]

Theobalt, C.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2008).
[CrossRef]

Thevar, T.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Thrun, S.

S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2008).
[CrossRef]

Torras, C.

S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J.11, 1917–1926 (2011).
[CrossRef]

Tošic, I.

I. Toši?, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
[CrossRef]

Vetterli, M.

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
[CrossRef]

P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Trans. Signal Process.55, 1741–1757 (2007).
[CrossRef]

Wakin, M. B.

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

Wehr, A.

A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. Remote Sens. 54, 68–82 (1999).
[CrossRef]

Yalcin, H.

S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor — system description, issues and solutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2004), p. 35.

Zabulis, X.

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

Zerom, P.

G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.

IEEE J. Sel. Top. Signal Process. (1)

I. Toši?, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
[CrossRef]

IEEE Signal Process. Mag. (1)

T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Process. Mag. 25, 31–40 (2008).
[CrossRef]

IEEE Trans. Circ. Syst. Video Tech. (1)

E. Stoykova, A. A. Alatan, P. Benzie, N. Grammalidis, S. Malassiotis, J. Ostermann, S. Piekh, V. Sainov, C. Theobalt, T. Thevar, and X. Zabulis, “3-D time-varying scene capture technologies—A survey,” IEEE Trans. Circ. Syst. Video Tech. 17, 1568–1586 (2007).
[CrossRef]

IEEE Trans. Inf. Theory (1)

E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52, 5406–5425 (2006).
[CrossRef]

IEEE Trans. Signal Process. (1)

M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process. 50, 1417–1428 (2002).
[CrossRef]

Int. J. Comput. Vis. (1)

D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
[CrossRef]

ISPRS J. Photogramm. Remote Sens. (1)

A. Wehr and U. Lohr, “Airborne laser scanning—an introduction and overview,” ISPRS J. Photogramm. Remote Sens. 54, 68–82 (1999).
[CrossRef]

J. de l’ École Polytechnique (1)

G. C. M. R. de Prony, “Essai éxperimental et analytique: Sur les lois de la dilatabilité de fluides élastique et sur celles de la force expansive de la vapeur de l’alkool, à différentes températures,” J. de l’ École Polytechnique 1, 24–76 (1795).

J. Electron. Imaging (1)

F. Blais, “Review of 20 years of range sensor development,” J. Electron. Imaging 13, 231–240 (2004).
[CrossRef]

J. Opt. Soc. Am. A (1)

Nat. Photonics (1)

B. Schwarz, “LIDAR: mapping the world in 3D,” Nat. Photonics 4, 429–430 (2010).
[CrossRef]

Opt. Lett. (1)

Science (1)

J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3d microscopy and gene expression studies,” Science 296, 541–545 (2002).
[CrossRef] [PubMed]

SPIE News-room (1)

R. Lamb and G. Buller, “Single-pixel imaging using 3d scanning time-of-flight photon counting,” SPIE News-room (2010). .
[CrossRef]

Other (17)

S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2008).
[CrossRef]

M. Sarkis and K. Diepold, “Depth map compression via compressed sensing,” in Proceedings of IEEE International Conference on Image Processing, (2009), pp. 737–740.

S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor — system description, issues and solutions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2004), p. 35.

S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J.11, 1917–1926 (2011).
[CrossRef]

A. P. Cracknell and L. W. B. Hayes, Introduction to Remote Sensing (Taylor & Francis, 1991).

D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice-Hall, 2002).

S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (2006), pp. 519–528.

S. Hussmann, T. Ringbeck, and B. Hagebeuker, “A performance review of 3D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision, A. Bhatti, ed. (InTech, 2008), pp. 103–120.

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory 52, 1289–1306 (2006).
[CrossRef]

M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “An architecture for compressive imaging,” in Proceedings of IEEE International Conference on Image Processing, (2006), pp. 1273–1276.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
[CrossRef]

M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, 2010).

G. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive sensing LIDAR for 3D imaging,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CMG3.

P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Trans. Signal Process. 55, 1741–1757 (2007).
[CrossRef]

A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed. (Prentice-Hall, 2009).

M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 1.21,” http://cvxr.com/cvx.

M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, eds. (Springer-Verlag Limited, 2008), pp. 95–110.
[CrossRef]



Figures (11)

Fig. 1

The proposed architecture for acquiring depth maps of scenes composed of piecewise-planar facets. The scene is in the far field, i.e., the baseline b and the dimensions w of each planar facet are much smaller than the distance between the imaging device and the scene. A light source with periodically varying intensity s(t) illuminates an N × N-pixel SLM. The scene is serially illuminated with M chosen spatial patterns. For each patterned illumination, the reflected light is focused onto the photodetector and K digital time samples are recorded. The resulting M × K time samples are processed computationally to reconstruct an N × N-pixel depth map of the scene.

Fig. 2

Sparsity of a signal (having a basis expansion or similar representation with a small number of coefficients significantly different from zero) is widely exploited for signal estimation and compression [25]. An N × N-pixel digital photograph (A) or depth map (B) of a scene requires N² pixel values for representation in the spatial domain. As illustrated with the output of an edge-detection method, the Laplacian of a depth map (D) typically has fewer significant coefficients than the Laplacian of a photograph (C). This structure of natural scenes is also reflected in discrete wavelet transform (DWT) coefficients sorted by magnitude: a photograph has slower decay of DWT coefficients and more nonzero coefficients (E: blue, dashed) than the corresponding depth map (E: green, solid). We exploit this simplicity of depth maps in our range acquisition framework.
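The contrast in Fig. 2 is easy to reproduce numerically. The sketch below is our own synthetic example, not the paper's data: it builds a two-facet depth map and a textured counterpart of the same geometry, applies a discrete Laplacian, and counts significant coefficients in each.

```python
import numpy as np

# Synthetic illustration of Fig. 2 (not the paper's data): a piecewise-planar
# depth map has far fewer significant Laplacian coefficients than a textured
# photograph of the same geometry.

N = 64
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Depth map: two planar facets meeting at a vertical boundary.
depth = np.where(j < N // 2, 1.0 + 0.01 * i, 2.0 - 0.005 * i)

# "Photograph": same geometry plus random surface texture.
rng = np.random.default_rng(0)
photo = depth + 0.2 * rng.standard_normal((N, N))

def laplacian(img):
    # 5-point discrete Laplacian with replicated borders.
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def significant(img, tol=1e-3):
    # Number of Laplacian coefficients above a small threshold.
    return int(np.sum(np.abs(laplacian(img)) > tol))

print(significant(depth), significant(photo))  # depth map is far sparser
```

The depth map concentrates its Laplacian energy on facet boundaries, while texture spreads significant coefficients over nearly every pixel.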

Fig. 3

(A) Scene setup for parametric signal modeling of TOF light transport; (B) Top view; (C) Notation for various angles; (D) Side view.

Fig. 4

(A) All-ones scene illumination. (B) Scene response to all-ones scene illumination. (C) Diagrammatic explanation of the modeling of the parametric signal p(t).

Fig. 5

The signal p(t) only provides information regarding the depth ranges present in the scene. It does not allow us to estimate the position and shape of the planar facet in the FOV of the imaging system. At best, the facet can be localized to lie between spherical shells specified by Tmin and Tmax. In this figure two possible positions for the rectangular facet are shown.

Fig. 6

(A) Binary patterned scene illumination. (B) Scene response to binary patterned scene illumination. (C) Diagrammatic explanation of the high-resolution SLM (small Δ) approximation. (D) Modeling of the parametric signal U^p(t) as a weighted sum of equally-spaced Diracs. Note that U^p(t) has the same time envelope as the signal P(t, T0, TϕΔϕ, TθΔθ).

Fig. 7

Depth masks are binary-valued N × N-pixel images that indicate the presence (1) or absence (0) of a particular depth at a particular position (i, j) in the discretized FOV of the sensor. Depending on Δ and the sampling rate, we obtain a uniform sampling of the depth range and hence L depth masks, one per depth value. The depth map of a scene is the weighted linear combination of the depth masks, where the weights are the numerical values of the discretized depth range, {d1, d2, …, dL}.
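The depth-mask decomposition in Fig. 7 can be stated compactly in code. A minimal sketch with a synthetic depth map and an illustrative depth grid {d1, …, dL} (our values, chosen only for demonstration):

```python
import numpy as np

# Sketch of the depth-mask decomposition in Fig. 7, with a synthetic depth
# map and an illustrative depth grid {d_1, ..., d_L} (our values).

N, L = 8, 4
depths = np.array([1.0, 1.5, 2.0, 2.5])      # discretized depth range d_1..d_L

rng = np.random.default_rng(1)
labels = rng.integers(0, L, size=(N, N))     # which depth each pixel takes
depth_map = depths[labels]

# Depth mask I^l is 1 where pixel (i, j) lies at depth d_l, else 0.
masks = np.stack([(depth_map == d).astype(float) for d in depths])

# Each pixel belongs to exactly one mask, and the weighted combination
# of the masks recovers the depth map.
recon = np.tensordot(depths, masks, axes=1)
print(np.allclose(masks.sum(axis=0), 1.0), np.allclose(recon, depth_map))  # True True
```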

Fig. 8

Parametric modeling for non-rectangular planes. The piecewise linear fit (shown in dotted black) is a good fit to the true parametric scene response from a triangular planar facet. This fit allows us to robustly estimate Tmin and Tmax.

Fig. 9

Parametric modeling in scenes with multiple planar facets. Because light transport is linear and the reflections add linearly at the detector, the parametric signal that characterizes the scene response is the sum of the facets' individual parametric signals. Thus, even with multiple planar facets, a piecewise-linear fit to the observed data allows us to reliably estimate the scene's depth range.

Fig. 10

Schematic experimental setup to demonstrate depth estimation using our proposed framework. See text for details.

Fig. 11

Photographs of experimental setup (A and B). Parametric signal estimate in response to all-transparent illumination (C and D). Parametric signal estimate in response to patterned illumination (E and F). Depth map reconstructions (G and H).

Equations (40)


$$ q(t) = a\,\delta(t - 2|OQ|), $$

$$ r_q(t) = h(t) * a\,\delta(t - 2|OQ|) = a\,h(t - 2|OQ|). $$

$$ r(t) = a \int_{\phi_1}^{\phi_2} \int_{\theta_1}^{\theta_2} h\big(t - 2|OQ(\phi,\theta)|\big)\, d\theta\, d\phi, $$

$$ r(t) = a \int_{\phi_1}^{\phi_2} \int_{\theta_1}^{\theta_2} h\big(t - 2d\sqrt{\sec^2\phi + \tan^2\theta}\big)\, d\theta\, d\phi = a \int_{0}^{\Delta\phi} \int_{0}^{\Delta\theta} h\big(t - 2d\sqrt{\sec^2(\phi_1+\phi) + \tan^2(\theta_1+\theta)}\big)\, d\theta\, d\phi, $$

$$ \sqrt{\sec^2(\phi_1+\phi) + \tan^2(\theta_1+\theta)} \approx \sqrt{\sec^2\phi_1 + \tan^2\theta_1} + \frac{(\tan\phi_1 \sec^2\phi_1)\,\phi + (\tan\theta_1 \sec^2\theta_1)\,\theta}{\sqrt{\sec^2\phi_1 + \tan^2\theta_1}}. $$

$$ r(t) = a \int_{0}^{\Delta\phi} \int_{0}^{\Delta\theta} h\!\left(t - 2d\left(\gamma(\phi_1,\theta_1) + \frac{(\tan\phi_1 \sec^2\phi_1)\,\phi + (\tan\theta_1 \sec^2\theta_1)\,\theta}{\gamma(\phi_1,\theta_1)}\right)\right) d\theta\, d\phi = a \int_{0}^{\Delta\phi} \int_{0}^{\Delta\theta} h\big(t - \tau(\phi,\theta)\big)\, d\theta\, d\phi, $$

$$ \tau(\phi,\theta) = 2d\,\gamma(\phi_1,\theta_1) + \frac{2d}{\gamma(\phi_1,\theta_1)}(\tan\phi_1 \sec^2\phi_1)\,\phi + \frac{2d}{\gamma(\phi_1,\theta_1)}(\tan\theta_1 \sec^2\theta_1)\,\theta. $$

$$ T_0 = 2d\,\gamma(\phi_1,\theta_1), \qquad T_\phi = \frac{2d}{\gamma(\phi_1,\theta_1)}\tan\phi_1 \sec^2\phi_1, \qquad T_\theta = \frac{2d}{\gamma(\phi_1,\theta_1)}\tan\theta_1 \sec^2\theta_1. $$

$$ \begin{aligned} r(t) &= a \int_{0}^{\Delta\phi} \int_{0}^{\Delta\theta} h(t - T_0 - T_\phi\phi - T_\theta\theta)\, d\theta\, d\phi = \frac{a}{T_\phi T_\theta} \int_{0}^{T_\phi\Delta\phi} \int_{0}^{T_\theta\Delta\theta} h(t - T_0 - \tau_1 - \tau_2)\, d\tau_1\, d\tau_2 \\ &= \frac{a}{T_\phi T_\theta}\, h(t) * \delta(t - T_0) * \left\{ \int_{0}^{T_\phi\Delta\phi} \delta(t - \tau_1)\, d\tau_1 \right\} * \left\{ \int_{0}^{T_\theta\Delta\theta} \delta(t - \tau_2)\, d\tau_2 \right\} \\ &= \frac{a}{T_\phi T_\theta}\, h(t) * \delta(t - T_0) * B(t, T_\phi\Delta\phi) * B(t, T_\theta\Delta\theta), \end{aligned} $$

$$ B(t, T) = \begin{cases} 1, & \text{for } t \text{ between } 0 \text{ and } T; \\ 0, & \text{otherwise}. \end{cases} $$

$$ r(t) = \frac{a}{T_\phi T_\theta}\, h(t) * P(t,\, T_0,\, T_\phi\Delta\phi,\, T_\theta\Delta\theta). $$
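The envelope of P(t, T0, TϕΔϕ, TθΔθ), a shifted Dirac convolved with two box functions, can be checked numerically. The sketch below uses synthetic values for the delay and the two box widths (our choices, not calibrated system parameters):

```python
import numpy as np

# Numerical check (synthetic parameters, not calibrated values): the
# parametric signal P(t, T0, W1, W2) is a Dirac at T0 convolved with two
# box functions of widths W1 and W2, giving a trapezoidal time envelope.

dt = 0.01
t = np.arange(0.0, 10.0, dt)

def box(width):
    # B(t, T): 1 on [0, T], 0 otherwise (epsilon guards float grid edges).
    return ((t >= -1e-12) & (t <= width + 1e-12)).astype(float)

T0, W1, W2 = 3.0, 1.0, 0.4
delta = np.zeros_like(t)
delta[int(round(T0 / dt))] = 1.0 / dt        # discrete Dirac at t = T0

# P = delta(t - T0) * B(t, W1) * B(t, W2); each convolution scaled by dt.
P = np.convolve(np.convolve(delta, box(W1)) * dt, box(W2))[: t.size] * dt

# Support runs from T0 to T0 + W1 + W2; the flat top has height min(W1, W2).
support = np.flatnonzero(P > 1e-9)
print(t[support[0]], t[support[-1]], P.max())
```

The printed support endpoints land at T0 and T0 + W1 + W2 (up to the grid spacing), which is exactly the trapezoidal envelope sketched in Fig. 6.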
$$ I_{ij} = \begin{cases} 1, & \text{if rays along SLM illumination pixel } (i,j) \text{ intersect the rectangular facet}; \\ 0, & \text{otherwise}. \end{cases} $$

$$ \begin{aligned} r^p(t) &= \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij} \left( a\,h(t) * \int_{0}^{\Delta} \int_{0}^{\Delta} \delta(t - 2D_{ij} - 2x - 2y)\, dx\, dy \right) \\ &= \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij} \left( \frac{a}{4}\, h(t) * \delta(t - 2D_{ij}) * B(t,\Delta) * B(t,\Delta) \right) \\ &= \frac{a}{4}\, h(t) * \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij} \big( \delta(t - 2D_{ij}) * B(t,\Delta) * B(t,\Delta) \big) \right). \end{aligned} $$

$$ U^p(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij} \big( \delta(t - 2D_{ij}) * B(t,\Delta) * B(t,\Delta) \big). $$

$$ \lim_{\Delta \to 0} U^p(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij} \big( \delta(t - 2D_{ij}) * \delta(t) \big) = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}\, \delta(t - 2D_{ij}). $$

$$ r_{\text{all-ones}}(t) = \lim_{\Delta \to 0} \sum_{i=1}^{N} \sum_{j=1}^{N} I_{ij} \left( a\,h(t) * \int_{0}^{\Delta} \int_{0}^{\Delta} \delta(t - 2D_{ij} - 2x - 2y)\, dx\, dy \right) = a \int_{\phi_1}^{\phi_2} \int_{\theta_1}^{\theta_2} h\big(t - 2|OQ(\phi,\theta)|\big)\, d\theta\, d\phi = r(t). $$
$$ L = \frac{T_{\max} - T_{\min}}{2\Delta}, \qquad d_1 = T_{\min}, \qquad d_\ell = d_1 + 2\Delta\ell, \quad \ell = 1, \ldots, L. $$

$$ \lim_{\Delta \to 0} U^p(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}\, \delta(t - 2D_{ij}) = \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) \delta(t - 2d_\ell), $$

$$ I_{ij}^{\ell} = \begin{cases} 1, & \text{if } D_{ij} = d_\ell; \\ 0, & \text{otherwise}, \end{cases} $$

$$ \mathfrak{F}\Big\{ \lim_{\Delta \to 0} U^p(t) \Big\} = \mathfrak{F}\left\{ \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) \delta(t - 2d_\ell) \right\} = \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) \mathfrak{F}\{\delta(t - 2d_\ell)\} = \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) e^{-i\omega 2 d_\ell}. $$

$$ \mathfrak{F}\{ r^p(t) \} = \frac{a}{4}\, \mathfrak{F}\{ h(t) * U^p(t) \} = \frac{a}{4}\, \mathfrak{F}\{ h(t) \}\, \mathfrak{F}\{ U^p(t) \}. $$

$$ \mathfrak{F}\{ r^p[k] \} = \frac{a_f}{4}\, \mathfrak{F}\{ h[k] \}\, \mathfrak{F}\{ U^p[k] \} \;\Longrightarrow\; \frac{\mathfrak{F}\{ r^p[k] \}}{\mathfrak{F}\{ h[k] \}} = \frac{a_f}{4} \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) e^{-i(4\pi f d_\ell)k}. $$

$$ \frac{R^p[k]}{H[k]} = \frac{a_f}{4} \sum_{\ell=1}^{L} \left( \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell} \right) e^{-i(4\pi f d_\ell)k}, \qquad k = 1, \ldots, K. $$

$$ y_\ell^{p} = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}^{p} I_{ij}^{\ell}, \qquad \ell = 1, \ldots, L. $$

$$ \begin{bmatrix} R^p[1]/H[1] \\ \vdots \\ R^p[k]/H[k] \\ \vdots \\ R^p[K]/H[K] \end{bmatrix} = \begin{bmatrix} 1 & \cdots & 1 & \cdots & 1 \\ \vdots & & \vdots & & \vdots \\ e^{-i(4\pi f d_1)k} & \cdots & e^{-i(4\pi f d_\ell)k} & \cdots & e^{-i(4\pi f d_L)k} \\ \vdots & & \vdots & & \vdots \\ e^{-i(4\pi f d_1)K} & \cdots & e^{-i(4\pi f d_\ell)K} & \cdots & e^{-i(4\pi f d_L)K} \end{bmatrix} \begin{bmatrix} y_1^{p} \\ \vdots \\ y_\ell^{p} \\ \vdots \\ y_L^{p} \end{bmatrix}, $$

$$ \mathbf{R}^p / \mathbf{H} = \mathbf{V}\, \mathbf{y}^p, $$

$$ \begin{bmatrix} y_1^{p} \\ \vdots \\ y_\ell^{p} \\ \vdots \\ y_L^{p} \end{bmatrix} = \begin{bmatrix} I_{11}^{1} & \cdots & I_{1N}^{1} & I_{21}^{1} & \cdots & I_{2N}^{1} & \cdots & I_{N1}^{1} & \cdots & I_{NN}^{1} \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ I_{11}^{\ell} & \cdots & I_{1N}^{\ell} & I_{21}^{\ell} & \cdots & I_{2N}^{\ell} & \cdots & I_{N1}^{\ell} & \cdots & I_{NN}^{\ell} \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ I_{11}^{L} & \cdots & I_{1N}^{L} & I_{21}^{L} & \cdots & I_{2N}^{L} & \cdots & I_{N1}^{L} & \cdots & I_{NN}^{L} \end{bmatrix} \begin{bmatrix} c_{11}^{p} \\ \vdots \\ c_{1N}^{p} \\ c_{21}^{p} \\ \vdots \\ c_{2N}^{p} \\ \vdots \\ c_{N1}^{p} \\ \vdots \\ c_{NN}^{p} \end{bmatrix}. $$

$$ \mathbf{y}_{L \times M} = \big[ \mathbf{I}^1 \cdots \mathbf{I}^\ell \cdots \mathbf{I}^L \big]^{T}_{L \times N^2}\, \mathbf{C}_{N^2 \times M}. $$
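Recovering y^p from the deconvolved Fourier samples amounts to solving the Vandermonde-type system above in the least-squares sense. A sketch with synthetic values for the frequency spacing f, the number of depths L, and the number of samples K (all assumptions made for illustration, not the paper's calibration):

```python
import numpy as np

# Sketch (synthetic f, L, K; not the paper's calibration): the deconvolved
# Fourier samples R^p[k]/H[k] are linear in the unknowns y_l^p through a
# Vandermonde-type matrix V, so y^p is recovered by least squares.

f = 0.02                          # frequency spacing (assumed units)
L, K = 4, 32                      # number of depths and of Fourier samples
d = 1.0 + 0.5 * np.arange(L)      # discretized depths d_1, ..., d_L

k = np.arange(1, K + 1)[:, None]
V = np.exp(-1j * 4 * np.pi * f * d[None, :] * k)   # entries e^{-i(4 pi f d_l) k}

rng = np.random.default_rng(2)
y_true = rng.random(L)            # y_l^p = sum_ij c_ij^p I_ij^l
rhs = V @ y_true                  # noiseless samples R^p[k]/H[k]

y_hat, *_ = np.linalg.lstsq(V, rhs, rcond=None)
print(np.max(np.abs(y_hat - y_true)))   # tiny residual in the noiseless case
```

With distinct depths and K ≥ L, V has full column rank, so the noiseless recovery is exact up to numerical precision.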
$$ \textbf{OPT:}\quad \begin{aligned} \underset{\mathbf{D}}{\text{minimize}}\quad & \Big\| \mathbf{y} - \big[ \mathbf{I}^1 \cdots \mathbf{I}^\ell \cdots \mathbf{I}^L \big]^{T} \mathbf{C} \Big\|_F^2 + \big\| \Phi \mathbf{D} + \mathbf{D} \Phi^{T} \big\|_1 \\ \text{subject to}\quad & \sum_{\ell=1}^{L} I_{ij}^{\ell} = 1 \ \text{for all } (i,j), \qquad \sum_{\ell=1}^{L} d_\ell\, \mathbf{I}^\ell = \mathbf{D}, \\ \text{and}\quad & I_{ij}^{\ell} \in \{0,1\}, \quad \ell = 1, \ldots, L, \ i = 1, \ldots, N, \ j = 1, \ldots, N. \end{aligned} $$

$$ \Phi = \begin{bmatrix} 1 & -2 & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & -2 & 1 \end{bmatrix}, $$

$$ \textbf{R-OPT:}\quad \begin{aligned} \underset{\mathbf{D}}{\text{minimize}}\quad & \Big\| \mathbf{y} - \big[ \mathbf{I}^1 \cdots \mathbf{I}^\ell \cdots \mathbf{I}^L \big]^{T} \mathbf{C} \Big\|_F^2 + \big\| \Phi \mathbf{D} + \mathbf{D} \Phi^{T} \big\|_1 \\ \text{subject to}\quad & \sum_{\ell=1}^{L} I_{ij}^{\ell} = 1 \ \text{for all } (i,j), \qquad \sum_{\ell=1}^{L} d_\ell\, \mathbf{I}^\ell = \mathbf{D}, \\ \text{and}\quad & I_{ij}^{\ell} \in [0,1], \quad \ell = 1, \ldots, L, \ i = 1, \ldots, N, \ j = 1, \ldots, N. \end{aligned} $$
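The role of the second-difference operator Φ can be illustrated numerically. The sketch below forms Φ with rows [1, −2, 1], applies it along the rows and columns of a depth map (our own construction of an ℓ1 Laplacian penalty, for illustration), and shows that a piecewise-planar depth map incurs a far smaller penalty than one without planar structure:

```python
import numpy as np

# Sketch of the second-difference operator Phi and an l1 Laplacian penalty
# built from it (our construction for illustration): a piecewise-planar
# depth map D is penalized far less than a rough one.

N = 16

def second_difference(n):
    # (n - 2) x n matrix with rows [..., 1, -2, 1, ...].
    Phi = np.zeros((n - 2, n))
    for r in range(n - 2):
        Phi[r, r:r + 3] = [1.0, -2.0, 1.0]
    return Phi

Phi = second_difference(N)

i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D_planar = np.where(j < N // 2, 0.1 * i + 0.2 * j, 2.0 - 0.05 * i)  # two facets
D_rough = np.random.default_rng(3).random((N, N))                   # no planar structure

def laplacian_l1(D):
    # Second differences along rows and columns, summed in l1 norm.
    return float(np.abs(Phi @ D).sum() + np.abs(D @ Phi.T).sum())

print(laplacian_l1(D_planar), laplacian_l1(D_rough))  # planar is much smaller
```

This is the property the ℓ1 regularizer rewards: within a facet the second differences vanish, so the penalty is supported only on facet boundaries.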
$$ r(t) = a \int_{\phi_1}^{\phi_2} \int_{\theta_1(\phi)}^{\theta_2(\phi)} h\big(t - 2|OQ(\phi,\theta)|\big)\, d\theta\, d\phi. $$

$$ r(t) = r_1(t) + r_2(t) = P_1(t,\, T_{0,1},\, T_{\phi,1}\Delta\phi_1,\, T_{\theta,1}\Delta\theta_1) + P_2(t,\, T_{0,2},\, T_{\phi,2}\Delta\phi_2,\, T_{\theta,2}\Delta\theta_2), $$

$$ r^p(t) = r_1^p(t) + r_2^p(t), $$

$$ U^p(t) = U_1^p(t) + U_2^p(t). $$
$$ r_0(t) = \lim_{\Delta \to 0} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} I_{ij} \left( h(t) * \int_{0}^{\Delta} \int_{0}^{\Delta} \delta(t - 2D_{ij} - 2x - 2y)\, dx\, dy \right) = \int_{\phi_1}^{\phi_2} \int_{\theta_1}^{\theta_2} a(\phi,\theta)\, h\big(t - 2|OQ(\phi,\theta)|\big)\, d\theta\, d\phi. $$

$$ r_1(t) = \lim_{\Delta \to 0} \sum_{i=1}^{N} \sum_{j=1}^{N} a\, I_{ij} \left( h(t) * \int_{0}^{\Delta} \int_{0}^{\Delta} \delta(t - 2D_{ij} - 2x - 2y)\, dx\, dy \right) = a \int_{\phi_1}^{\phi_2} \int_{\theta_1}^{\theta_2} h\big(t - 2|OQ(\phi,\theta)|\big)\, d\theta\, d\phi = h(t) * P(t,\, T_0,\, T_\phi\Delta\phi,\, T_\theta\Delta\theta), $$

$$ s(t) * r(t) = s(t) * \big\{ h(t) * P(t,\, T_0,\, T_\phi\Delta\phi,\, T_\theta\Delta\theta) \big\} = \big\{ s(t) * h(t) \big\} * P(t,\, T_0,\, T_\phi\Delta\phi,\, T_\theta\Delta\theta). $$
