Abstract

A new obstacle detection method based on range clusters is proposed for time-of-flight 3D imaging sensors. To reduce the influence of outliers and noise in the range images, intensity images are used to estimate the noise deviation of the range data, and a weighted local linear smoothing projects the data onto a new manifold surface. The proposed method divides the 3D imaging data into range clusters of different shapes and sizes according to the distance relation between neighboring pixels, and several criteria are set to adjust the range clusters toward an optimal shape and size. Experiments on SwissRanger sensor data show that, compared with traditional obstacle detection methods based on regular data patches, the proposed method yields more precise detection results.

© 2014 Optical Society of America
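
As a rough illustration of the preprocessing summarized in the abstract, the sketch below estimates a per-pixel range-noise standard deviation from the amplitude (intensity) image and builds neighborhood weights for a weighted local linear smoothing. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the calibration constant C, and the reading of the weight factor Δσ_j as the neighbor's estimated noise deviation are ours.

```python
import numpy as np

def estimate_noise_std(amplitude, C=1.0):
    # Per-pixel range-noise standard deviation, assumed inversely
    # proportional to the ToF amplitude (sigma_hat = C / A).
    # C is a calibration constant; 1.0 is only a placeholder.
    return C / np.maximum(amplitude, 1e-6)

def smoothing_weights(points, sigma, k=8, gamma=1.0):
    # Neighborhood weights for a weighted local linear smoothing step:
    # w_ij = dsigma_j * exp(-gamma * ||x_i - x_j||) over the k nearest
    # neighbors of x_i, and 0 elsewhere.  Here dsigma_j is taken to be
    # the estimated noise deviation of neighbor j (an assumption).
    n = points.shape[0]
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]   # skip the point itself
        W[i, nbrs] = sigma[nbrs] * np.exp(-gamma * dists[i, nbrs])
    return W
```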


Figures (10)

Fig. 1. Obstacle detection result of depth images using regular data patches.

Fig. 2. Three-layer wavelet transform of images.

Fig. 3. Vertical view of projection grids of depth images.

Fig. 4. Basic range clusters (K = 220).

Fig. 5. Pixel number of basic range clusters and the least covering sizes: (a) pixel number in every basic range cluster; (b) the least square covering region size corresponding to every basic range cluster.

Fig. 6. Histogram of the least covering sizes according to the basic range clusters.

Fig. 7. Ambiguity boundary lines of range images and the extracted non-ambiguity foreground regions.

Fig. 8. Noise standard deviations of a depth image.

Fig. 9. Noise standard deviations of 300 frames.

Fig. 10. Comparison results between the regular-patch-based algorithm and the range-cluster-based algorithm.

Equations (21)

$D = R_{\mathrm{amb}} \dfrac{\varphi}{2\pi}$

$R_{\mathrm{amb}} = \dfrac{c}{2 f_m}$

$\Delta \phi = \sqrt{\sum_{i=1}^{3} \left( \dfrac{\partial \phi}{\partial A_i} \right)^{2} \Delta A_i^{2}}$

$\Delta L = \dfrac{L}{\sqrt{8}} \cdot \dfrac{\sqrt{B}}{2A}$

$\hat{\sigma}_l = C \dfrac{1}{A_l}$

$\hat{\sigma}_{bl}^{2} = \dfrac{1}{N} \sum_{k=1}^{N} \left( X_{l,k} - m_l \right)^{2}$

$C = \dfrac{1}{PM} \sum_{l=1}^{PM} A_{bl}\, \hat{\sigma}_{bl}$

$w_{ij} = \begin{cases} \Delta\sigma_j \exp\left( -\gamma \lVert x_i - x_j \rVert \right), & x_j \in [x_{i1}, x_{i2}, \ldots, x_{ik}] \\ 0, & \text{else} \end{cases}$

$[X, 0, Z] = [X, Y, Z] \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

$\mathrm{hist}(l_k) = m_k, \quad l_k = 1, 2, \ldots, L$

$\mathrm{Kurt} = \dfrac{1}{m-1} \sum_{k=1}^{L} m_k (l_k - l^*)^4 / \sigma^4, \quad \mathrm{Skew} = \dfrac{1}{m-1} \sum_{k=1}^{L} m_k (l_k - l^*)^3 / \sigma^3$

$S_{\min} = \operatorname{floor}\!\left( \tilde{s} - \dfrac{1}{\mathrm{Kurt}} \dfrac{1}{\mathrm{Skew}} F(a, 0, 1) \right), \quad S_{\max} = \operatorname{ceil}\!\left( \tilde{s} + \dfrac{1}{\mathrm{Kurt}} \dfrac{1}{\mathrm{Skew}} F(a, 0, 1) \right)$

$\begin{cases} \mathrm{missing}(x_i) = \lvert x_i - x_i^* \rvert, & n_i < n^* \\ \mathrm{added}(x_j) = \lvert x_j^* - x_j \rvert, & n_j > n^* \end{cases}$

$\mathrm{SQM} = \sum_{i=1}^{k_1} p(x_i)\, \mathrm{missing}(x_i) + \sum_{j=1}^{k_2} p(x_j)\, \mathrm{added}(x_j)$

$\widetilde{\mathrm{dist}}(i, j) = \mathrm{dist}(i, j)\, f(i, j)$

$W_{i,j} = \begin{cases} \exp\!\left( -\dfrac{d^*(i,j)^2}{2\sigma^2} \right), & h_i \in N_k(h_j) \ \vee \ h_j \in N_k(h_i) \\ 0, & \text{otherwise} \end{cases}$

$\min_{e} \sum_{i,j}^{n} \lVert e_i - e_j \rVert^{2} W_{ij} = E^{T} L E$

$\Omega_{ij} = \begin{cases} \exp\!\left( -\dfrac{\lvert c_i - c_j \rvert^{2}}{2\sigma^{2}} \right), & h_i \in N_k(h_j) \ \vee \ h_j \in N_k(h_i) \\ 0, & \text{otherwise} \end{cases}$

$\min_{e} \sum_{i,j} \lVert e_i - e_j \rVert^{2} W_{i,j} + \lambda \sum_{i,j} \lVert e_i - e_j \rVert^{2} \Omega_{ij}$

$\min_{a} \sum_{i} \lVert e_i - a^{T} h_i \rVert^{2} + \gamma \lVert a \rVert^{2}$

$\min_{a} \sum_{i} \lVert e_i - a^{T} h_i \rVert^{2} + \lambda \sum_{i,j} \lVert a^{T} h_i - a^{T} h_j \rVert^{2} W_{i,j} + \gamma \lVert a \rVert^{2}$
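
The last few equations describe the graph-affinity and embedding step (a Gaussian weight matrix W over k-nearest neighbors and the objective min Σ_{i,j} ‖e_i − e_j‖² W_{ij} = E^T L E). A minimal NumPy sketch of that step is given below; it omits the label-driven term Ω and the linear-projection regularization, and the function names and default parameters are assumptions, not the paper's.

```python
import numpy as np

def knn_affinity(features, k=8, sigma=1.0):
    # Gaussian affinity over k-nearest-neighbor pairs:
    # W_ij = exp(-d(i, j)^2 / (2 sigma^2)) if h_i and h_j are neighbors
    # (in either direction), 0 otherwise.
    n = features.shape[0]
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]       # k nearest neighbors of h_i
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2 / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)                  # symmetrize the neighbor relation

def laplacian_embedding(W, dim=2):
    # Embedding E that minimizes sum_ij ||e_i - e_j||^2 W_ij, i.e. the trace
    # of E^T L E with L = D - W; taken from the eigenvectors of L with the
    # smallest non-zero eigenvalues.
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:dim + 1]               # drop the trivial constant eigenvector
```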
