Abstract

Ladar range images have attracted considerable attention in automatic target recognition. In this paper, Zernike moments (ZMs) are applied to classify the target in a range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target in the range image. It is found that both the rotation invariance and the classification performance of the even-order ZMs are better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.

© 2012 Optical Society of America


References


  1. G. E. Smith and B. G. Mobasseri, “Robust through-the-wall radar image classification using a target-model alignment procedure,” IEEE Trans. Image Process. 21, 754–767 (2012).
  2. B. Steder, G. Grisetti, and W. Burgard, “Robust place recognition for 3D range data based on point features,” in Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2010), pp. 1400–1405.
  3. Q. Wang, L. Wang, and J. F. Sun, “Rotation-invariant target recognition in LADAR range imagery using model matching approach,” Opt. Express 18, 15349–15360 (2010).
  4. A. Antonio, M. Pilar, and S. Santiago, “3D scene retrieval and recognition with depth gradient images,” Pattern Recogn. Lett. 32, 1337–1353 (2011).
  5. F. Stein and G. Medioni, “Structural indexing: efficient 3D object recognition,” IEEE Trans. Pattern Anal. Machine Intel. 14, 125–145 (1992).
  6. A. E. Johnson, “A representation for 3-D surface matching,” Ph.D. dissertation (Robotics Institute, Carnegie Mellon University, 1997).
  7. N. J. Mitra, L. Guibas, J. Giesen, and M. Pauly, “Probabilistic fingerprints for shapes,” in Proceedings of Symposium on Geometry Processing (ACM, 2006), pp. 121–130.
  8. A. S. Mian, M. Bennamoun, and R. Owens, “3D model-based object recognition and segmentation in cluttered scenes,” IEEE Trans. Pattern Anal. Machine Intel. 28, 1584–1600 (2006).
  9. H. Chen and B. Bhanu, “3D free-form object recognition in range images using local surface patches,” Pattern Recogn. Lett. 28, 1252–1262 (2007).
  10. M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Trans. Inf. Theory 8, 179–187 (1962).
  11. S. Chang and C. P. Grover, “Pattern recognition with generalized centroids and subcentroids,” Appl. Opt. 44, 1372–1380 (2005).
  12. A. Stern, I. Kruchakov, E. Yoavi, and N. S. Kopeika, “Recognition of motion-blurred images by use of the method of moments,” Appl. Opt. 41, 2164–2171 (2002).
  13. M. Teague, “Image analysis via the general theory of moments,” J. Opt. Soc. Am. 70, 920–930 (1980).
  14. A. Khotanzad and Y. H. Hong, “Invariant image recognition by Zernike moments,” IEEE Trans. Pattern Anal. Machine Intel. 12, 489–497 (1990).
  15. A. P. Vivanco, G. U. Serrano, F. G. Agustin, and A. C. Rodriguez, “Comparative analysis of pattern reconstruction using orthogonal moments,” Opt. Eng. 46, 017002 (2007).
  16. J. Revaud, G. Lavoue, and A. Baskurt, “Improving Zernike moments comparison for optimal similarity and rotation angle retrieval,” IEEE Trans. Pattern Anal. Machine Intel. 31, 627–636 (2009).
  17. Z. Chen and S. K. Sun, “A Zernike moment phase-based descriptor for local image representation and matching,” IEEE Trans. Image Process. 19, 205–219 (2010).
  18. G. A. Papakostas, Y. S. Boutalis, D. A. Karras, and B. G. Mertzios, “Pattern classification by using improved wavelet compressed Zernike moments,” Appl. Math. Comput. 212, 162–176 (2009).
  19. B. J. Chen, H. Z. Shu, H. Zhang, G. Chen, C. Toumoulin, J. L. Dillenseger, and L. M. Luo, “Quaternion Zernike moments and their invariants for color image analysis and object recognition,” Signal Process. 92, 308–318 (2012).
  20. C. W. Chong, R. Paramesran, and R. Mukundan, “A comparative analysis of algorithms for fast computation of Zernike moments,” Pattern Recogn. 36, 731–742 (2003).
  21. C. Singh and E. Walia, “Fast and numerically stable methods for the computation of Zernike moments,” Pattern Recogn. 43, 2497–2506 (2010).
  22. Z. W. Yang and T. Fang, “On the accuracy of image normalization by Zernike moments,” Image Vis. Comput. 28, 403–413 (2010).
  23. G. A. Papakostas, Y. S. Boutalis, C. N. Papaodysseus, and D. K. Fragoulis, “Numerical error analysis in Zernike moments computation,” Image Vis. Comput. 24, 960–969 (2006).
  24. C. Y. Wee and R. Paramesran, “On the computational aspects of Zernike moments,” Image Vis. Comput. 25, 967–980 (2007).
  25. L. Kotoulas and I. Andreadis, “Accurate calculation of image moments,” IEEE Trans. Image Process. 16, 2028–2037 (2007).
  26. L. K. Hansen and P. Salamon, “Neural network ensembles,” IEEE Trans. Pattern Anal. Machine Intel. 12, 993–1001 (1990).
  27. M. Pohit, “Neural network model for rotation invariant recognition of object shapes,” Appl. Opt. 49, 4144–4151 (2010).
  28. A. Khotanzad and J. J. H. Liou, “Recognition and pose estimation of unoccluded three-dimensional objects from a two-dimensional perspective view by banks of neural networks,” IEEE Trans. Neural Netw. 7, 897–906 (1996).
  29. J. T. J. Green and J. H. Shapiro, “Detecting objects in three-dimensional laser radar range images,” Opt. Eng. 33, 865–874 (1994).
  30. J. T. J. Green and J. H. Shapiro, “Maximum-likelihood laser radar range profiling with the expectation-maximization algorithm,” Opt. Eng. 31, 2343–2354 (1992).
  31. R. G. Donald, I. Fung, and J. H. Shapiro, “Maximum-likelihood multiresolution laser radar range imaging,” IEEE Trans. Image Process. 6, 36–46 (1997).
  32. H. Abdi and L. J. Williams, “Principal component analysis,” WIREs Comp. Stat. 2, 433–459 (2010).
  33. D. Xiao and L. Yang, “Gait recognition using Zernike moments and BP neural network,” in Proceedings of IEEE International Conference on Networking, Sensing and Control (IEEE, 2008), pp. 418–423.
  34. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. (Prentice Hall, 2002).
  35. Z. J. Liu, Q. Li, Z. W. Xia, and Q. Wang, “Target recognition for small samples of ladar range image using classifier ensembles,” Opt. Eng. 51, 087201 (2012).



Figures (11)

Fig. 1. Block diagram of the coherent ladar system.

Fig. 2. Binary images of a car and three rotated versions of it: rotation angles of (a) 0°, (b) 330°, (c) 300°, (d) 270°.

Fig. 3. Block diagram of the target recognition system.

Fig. 4. 3D models of the vehicles, labeled (a)–(j).

Fig. 5. Simulated noise-free range images for the targets in Fig. 4; (a)–(j) correspond to Figs. 4(a)–4(j), respectively.

Fig. 6. Schematic diagram of image collection by the ladar.

Fig. 7. Range images with different CNR values for the targets in Fig. 4; (a)–(j) correspond to Figs. 4(a)–4(j), respectively.

Fig. 8. Binary images preprocessed from range images with different CNR values at an azimuth angle of 45°; (a)–(j) correspond to the targets of Figs. 4(a)–4(j), respectively.

Fig. 9. Recognition rates of different features for different numbers of training samples; (a)–(d) correspond to 3, 4, 7, and 10 training samples, respectively.

Fig. 10. Schematic diagram of the serial-parallel BPNN.

Fig. 11. Performance of the serial-parallel BPNN on even-order moments; (a)–(d) correspond to 3, 4, 7, and 10 training samples, respectively.

Tables (2)

Table 1. Magnitudes of Even-Order ZMs and Their Corresponding Statistics

Table 2. Magnitudes of Odd-Order ZMs and Their Corresponding Statistics

Equations (11)


\[
\Pr\nolimits_{r|r^*}(R \mid R^*) = \prod_{j,k}\left[\frac{[1-\Pr(A)]\exp\!\left(-\dfrac{(R_{jk}-R^*_{jk})^2}{2(\delta R)^2}\right)}{\sqrt{2\pi(\delta R)^2}} + \frac{\Pr(A)}{\Delta R}\right], \qquad R_{\min}\le R,\; R^*\le R_{\max},
\]

\[
\Pr(A) \approx \frac{1}{\mathrm{CNR}}\left[\ln(N) - \frac{1}{N} + 0.577\right],
\]

\[
\delta R \approx R_{\mathrm{res}}/\sqrt{\mathrm{CNR}},
\]
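As a rough illustration of the measurement statistics above, the sketch below draws each pixel either from a Gaussian centered on the true range with spread \(\delta R\), or, with anomaly probability \(\Pr(A)\), uniformly over the range gate. The function name `simulate_range_image` and the default gate limits are hypothetical, and the \(\Pr(A)\) and \(\delta R\) expressions follow the reconstructed equations above.

```python
import numpy as np

def simulate_range_image(true_range, cnr, n, r_min=0.0, r_max=100.0,
                         r_res=1.0, rng=None):
    """Draw a noisy ladar range image from the mixture model above:
    with probability 1 - Pr(A) a pixel is the true range plus Gaussian
    noise of spread delta_R; otherwise it is a uniform anomaly."""
    rng = np.random.default_rng(rng)
    pr_a = (np.log(n) - 1.0 / n + 0.577) / cnr     # anomaly probability
    pr_a = min(max(pr_a, 0.0), 1.0)
    delta_r = r_res / np.sqrt(cnr)                 # local range accuracy
    anomalous = rng.random(true_range.shape) < pr_a
    gaussian = true_range + delta_r * rng.standard_normal(true_range.shape)
    uniform = rng.uniform(r_min, r_max, true_range.shape)
    return np.where(anomalous, uniform, gaussian), pr_a, delta_r
```

At high CNR the anomaly fraction and the Gaussian spread both shrink, which is why the binarized range images in the later figures become cleaner as CNR increases.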
\[
V_{nm}(x,y) = V_{nm}(\rho,\theta) = R_{nm}(\rho)\exp(jm\theta),
\]

\[
R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} (-1)^s \frac{(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\,\rho^{\,n-2s}.
\]

\[
\iint_{x^2+y^2\le 1} V_{nm}^{*}(x,y)\,V_{pq}(x,y)\,dx\,dy = \frac{\pi}{n+1}\,\delta_{np}\,\delta_{mq},
\]

\[
\delta_{ij} = \begin{cases} 1 & i=j \\ 0 & \text{otherwise} \end{cases}
\]

\[
Z_{nm} = \frac{n+1}{\pi}\sum_{x}\sum_{y} f(x,y)\,V_{nm}^{*}(\rho,\theta), \qquad x^2+y^2\le 1,
\]

\[
f'(x,y) = f\!\left(\frac{x}{a}+\bar{x},\, \frac{y}{a}+\bar{y}\right),
\]
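The defining sums above can be checked numerically. The sketch below (hypothetical helper names `radial_poly` and `zernike_moment`) evaluates \(R_{nm}\) and \(Z_{nm}\) on a square image sampled over the unit disk; since an in-plane rotation only multiplies \(Z_{nm}\) by a phase factor, the magnitude \(|Z_{nm}|\) is unchanged, which is the rotation-invariant feature used for classification.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Radial polynomial R_nm(rho) from the finite sum above."""
    m = abs(m)
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s)
                  * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s))
               * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike_moment(img, n, m):
    """Z_nm of a square image, approximating the sum over the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                                  # pixels inside the disk
    v_conj = radial_poly(n, m, rho) * np.exp(-1j * m * theta)   # V*_nm
    # (n+1)/pi * sum f(x,y) V*_nm, scaled by the per-pixel area 4/(h*w)
    return (n + 1) / np.pi * np.sum(img[mask] * v_conj[mask]) * 4.0 / (h * w)
```

On a grid that is symmetric under quarter turns, `np.rot90` rotates the image exactly, so the magnitude of any `zernike_moment(img, n, m)` is preserved to floating-point precision.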
\[
g_k(x) \equiv z_k = f\!\left(\sum_{j=1}^{n_H} w_{kj}\, f\!\left(\sum_{i=1}^{d} w_{ji} x_i + w_{j0}\right) + w_{k0}\right),
\]

\[
J(w) \equiv \sum_{k=1}^{c}\left(t_k - z_k\right)^2 = \|t - z\|^2,
\]
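A minimal single-hidden-layer BPNN trained by gradient descent on the squared-error criterion \(J(w)\) can be sketched as follows. This is one member network only, not the paper's serial-parallel ensemble; the class name and layer sizes are illustrative, and the constant factor in \(dJ/dz\) is absorbed into the learning rate.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class BPNN:
    """z_k = f(sum_j w_kj f(sum_i w_ji x_i + w_j0) + w_k0),
    trained by gradient descent on J(w) = sum_k (t_k - z_k)^2."""
    def __init__(self, d, n_h, c, rng=0):
        r = np.random.default_rng(rng)
        self.W1 = r.normal(0.0, 0.5, (n_h, d + 1))    # hidden weights + bias
        self.W2 = r.normal(0.0, 0.5, (c, n_h + 1))    # output weights + bias

    def forward(self, x):
        self.xb = np.append(x, 1.0)                   # input with bias unit
        self.y = sigmoid(self.W1 @ self.xb)           # hidden activations
        self.yb = np.append(self.y, 1.0)
        self.z = sigmoid(self.W2 @ self.yb)           # network outputs z_k
        return self.z

    def step(self, x, t, eta=0.5):
        """One backpropagation step on sample (x, t); returns J for it."""
        z = self.forward(x)
        dz = (z - t) * z * (1 - z)                    # output deltas
        dy = (self.W2[:, :-1].T @ dz) * self.y * (1 - self.y)
        self.W2 -= eta * np.outer(dz, self.yb)
        self.W1 -= eta * np.outer(dy, self.xb)
        return np.sum((t - z) ** 2)
```

In a serial-parallel arrangement, several such networks would be trained and their outputs combined; here each `step` simply lowers \(J(w)\) for one sample.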
