Abstract

Pattern discovery algorithms based on the computational mechanics (CM) method have been shown to succinctly describe underlying patterns in data through the reconstruction of minimum probabilistic finite state automata (PFSA). We apply the CM approach to the real-time tracking of human subjects by matching and tracking the underlying color pattern observed from a fixed camera. Objects are extracted from a video sequence, raster scanned, decomposed with a one-dimensional Haar wavelet transform, and symbolized with the aid of a red–green–blue (RGB) color cube. The clustered causal state algorithm is then used to reconstruct the corresponding PFSA. Tracking is accomplished by generating the minimum PFSA for each subsequent frame and matching it against the PFSAs of the previous frame. Results show that there is an optimum alphabet size and segmentation of the RGB color cube for efficient tracking.

© 2010 Optical Society of America

References

1. W. Hu, T. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behavior,” IEEE Trans. Syst. Man Cybern. C 34, 334–352 (2004).
2. R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt, and L. Wixon, “A system for video surveillance and monitoring,” Carnegie Mellon University report CMU-RI-TR-00-12 (2001).
3. C. Lerdsudwichai, M. Abdel-Mottaleb, and A. Ansari, “Tracking multiple people with recovery from partial and total occlusion,” Pattern Recogn. 38, 1059–1070 (2005).
4. A. A. Argyros and M. I. A. Lourakis, “Three-dimensional tracking of multiple skin-colored regions by a moving stereoscopic system,” Appl. Opt. 43, 366–377 (2004).
5. S. Weng, C. Kuo, and S. Tu, “Video object tracking using adaptive Kalman filter,” J. Visual Commun. Image Represent. 17, 1190 (2006).
6. G. R. Bradski, “Computer video face tracking for use in a perceptual user interface,” Intel Technol. J. Q2, 1–15 (1998).
7. M. J. Swain and D. H. Ballard, “Color indexing,” Int. J. Comput. Vis. 7(1), 11–32 (1991).
8. M. S. Alam, J. Khan, and A. Bal, “Heteroassociative multiple-target tracking by fringe-adjusted joint transform correlation,” Appl. Opt. 43, 358–365 (2004).
9. F. A. Sadjadi, “Infrared target detection with probability density functions of wavelet transform subbands,” Appl. Opt. 43, 315–323 (2004).
10. C. R. Shalizi and J. P. Crutchfield, “Computational mechanics: pattern and prediction, structure, and simplicity,” J. Stat. Phys. 104, 817–879 (2001).
11. A. Ray, “Symbolic dynamic analysis of complex systems for anomaly detection,” Signal Process. 84, 1115–1130 (2004).
12. V. Rajagopalan and A. Ray, “Symbolic time series analysis via wavelet-based partitioning,” Signal Process. 86, 3309–3320 (2006).
13. C. R. Shalizi and K. L. Shalizi, “Blind construction of optimal nonlinear recursive predictors for discrete sequences,” in Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (AUAI Press, 2004), Vol. 70, pp. 504–511.
14. M. Schmiedekamp, A. Subbu, and S. Phoha, “The clustered causal state algorithm: efficient pattern discovery for lossy data-compression applications,” Comput. Sci. Eng. 8, 59–67 (2006).
15. M. Piccardi, “Background subtraction techniques: a review,” in 2004 IEEE International Conference on Systems, Man, and Cybernetics (IEEE, 2004), pp. 3099–3104.
16. C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer (Prentice-Hall, 1998).
17. E. Vidal, F. Thollard, C. de la Higuera, F. Casacuberta, and R. C. Carrasco, “Probabilistic finite-state machines: Part I,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 1013–1025 (2005).
18. Open Source Computer Vision, http://opencv.willowgarage.com/wiki/.
19. G. Theo, “Robust segmentation and tracking of colored objects in video,” IEEE Trans. Circuits Syst. Video Technol. 14, 776–781 (2004).
20. M. W. Lee and R. Nevatia, “Body part detection for human pose estimation and tracking,” in Proceedings of the 2008 IEEE Workshop on Motion and Video Computing (IEEE, 2008).
21. J. Ritter, J. Kato, S. Joga, and A. Blake, “A probabilistic background model for tracking,” in ECCV 2000, LNCS 1843, D. Vernon, ed. (Springer-Verlag, 2000), pp. 336–350.
22. J. Sun, W. Zhang, X. Tang, and H. Y. Shum, “Background cut,” in ECCV 2006, Part II, LNCS 3952, A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer-Verlag, 2006), pp. 628–641.
23. Y. Shan and R. Wang, “Improved algorithms for motion detection and tracking,” Opt. Eng. 45, 067201 (2006).

Figures (13)

Fig. 1. Method to count the histories and the future symbols using a sliding window. The window limits the past to L symbols and the future to M symbols.
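As an illustration of the counting procedure in Fig. 1, the following Python sketch slides a window of length L + M over a symbol string, tallies the joint counts ν(s^L, s^M) of each length-L history and length-M future, and forms the conditional future distribution of Eq. (2). The function name and data layout are illustrative assumptions, not taken from the paper.

from collections import defaultdict

def count_histories(symbols, L=3, M=1):
    """Slide a window of length L + M over the symbol string and count each
    (past, future) pair, as depicted in Fig. 1 (illustrative sketch)."""
    joint = defaultdict(int)        # nu(s^L, s^M): joint history/future counts
    past_counts = defaultdict(int)  # nu(s^L): history counts
    for i in range(len(symbols) - L - M + 1):
        past = tuple(symbols[i:i + L])
        future = tuple(symbols[i + L:i + L + M])
        joint[(past, future)] += 1
        past_counts[past] += 1
    # Conditional future distribution P(S^M = s^M | s^L), cf. Eq. (2)
    cond = {key: count / past_counts[key[0]] for key, count in joint.items()}
    return joint, past_counts, cond

# Example: a short periodic symbol sequence over the alphabet {0, 1, 2}
_, _, cond = count_histories([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], L=2, M=1)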

Fig. 2. Probabilistic finite state automaton (PFSA). Each state contains a set of equivalent histories; transitions between states are labeled by a symbol from a finite alphabet and occur with transition probability τ.
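A minimal way to represent the PFSA of Fig. 2 in code is a table of transition probabilities τ(σ_k, a, σ_l) keyed by (state, symbol, next state). The class below is a hypothetical sketch, not the authors' implementation; the method symbol_distribution evaluates the one-step symbol distribution of Eq. (12).

class PFSA:
    """Probabilistic finite state automaton (Fig. 2): a set of states, a finite
    alphabet, and transition probabilities tau(state, symbol, next_state)."""

    def __init__(self, states, alphabet):
        self.states = list(states)
        self.alphabet = list(alphabet)
        self.tau = {}  # (state, symbol, next_state) -> transition probability

    def add_transition(self, state, symbol, next_state, prob):
        self.tau[(state, symbol, next_state)] = prob

    def symbol_distribution(self, state_probs):
        """Symbol distribution P(a) implied by a distribution over states, cf. Eq. (12)."""
        return {a: sum(state_probs.get(k, 0.0) * self.tau.get((k, a, l), 0.0)
                       for k in self.states for l in self.states)
                for a in self.alphabet}

# Example: a two-state machine over the alphabet {a, b}
m = PFSA(states=["q0", "q1"], alphabet=["a", "b"])
m.add_transition("q0", "a", "q1", 0.7)
m.add_transition("q0", "b", "q0", 0.3)
m.add_transition("q1", "a", "q0", 1.0)
print(m.symbol_distribution({"q0": 1.0}))  # {'a': 0.7, 'b': 0.3}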

Fig. 3. Clustering of equivalent histories. C1 and C2 are the centers of clusters 1 and 2, and r is the clustering radius.
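The clustering of Fig. 3 can be sketched as follows: each history is represented by its conditional symbol distribution, the Euclidean distance d of Eq. (4) is computed to every existing cluster center, and the history joins the nearest cluster whose center lies within the radius r of Eq. (6); otherwise it seeds a new cluster (a new causal state). The greedy assignment below is an illustrative approximation of the clustered causal state algorithm, not its exact published form.

import math

def cluster_histories(distributions, r):
    """Greedy clustering of per-history conditional distributions (Fig. 3).
    distributions: list of equal-length probability vectors, one per history.
    r: clustering radius. Returns a list of clusters (lists of history indices)."""
    centers, clusters = [], []
    for idx, p in enumerate(distributions):
        best, best_d = None, None
        for c, center in enumerate(centers):
            d = math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, center)))
            if d <= r and (best_d is None or d < best_d):
                best, best_d = c, d
        if best is None:
            centers.append(list(p))   # no cluster within radius r: start a new one
            clusters.append([idx])
        else:
            clusters[best].append(idx)
            n = len(clusters[best])   # update the center as the running mean of members
            centers[best] = [(ci * (n - 1) + pi) / n for ci, pi in zip(centers[best], p)]
    return clusters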

Fig. 4. Color tracking algorithm (CTA) block diagram. (a) Background and group-of-frames generation. (b), (c) Background subtraction, object extraction, segmentation, symbolization, and PFSA reconstruction. (d) PFSA matching with the previously stored PFSAs. The distance metric is used to determine the distance between the symbol distributions generated by the two PFSAs.
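As one concrete way to realize stages (a) and (b) of Fig. 4, the sketch below uses OpenCV (reference 18) to subtract a static background frame and extract bounding boxes of the remaining foreground blobs. The threshold, morphology kernel, and minimum blob area are illustrative choices, and OpenCV 4.x is assumed; the later stages (symbolization, PFSA reconstruction, and matching) are sketched alongside Figs. 5 and 6 and the equations below.

import cv2
import numpy as np

def extract_foreground_objects(frame, background, thresh=30, min_area=500):
    """Stages (a)-(b) of Fig. 4: subtract a static background frame and return
    bounding boxes (x, y, w, h) of the foreground blobs (illustrative parameters)."""
    diff = cv2.absdiff(frame, background)                 # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return boxes, mask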

Fig. 5. One-dimensional fast wavelet transform (FWT) decomposition filter-bank diagram.
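A single level of the one-dimensional Haar decomposition of Fig. 5 follows directly from the filter-bank view: a low-pass (average) and a high-pass (difference) filter applied to adjacent samples, each followed by downsampling by two. The sketch below is a straightforward illustration under that reading, not the authors' code.

import math

def haar_dwt_1d(signal, levels=1):
    """One-dimensional Haar fast wavelet transform (Fig. 5).
    Returns (approximation, [detail_level_1, detail_level_2, ...])."""
    approx = list(signal)
    details = []
    s = 1.0 / math.sqrt(2.0)
    for _ in range(levels):
        if len(approx) % 2:                  # pad odd lengths by repeating the last sample
            approx = approx + [approx[-1]]
        low = [s * (approx[i] + approx[i + 1]) for i in range(0, len(approx), 2)]
        high = [s * (approx[i] - approx[i + 1]) for i in range(0, len(approx), 2)]
        details.append(high)
        approx = low                          # the next level decomposes the approximation
    return approx, details

# Example: two-level decomposition of a short raster-scanned intensity sequence
approx, details = haar_dwt_1d([9, 7, 3, 5, 6, 10, 2, 6], levels=2)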

Fig. 6. RGB color cube. The RGB pixel values specify coordinates within the color cube. The cube is subdivided into sections, and each section is given a coordinate (r, g, b).
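Following the caption of Fig. 6 and Eqs. (8)-(11), a pixel's 8-bit (R, G, B) values are quantized into cube-section coordinates (r, g, b) with V subdivisions per axis, flattened to an index F(r, g, b), and mapped onto an alphabet of N symbols. The sketch below assumes integer symbols 0..N-1 and clamps the top of the range; it is an illustration, not the authors' code.

def rgb_to_symbol(R, G, B, V=4, N=8):
    """Quantize an 8-bit RGB pixel into one of N alphabet symbols via the
    subdivided RGB color cube of Fig. 6 (cf. Eqs. (8)-(11)); illustrative only."""
    r = (R * V) // 256                   # Eq. (8): cube-section coordinates
    g = (G * V) // 256
    b = (B * V) // 256
    F = r * V ** 2 + g * V + b           # Eq. (9): flattened cube index
    F_max = (V - 1) * (V ** 2 + V + 1)   # Eq. (10): largest possible index
    # Eq. (11): map the index onto N symbols, clamped so F = F_max stays in 0..N-1
    return min(N - 1, (F * N) // F_max)

# Example: a saturated red pixel with V = 4 subdivisions and |A| = 8 symbols
symbol = rgb_to_symbol(255, 0, 0, V=4, N=8)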

Fig. 7. Three-element color objects and corresponding minimum PFSAs. (a) Rectangular object and corresponding PFSA. (b) Triangular object and corresponding PFSA.

Fig. 8. Number of states as a function of the standard deviation σ of the Gaussian distribution used to simulate noise, for the case L = 3.

Fig. 9. Subject silhouettes and corresponding PFSAs: (a) Subject 1, (b) Subject 2, (c) Subject 1 PFSA, (d) Subject 2 PFSA. Transition probabilities of less than 0.01 are not shown. Red lines are counterclockwise transitions; blue lines are clockwise transitions.

Fig. 10. Tracking of Subject 1 (dark pants) and Subject 2 (light pants) in frames (a) 464, (b) 496, (c) 499, (d) 529, (e) 538, (f) 542, (g) 593, (h) 594, and (i) 614. The color tracking algorithm successfully tracks both subjects when they are not occluded. There is ambiguity during occlusion, as shown in (c) and (f), but the CTA correctly differentiates the subjects after occlusion.

Fig. 11. Distance metric of the PFSA X generated by Subject 1 compared with the PFSA Y prototypes of Subject 1 and Subject 2.

Fig. 12. Distance metric of the PFSA X generated by Subject 2 compared with the PFSA Y prototypes of Subject 1 and Subject 2.

Fig. 13. CCSA-CTA and backprojection results. The parameters are V = 4, 8, and 16 with |A| = 8 for the CCSA-CTA, and [4, 4, 4], [8, 8, 8], and [16, 16, 16] RGB color-bin histogram sizes for the backprojection method. The minimum and maximum pixel reduction are PR = 0 and PR = 8, respectively; the minimum and maximum noise levels are 4% and 12%, respectively.

Tables (1)

Table 1. Matching Results for R-G-B and B-G-R Object Targets as a Function of Alphabet Size and the Number of Subdivisions along the RGB Color Cube, V

Equations (15)

Equations on this page are rendered with MathJax.

(1)  $I(S^M; R) = H(S^M) - H(S^M \mid R).$

(2)  $P(S^M = s^M \mid s^L) = \dfrac{\nu(S^L = s^L,\; S^M = s^M)}{\nu(S^L = s^L)},$

(3)  $P(S^M = s^M \mid S = s) = P(S^M = s^M \mid S = s').$

(4)  $d = \left( \sum_{n=1}^{|A|} \left( \dfrac{\nu(a_n)}{\nu(h)} - \dfrac{\nu'(a_n)}{\nu'(h)} \right)^2 \right)^{1/2}.$

(5)  $\tau(\sigma_k, a_i, \sigma_l) = \dfrac{\nu(\sigma_k, a_i, \sigma_l)}{\sum_{l=1}^{|Q|} \sum_{n=1}^{|A|} \nu(\sigma_k, a_n, \sigma_l)},$

(6)  $r = \dfrac{1}{Z}\,|A|^{0.5},$

(7)  $\psi(x) = \begin{cases} 1 & 0 \le x < 1/2 \\ -1 & 1/2 \le x < 1 \\ 0 & \text{otherwise} \end{cases}, \qquad \phi(x) = \begin{cases} 1 & 0 \le x < 1 \\ 0 & \text{otherwise} \end{cases}.$

(8)  $r = \lfloor R \cdot V / 256 \rfloor, \quad g = \lfloor G \cdot V / 256 \rfloor, \quad b = \lfloor B \cdot V / 256 \rfloor.$

(9)  $F(r, g, b) = r \cdot V^2 + g \cdot V + b.$

(10)  $F_{\max} = (V - 1) \cdot (V^2 + V + 1).$

(11)  $a_n = A\!\left( \lfloor F(r, g, b) \cdot N / F_{\max} \rfloor \right).$

(12)  $P_i(\bar{a}) = \sum_{k}^{|Q|} \sum_{l}^{|Q|} P_{i-1}(\sigma_k) \cdot \tau(\sigma_k, \bar{a}, \sigma_l).$

(13)  $P_i(\bar{\sigma}) = \sum_{n}^{|A|} \sum_{k}^{|Q|} P_{i-1}(\sigma_k) \cdot \tau(\sigma_k, a_n, \bar{\sigma}).$

(14)  $\sum_{k=1}^{|Q|} P(\sigma_k) = 1, \qquad \sum_{n=1}^{|A|} P(a_n) = 1.$

(15)  $D = \dfrac{1}{I} \sum_{i}^{I} \left( \sum_{n=1}^{|A|} \left( P_{X_i}(a_n) - P_{Y_i}(a_n) \right)^2 \right)^{1/2}.$
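To make Eqs. (12)-(15) concrete, the sketch below evolves each PFSA's state distribution for I steps, computes the symbol distribution at every step, and averages the Euclidean distances between the two symbol distributions to obtain D. The transition tables are hypothetical inputs in the (state, symbol, next state) -> probability layout used in the earlier sketches, and the uniform initial state distribution is an assumption of this illustration.

import math

def evolve(state_probs, tau, states, alphabet):
    """One step of Eqs. (12)-(13): the symbol distribution P_i(a) and the
    next state distribution P_i(sigma) implied by the current state distribution."""
    p_sym = {a: sum(state_probs[k] * tau.get((k, a, l), 0.0)
                    for k in states for l in states) for a in alphabet}
    p_state = {l: sum(state_probs[k] * tau.get((k, a, l), 0.0)
                      for k in states for a in alphabet) for l in states}
    return p_sym, p_state

def pfsa_distance(tau_x, states_x, tau_y, states_y, alphabet, I=10):
    """Distance metric D of Eq. (15) between PFSAs X and Y, starting each
    machine from a uniform state distribution (an assumption of this sketch)."""
    px = {s: 1.0 / len(states_x) for s in states_x}
    py = {s: 1.0 / len(states_y) for s in states_y}
    total = 0.0
    for _ in range(I):
        sx, px = evolve(px, tau_x, states_x, alphabet)
        sy, py = evolve(py, tau_y, states_y, alphabet)
        total += math.sqrt(sum((sx[a] - sy[a]) ** 2 for a in alphabet))
    return total / I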
