Abstract

Vehicles moving on a street were monitored by a CCD digital TV camera and recognized in real time on a personal computer by a nonlinear matched-filtering method: quantization of the modulated function to the complex numbers of a quadrupole. In this method an input scene containing the vehicles is modulated in the frequency domain by matched filters, the modulated functions are quantized to the four complex numbers of a quadrupole, and the quantized functions are then inverse Fourier transformed. Within one frame time of the NTSC TV system, the autocorrelation signals of four vehicles were calculated and discriminated and their peak positions were determined. The correct loci of the vehicles were tracked in real time over sequential frames.
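The recognition step described above can be sketched numerically. The following is a minimal NumPy sketch, not the authors' implementation: the function name is my own, and the toy scene and reference stand in for the camera frames; it shows the modulation by a matched filter, the four-level quantization of the modulated function, and the inverse transform that yields the correlation output.

```python
import numpy as np

def quadrupole_correlate(scene, reference):
    """Nonlinear matched filtering with quantization of the modulated
    function to the complex numbers of a quadrupole (a sketch of the
    method described above; the function name is my own)."""
    F = np.fft.fft2(scene)                      # spectrum of the input scene
    H = np.fft.fft2(reference, s=scene.shape)   # matched filter from the reference
    C = F * np.conj(H)                          # modulated function
    # Quantize every sample to one of the four quadrupole complexes
    # 1+j, 1-j, -1-j, -1+j according to the signs of Re C and Im C.
    Q = np.where(C.real >= 0, 1.0, -1.0) + 1j * np.where(C.imag >= 0, 1.0, -1.0)
    return np.abs(np.fft.ifft2(Q)) ** 2         # correlation output

# Toy usage: the reference is a patch cut from the scene, so the
# correlation peak should fall at the patch's top-left corner.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
reference = scene[10:26, 20:36]
out = quadrupole_correlate(scene, reference)
peak = np.unravel_index(np.argmax(out), out.shape)
```

Because the quantization keeps only the quadrant of the phase, it acts like a binary phase-only filter in each of the real and imaginary parts, which is what makes the correlation cheap enough to compute within one TV frame time.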

© 2004 Optical Society of America


References


  1. S. T. Barnard, W. B. Thompson, “Disparity analysis of images,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 334–340 (1980).
  2. L. Dreschler, H.-H. Nagel, “Volumetric model and 3D-trajectory of a moving car derived from monocular TV-frame sequence of a street scene,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI ’81) (William Kaufmann, 1981), pp. 692–699.
  3. J. O. Limb, J. A. Murphy, “Estimating the velocity of moving images in television signals,” Comput. Graph. Image Process. 4, 311–327 (1975).
  4. B. K. P. Horn, B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–204 (1981).
  5. A. D. Gara, “Real-time tracking of moving objects by optical correlation,” Appl. Opt. 18, 172–174 (1979).
  6. E. C. Tam, F. T. S. Yu, D. A. Gregory, R. D. Juday, “Autonomous real-time object tracking with an adaptive joint transform correlator,” Opt. Eng. 29, 314–320 (1990).
  7. E. C. Tam, F. T. S. Yu, A. Tanone, D. A. Gregory, R. D. Juday, “Data association multiple target tracking using a phase-mostly liquid-crystal television,” Opt. Eng. 29, 1114–1121 (1990).
  8. D. Psaltis, M. A. Neifeld, A. Yamamura, “Image correlation using optical memory disks,” Opt. Lett. 14, 429–431 (1989).
  9. T. Lu, K. Chois, S. Wu, X. Xu, F. T. S. Yu, “Optical-disk-based neural network,” Appl. Opt. 28, 4722–4724 (1989).
  10. F. T. S. Yu, E. C. Tam, T. W. Lu, E. Nishihara, T. Nishikawa, “Optical-disk-based joint transform correlator,” Appl. Opt. 30, 915–916 (1991).
  11. D. Cottrell, R. Lilly, J. Davis, T. Day, “Optical correlator performance of binary phase-only filters using Fourier and Hartley transforms,” Appl. Opt. 26, 3755–3761 (1987).
  12. J. L. Horner, J. R. Leger, “Pattern recognition with binary phase-only filters,” Appl. Opt. 24, 609–611 (1985).
  13. B. Javidi, C. J. Kuo, “Joint transform image correlation using a binary spatial light modulator at the Fourier plane,” Appl. Opt. 27, 663–665 (1988).
  14. I. Labastida, A. Carnicer, E. Martin-Badosa, I. Juvells, S. Vallmitjana, “On-axis joint transform correlation based on a four-level power spectrum,” Appl. Opt. 38, 6111–6115 (1999).
  15. T. Yao, T. Minemoto, “Pattern recognition by quantization of modulated function to complexes of a quadrupole in matched filtering,” Opt. Eng. 42, 3004–3012 (2003).




Figures (8)

Fig. 1

An example of monitored scene images.

Fig. 2

Process flow chart.

Fig. 3

Examples of the entrance region. The black lines on the left and the right tracks show the entrance regions designated for two lanes on the respective tracks.

Fig. 4

Example of the region for segmentation of an admitted car image. (a) The square region enclosed by the white line shows the area chosen on the right-hand lane of the left-hand track. (b) Segmented car image. (c) First reference image for the car.

Fig. 5

Example of segmentation for the reference image. (a) D(x, y) given by Eq. (5). (b) Structure element for erosion. (c), (d) Structure elements for dilation. (e) Image obtained by erosion with the structure element shown in (b). (f) Segmentation mask obtained by dilation of the image shown in (e), using the structure elements shown in (c) and (d). (g) Reference image obtained by placing the segmented image on a halftone area of 128 × 128 pixels.

Fig. 6

Input images f(x, y; N), their output images, and reference images h_k(x, y; N − 1) segmented from input images f(x, y; N − 1) for 15 of the 150 image sequences. The images correspond to every tenth value of N from 10 to 150 after monitoring started. The input and output images are arranged above the corresponding reference images.

Fig. 7

Trajectories of tracked cars. The chains of small white dots show trajectories that were superimposed on the input images. The small images show the corresponding reference images of the target cars.

Fig. 8

Example of tracking trajectory for a car moving on a curved street. (a) Trajectory superimposed on the final input image, where the chain of small white dots shows the trajectory. (b) Three reference images made by the segmentation of the target car for the frame numbers N = 80, 200, 300.

Equations (8)


\[ D_L(x, y) = f(x, y; N) - f(x, y; N-1) \quad \text{for } (x, y) \in \text{the entrance line}. \tag{1} \]
\[ C_k(u, v; N) = F_k(u, v; N)\, H_k^{*}(u, v; N-1). \tag{2} \]
\[ Q_k(u, v; N) = \begin{cases} 1 + j & \operatorname{Re} C_k(u, v; N) \ge 0,\ \operatorname{Im} C_k(u, v; N) \ge 0 \\ 1 - j & \operatorname{Re} C_k(u, v; N) \ge 0,\ \operatorname{Im} C_k(u, v; N) < 0 \\ -1 - j & \operatorname{Re} C_k(u, v; N) < 0,\ \operatorname{Im} C_k(u, v; N) < 0 \\ -1 + j & \operatorname{Re} C_k(u, v; N) < 0,\ \operatorname{Im} C_k(u, v; N) \ge 0 \end{cases} \tag{3} \]
\[ o_k(x, y; N) = \left| \mathrm{FT}^{-1}\{ Q_k(u, v; N) \} \right|^{2}, \tag{4} \]
\[ D(x, y) = f(x, y; N+1) - f(x, y; N) \quad \text{for } (x, y) \in 128 \times 128 \text{ pixels centered at } p(x_k, y_k; N). \tag{5} \]
\[ \mathrm{DB}(x, y) = \begin{cases} 0 & \text{for } |D(x, y)| \ge 10 \\ 1 & \text{for } |D(x, y)| < 10 \end{cases} \tag{6} \]
\[ \mathrm{DB}_1(x, y) = \mathrm{DB}(x, y)\; \Theta\; A_1^{\,C}, \tag{7} \]
\[ \mathrm{DB}_2(x, y) = \mathrm{DB}_1(x, y) \oplus A_2 \oplus A_3. \tag{8} \]
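The segmentation steps of Eqs. (5)-(8) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: the 3 × 3 all-ones structure elements stand in for A1-A3 (whose exact shapes appear only in Fig. 5), and I assume the difference mask is complemented before erosion so that the moving region is the foreground.

```python
import numpy as np

def erode3(mask):
    """Binary erosion by an assumed 3x3 all-ones structure element."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate3(mask):
    """Binary dilation by an assumed 3x3 all-ones structure element."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def segment_car(f_now, f_next, center, threshold=10):
    """Frame-difference segmentation following Eqs. (5)-(8)."""
    y, x = center
    # Eq. (5): difference over 128 x 128 pixels centered at the peak position.
    win = (slice(y - 64, y + 64), slice(x - 64, x + 64))
    D = f_next[win].astype(float) - f_now[win].astype(float)
    # Eq. (6): DB = 1 where the difference is small, 0 where it is large.
    DB = np.abs(D) < threshold
    # Eq. (7): erode the complemented mask so only the moving region survives
    # (the placement of the complement is my assumption).
    DB1 = erode3(~DB)
    # Eq. (8): two dilations close gaps and restore the region's extent.
    return dilate3(dilate3(DB1))

# Toy usage: a bright block shifts 4 pixels between frames, so the mask
# is nonempty inside the 128 x 128 window around the car.
f_now = np.zeros((256, 256))
f_now[100:120, 100:120] = 200.0
f_next = np.zeros((256, 256))
f_next[100:120, 104:124] = 200.0
mask = segment_car(f_now, f_next, (110, 110))
```

The erosion suppresses isolated difference pixels caused by noise, and the two dilations grow the surviving region back into a mask large enough to cut out the reference image of Fig. 5(g).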
