Abstract

The multichannel gradient model (McGM) is an established, biologically plausible framework for the robust extraction of image velocity. Here we describe the extension of the McGM into color space and report the resulting improvement in performance. In contrast to existing approaches that process color channels separately, our new model incorporates spectral energy measures to form a local description of the chromatic spatio-temporal structure of the stimulus, from which both spatial and spectral velocities can be recovered. We present a range of comparative experiments on synthetic and natural test data demonstrating that the new method reduces errors and is more robust across a range of viewing environments.

© 2011 Optical Society of America


References


  1. J. W. Brandt, “Improved accuracy in gradient-based optical flow estimation,” Int. J. Comput. Vis. 25, 5–22 (1997).
    [CrossRef]
  2. P. C. Chung, C. L. Huang, and E. L. Chen, “A region-based selective optical flow back-projection for genuine motion vector estimation,” Pattern Recognit. 40, 1066–1077 (2007).
    [CrossRef]
  3. E. D. Castro and C. Morandi, “Registration of translated and rotated images using finite Fourier transforms,” IEEE Trans. Pattern Anal. Machine Intell. PAMI-9, 700–703 (1987).
    [CrossRef]
  4. D. Kalivas and A. Sawchuk, “A region matching motion estimation algorithm,” Comput. Vision Graphics Image Process. Image Underst. 54, 275–288 (1991).
    [CrossRef]
  5. H. Liu, R. Chellappa, and A. Rosenfeld, “Fast two-frame multiscale dense optical flow estimation using discrete wavelet filters,” J. Opt. Soc. Am. A 20, 1505–1515 (2003).
    [CrossRef]
  6. T. Amiaz, E. Lubetzky, and N. Kiryati, “Coarse to over-fine optical flow estimation,” Pattern Recognit. 40, 2496–2503 (2007).
    [CrossRef]
  7. B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in IJCAI'81 Proceedings of the 7th International Joint Conference on Artificial Intelligence (Morgan Kaufmann, 1981), Vol. 2, pp. 674–679.
  8. B. Horn and B. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
    [CrossRef]
  9. N. Ohta, “Optical flow detection by color images,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 1989), Vol. 4, pp. 801–805.
  10. N. Ohta and S. Nishizawa, “How much does color information help optical flow computation,” IEICE Trans. Inf. Syst. E89-D, 1759–1762 (2006).
    [CrossRef]
  11. J. Barron and R. Klette, “Quantitative color optical flow,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2002), Vol. 4, pp. 251–255.
  12. K. Aires, A. Santana, and A. Medeiros, “Optical flow using color information: preliminary results,” in Proceedings of the 2008 ACM Symposium on Applied Computing (ACM, 2008), pp. 1607–1611.
    [CrossRef]
  13. R. Andrews and B. Lovell, “Color optical flow,” in Proceedings of Workshop on Digital Image Computing (2003), pp. 135–139.
  14. P. Golland and A. Bruckstein, “Motion from color,” Comput. Vision Image Underst. 68, 346–362 (1997).
    [CrossRef]
  15. Y. Mileva, A. Bruhn, and J. Weickert, “Illumination-robust variational optical flow with photometric invariants,” in Proceedings of the 29th DAGM Conference on Pattern Recognition (Springer-Verlag, 2007), pp. 152–162.
  16. T. Zickler, S. Mallick, D. Kriegman, and P. Belhumeur, “Color subspaces as photometric invariants,” Int. J. Comput. Vis. 79, 13–30 (2008).
    [CrossRef]
  17. A. Johnston and C. Clifford, “A unified account of three apparent motion illusions,” Vision Res. 35, 1109–1123 (1995).
    [CrossRef] [PubMed]
  18. A. Johnston, P. W. McOwan, and C. P. Benton, “Robust velocity computation from a biologically motivated model of motion perception,” Proc. R. Soc. London Series B 266, 509–518 (1999).
    [CrossRef]
  19. B. A. Wandell, Foundations of Vision (Sinauer, 1995), Chap. 9.
  20. E. Hering, Outlines of a Theory of the Light Sense (Harvard University, 1964).
  21. L. M. Hurvich and D. Jameson, “An opponent-process theory of color vision,” Psychol. Rev. 64, 384–404 (1957).
    [CrossRef]
  22. D. Jameson and L. M. Hurvich, “Some quantitative aspects of an opponent-colors theory. i. chromatic responses and spectral saturation,” J. Opt. Soc. Am. 45, 546–552 (1955).
    [CrossRef]
  23. J. B. Derrico and G. Buchsbaum, “A computational model of spatiochromatic image coding in early vision,” J. Visual Commun. Image Represent. 2, 31–38 (1991).
    [CrossRef]
  24. A. B. Poirson and B. A. Wandell, “The appearance of colored patterns: pattern-color separability,” J. Opt. Soc. Am. A 10, 2458–2470 (1993).
    [CrossRef]
  25. M. D. Fairchild, Color Appearance Models, 2nd ed. (Wiley, 2005), Chap. 8.
  26. J. Koenderink and A. Kappers, Color Space (Utrecht University, 1998).
  27. J.-M. Geusebroek, R. Boomgaard, A. W. Smeulders, and H. Geerts, “Color invariance,” IEEE Trans. Pattern Anal. Machine Intell. 23, 1338–1350 (2001).
    [CrossRef]
  28. A. Johnston, P. W. McOwan, and H. Buxton, “A computational model of the analysis of first and second order motion by simple and complex cells,” Proc. R. Soc. London Series B 250, 297–306 (1992).
    [CrossRef]
  29. P. W. McOwan, C. Benton, J. Dale, and A. Johnston, “A multi-differential neuromorphic approach to motion detection,” Int. J. Neural Syst. 9, 429–434 (1999).
    [CrossRef]
  30. A. Johnston, P. W. McOwan, and C. Benton, “Biological computation of image motion from flows over boundaries,” J. Physiol. Paris 97, 325–334 (2003).
    [CrossRef]
  31. S. Baker and I. Matthews, “Lucas-Kanade 20 years on: a unifying framework,” Int. J. Comput. Vis. 56, 221–255 (2004).
    [CrossRef]
  32. A. Johnston, C. Benton, and P. W. McOwan, “Induced motion at texture-defined motion boundaries,” Proc. R. Soc. London Series B 266, 2441–2450 (1999).
    [CrossRef]
  33. T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Proceedings of European Conference on Computer Vision (Springer, 2004), Vol. 4, pp. 25–36.
  34. J. Koenderink and A. van Doorn, “Receptive field assembly pattern specificity,” J. Visual Commun. Image Represent. 3, 1–12 (1992).
    [CrossRef]
  35. J.-M. Geusebroek, R. Boomgaard, A. W. Smeulders, and T. Gevers, “Color constancy from physical principles,” Pattern Recognit. Lett. 24, 1653–1662 (2003).
    [CrossRef]
  36. S. Lang, Calculus of Several Variables (Springer-Verlag, 1987).
    [CrossRef]
  37. J. Schanda, in Colorimetry: Understanding the CIE System (Wiley, 2007), Chap. 3.
  38. “HSV Color Space,” http://en.wikipedia.org/wiki/HSL_and_HSV#cite_note12.
  39. O. J. Braddick, “Segmentation versus integration in visual motion processing,” Trends Neurosci. 16, 263–268 (1993).
    [CrossRef] [PubMed]
  40. G. Botella, M. Rodriguez, A. Garcia, and E. Ros, “Neuromorphic configurable architecture for robust motion estimation,” Int. J. Reconfigurable Comput. 2008, 384–404 (2008).
    [CrossRef]




Figures (12)

Fig. 1

The zeroth, first, and second Gaussian derivatives along the wavelength axis are similar to the luminance, blue–yellow, and red–green weighting functions found in the human visual system [21, 22, 23, 24].
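As a rough illustration of these weighting functions, the sketch below evaluates the zeroth, first, and second derivatives of a Gaussian along a wavelength axis. The center and width values (520 nm, 55 nm) are illustrative assumptions, not parameters taken from the model.

```python
import numpy as np

def gaussian_wavelength_derivatives(wavelengths, center=520.0, sigma=55.0):
    # Zeroth, first, and second Gaussian derivatives along wavelength.
    # center/sigma are illustrative assumptions, not model parameters.
    x = (wavelengths - center) / sigma
    g0 = np.exp(-0.5 * x ** 2)              # luminance-like (all positive)
    g1 = -x / sigma * g0                    # blue-yellow-like (odd symmetry)
    g2 = (x ** 2 - 1.0) / sigma ** 2 * g0   # red-green-like (even, negative at center)
    return g0, g1, g2

lam = np.linspace(400.0, 700.0, 301)        # visible wavelengths in nm
g0, g1, g2 = gaussian_wavelength_derivatives(lam)
```

The three curves reproduce the qualitative shapes in the figure: a single positive lobe, an antisymmetric opponent pair, and a center-surround opponent profile.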

Fig. 2

Motion as orientation in space–time. A slice through the (x, t) plane shows that the structure of a moving pattern is oriented; this space–time orientation (slope) indicates the motion.
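The caption's point, that space–time slope encodes velocity, can be sketched with a minimal one-dimensional gradient calculation. This is a simplification of the McGM, using only the standard brightness-constancy relation Ix·v + It = 0; the test pattern and its speed are assumptions for illustration.

```python
import numpy as np

v_true = 1.5                                  # assumed speed, pixels per frame
x = np.arange(128, dtype=float)
frames = np.stack(
    [np.sin(2 * np.pi * (x - v_true * t) / 32) for t in range(8)]
)                                             # shape (frames, pixels)

It = np.gradient(frames, axis=0)              # temporal derivative
Ix = np.gradient(frames, axis=1)              # spatial derivative
v_est = -np.sum(Ix * It) / np.sum(Ix * Ix)    # least-squares slope estimate
```

For this smooth translating grating, v_est recovers the assumed speed to within a few percent, with the residual error coming from the finite-difference derivative approximations.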

Fig. 3

The color McGM primitives are intensity (DS0), yellow–blue (DS1), and red–green (DS2) signals used for velocity estimation in space–time. The coordinates of the points represent the values of matched pairs of terms, combined through the least-squares calculation, that differ only in that one has an extra temporal derivative and the other an extra spatial derivative. (a) illustrates the normal situation: a moving object produces both intensity and color changes, so both intensity (DS0) and color opponency (DS1, DS2) contribute to the orientation estimate. (b) illustrates different colors at constant intensity: intensity changes are small and make no substantial contribution, so the orientation is estimated mainly from chromatic change. (c) illustrates constant color with varying local luminance: here the chromatic change is small, and the orientation estimate relies heavily on the intensity changes.

Fig. 4

Synthetic color image sequence (25 paired frames) with gradually increasing wavelength.

Fig. 5

Model-calculated speed of wavelength change between adjacent frame pairs.

Fig. 6

Performance comparison of our proposed color McGM with the original McGM on grating sequences. The selected viewing rectangles are at the same position in both sequences; the direction results within these rectangular viewing windows are contrast enhanced for illustrative purposes to show the difference in performance. (a) shows the result from a color image sequence using the color McGM; (b) shows the result from a gray-scale sequence, directly converted from the color sequence, using the original McGM.

Fig. 7

Performance comparison of the proposed color McGM and the original McGM on RDKs. (a) is the result for the color RDK using the color McGM. (b) is the result for the corresponding gray-scale RDK using the original McGM.

Fig. 8

Performance evaluation of the proposed color McGM on image patterns with different colors that result in the same gray value.

Fig. 9

Grating illumination is gradually reduced. The top row shows the color grating under decreasing illumination; the bottom row shows the corresponding gray-scale images.

Fig. 10

Comparing the degradation of the color and noncolor McGM as illumination decreases. (a) The standard deviation of the speed error increases with decreasing illumination. (b) The standard deviation of the direction error increases with decreasing illumination. Note that the color McGM error grows more slowly than the corresponding noncolor curve.

Fig. 11

Performance comparison of the proposed color McGM and the original McGM on benchmark traffic scene data. (a) shows the result for car velocity using the color McGM. (b) is the result of applying the original McGM to the corresponding gray-scale video sequence and clearly shows a greater level of noise in the output.

Fig. 12

Performance comparison on a more complicated traffic scene. (a) is the result using the color McGM. (b) is the result using the original McGM. Note that the cars in the left lane are static.

Tables (4)


Table 1 Computed ν_s and Hue Value Change of Color References Under Different Illuminations


Table 2 Numerical Comparison of the Color McGM and the Original McGM^a


Table 3 Comparison of the Color McGM and the Original McGM Applied in Different Color Spaces for the Color Traffic Scene^a


Table 4 Comparison of the Color McGM and the Original McGM Applied in Different Color Spaces for the Shifted Color Sine Wave with Illumination Changes

Equations (19)


$$M(\lambda_0; E, \sigma) = \int E(\lambda)\, G(\lambda; \lambda_0, \sigma)\, d\lambda$$
$$\hat{M}(\lambda_0 + \delta; E, \sigma) = M(\lambda_0; E, \sigma) + \delta\, M_{\lambda}(\lambda_0; E, \sigma) + \frac{\delta^2}{2}\, M_{\lambda\lambda}(\lambda_0; E, \sigma) + O(\delta^3),$$
$$M_{\lambda}(\lambda_0; E, \sigma) = \int E(\lambda)\, G_{\lambda}(\lambda; \lambda_0, \sigma)\, d\lambda, \qquad M_{\lambda\lambda}(\lambda_0; E, \sigma) = \int E(\lambda)\, G_{\lambda\lambda}(\lambda; \lambda_0, \sigma)\, d\lambda,$$
$$\hat{M}(x, y, t, \lambda)(p, q, r, s) = \sum_{i=0}^{a}\sum_{j=0}^{b}\sum_{k=0}^{c}\sum_{l=0}^{d} \frac{p^i q^j r^k s^l}{i!\,j!\,k!\,l!}\, \frac{\partial^{\,i+j+k+l} E(x, y, t, \lambda)}{\partial x^i\, \partial y^j\, \partial t^k\, \partial \lambda^l},$$
$$\mathbf{k}(x, y, t, \lambda) = (k_0, k_1, \ldots, k_n)^T,$$
Zeroth-order term: $k_0 = E(x, y, t, \lambda)$; first-order term: $k_1 = p E_x + q E_y + r E_t + s E_\lambda$; second-order term: $k_2 = \frac{p^2}{2} E_{xx} + \frac{q^2}{2} E_{yy} + \frac{r^2}{2} E_{tt} + \frac{s^2}{2} E_{\lambda\lambda} + p q\, E_{xy} + p r\, E_{xt} + p s\, E_{x\lambda} + q r\, E_{yt} + q s\, E_{y\lambda} + r s\, E_{t\lambda}$; and so on for the higher-order terms, where subscripts on $E$ denote partial derivatives evaluated at $(x, y, t, \lambda)$.
$$J = D\mathbf{k}(x, y, t, \lambda) = (\mathbf{k}_x, \mathbf{k}_y, \mathbf{k}_t, \mathbf{k}_\lambda) = \begin{pmatrix} k_{0,x} & k_{0,y} & k_{0,t} & k_{0,\lambda} \\ k_{1,x} & k_{1,y} & k_{1,t} & k_{1,\lambda} \\ \vdots & \vdots & \vdots & \vdots \\ k_{n,x} & k_{n,y} & k_{n,t} & k_{n,\lambda} \end{pmatrix},$$
$$\bigl(\underbrace{\partial_{x^1 y^0 t^0 \lambda^0}, \ldots, \partial_{x^{i+1} y^j t^k \lambda^0}}_{\text{Intensity } (DS_0)},\ \underbrace{\partial_{x^1 y^0 t^0 \lambda^1}, \ldots, \partial_{x^{i+1} y^j t^k \lambda^1}}_{\text{Yellow–Blue } (DS_1)}\bigr),$$
$$\bigl(\underbrace{\partial_{x^0 y^0 t^0 \lambda^1}, \ldots, \partial_{x^{i} y^j t^k \lambda^1}}_{\text{Yellow–Blue } (DS_1)},\ \underbrace{\partial_{x^0 y^0 t^0 \lambda^2}, \ldots, \partial_{x^{i} y^j t^k \lambda^2}}_{\text{Red–Green } (DS_2)}\bigr).$$
$$J^T J = \begin{pmatrix} \mathbf{k}_x \cdot \mathbf{k}_x & \mathbf{k}_x \cdot \mathbf{k}_y & \mathbf{k}_x \cdot \mathbf{k}_t & \mathbf{k}_x \cdot \mathbf{k}_\lambda \\ \mathbf{k}_y \cdot \mathbf{k}_x & \mathbf{k}_y \cdot \mathbf{k}_y & \mathbf{k}_y \cdot \mathbf{k}_t & \mathbf{k}_y \cdot \mathbf{k}_\lambda \\ \mathbf{k}_t \cdot \mathbf{k}_x & \mathbf{k}_t \cdot \mathbf{k}_y & \mathbf{k}_t \cdot \mathbf{k}_t & \mathbf{k}_t \cdot \mathbf{k}_\lambda \\ \mathbf{k}_\lambda \cdot \mathbf{k}_x & \mathbf{k}_\lambda \cdot \mathbf{k}_y & \mathbf{k}_\lambda \cdot \mathbf{k}_t & \mathbf{k}_\lambda \cdot \mathbf{k}_\lambda \end{pmatrix}.$$
$$C = \int_{-s}^{s}\int_{-r}^{r}\int_{-q}^{q}\int_{-p}^{p} J^T J \; dx\, dy\, dt\, d\lambda = \begin{pmatrix} x \cdot x & x \cdot y & x \cdot t & x \cdot \lambda \\ y \cdot x & y \cdot y & y \cdot t & y \cdot \lambda \\ t \cdot x & t \cdot y & t \cdot t & t \cdot \lambda \\ \lambda \cdot x & \lambda \cdot y & \lambda \cdot t & \lambda \cdot \lambda \end{pmatrix}.$$
$$\nu_x = -\left(\frac{\lambda \cdot t + x \cdot t}{\lambda \cdot x + x \cdot x}\right), \qquad \nu_y = -\left(\frac{\lambda \cdot t + y \cdot t}{\lambda \cdot y + y \cdot y}\right).$$
$$\nu_x = -\frac{E_\lambda E_t + E_x E_t}{E_\lambda E_x + E_x E_x},$$
$$\nu_s = -\left(\frac{\lambda \cdot t}{\lambda \cdot \lambda}\right).$$
$$\hat{\nu}^{\parallel} = \frac{2}{m}\left[\nu_x \left(1 + \left(\frac{x \cdot y}{x \cdot x}\right)^2\right)^{-1}\right], \qquad \hat{\nu}^{\perp} = \frac{2}{m}\left[\nu_y \left(1 + \left(\frac{x \cdot y}{y \cdot y}\right)^2\right)^{-1}\right],$$
$$\check{\nu}^{\parallel} = \frac{2}{m}\left(\frac{\lambda \cdot x + t \cdot x}{\lambda \cdot t + t \cdot t}\right), \qquad \check{\nu}^{\perp} = \frac{2}{m}\left(\frac{\lambda \cdot y + t \cdot y}{\lambda \cdot t + t \cdot t}\right).$$
$$F(\theta) = (F^{\parallel}(\theta), F^{\perp}(\theta)) = \frac{2}{m}(\cos\theta, \sin\theta).$$
$$S^2 = \left|\begin{matrix} \hat{\nu}^{\parallel} \cdot F^{\parallel} & \hat{\nu}^{\parallel} \cdot F^{\perp} \\ \hat{\nu}^{\perp} \cdot F^{\parallel} & \hat{\nu}^{\perp} \cdot F^{\perp} \end{matrix}\right| \Bigg/ \left|\begin{matrix} \hat{\nu}^{\parallel} \cdot \check{\nu}^{\parallel} & \hat{\nu}^{\parallel} \cdot \check{\nu}^{\perp} \\ \hat{\nu}^{\perp} \cdot \check{\nu}^{\parallel} & \hat{\nu}^{\perp} \cdot \check{\nu}^{\perp} \end{matrix}\right|,$$
$$\Theta = \tan^{-1}\frac{(\hat{\nu}^{\parallel} + \check{\nu}^{\parallel}) \cdot F^{\perp} - (\hat{\nu}^{\perp} + \check{\nu}^{\perp}) \cdot F^{\parallel}}{(\hat{\nu}^{\parallel} + \check{\nu}^{\parallel}) \cdot F^{\parallel} + (\hat{\nu}^{\perp} + \check{\nu}^{\perp}) \cdot F^{\perp}}.$$
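The structure of the velocity expressions above can be checked on a toy case: for a pattern translating at speed v, E_t = −v E_x pointwise, so the ratio −(E_λE_t + E_xE_t)/(E_λE_x + E_xE_x) returns v regardless of the spectral weighting. The sketch below is our own illustration; the spectral derivative here is a stand-in proportional to the signal, and a real implementation would pool these derivative products over a spatio-temporal-spectral neighborhood:

```python
import numpy as np

# Toy check of the nu_x expression on E(x, t, lam) = f(x - v*t) * g(lam).
# Since E_t = -v * E_x, the pooled ratio recovers v.
v_true = 2.0
x = np.linspace(0.0, 10.0, 200)
lam_weight = 0.7                      # spectral factor g(lam) at one point

def E(x, t):
    return np.sin(x - v_true * t) * lam_weight

eps = 1e-4
Ex = (E(x + eps, 0) - E(x - eps, 0)) / (2 * eps)   # spatial derivative
Et = (E(x, eps) - E(x, -eps)) / (2 * eps)          # temporal derivative
Elam = 0.3 * E(x, 0) / lam_weight                  # stand-in spectral derivative

# nu_x = -(E_lam*E_t + E_x*E_t) / (E_lam*E_x + E_x*E_x), pooled over x.
v_est = -np.sum(Elam * Et + Ex * Et) / np.sum(Elam * Ex + Ex * Ex)
print(round(v_est, 3))  # → 2.0
```

The spectral term changes both numerator and denominator by the same factor structure, which is why the chromatic measurements can stabilize the estimate without biasing the recovered speed.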
