Abstract

We took panoramic snapshots in outdoor scenes at regular intervals in two- or three-dimensional grids covering 1 m² or 1 m³ and determined how the root mean square pixel differences between each of the images and a reference image acquired at one of the locations in the grid develop over distance from the reference position. We then asked whether the reference position can be pinpointed from a random starting position by moving the panoramic imaging device in such a way that the image differences relative to the reference image are minimized. We find that on time scales of minutes to hours, outdoor locations are accurately defined by a clear, sharp minimum in a smooth three-dimensional (3D) volume of image differences (the 3D difference function). 3D difference functions depend on the spatial-frequency content of natural scenes and on the spatial layout of objects therein. They become steeper in the vicinity of dominant objects. Their shape and smoothness, however, are affected by changes in illumination and shadows. The difference functions generated by rotation are similar in shape to those generated by translation, but their plateau values are higher. Rotational difference functions change little with distance from the reference location. Simple gradient descent methods are surprisingly successful in recovering a goal location, even if faced with transient changes in illumination. Our results show that view-based homing with panoramic images is in principle feasible in natural environments and does not require the identification of individual landmarks. We discuss the relevance of our findings to the study of robot and insect homing.
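The two ingredients of the abstract — the root mean square pixel difference used as an image distance, and gradient descent on that distance toward the goal — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `difference_at` stands in for the measured difference function with a hypothetical goal position, and the step size and grid are arbitrary choices.

```python
import numpy as np

def rms_difference(image, reference):
    """Root mean square pixel difference between two equally sized
    panoramic images: the image distance used in the study."""
    diff = image.astype(float) - reference.astype(float)
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical stand-in for the measured 2D difference function: a smooth
# surface with a single minimum at the (assumed) reference location.
GOAL = np.array([0.6, 0.4])

def difference_at(pos):
    return np.linalg.norm(pos - GOAL)

def gradient_descent_homing(start, step=0.05, n_steps=200):
    """Repeatedly move to whichever neighbouring grid position most
    reduces the image difference; stop when no neighbour improves.
    A minimal sketch of the homing scheme described in the abstract."""
    pos = np.array(start, dtype=float)
    for _ in range(n_steps):
        best, best_d = pos, difference_at(pos)
        for delta in ([step, 0], [-step, 0], [0, step], [0, -step]):
            cand = pos + delta
            d = difference_at(cand)
            if d < best_d:
                best, best_d = cand, d
        if np.allclose(best, pos):  # local minimum reached
            break
        pos = best
    return pos
```

Because the toy difference function is smooth with a single minimum, descent from any start position in the square converges to within one step of the goal; on real outdoor data the abstract notes this holds as long as illumination changes are transient.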

© 2003 Optical Society of America

References

  1. T. S. Collett, J. Zeil, “Selection and use of landmarks by insects,” in Orientation and Communication in Arthropods, M. Lehrer, ed. (Birkhäuser Verlag, Basel, Switzerland, 1997), pp. 41–65.
  2. T. S. Collett, J. Zeil, “Places and landmarks: an arthropod perspective,” in Spatial Representation in Animals, S. Healy, ed. (Oxford U. Press, Oxford, UK, 1998), pp. 18–53.
  3. M. Giurfa, E. A. Capaldi, “Vectors, routes and maps: new discoveries about navigation in insects,” Trends Neurosci. 22, 237–242 (1999).
    [CrossRef] [PubMed]
  4. M. O. Franz, H. A. Mallot, “Biomimetic robot navigation,” Rob. Auton. Syst. 30, 133–153 (2000).
    [CrossRef]
  5. E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.
  6. J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).
  7. M. V. Srinivasan, “An image interpolation technique for the computation of optic flow and egomotion,” Biol. Cybern. 71, 401–415 (1994).
    [CrossRef]
  8. J. S. Chahl, M. V. Srinivasan, “Visual computation of egomotion using an image interpolation technique,” Biol. Cybern. 74, 405–411 (1996).
    [CrossRef] [PubMed]
  9. M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
    [CrossRef]
  10. D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
    [CrossRef]
  11. R. Möller, “Insect visual homing strategies in a robot with analog processing,” Biol. Cybern. 83, 231–243 (2000).
    [CrossRef] [PubMed]
  12. B. A. Cartwright, T. S. Collett, “Landmark learning in bees: experiments and models,” J. Comp. Physiol. 151, 521–543 (1983).
    [CrossRef]
  13. B. A. Cartwright, T. S. Collett, “Landmark maps for honeybees,” Biol. Cybern. 57, 85–93 (1987).
    [CrossRef]
  14. M. Lehrer, G. Bianco, “The turn-back-and-look behaviour: bee versus robot,” Biol. Cybern. 83, 211–229 (2000).
    [CrossRef] [PubMed]
  15. P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
    [CrossRef]
  16. M. G. Nagle, M. V. Srinivasan, D. L. Wilson, “Image interpolation technique for measurement of egomotion in 6 degrees of freedom,” J. Opt. Soc. Am. A 14, 3233–3241 (1997).
    [CrossRef]
  17. J. S. Chahl, M. V. Srinivasan, “Range estimation with a panoramic visual sensor,” J. Opt. Soc. Am. A 14, 2144–2151 (1997).
    [CrossRef]
  18. M. G. Nagle, M. V. Srinivasan, “Structure from motion: determining the range and orientation of surfaces by image interpolation,” J. Opt. Soc. Am. A 13, 25–34 (1996).
    [CrossRef]
  19. M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
    [CrossRef]
  20. J. S. Chahl, M. V. Srinivasan, “Reflective surfaces for panoramic imaging,” Appl. Opt. 36, 8275–8285 (1997).
    [CrossRef]
  21. M. P. Eckert, J. Zeil, “Towards an ecology of motion vision,” in Motion Vision: Computational, Neural and Ecological Constraints, J. M. Zanker, J. Zeil, eds. (Springer-Verlag, Berlin, 2001), pp. 333–369.
  22. G. E. P. Box, N. R. Draper, Evolutionary Operation (Wiley, New York, 1969).
  23. J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight,” J. Comp. Physiol., A 172, 189–205 (1993).
    [CrossRef]
  24. M. Lehrer, “Why do bees turn back and look?” J. Comp. Physiol., A 172, 549–563 (1993).
    [CrossRef]
  25. T. S. Collett, M. Lehrer, “Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris,” Proc. R. Soc. London, Ser. B 252, 129–134 (1993).
    [CrossRef]
  26. J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).
  27. T. S. Collett, J. Zeil, “Flights of learning,” Curr. Direct. Psychol. Sci. 5, 149–155 (1996).
    [CrossRef]
  28. J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax,” J. Comp. Physiol., A 172, 207–222 (1993).
    [CrossRef]
  29. T. S. Collett, J. A. Rees, “View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder,” J. Comp. Physiol., A 181, 47–58 (1997).
    [CrossRef]
  30. J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).
  31. D. Efler, B. Ronacher, “Evidence against a retinotopic-template matching in honeybees’ pattern recognition,” Vision Res. 40, 3391–3403 (2000).
    [CrossRef]
  32. M. Dill, M. Heisenberg, “Visual pattern memory without shape recognition,” Philos. Trans. R. Soc. London, Ser. B 349, 143–152 (1995).
    [CrossRef] [PubMed]
  33. T. S. Collett, M. F. Land, “Visual spatial memory in a hoverfly,” J. Comp. Physiol. 100, 59–84 (1975).
    [CrossRef]
  34. R. Wehner, F. Räber, “Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae),” Experientia 35, 1569–1571 (1979).
    [CrossRef]
  35. M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
    [CrossRef]
  36. M. Heisenberg, “Pattern recognition in insects,” Curr. Opin. Neurobiol. 5, 475–481 (1995).
    [CrossRef] [PubMed]
  37. B. Ronacher, U. Duft, “An image-matching mechanism describes a generalization task in honeybees,” J. Comp. Physiol., A 178, 803–812 (1996).
    [CrossRef]
  38. B. Ronacher, “How do bees learn and recognize visual patterns?” Biol. Cybern. 79, 477–485 (1998).
    [CrossRef]
  39. R. Ernst, M. Heisenberg, “The memory template in Drosophila pattern vision at the flight simulator,” Vision Res. 39, 3920–3933 (1999).
    [CrossRef]
  40. D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in the real world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
    [CrossRef]
  41. A. van der Schaaf, H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vision Res. 36, 2759–2770 (1996).
    [CrossRef] [PubMed]
  42. D. Ruderman, “Origins of scaling in natural images,” Vision Res. 37, 3385–3398 (1997).
    [CrossRef]
  43. R. Voss, J. Zeil, “Active vision in insects: an analysis of object-directed zig-zag flights in a ground-nesting wasp (Odynerus spinipes, Eumenidae),” J. Comp. Physiol., A 182, 377–387 (1998).
    [CrossRef]
  44. K. Dale, T. S. Collett, “Using artificial evolution and selection to model insect navigation,” Curr. Biol. 11, 1305–1316 (2001).
    [CrossRef] [PubMed]
  45. R. Möller, “Do insects use templates or parameters for landmark navigation?,” J. Theor. Biol. 210, 33–45 (2001).
    [CrossRef]

2001

K. Dale, T. S. Collett, “Using artificial evolution and selection to model insect navigation,” Curr. Biol. 11, 1305–1316 (2001).
[CrossRef] [PubMed]

R. Möller, “Do insects use templates or parameters for landmark navigaion?,” J. Theor. Biol. 210, 33–45 (2001).
[CrossRef]

2000

M. O. Franz, H. A. Mallot, “Biomimetic robot navigation,” Rob. Auton. Syst. 30, 133–153 (2000).
[CrossRef]

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

R. Möller, “Insect visual homing strategies in a robot with analog processing,” Biol. Cybern. 83, 231–243 (2000).
[CrossRef] [PubMed]

M. Lehrer, G. Bianco, “The turn-back-and-look behaviour: bee versus robot,” Biol. Cybern. 83, 211–229 (2000).
[CrossRef] [PubMed]

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

D. Efler, B. Ronacher, “Evidence against a retinotopic-template matching in honeybees’ pattern recognition,” Vision Res. 40, 3391–3403 (2000).
[CrossRef]

1999

R. Ernst, M. Heisenberg, “The memory template in Drosophila pattern vision at the flight simulator,” Vision Res. 39, 3920–3933 (1999).
[CrossRef]

M. Giurfa, E. A. Capaldi, “Vectors, routes and maps: new discoveries about navigation in insects,” Trends Neurosci. 22, 237–242 (1999).
[CrossRef] [PubMed]

1998

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

R. Voss, J. Zeil, “Active vision in insects: an analysis of object-directed zig-zag flights in a ground-nesting wasp (Odynerus spinipes, Eumenidae),” J. Comp. Physiol., A 182, 377–387 (1998).
[CrossRef]

B. Ronacher, “How do bees learn and recognize visual patterns?” Biol. Cybern. 79, 477–485 (1998).
[CrossRef]

1997

D. Ruderman, “Origins of scaling in natural images,” Vision Res. 23, 3385–3398 (1997).
[CrossRef]

T. S. Collett, J. A. Rees, “View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder,” J. Comp. Physiol., A 181, 47–58 (1997).
[CrossRef]

M. G. Nagle, M. V. Srinivasan, D. L. Wilson, “Image interpolation technique for measurement of egomotion in 6 degrees of freedom,” J. Opt. Soc. Am. A 14, 3233–3241 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Range estimation with a panoramic visual sensor,” J. Opt. Soc. Am. A 14, 2144–2151 (1997).
[CrossRef]

M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Reflective surfaces for panoramic imaging,” Appl. Opt. 36, 8275–8285 (1997).
[CrossRef]

1996

M. G. Nagle, M. V. Srinivasan, “Structure from motion: determining the range and orientation of surfaces by image interpolation,” J. Opt. Soc. Am. A 13, 25–34 (1996).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Visual computation of egomotion using an image interpolation technique,” Biol. Cybern. 74, 405–411 (1996).
[CrossRef] [PubMed]

J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).

T. S. Collett, J. Zeil, “Flights of learning,” Curr. Direct. Psychol. Sci. 5, 149–155 (1996).
[CrossRef]

A. van der Schaaf, H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vision Res. 36, 2759–2770 (1996).
[CrossRef] [PubMed]

B. Ronacher, U. Duft, “An image-matching mechanism describes a generalization task in honeybees,” J. Comp. Physiol., A 178, 803–812 (1996).
[CrossRef]

1995

M. Heisenberg, “Pattern recognition in insects,” Curr. Opin. Neurobiol. 5, 475–481 (1995).
[CrossRef] [PubMed]

M. Dill, M. Heisenberg, “Visual pattern memory without shape recognition,” Philos. Trans. R. Soc. London, Ser. B 349, 143–152 (1995).
[CrossRef] [PubMed]

1994

M. V. Srinivasan, “An image interpolation technique for the computation of optic flow and egomotion,” Biol. Cybern. 71, 401–415 (1994).
[CrossRef]

1993

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight,” J. Comp. Physiol., A 172, 189–205 (1993).
[CrossRef]

M. Lehrer, “Why do bees turn back and look?” J. Comp. Physiol., A 172, 549–563 (1993).
[CrossRef]

T. S. Collett, M. Lehrer, “Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris,” Proc. R. Soc. London, Ser. B 252, 129–134 (1993).
[CrossRef]

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax,” J. Comp. Physiol., A 172, 207–222 (1993).
[CrossRef]

M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
[CrossRef]

1992

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

1990

J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).

1987

B. A. Cartwright, T. S. Collett, “Landmark maps for honeybees,” Biol. Cybern. 57, 85–93 (1987).
[CrossRef]

1983

B. A. Cartwright, T. S. Collett, “Landmark learning in bees: experiments and models,” J. Comp. Physiol. 151, 521–543 (1983).
[CrossRef]

1979

R. Wehner, F. Räber, “Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae),” Experientia 35, 1569–1571 (1979).
[CrossRef]

1975

T. S. Collett, M. F. Land, “Visual spatial memory in a hoverfly,” J. Comp. Physiol. 100, 59–84 (1975).
[CrossRef]

Banquet, J. P.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

Beveridge, J. R.

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

Bianco, G.

M. Lehrer, G. Bianco, “The turn-back-and-look behaviour: bee versus robot,” Biol. Cybern. 83, 211–229 (2000).
[CrossRef] [PubMed]

Box, G. E. P.

G. E. P. Box, N. R. Draper, Evolutionary Operation (Wiley, New York, 1969).

Bülthoff, H. H.

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

Capaldi, E. A.

M. Giurfa, E. A. Capaldi, “Vectors, routes and maps: new discoveries about navigation in insects,” Trends Neurosci. 22, 237–242 (1999).
[CrossRef] [PubMed]

Cartwright, B. A.

B. A. Cartwright, T. S. Collett, “Landmark maps for honeybees,” Biol. Cybern. 57, 85–93 (1987).
[CrossRef]

B. A. Cartwright, T. S. Collett, “Landmark learning in bees: experiments and models,” J. Comp. Physiol. 151, 521–543 (1983).
[CrossRef]

Chahl, J. S.

J. S. Chahl, M. V. Srinivasan, “Range estimation with a panoramic visual sensor,” J. Opt. Soc. Am. A 14, 2144–2151 (1997).
[CrossRef]

M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Reflective surfaces for panoramic imaging,” Appl. Opt. 36, 8275–8285 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Visual computation of egomotion using an image interpolation technique,” Biol. Cybern. 74, 405–411 (1996).
[CrossRef] [PubMed]

Collett, T. S.

K. Dale, T. S. Collett, “Using artificial evolution and selection to model insect navigation,” Curr. Biol. 11, 1305–1316 (2001).
[CrossRef] [PubMed]

T. S. Collett, J. A. Rees, “View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder,” J. Comp. Physiol., A 181, 47–58 (1997).
[CrossRef]

T. S. Collett, J. Zeil, “Flights of learning,” Curr. Direct. Psychol. Sci. 5, 149–155 (1996).
[CrossRef]

T. S. Collett, M. Lehrer, “Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris,” Proc. R. Soc. London, Ser. B 252, 129–134 (1993).
[CrossRef]

B. A. Cartwright, T. S. Collett, “Landmark maps for honeybees,” Biol. Cybern. 57, 85–93 (1987).
[CrossRef]

B. A. Cartwright, T. S. Collett, “Landmark learning in bees: experiments and models,” J. Comp. Physiol. 151, 521–543 (1983).
[CrossRef]

T. S. Collett, M. F. Land, “Visual spatial memory in a hoverfly,” J. Comp. Physiol. 100, 59–84 (1975).
[CrossRef]

T. S. Collett, J. Zeil, “Selection and use of landmarks by insects,” in Orientation and Communication in Arthropods, M. Lehrer, ed. (Birkhäuser Verlag, Basel, Switzerland, 1997), pp. 41–65.

T. S. Collett, J. Zeil, “Places and landmarks: an arthro-pod perspective,” in Spatial Representation in Animals, S. Healy, ed. (Oxford U. Press, Oxford, UK, 1998), pp. 18–53.

Coppola, D. M.

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

Dale, K.

K. Dale, T. S. Collett, “Using artificial evolution and selection to model insect navigation,” Curr. Biol. 11, 1305–1316 (2001).
[CrossRef] [PubMed]

Dill, M.

M. Dill, M. Heisenberg, “Visual pattern memory without shape recognition,” Philos. Trans. R. Soc. London, Ser. B 349, 143–152 (1995).
[CrossRef] [PubMed]

M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
[CrossRef]

Draper, N. R.

G. E. P. Box, N. R. Draper, Evolutionary Operation (Wiley, New York, 1969).

Duft, U.

B. Ronacher, U. Duft, “An image-matching mechanism describes a generalization task in honeybees,” J. Comp. Physiol., A 178, 803–812 (1996).
[CrossRef]

Eckert, M. P.

M. P. Eckert, J. Zeil, “Towards an ecology of motion vision,” in Motion Vision: Computational, Neural and Ecological Constraints, J. M. Zanker, J. Zeil, eds. (Springer-Verlag, Berlin, 2001), pp. 333–369.

Efler, D.

D. Efler, B. Ronacher, “Evidence against a retinotopic-template matching in honeybees’ pattern recognition,” Vision Res. 40, 3391–3403 (2000).
[CrossRef]

Ernst, R.

R. Ernst, M. Heisenberg, “The memory template in Drosophila pattern vision at the flight simulator,” Vision Res. 39, 3920–3933 (1999).
[CrossRef]

Franz, M. O.

M. O. Franz, H. A. Mallot, “Biomimetic robot navigation,” Rob. Auton. Syst. 30, 133–153 (2000).
[CrossRef]

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

Gaussier, P.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

Giurfa, M.

M. Giurfa, E. A. Capaldi, “Vectors, routes and maps: new discoveries about navigation in insects,” Trends Neurosci. 22, 237–242 (1999).
[CrossRef] [PubMed]

Hanson, A. R.

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

Heisenberg, M.

R. Ernst, M. Heisenberg, “The memory template in Drosophila pattern vision at the flight simulator,” Vision Res. 39, 3920–3933 (1999).
[CrossRef]

M. Heisenberg, “Pattern recognition in insects,” Curr. Opin. Neurobiol. 5, 475–481 (1995).
[CrossRef] [PubMed]

M. Dill, M. Heisenberg, “Visual pattern memory without shape recognition,” Philos. Trans. R. Soc. London, Ser. B 349, 143–152 (1995).
[CrossRef] [PubMed]

M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
[CrossRef]

Hong, J.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

Joulain, C.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

Kelber, A.

J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).

Kumar, R.

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

Labhart, T.

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

Lambrinos, D.

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

Land, M. F.

T. S. Collett, M. F. Land, “Visual spatial memory in a hoverfly,” J. Comp. Physiol. 100, 59–84 (1975).
[CrossRef]

Lehrer, M.

M. Lehrer, G. Bianco, “The turn-back-and-look behaviour: bee versus robot,” Biol. Cybern. 83, 211–229 (2000).
[CrossRef] [PubMed]

T. S. Collett, M. Lehrer, “Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris,” Proc. R. Soc. London, Ser. B 252, 129–134 (1993).
[CrossRef]

M. Lehrer, “Why do bees turn back and look?” J. Comp. Physiol., A 172, 549–563 (1993).
[CrossRef]

Leprêtre, S.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

Mallot, H. A.

M. O. Franz, H. A. Mallot, “Biomimetic robot navigation,” Rob. Auton. Syst. 30, 133–153 (2000).
[CrossRef]

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

McCoy, A. N.

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

Möller, R.

R. Möller, “Do insects use templates or parameters for landmark navigaion?,” J. Theor. Biol. 210, 33–45 (2001).
[CrossRef]

R. Möller, “Insect visual homing strategies in a robot with analog processing,” Biol. Cybern. 83, 231–243 (2000).
[CrossRef] [PubMed]

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

Nagle, M. G.

Pfeifer, R.

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

Pinette, B.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

Purves, D.

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

Purves, H. R.

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

Räber, F.

R. Wehner, F. Räber, “Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae),” Experientia 35, 1569–1571 (1979).
[CrossRef]

Rees, J. A.

T. S. Collett, J. A. Rees, “View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder,” J. Comp. Physiol., A 181, 47–58 (1997).
[CrossRef]

Revel, A.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

Riseman, E. M.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

Ronacher, B.

D. Efler, B. Ronacher, “Evidence against a retinotopic-template matching in honeybees’ pattern recognition,” Vision Res. 40, 3391–3403 (2000).
[CrossRef]

B. Ronacher, “How do bees learn and recognize visual patterns?” Biol. Cybern. 79, 477–485 (1998).
[CrossRef]

B. Ronacher, U. Duft, “An image-matching mechanism describes a generalization task in honeybees,” J. Comp. Physiol., A 178, 803–812 (1996).
[CrossRef]

Ruderman, D.

D. Ruderman, “Origins of scaling in natural images,” Vision Res. 23, 3385–3398 (1997).
[CrossRef]

Sawhney, H.

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

Schölkopf, B.

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

Srinivasan, M. V.

M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
[CrossRef]

M. G. Nagle, M. V. Srinivasan, D. L. Wilson, “Image interpolation technique for measurement of egomotion in 6 degrees of freedom,” J. Opt. Soc. Am. A 14, 3233–3241 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Range estimation with a panoramic visual sensor,” J. Opt. Soc. Am. A 14, 2144–2151 (1997).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Reflective surfaces for panoramic imaging,” Appl. Opt. 36, 8275–8285 (1997).
[CrossRef]

M. G. Nagle, M. V. Srinivasan, “Structure from motion: determining the range and orientation of surfaces by image interpolation,” J. Opt. Soc. Am. A 13, 25–34 (1996).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Visual computation of egomotion using an image interpolation technique,” Biol. Cybern. 74, 405–411 (1996).
[CrossRef] [PubMed]

M. V. Srinivasan, “An image interpolation technique for the computation of optic flow and egomotion,” Biol. Cybern. 71, 401–415 (1994).
[CrossRef]

J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).

Tan, X.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

van der Schaaf, A.

A. van der Schaaf, H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vision Res. 36, 2759–2770 (1996).
[CrossRef] [PubMed]

van Hateren, H.

A. van der Schaaf, H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vision Res. 36, 2759–2770 (1996).
[CrossRef] [PubMed]

van Hateren, J. H.

J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).

Voss, R.

R. Voss, J. Zeil, “Active vision in insects: an analysis of object-directed zig-zag flights in a ground-nesting wasp (Odynerus spinipes, Eumenidae),” J. Comp. Physiol., A 182, 377–387 (1998).
[CrossRef]

J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).

Wait, P. B.

J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).

Wehner, R.

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

R. Wehner, F. Räber, “Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae),” Experientia 35, 1569–1571 (1979).
[CrossRef]

Weiss, R.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

Wilson, D. L.

Wolf, R.

M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
[CrossRef]

Zeil, J.

R. Voss, J. Zeil, “Active vision in insects: an analysis of object-directed zig-zag flights in a ground-nesting wasp (Odynerus spinipes, Eumenidae),” J. Comp. Physiol., A 182, 377–387 (1998).
[CrossRef]

T. S. Collett, J. Zeil, “Flights of learning,” Curr. Direct. Psychol. Sci. 5, 149–155 (1996).
[CrossRef]

J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight,” J. Comp. Physiol., A 172, 189–205 (1993).
[CrossRef]

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax,” J. Comp. Physiol., A 172, 207–222 (1993).
[CrossRef]

M. P. Eckert, J. Zeil, “Towards an ecology of motion vision,” in Motion Vision: Computational, Neural and Ecological Constraints, J. M. Zanker, J. Zeil, eds. (Springer-Verlag, Berlin, 2001), pp. 333–369.

T. S. Collett, J. Zeil, “Places and landmarks: an arthro-pod perspective,” in Spatial Representation in Animals, S. Healy, ed. (Oxford U. Press, Oxford, UK, 1998), pp. 18–53.

T. S. Collett, J. Zeil, “Selection and use of landmarks by insects,” in Orientation and Communication in Arthropods, M. Lehrer, ed. (Birkhäuser Verlag, Basel, Switzerland, 1997), pp. 41–65.

Zhang, S. W.

M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
[CrossRef]

Appl. Opt.

Biol. Cybern.

M. V. Srinivasan, “An image interpolation technique for the computation of optic flow and egomotion,” Biol. Cybern. 71, 401–415 (1994).
[CrossRef]

J. S. Chahl, M. V. Srinivasan, “Visual computation of egomotion using an image interpolation technique,” Biol. Cybern. 74, 405–411 (1996).
[CrossRef] [PubMed]

M. O. Franz, B. Schölkopf, H. A. Mallot, H. H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching,” Biol. Cybern. 79, 191–202 (1998).
[CrossRef]

R. Möller, “Insect visual homing strategies in a robot with analog processing,” Biol. Cybern. 83, 231–243 (2000).
[CrossRef] [PubMed]

B. A. Cartwright, T. S. Collett, “Landmark maps for honeybees,” Biol. Cybern. 57, 85–93 (1987).
[CrossRef]

M. Lehrer, G. Bianco, “The turn-back-and-look behaviour: bee versus robot,” Biol. Cybern. 83, 211–229 (2000).
[CrossRef] [PubMed]

B. Ronacher, “How do bees learn and recognize visual patterns?” Biol. Cybern. 79, 477–485 (1998).
[CrossRef]

Curr. Biol.

K. Dale, T. S. Collett, “Using artificial evolution and selection to model insect navigation,” Curr. Biol. 11, 1305–1316 (2001).
[CrossRef] [PubMed]

Curr. Direct. Psychol. Sci.

T. S. Collett, J. Zeil, “Flights of learning,” Curr. Direct. Psychol. Sci. 5, 149–155 (1996).
[CrossRef]

Curr. Opin. Neurobiol.

M. Heisenberg, “Pattern recognition in insects,” Curr. Opin. Neurobiol. 5, 475–481 (1995).
[CrossRef] [PubMed]

Experientia

R. Wehner, F. Räber, “Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae),” Experientia 35, 1569–1571 (1979).
[CrossRef]

IEEE Control Syst.

J. Hong, X. Tan, B. Pinette, R. Weiss, E. M. Riseman, “Image-based homing,” IEEE Control Syst., special issue on robotics and automation, 12, 38–45 (1992).

Int. J. Pattern Recogn. Artif. Intell.

M. V. Srinivasan, J. S. Chahl, S. W. Zhang, “Robot navigation by visual dead-reckoning: inspiration from insects,” Int. J. Pattern Recogn. Artif. Intell. 11, 35–47 (1997).
[CrossRef]

J. Comp. Physiol.

T. S. Collett, M. F. Land, “Visual spatial memory in a hoverfly,” J. Comp. Physiol. 100, 59–84 (1975).
[CrossRef]

B. A. Cartwright, T. S. Collett, “Landmark learning in bees: experiments and models,” J. Comp. Physiol. 151, 521–543 (1983).
[CrossRef]

J. Comp. Physiol., A

B. Ronacher, U. Duft, “An image-matching mechanism describes a generalization task in honeybees,” J. Comp. Physiol., A 178, 803–812 (1996).
[CrossRef]

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax,” J. Comp. Physiol., A 172, 207–222 (1993).
[CrossRef]

T. S. Collett, J. A. Rees, “View-based navigation in Hymenoptera: multiple strategies of landmark guidance in the approach to a feeder,” J. Comp. Physiol., A 181, 47–58 (1997).
[CrossRef]

J. H. van Hateren, M. V. Srinivasan, P. B. Wait, “Pattern recognition in bees: orientation discrimination,” J. Comp. Physiol., A 167, 649–654 (1990).

J. Zeil, “Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight,” J. Comp. Physiol., A 172, 189–205 (1993).
[CrossRef]

M. Lehrer, “Why do bees turn back and look?” J. Comp. Physiol., A 172, 549–563 (1993).
[CrossRef]

R. Voss, J. Zeil, “Active vision in insects: an analysis of object-directed zig-zag flights in a ground-nesting wasp (Odynerus spinipes, Eumenidae),” J. Comp. Physiol., A 182, 377–387 (1998).
[CrossRef]

J. Exp. Biol.

J. Zeil, A. Kelber, R. Voss, “Structure and function of learning flights in bees and wasps,” J. Exp. Biol. 199, 245–252 (1996).

J. Opt. Soc. Am. A

J. Theor. Biol.

R. Möller, “Do insects use templates or parameters for landmark navigaion?,” J. Theor. Biol. 210, 33–45 (2001).
[CrossRef]

Nature (London)

M. Dill, R. Wolf, M. Heisenberg, “Visual pattern recognition in Drosophila involves retinotopic matching,” Nature (London) 365, 751–753 (1993).
[CrossRef]

Philos. Trans. R. Soc. London, Ser. B

M. Dill, M. Heisenberg, “Visual pattern memory without shape recognition,” Philos. Trans. R. Soc. London, Ser. B 349, 143–152 (1995).
[CrossRef] [PubMed]

Proc. Natl. Acad. Sci. U.S.A.

D. M. Coppola, H. R. Purves, A. N. McCoy, D. Purves, “The distribution of oriented contours in thereal world,” Proc. Natl. Acad. Sci. U.S.A. 95, 4002–4006 (1998).
[CrossRef]

Proc. R. Soc. London, Ser. B

T. S. Collett, M. Lehrer, “Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris,” Proc. R. Soc. London, Ser. B 252, 129–134 (1993).
[CrossRef]

Rob. Auton. Syst.

P. Gaussier, C. Joulain, J. P. Banquet, S. Leprêtre, A. Revel, “The visual homing problem: an example of robotics/biology cross fertilization,” Rob. Auton. Syst. 30, 155–180 (2000).
[CrossRef]

D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, R. Wehner, “A mobile robot employing insect strategies for navigation,” Rob. Auton. Syst. 30, 39–64 (2000).
[CrossRef]

M. O. Franz, H. A. Mallot, “Biomimetic robot navigation,” Rob. Auton. Syst. 30, 133–153 (2000).
[CrossRef]

Trends Neurosci.

M. Giurfa, E. A. Capaldi, “Vectors, routes and maps: new discoveries about navigation in insects,” Trends Neurosci. 22, 237–242 (1999).
[CrossRef] [PubMed]

Vision Res.

D. Efler, B. Ronacher, “Evidence against a retinotopic-template matching in honeybees’ pattern recognition,” Vision Res. 40, 3391–3403 (2000).
[CrossRef]

A. van der Schaaf, H. van Hateren, “Modelling the power spectra of natural images: statistics and information,” Vision Res. 36, 2759–2770 (1996).
[CrossRef] [PubMed]

D. Ruderman, “Origins of scaling in natural images,” Vision Res. 23, 3385–3398 (1997).
[CrossRef]

R. Ernst, M. Heisenberg, “The memory template in Drosophila pattern vision at the flight simulator,” Vision Res. 39, 3920–3933 (1999).
[CrossRef]

Other

M. P. Eckert, J. Zeil, “Towards an ecology of motion vision,” in Motion Vision: Computational, Neural and Ecological Constraints, J. M. Zanker, J. Zeil, eds. (Springer-Verlag, Berlin, 2001), pp. 333–369.

G. E. P. Box, N. R. Draper, Evolutionary Operation (Wiley, New York, 1969).

E. M. Riseman, A. R. Hanson, J. R. Beveridge, R. Kumar, H. Sawhney, “Landmark-based navigation and the acquisition of environmental models,” in Visual Navigation, Y. Aloimonos, ed. (Erlbaum, Hillsdale, N.J., 1997), pp. 317–374.

T. S. Collett, J. Zeil, “Selection and use of landmarks by insects,” in Orientation and Communication in Arthropods, M. Lehrer, ed. (Birkhäuser Verlag, Basel, Switzerland, 1997), pp. 41–65.

T. S. Collett, J. Zeil, “Places and landmarks: an arthro-pod perspective,” in Spatial Representation in Animals, S. Healy, ed. (Oxford U. Press, Oxford, UK, 1998), pp. 18–53.

Figures (14)

Fig. 1

(a) Robotic gantry in its natural habitat. The panoramic imaging device, consisting of a video camera and a reflective surface, can be seen at the end of the horizontal y-axis arm at the far right of the picture. (b) Close-up of the panoramic imaging surface and the camera lens. (c) Panoramic image after a circular mask was applied to the original video image. (d) Panoramic image after applying an additional mask blocking the main gantry and the image of the camera and the camera lens. (e) Unwarped version of the panoramic image shown in c, after removing the image regions containing the camera lens.

Fig. 2

Difference function in a densely vegetated area. (a) Position of a 3D grid of image positions and orientation of gantry axes in the scene. (b) 2D grid of 7×7 spatial positions at which panoramic images were taken for each horizontal plane of the 3D grid shown in a. The recording sequence starts at x=-0.3 m, y=-0.3 m and ends at x=0.3 m, y=0.3 m. Coordinates are given relative to the reference location at x=0, y=0. Transects are labeled with different symbols (see d). (c) 2D difference function for the lowest plane of the 3D grid. The r.m.s. pixel differences are shown along the z axis for each image position in the 7×7 grid, as compared with the image taken at the reference position in the center. (d) Transects along the x and y directions (solid dots) and along the two diagonals (open dots) through the 2D difference function shown in c. Directions of transects and their symbols are indicated in b.
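The r.m.s. pixel difference underlying these 2D difference functions can be sketched in a few lines. Below is a minimal NumPy sketch (not the authors' code; function names and the grid-as-dictionary representation are illustrative assumptions), comparing each grayscale panoramic image in a grid with the image taken at the reference location:

```python
import numpy as np

def rms_difference(image, reference):
    """Root mean square pixel difference between two equal-sized grayscale images."""
    diff = image.astype(float) - reference.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def difference_function(images, ref_pos):
    """2D difference function over a grid of panoramic images.

    `images` maps (x, y) grid coordinates to 2D pixel arrays (an assumed
    representation); `ref_pos` is the (x, y) key of the reference location.
    Returns the r.m.s. difference of every grid image relative to the
    reference image, which is zero at the reference location itself.
    """
    reference = images[ref_pos]
    return {pos: rms_difference(img, reference) for pos, img in images.items()}
```

By construction the function has its global minimum (zero) at the reference location; the empirical question addressed in the figures is how smoothly and steeply the differences grow away from that minimum.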

Fig. 3

Vertical spatial extent of difference functions and their dependence on reference image location. (a) Transects through the 2D difference functions at z=0.4 m, z=0.5 m, and z=0.6 m above ground for reference images RI at the center of the planes of the 7×7×7 3D grid shown in Fig. 2a. See the inset for a definition of symbols. Otherwise, conventions are the same as those in Fig. 2d. (b) Difference functions for the same planes (at z=0.4 m, z=0.5 m, and z=0.6 m), calculated relative to the reference image at the center of the bottom plane (at x=0 m, y=0 m, z=0.3 m; see the plot at bottom center).

Fig. 4

Horizontal extent of difference functions. The difference functions are for two locations in an open area at the edge of a stand of tall eucalyptus trees. The difference functions were determined over an area of 1 m×3 m for two different reference locations (top and center) by moving the gantry 1 m at a time along the x direction. Images were taken approximately 20 cm above ground, and the grid spacing was 10 cm. Transects along the x axis at y=0 are shown in the bottom plot. Otherwise, conventions are the same as those before.

Fig. 5

Transects through the 2D difference functions in four outdoor scenes with different depth structure. Conventions are the same as those before. Images were recorded at 10-cm intervals in an 11×11 grid approximately 20 cm above ground (a) in a location close to the brick wall of a barbecue area, (b) close to a small stand of trees, (c) at the edge of the small stand of trees, and (d) approximately 10 m away from the trees in an open area. The oval shape at the right masks one of the trolley wheels, which looms large in the image at position x=0.5, y=-0.5, z=0.

Fig. 6

Influence of depth structure on the shape and the extent of difference functions. Shown are the difference functions for panoramic images (a) before and (b) after a cylindrical landmark had been placed a few centimeters beyond position x=-0.5 m, y=0 m (marked by dots in the plots beneath the images). The masked panoramic images of the scene are shown on top, with the respective 2D difference functions with the reference image at x=0 m, y=0 m below. Otherwise, conventions are the same as those before. (c) and (d) Difference functions for the location shown in Fig. 5d but with different image regions masked: (c) image region viewing exclusively objects above the horizon, (d) image region viewing the ground. Transects through the 2D difference functions are shown below the images. Conventions are the same as those before (see the inset).

Fig. 7

Influence of depth structure on the extent and the depth of difference functions. The plot shows the image differences along a 1-m stretch of narrow tunnel, the walls of which were lined with a random dot pattern with 1-cm element size. The tunnel was 20 cm wide and 20 cm high. To determine the contributions of different image regions, differences are shown for the full scene (a, black curve), for the part of the images viewing the tunnel only (b, dark gray curve), and for the part of the images viewing the scene beyond the tunnel (c, light gray curve). The reference image was recorded at a position 50 cm along the tunnel. Otherwise, conventions are the same as those before.

Fig. 8

Properties of rotational difference functions. Images were recorded every 10 cm in an 11×11×11 grid at the edge of a small stand of trees. Image differences were calculated for five locations each on the bottom and the top plane of the 3D grid (see the inset in the center) for different degrees of rotation of the same images (a and b) or between a reference image at the center of the bottom plane and rotated images at all other locations (c and d). Note the difference in scale compared with the translational difference functions shown in previous figures. Images were rotated in 9° steps after a circular mask had been applied to remove image regions outside the reflective cone.
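On an unwarped (cylindrical) panorama, rotation of the imaging device about the vertical axis amounts to a circular shift of the image columns, so a rotational difference function can be computed without re-recording images. The following NumPy sketch (an illustrative assumption, not the original analysis code) mirrors the 9° stepping used here:

```python
import numpy as np

def rotational_difference_function(panorama, step_deg=9):
    """r.m.s. difference between an unwarped panorama and rotated copies of itself.

    Assumes rows encode elevation and columns encode azimuth over a full
    360 degrees, so a rotation about the vertical axis is a circular column
    shift. Returns one r.m.s. value per rotation step.
    """
    n_cols = panorama.shape[1]
    diffs = []
    for angle in range(0, 360, step_deg):
        shift = int(round(angle / 360.0 * n_cols))
        rotated = np.roll(panorama, shift, axis=1)  # circular shift = rotation
        d = panorama.astype(float) - rotated.astype(float)
        diffs.append(np.sqrt(np.mean(d ** 2)))
    return diffs
```

The difference is zero at 0° rotation and, for natural scenes, rises quickly to a plateau, consistent with the steep, high-valued rotational functions shown in the figure.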

Fig. 9

Rotational difference functions in a flat world. A panoramic image taken low to the ground in a tropical mudflat was used to analyze the dependence of rotational difference functions on the spatial structure of a scene. The three curves were calculated after masking different parts of the image with circular masks (see the insets). Images were rotated in steps of 9°.

Fig. 10

Temporal stability of difference functions. Images were recorded every 10 cm in an 11×11 grid approximately 20 cm above ground repeatedly over 3 h on two consecutive days in the same location at the edge of a small stand of trees. Recording the 121 images in one grid plane took approximately 5 min. Transects in the left column are through 2D difference functions recorded on a clear and windy day, the first at 13:00 and the last at 16:02. The reference image used throughout was the one recorded at the center of the grid at 13:00. The aperture setting of the camera lens had to be adjusted to prevent camera saturation and is shown in each plot together with the time of recording. The right column shows transects through 2D difference functions recorded at the same location on the following day, which was predominantly overcast and windy. The reference image used to calculate the difference functions was recorded at 13:44 at the center of the grid. The recording area lies at the northwest edge of the small forest, where the shadows from overhanging branches can change the scene significantly depending on the wind and the movements of clouds and the sun (see Fig. 12 below). Conventions are the same as those before (see the inset).

Fig. 11

Temporal stability of difference functions: calm, clear, cloudless day in an open area. Images were recorded with 10-cm spacing in an 11×11 grid approximately 20 cm above ground on a calm, mostly cloudless day at an open site over 10 m away from the edge of a small forest. The transects through the 2D difference functions on the left were calculated with images recorded at different times of the day, using the reference image recorded at the center of the grid at 13:10. Functions on the right were calculated with the reference image recorded at the same time of the day. All other conventions are the same as those before (see the inset).

Fig. 12

Long- and short-term changes of image differences in outdoor scenes. Images were recorded at the same location with a sampling rate of 6 per minute. Image differences were calculated with the image at t=0 as a reference. (a) The trace shows the r.m.s. image differences over time, separately for the whole image (dark gray; see the insets) and for image regions viewing the world below (black) and above (light gray) the horizon. The large variations are due to the movements of clouds, as can be seen by the sample images on top, which were recorded at 2-min intervals. (b) Image differences at the same location over a period of 3 h (same scene as that in Fig. 10, left-hand plots). Images were recorded at 10-s intervals intermittently over 10-min periods. Different gray levels indicate the aperture settings of the camera lens during the recording. The reference image was recorded at 13:14 (t=0). Dots on the x axis mark the times at which 2D difference functions were recorded (see Fig. 10). The rapid variations in image differences are probably due to the movement of clouds, wind-driven vegetation, and shadows, and the slow change is due to the change in the direction of illumination. (c) Long- and short-term variation of image differences at the same location on a windy, overcast day (same scene as that in Fig. 10, right-hand plots). The reference image was recorded at 13:50 (t=0). Note the large variations due to environmental motion and clouds and the comparatively minor change in image differences due to changes in the direction of illumination. Other conventions are the same as those before.

Fig. 13

Homing by gradient descent: comparison of two algorithms that were tested in a plane approximately 20 cm above ground in an open area on a calm, clear day. For details see the text. The difference function relative to the reference image at the center of this location is shown in a. The performance of the two algorithms is shown in b and c: In “RunDown” the gantry moved in one direction as long as mean square (m.s.) pixel differences became smaller. If image differences increased, the direction of movement was changed by 90°. In the second algorithm, “Triangular,” the gantry determines m.s. pixel differences relative to the reference image at three positions at the corners of a small triangle and subsequently moves in the direction of the minimum. We tested the performance in a randomized sequence of 40 runs. Plots (b) and (c) show the 2D paths with starting positions marked by dots. Plots (d) and (e) show the m.s. pixel differences plotted over the distance from the reference location. (b) Results of 18 gradient descents using the RunDown algorithm (two out of the 20 runs are shown on their own in Fig. 14 below). Note that only two of the runs do not reach the goal (thick lines). (c) Results of 18 gradient descents using the Triangular algorithm (two of the 20 runs are shown on their own in Fig. 14). Otherwise, conventions are the same as those in b.
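The RunDown strategy described above can be sketched compactly. The following is a minimal Python sketch under stated assumptions, not the gantry control code: `diff_at(pos)` is a hypothetical callable standing in for taking a snapshot at `pos` and computing its mean square pixel difference from the stored reference image, and the step size and iteration limit are illustrative.

```python
import numpy as np

def run_down(diff_at, start, step=0.05, max_steps=200):
    """Sketch of the 'RunDown' descent: keep moving in one direction while
    the image difference relative to the reference decreases; when it
    increases, turn the direction of movement by 90 degrees.
    """
    pos = np.asarray(start, dtype=float)
    heading = np.array([1.0, 0.0])   # initial direction of movement
    best = diff_at(pos)
    path = [pos.copy()]
    for _ in range(max_steps):
        candidate = pos + step * heading
        d = diff_at(candidate)
        if d < best:                 # difference decreased: accept the move
            pos, best = candidate, d
            path.append(pos.copy())
        else:                        # difference increased: rotate heading 90 degrees
            heading = np.array([-heading[1], heading[0]])
    return path
```

Because the empirical difference functions have a single clear minimum at the reference location, even this crude local rule tends to converge; the Triangular variant instead samples the difference at three corners of a small triangle and moves toward the smallest value.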

Fig. 14

Homing by gradient descent: effects of changes in illumination. The figure shows examples in which illumination changed during the execution of a gradient descent run. Both the RunDown and Triangular algorithms tolerate and recover from such changes: the large image differences that a change in illumination produces arrest the gradient descent at the position currently occupied, and a return to normal conditions allows the descent to resume toward the goal position. All traces except the light-gray one are from the experimental session shown in detail in Fig. 13. Other conventions are the same as those before.
