Abstract

A new concept, defocus morphing in real aperture images, is introduced. View morphing is an existing example of shape-preserving image morphing based on the motion cue. It is proved that images can also be morphed based on the depth-related defocus cue. This shows that the morphing operation need not be a purely geometric process; one can also perform photometry-based morphing in which the shape information is implicitly buried in the image intensity field. A theoretical understanding of the defocus morphing process is presented. It is shown mathematically that, given two observations of a three-dimensional scene under different camera parameter settings, a virtual observation for any intermediate camera parameter setting can be obtained through a simple nonlinear combination of these observations.
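In the frequency domain, the nonlinear combination referred to above is a geometric blend of the two observations, Ê = Ê₁^α Ê₂^(1−α). A minimal NumPy sketch of this idea, assuming a shift-invariant Gaussian blur over the whole image (function names are illustrative):

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Frequency response of a Gaussian PSF: exp(-sigma^2 (wx^2 + wy^2) / 2)."""
    wy = 2 * np.pi * np.fft.fftfreq(shape[0])[:, None]
    wx = 2 * np.pi * np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-sigma**2 * (wx**2 + wy**2) / 2)

def defocus_morph(E1, E2, alpha):
    """Blend two defocused observations of the same scene into a virtual
    observation whose blur satisfies sigma^2 = a*sigma1^2 + (1-a)*sigma2^2."""
    F1 = np.fft.fft2(E1)
    F2 = np.fft.fft2(E2)
    # Geometric combination in the frequency domain: F = F1^a * F2^(1-a).
    # Both observations share the focused image's phase (the Gaussian OTFs
    # are real and positive), so the principal-branch complex powers
    # recombine the phase exactly.
    F = F1**alpha * F2**(1 - alpha)
    return np.real(np.fft.ifft2(F))
```

For a scene with space-varying depth, the blur varies locally, so the same blend would have to be applied over local windows rather than globally.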

© 2005 Optical Society of America





Figures (8)

Fig. 1. Illustration of aperture morphing. The blur is increased by increasing the aperture. Aperture morphing is possible within the range $(r_1, r_2)$.

Fig. 2. Illustration of lens-to-image-plane distance morphing. A virtual image plane between $A_1 B_1$ and $A_2 B_2$ can be constructed.

Fig. 3. Illustration of focal-length morphing. A virtual lens with a focal length in the range $(F_1, F_2)$ can be constructed.

Fig. 4. Illustration of depth morphing. For a fixed camera parameter setup and for the principal ray $ABCO$, a point in the scene can be virtually moved anywhere along the line segment $AB$.

Fig. 5. (a), (b) Views of an audience during a seminar presentation under two different focusing adjustments. Results of blur morphing with $\alpha = 0.5$ performed (c) globally and (d) locally.

Fig. 6. (a), (b) Views of a scene with slowly varying depth under different camera parameter settings. (c), (d) Results of global and local morphing for $\alpha = 0.7$.

Fig. 7. (a), (b) Two views of a scene under different focus settings. (c) The same scene captured with a focus setting lying between the two observations above. (d) Result of synthesizing the view in (c) from images (a) and (b) through blur morphing.

Fig. 8. (a), (b) Two synthetically rendered views of a scene under different aperture settings. (c) Another rendered view for an aperture lying between the two settings above. (d), (e) Results of synthesizing the view in (c) from images (a) and (b) through blur morphing. (f) Plot of peak SNR against the morphing parameter $\alpha$ for both the current and the previous experiment.
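The peak SNR curves of Fig. 8(f) compare each synthesized view against a real capture. A small sketch of that metric under its standard definition (the 8-bit peak value of 255 is an assumption about the image format):

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio, in dB, of a synthesized view against the
    real capture; higher values indicate a more faithful morph."""
    reference = np.asarray(reference, dtype=float)
    synthesized = np.asarray(synthesized, dtype=float)
    mse = np.mean((reference - synthesized) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

Evaluating this over a sweep of the morphing parameter between 0 and 1 yields a curve like the one plotted in Fig. 8(f).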

Equations (22)


\[ \sigma = \kappa r v \left( \frac{1}{F} - \frac{1}{v} - \frac{1}{Z} \right), \]

\[ h(x, y) = \frac{1}{2 \pi \sigma^2} \exp\left( -\frac{x^2 + y^2}{2 \sigma^2} \right), \]

\[ E(x, y) = I(x, y) * h(x, y), \]

\[ \hat{E}(\omega_x, \omega_y) = \hat{I}(\omega_x, \omega_y)\, \hat{h}(\omega_x, \omega_y) = \hat{I}(\omega_x, \omega_y) \exp\left[ -\frac{\sigma^2 (\omega_x^2 + \omega_y^2)}{2} \right]. \]

\[ \sigma^2 = \alpha \sigma_1^2 + (1 - \alpha) \sigma_2^2, \]

\[ \hat{E}(\omega_x, \omega_y) = \hat{I}(\omega_x, \omega_y) \exp\left\{ -\tfrac{1}{2} \left[ \alpha \sigma_1^2 + (1 - \alpha) \sigma_2^2 \right] (\omega_x^2 + \omega_y^2) \right\} = \left\{ \hat{I}(\omega_x, \omega_y) \exp\left[ -\frac{\sigma_1^2 (\omega_x^2 + \omega_y^2)}{2} \right] \right\}^{\alpha} \left\{ \hat{I}(\omega_x, \omega_y) \exp\left[ -\frac{\sigma_2^2 (\omega_x^2 + \omega_y^2)}{2} \right] \right\}^{1 - \alpha}, \]

\[ \hat{E}(\omega_x, \omega_y) = \hat{E}_1^{\alpha}(\omega_x, \omega_y)\, \hat{E}_2^{(1 - \alpha)}(\omega_x, \omega_y). \]

\[ \ln \hat{E}(\omega_x, \omega_y) = \alpha \ln \hat{E}_1(\omega_x, \omega_y) + (1 - \alpha) \ln \hat{E}_2(\omega_x, \omega_y). \]

\[ \begin{aligned} \hat{E}_n &= \hat{E}_{n1}^{\alpha} \hat{E}_{n2}^{(1 - \alpha)} = \left[ \hat{E}_1 + \hat{N}_1 \right]^{\alpha} \left[ \hat{E}_2 + \hat{N}_2 \right]^{(1 - \alpha)} \\ &= \hat{E}_1^{\alpha} \left[ 1 + \frac{\hat{N}_1}{\hat{E}_1} \right]^{\alpha} \hat{E}_2^{1 - \alpha} \left[ 1 + \frac{\hat{N}_2}{\hat{E}_2} \right]^{(1 - \alpha)} = \hat{E}_1^{\alpha} \left[ 1 + \beta_1 \right]^{\alpha} \hat{E}_2^{1 - \alpha} \left[ 1 + \beta_2 \right]^{(1 - \alpha)}, \end{aligned} \]

where all quantities are functions of $(\omega_x, \omega_y)$.

\[ \hat{E}_n \approx \hat{E}_1^{\alpha} \hat{E}_2^{1 - \alpha} (1 + \alpha \beta_1) \left[ 1 + (1 - \alpha) \beta_2 \right]. \]

\[ E\!\left( \hat{E}_n \mid \hat{I} \right) = \hat{E}_1^{\alpha} \hat{E}_2^{1 - \alpha} = \hat{E}, \]

\[ E\!\left( \hat{E}_n \mid \hat{I} \right) \approx \hat{E}_1^{\alpha} \hat{E}_2^{1 - \alpha}. \]

\[ r^2 = \alpha r_1^2 + (1 - \alpha) r_2^2. \]

\[ A = \alpha A_1 + (1 - \alpha) A_2. \]

\[ \sigma_i = \kappa r \left[ v_i \left( \frac{1}{F} - \frac{1}{Z} \right) - 1 \right], \]

\[ \alpha v_1^2 + (1 - \alpha) v_2^2 - v^2 = 2 v_o \left[ \alpha v_1 + (1 - \alpha) v_2 - v \right], \]

\[ (v - v_o)^2 = \alpha (v_1 - v_o)^2 + (1 - \alpha) (v_2 - v_o)^2. \]

\[ \left( \frac{1}{F} - \frac{1}{F_0} \right)^2 = \alpha \left( \frac{1}{F_1} - \frac{1}{F_0} \right)^2 + (1 - \alpha) \left( \frac{1}{F_2} - \frac{1}{F_0} \right)^2, \]

\[ \left( \frac{F_0}{F} - 1 \right)^2 = \alpha \left( \frac{F_0}{F_1} - 1 \right)^2 + (1 - \alpha) \left( \frac{F_0}{F_2} - 1 \right)^2. \]

\[ \left( \frac{1}{F} - \frac{1}{v} - \frac{1}{Z} \right)^2 = \alpha \left( \frac{1}{F} - \frac{1}{v} - \frac{1}{Z_1} \right)^2 + (1 - \alpha) \left( \frac{1}{F} - \frac{1}{v} - \frac{1}{Z_2} \right)^2. \]

\[ \left( \frac{1}{Z_0} - \frac{1}{Z} \right)^2 = \alpha \left( \frac{1}{Z_0} - \frac{1}{Z_1} \right)^2 + (1 - \alpha) \left( \frac{1}{Z_0} - \frac{1}{Z_2} \right)^2. \]

\[ \hat{E}(\omega_x, \omega_y) = \hat{E}_1^{\alpha}(\omega_x, \omega_y)\, \hat{E}_2^{(1 - \alpha)}(\omega_x, \omega_y) \]
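As a consistency check on the aperture-morphing relation: the blur model $\sigma = \kappa r v (1/F - 1/v - 1/Z)$ makes $\sigma$ proportional to the aperture radius $r$ when all other settings are fixed, so choosing $r^2 = \alpha r_1^2 + (1 - \alpha) r_2^2$ reproduces exactly the blur interpolation $\sigma^2 = \alpha \sigma_1^2 + (1 - \alpha) \sigma_2^2$. A numeric sketch (the camera values below are hypothetical):

```python
import numpy as np

def blur_sigma(kappa, r, v, F, Z):
    """Blur parameter sigma = kappa * r * v * (1/F - 1/v - 1/Z)."""
    return kappa * r * v * (1.0 / F - 1.0 / v - 1.0 / Z)

# Hypothetical settings (metres): lens-to-image distance v, focal
# length F, scene depth Z; only the aperture radius r changes.
kappa, v, F, Z = 0.5, 0.06, 0.05, 2.0
r1, r2, alpha = 0.004, 0.010, 0.3

s1 = blur_sigma(kappa, r1, v, F, Z)
s2 = blur_sigma(kappa, r2, v, F, Z)

# Morphed aperture radius: r^2 = alpha*r1^2 + (1 - alpha)*r2^2 ...
r = np.sqrt(alpha * r1**2 + (1 - alpha) * r2**2)

# ... yields a blur satisfying sigma^2 = alpha*s1^2 + (1 - alpha)*s2^2.
assert np.isclose(blur_sigma(kappa, r, v, F, Z)**2,
                  alpha * s1**2 + (1 - alpha) * s2**2)
```

The same proportionality argument underlies the linear interpolation of the aperture *area*, since $A = \pi r^2$.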
