Abstract

Platform motion blur is a common problem for airborne and space-based imagers. Photographs taken by hand or from moving vehicles in low-light conditions are also typically blurred. Correcting motion blur is a formidable problem because it requires a description of the blur in the form of the point spread function (PSF), which in general depends on spatial location within the image. Here we introduce a computational imaging system that combines optical position sensing detectors (PSDs), a conventional camera, and a method for reconstructing images degraded by spatially variant platform motion blur. A PSD tracks the movement of a light distribution across its surface. Because its active area is larger than that of a single pixel, it collects proportionally more light and can therefore be read out proportionally faster. This high temporal resolution allows it to measure the PSF at a specific location in the image field. Using multiple PSDs, a spatially variant PSF is generated and used to reconstruct the image.
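The PSD measurement pipeline sketched above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: it assumes a duo-lateral PSD model in which the spot centroid along each axis follows from the imbalance of the two electrode photocurrents, and it accumulates a discrete PSF by histogramming the blur trajectory sampled during the exposure. The function names (`psd_centroid`, `psf_from_trajectory`) are illustrative.

```python
import numpy as np

def psd_centroid(i_x1, i_x2, i_y1, i_y2, half_length=1.0):
    """Estimate the light-spot centroid on an assumed duo-lateral PSD.

    Standard lateral-effect relation along each axis:
        x = half_length * (I_x2 - I_x1) / (I_x2 + I_x1)
    """
    x = half_length * (i_x2 - i_x1) / (i_x2 + i_x1)
    y = half_length * (i_y2 - i_y1) / (i_y2 + i_y1)
    return x, y

def psf_from_trajectory(xs, ys, size=15):
    """Build a discrete local PSF by histogramming the centroid
    trajectory recorded by the PSD during one camera exposure."""
    psf, _, _ = np.histogram2d(ys, xs, bins=size,
                               range=[[-1, 1], [-1, 1]])
    return psf / psf.sum()  # normalize to unit energy
```

With PSFs measured this way at several PSD locations, a spatially variant blur model can be interpolated across the field and inverted with a sectioned or interpolation-based deconvolution.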

© 2012 Optical Society of America




Zitnick, C. L.

N. Joshi, S. B. Kang, C. L. Zitnick, and R. Szeliski, “Image deblurring using inertial measurement sensors,” ACM Trans. Graph. 29, 30:1–30:9 (2010).
[CrossRef]

Zitnick, C. Lawrence

A. Gupta, N. Joshi, C. Lawrence Zitnick, M. Cohen, and B. Curless, “Single image deblurring using motion density functions,” in Proceedings of the 11th European Conference on Computer Vision: Part I (Springer-Verlag, 2010), pp. 171–184.

ACM Trans. Graph. (5)

R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Trans. Graph. 25, 787–794 (2006).
[CrossRef]

Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Trans. Graph. 27, 73:1–73:10 (2008).
[CrossRef]

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[CrossRef]

A. Levin, P. Sand, T. Sang Cho, F. Durand, and W. T. Freeman, “Motion-invariant photography,” ACM Trans. Graph. 27, 71:1–71:9 (2008).
[CrossRef]

N. Joshi, S. B. Kang, C. L. Zitnick, and R. Szeliski, “Image deblurring using inertial measurement sensors,” ACM Trans. Graph. 29, 30:1–30:9 (2010).
[CrossRef]

Adv. Neural Inf. Process. Syst. (1)

S. Harmeling, M. Hirsch, and B. Schölkopf, “Space-variant single-image blind deconvolution for removing camera shake,” Adv. Neural Inf. Process. Syst. 23, 829–837 (2010).

Appl. Opt. (1)

Astron. J. (1)

L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
[CrossRef]

IEEE Sens. J. (1)

M. A. Clapp and R. Etienne-Cummings, “A dual pixel-type array for imaging and motion centroid localization,” IEEE Sens. J. 2, 529–548 (2002).
[CrossRef]

IEEE Trans. Electron Devices (2)

W. Wang and I. J. Busch-Vishniac, “The linearity and sensitivity of lateral effect position sensitive devices—an improved geometry,” IEEE Trans. Electron Devices 36, 2475–2480 (1989).
[CrossRef]

M. de Bakker, P. W. Verbeek, G. K. Steenvoorden, and I. T. Young, “The PSD transfer function,” IEEE Trans. Electron Devices 49, 202–206 (2002).
[CrossRef]

IEEE Trans. Image Process. (2)

C. Zhou and S. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. Image Process. 20, 3322–3340 (2011).
[CrossRef]

F. Sroubek and J. Flusser, “Multichannel blind deconvolution of spatially misaligned images,” IEEE Trans. Image Process. 14, 874–883 (2005).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell. (2)

S. K. Nayar and M. Ben-Ezra, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
[CrossRef]

Y.-W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[CrossRef]

J. Opt. Soc. Am. (1)

J. Opt. Soc. Am. A (2)

Jpn. J. Appl. Phys. (1)

H. Niu, C. Aoki, T. Matsuda, M. Takai, and M. Maeda, “A position-sensitive MOS device using lateral photovoltaic effect,” Jpn. J. Appl. Phys. 26, L35–L37 (1987).

Numer. Math. (1)

A. Chambolle and P. Lions, “Image recovery via total variation minimization and related problems,” Numer. Math. 76, 167–188 (1997).
[CrossRef]

Phys. Z. (1)

W. Schottky, “Über den Entstehungsort der Photoelektronen in Kupfer-Kupferoxydul-Photozellen,” Phys. Z. 31, 913–925 (1930).

Proc. IRE (1)

J. T. Wallmark, “A new semiconductor photocell using lateral photoeffect,” Proc. IRE 45, 474–483 (1957).
[CrossRef]

Proc. SPIE (1)

B. Hulgren and D. Hertel, “Low light performance of digital cameras,” Proc. SPIE 7242, 724214 (2009).
[CrossRef]

Remote Sens. Environ. (1)

D. E. Meisner, “Fundamentals of airborne video remote sensing,” Remote Sens. Environ. 19, 63–79 (1986).
[CrossRef]

SIAM J. Sci. Comput. (1)

J. G. Nagy and D. P. O’Leary, “Restoring images degraded by spatially variant blur,” SIAM J. Sci. Comput. 19, 1063–1082 (1998).
[CrossRef]

Other (23)

M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf, “Fast removal of non-uniform camera shake,” in Proceedings of International Conference on Computer Vision (ICCV) (IEEE, 2011), pp. 463–470.

M. Sorel and F. Sroubek, Restoration in the Presence of Unknown Spatially Varying Blur in Image Restoration: Fundamentals and Advances (CRC, 2012).

R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, 2006).

A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings of 20th International Conference on Pattern Recognition (IEEE, 2010), pp. 2366–2369.
[CrossRef]

N. Massari, L. Gonzo, M. Gottardi, and A. Simoni, “High speed digital CMOS 2D optical position sensitive detector,” in Proceedings of Solid-State Circuits Conference (ESSCIRC, 2002), pp. 723–726.

P. Campisi and K. Egiazarian, Blind Image Deconvolution: Theory and Applications (CRC, 2007).

Q. Shan, W. Xiong, and J. Jia, “Rotational motion deblurring of a rigid object from a single image,” in Proceedings of 11th International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

J.-F. Cai, H. Ji, C. Liu, and Z. Shen, “High-quality curvelet-based motion deblurring from an image pair,” in Proceedings of Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1566–1573.

A. Gupta, N. Joshi, C. Lawrence Zitnick, M. Cohen, and B. Curless, “Single image deblurring using motion density functions,” in Proceedings of the 11th European Conference on Computer Vision: Part I (Springer-Verlag, 2010), pp. 171–184.

O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 491–498.

M. Tico, M. Trimeche, and M. Vehvilainen, “Motion blur identification based on differently exposed images,” in Proceedings of International Conference on Image Processing (IEEE, 2006), pp. 2021–2024.

L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, “Image deblurring with blurred/noisy image pairs,” in SIGGRAPH ’07: ACM SIGGRAPH 2007 (ACM, 2007), p. 1.

M. Sorel and F. Sroubek, “Space-variant deblurring using one blurred and one underexposed image,” in Proceedings of 16th International Conference on Image Processing (ICIP) (IEEE, 2009), pp. 157–160.

T. Sang Cho, A. Levin, F. Durand, and W. T. Freeman, “Motion blur removal with orthogonal parabolic exposures,” in Proceedings of International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.

P. C. Hansen, J. G. Nagy, and D. P. O’Leary, Deblurring Images: Matrices, Spectra, and Filtering (SIAM, 2006).

A. Agrawal and R. Raskar, “Resolving objects at higher resolution from a single motion-blurred image,” in Proceedings of IEEE Conference Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

G. Wetzstein, I. Ihrke, D. Lanman, and W. Heidrich, “Computational plenoptic imaging,” Comput. Graph. Forum 30, 2397–2426 (2011).

G. H. Golub and C. F. Van Loan, Matrix Computations (Johns Hopkins University, 1996).

A. Chambolle, V. Caselles, D. Cremers, M. Novaga, and T. Pock, “An introduction to total variation for image analysis,” in Theoretical Foundations and Numerical Methods for Sparse Recovery (De Gruyter, 2010).

H. Andersson, “Position sensitive detectors: device technology and applications in spectroscopy,” Ph.D. thesis (Mid Sweden University, Department of Information Technology and Media, 2008).

A. Makynen, “Position-sensitive devices for optical tracking and displacement sensing applications,” Ph.D. thesis (University of Oulu, 2000).

M. Ben-Ezra and S. K. Nayar, “Motion deblurring using hybrid imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2003), pp. I-657–I-664.

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).



Figures (15)

Fig. 1.

Platform motion deblur system schematic. A lens and beam splitter form identical image planes for the image sensor and PSD array to provide data for image restoration.

Fig. 2.

Lateral photoeffect of a PSD. Differential currents from input (blue) and output (red) electrodes produce sum (2) and position (1) data.

Fig. 3.

Scenes input into the PSD image motion-tracking simulation to characterize the effects of background illumination. (a) Ground-truth input image. (b) SV blurred image. (c) SV blurred image with Gaussian noise (σ = 0.1) with a positive mean value representing background illumination (5% of the maximum brightness).

Fig. 4.

Simulation results: plot of PSD tracking error for various levels of background illumination showing the sensor is only capable of tracking image features much brighter than the background. Percentages represent the ratio between the brightness of the background and that of the tracked spots.

Fig. 5.

PSD centroid tracking of identical image movement is skewed toward the sensor’s center as background illumination increases. (a) Motion estimate from Fig. 3(b) with no background illumination and (b) from Fig. 3(c) with background illumination at 5% of the tracked spots’ brightness.

Fig. 6.

PSD data at specific locations in the image field after calibration (Appendix A). These local PSF measurements are used to construct an SV PSF. Their relative location is given in Figs. 9 and 10.

Fig. 7.

The PSF specific to pixel (x, y) is recursively generated using the affine model coefficients a(t)–f(t) as the elementary time step t is integrated over the sampling time.

Fig. 8.

Photo of the imaging, position-sensing, motion-actuated system.

Fig. 9.

Experimentally blurred color image. Magnified sections of the image confirm SV blur. The calculated SV PSF is shown superimposed (yellow lines) and is consistent with the image motion blur. The PSDs’ relative locations are shown in cyan. The portion of the image that is to be deblurred is enclosed in red.

Fig. 10.

Experimentally blurred starfield image (contrast reversed). The calculated SV PSF is shown superimposed in green and is consistent with the SV image motion blur. The PSDs’ relative locations are shown in cyan. The portion of the image that is to be deblurred is enclosed in red.

Fig. 11.

Pixel-by-pixel deconvolution. A PSF is generated for the blurred pixel of interest (green). A section in the neighborhood of the pixel is deblurred using the PSF. Only the center pixel (yellow) of the deblurred section is used in the final reconstruction.

Fig. 12.

Region enclosed in red of the blurred image in Fig. 9 is deblurred using (c) bilinearly interpolated total variation regularization, (d) the pixel-by-pixel Lucy–Richardson technique, and (e) bilinearly interpolated Lucy–Richardson. Magnified sections of the images are also shown. (a) A ground-truth image is shown for comparison.

Fig. 13.

Region of Fig. 10 enclosed in red is deblurred using a pixel-by-pixel method with the iterative spatially invariant Lucy–Richardson algorithm. The (a) ground-truth, (b) blurred, and (c) five-iteration reconstructed images are shown for comparison along with magnified sections of the images. Superimposed green lines in (b) show calculated SV PSF.
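The spatially invariant Lucy–Richardson update used in these reconstructions, u ← u · Hᵀ(z / (Hu)), can be sketched in matrix form on a toy 1-D blur; the blur matrix below is an illustrative example, not a measured PSF, and the paper applies the update to 2-D image sections.

```python
import numpy as np

def lucy_richardson(H, z, iters, eps=1e-12):
    """Multiplicative Lucy-Richardson iteration u <- u * H^T(z / (H u))."""
    u = np.full(H.shape[1], z.mean())        # flat, nonnegative initial guess
    for _ in range(iters):
        u = u * (H.T @ (z / (H @ u + eps)))  # eps guards against divide-by-zero
    return u

# Columns of H sum to 1, so the update needs no extra normalization term.
H = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
z = H @ np.array([0.0, 1.0, 0.0])            # noise-free blurred point source
u = lucy_richardson(H, z, iters=500)         # sharpens back toward the spike
```

With noise-free data the iterate drives Hu toward z; on real images the iteration count (five in Fig. 13) trades sharpening against noise amplification.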

Fig. 14.

PSDs can track the moving centroid of features that remain on the sensor under low background illumination. They provide position (1) and intensity (2) outputs, which can reveal invalid data. (a) A PSD array provides SV information and increases the probability of valid data (green) capture. (b) Invalid data (red) can be recovered by aggregating closely tiled PSDs using Eq. (21) to form a sensor group.

Fig. 15.

(a) Depiction of a PSD and imager’s physical location in the image plane. PSD voltage and imager pixel information are paired at two separate image locations and used to form calibration vectors. (b) Depiction showing vectors whose angle and magnitude are used to map voltage data into pixel data.

Tables (2)


Table 1. On-Trak PSD (PSM2-4) Specifications


Table 2. Time and Memory Requirements of the Described Algorithms Used in Fig. 12

Equations (22)


\[ X_{\mathrm{PSD}} = \frac{i_1 - i_2}{i_1 + i_2}\,\frac{L}{2}, \qquad Y_{\mathrm{PSD}} = \frac{i_3 - i_4}{i_3 + i_4}\,\frac{L}{2}, \]  (1)
\[ S_{\mathrm{PSD}} = i_1 + i_2 = i_3 + i_4. \]  (2)
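The PSD readout of Eqs. (1)–(2) can be sketched numerically; this is a minimal version assuming a hypothetical duo-lateral sensor of side length L = 10 mm:

```python
def psd_position(i1, i2, i3, i4, L=10.0):
    """Convert the four electrode currents of a duo-lateral PSD into a
    centroid position, Eq. (1), and a total-intensity signal, Eq. (2).
    L is the active side length (a made-up 10 mm here)."""
    x = (i1 - i2) / (i1 + i2) * L / 2.0
    y = (i3 - i4) / (i3 + i4) * L / 2.0
    return x, y, i1 + i2                     # i1 + i2 == i3 + i4 for an ideal device

# Equal electrode currents place the spot at the sensor center.
x, y, s = psd_position(1.0, 1.0, 1.0, 1.0)   # -> (0.0, 0.0, 2.0)
```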
\[ X_{\mathrm{centroid}} = \frac{\iint X\,I(X,Y)\,dX\,dY}{\iint I(X,Y)\,dX\,dY}, \qquad Y_{\mathrm{centroid}} = \frac{\iint Y\,I(X,Y)\,dX\,dY}{\iint I(X,Y)\,dX\,dY}, \]  (3)
\[ S_{\mathrm{centroid}} = \iint I(X,Y)\,dX\,dY. \]  (4)
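A discrete sketch of Eqs. (3)–(4): the intensity-weighted centroid and total power of a sampled light distribution, which is what an ideal PSD reports for the light on its surface. The 5×5 test grid is arbitrary.

```python
import numpy as np

def centroid(I):
    """Discrete centroid, Eq. (3), and total power, Eq. (4), of intensity I."""
    Y, X = np.indices(I.shape)               # row (Y) and column (X) coordinate grids
    s = I.sum()
    return (X * I).sum() / s, (Y * I).sum() / s, s

I = np.zeros((5, 5))
I[1, 3] = 2.0                                # a single bright spot at (x=3, y=1)
cx, cy, s = centroid(I)                      # -> (3.0, 1.0, 2.0)
```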
\[ z(x,y) = [Hu](x,y) = \iint u(s,t)\,h(x-s,\,y-t;\,s,t)\,ds\,dt, \]  (5)
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a(t) & b(t) \\ d(t) & e(t) \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} c(t) \\ f(t) \end{bmatrix}. \]  (6)
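Integrating the affine model of Eq. (6) over the exposure, as in Fig. 7, traces the path a pixel sweeps through the image, i.e., its local PSF. A sketch with illustrative coefficients, not data from the paper:

```python
def affine_trajectory(x, y, coeffs):
    """Apply Eq. (6) once per time step.
    coeffs: iterable of (a, b, c, d, e, f) tuples, one tuple per step."""
    pts = []
    for a, b, c, d, e, f in coeffs:
        x, y = a * x + b * y + c, d * x + e * y + f
        pts.append((x, y))                   # visited positions trace the local PSF
    return pts

# Pure translation by (1, 0) each step: identity matrix part, c = 1, f = 0.
path = affine_trajectory(0.0, 0.0, [(1, 0, 1, 0, 1, 0)] * 3)
# path == [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
```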
\[ \min_u \left[ \tfrac{1}{2}\,\| z - Hu \|^2 + \lambda \iint |\nabla u|\,dx\,dy \right], \]  (7)
\[ u_{i+1} = \arg\min_u \left[ \tfrac{1}{2}\,\| z - Hu \|^2 + \lambda \iint \frac{|\nabla u|^2}{2\,|\nabla u_i|} + \frac{|\nabla u_i|}{2}\,dx\,dy \right]. \]  (8)
\[ h(s,t;\,x,y) = \sum_{i=1}^{4} \alpha_i(x,y)\,h_i(s,t), \]  (9)
\[ t_x = \frac{x - x_1}{x_2 - x_1}, \qquad t_y = \frac{y - y_1}{y_2 - y_1}, \]  (10)
\[ \alpha_1 = (1 - t_y)(1 - t_x), \quad \alpha_2 = (1 - t_y)\,t_x, \quad \alpha_3 = t_y(1 - t_x), \quad \alpha_4 = t_y\,t_x. \]  (11)
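The weights of Eqs. (10)–(11) are ordinary bilinear interpolation coefficients; a small sketch, with the four corner PSD locations chosen arbitrarily:

```python
def bilinear_weights(x, y, x1, y1, x2, y2):
    """Eqs. (10)-(11): weights blending four corner PSFs into a PSF for an
    interior pixel (x, y); (x1, y1)-(x2, y2) are the corner PSD locations."""
    tx = (x - x1) / (x2 - x1)
    ty = (y - y1) / (y2 - y1)
    return [(1 - ty) * (1 - tx), (1 - ty) * tx, ty * (1 - tx), ty * tx]

# Pixel at (25, 75) inside a 100 x 100 corner grid: tx = 0.25, ty = 0.75.
w = bilinear_weights(25, 75, 0, 0, 100, 100)
# The weights sum to 1 and favor the corner nearest the pixel.
```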
\[ [Hu](x,y) = \iint u(s,t)\,h(x-s,\,y-t;\,s,t)\,ds\,dt \]  (12)
\[ = \iint u(s,t) \sum_{i=1}^{4} \alpha_i(s,t)\,h_i(x-s,\,y-t)\,ds\,dt \]  (13)
\[ = \sum_{i=1}^{4} \iint \big( \alpha_i(s,t)\,u(s,t) \big)\,h_i(x-s,\,y-t)\,ds\,dt \]  (14)
\[ = \left[ \sum_{i=1}^{4} [\alpha_i u] * h_i \right](x,y). \]  (15)
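Eq. (15) says the spatially variant blur reduces to four ordinary convolutions of weight-masked copies of the image; a dependency-free sketch using a small direct "same" convolution:

```python
import numpy as np

def conv_same(img, k):
    """Direct 2-D convolution with zero padding and 'same' output size."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # flipping the kernel turns the sliding correlation into convolution
            out[i, j] = (pad[i:i + kh, j:j + kw] * k[::-1, ::-1]).sum()
    return out

def sv_blur(u, alphas, kernels):
    """Eq. (15): sum of four convolutions of the weight-masked image.
    alphas: four weight images; kernels: four local PSFs h_i."""
    return sum(conv_same(a * u, h) for a, h in zip(alphas, kernels))

# Sanity check: four delta PSFs with weights summing to 1 leave u unchanged.
u = np.arange(16, dtype=float).reshape(4, 4)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0
alphas = [np.full((4, 4), 0.25)] * 4
blurred = sv_blur(u, alphas, [delta] * 4)
```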
\[ [H^{*}u](x,y) = \iint u(s,t)\,h(s-x,\,t-y;\,x,y)\,ds\,dt \]  (16)
\[ = \iint u(s,t) \sum_{i=1}^{4} \alpha_i(x,y)\,h_i(s-x,\,t-y)\,ds\,dt \]  (17)
\[ = \sum_{i=1}^{4} \alpha_i(x,y) \iint u(s,t)\,h_i(s-x,\,t-y)\,ds\,dt \]  (18)
\[ = \sum_{i=1}^{4} \alpha_i(x,y)\,[u \star h_i](x,y). \]  (19)
\[ u_{i+1} = u_i + H^{*}(z - Hu_i). \]  (20)
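Eq. (20) can be demonstrated in matrix form on a toy 1-D problem; the 3-tap blur, problem size, and iteration count below are illustrative choices, not the paper's setup:

```python
import numpy as np

# Build a tiny 3-tap moving-average blur as an explicit matrix H.
n = 7
H = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            H[i, j] = 1.0 / 3.0

u_true = np.zeros(n)
u_true[3] = 1.0                          # a single bright point source
z = H @ u_true                           # noise-free blurred observation

# Iterate Eq. (20): u <- u + H*(z - Hu). Unit step size converges here
# because the spectral norm of H is below sqrt(2).
u = np.zeros(n)
for _ in range(2000):
    u = u + H.T @ (z - H @ u)
```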
\[ X_{\mathrm{aggregate}} = \frac{\displaystyle\sum_{u=l}^{m}\sum_{v=o}^{p} \big[ L(u-1) + X_{\mathrm{PSD}}(u,v) \big]\, S_{\mathrm{PSD}}(u,v)}{\displaystyle\sum_{u=l}^{m}\sum_{v=o}^{p} S_{\mathrm{PSD}}(u,v)}, \qquad Y_{\mathrm{aggregate}} = \frac{\displaystyle\sum_{u=l}^{m}\sum_{v=o}^{p} \big[ L(v-1) + Y_{\mathrm{PSD}}(u,v) \big]\, S_{\mathrm{PSD}}(u,v)}{\displaystyle\sum_{u=l}^{m}\sum_{v=o}^{p} S_{\mathrm{PSD}}(u,v)} \]  (21)
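Eq. (21) combines the outputs of an array of tiled PSDs of side length L into one intensity-weighted position estimate, so a spot straddling tiles is still tracked; a sketch assuming unit-indexed tiles as in the equation:

```python
import numpy as np

def aggregate(x_psd, y_psd, s_psd, L):
    """Eq. (21): shift each tile's local position by its grid offset L(u-1)
    or L(v-1) and average, weighted by each tile's intensity signal.
    x_psd, y_psd, s_psd: 2-D arrays indexed by sensor tile (u, v)."""
    m, p = s_psd.shape
    u = np.arange(1, m + 1)[:, None]         # tile index along x
    v = np.arange(1, p + 1)[None, :]         # tile index along y
    w = s_psd.sum()
    X = ((L * (u - 1) + x_psd) * s_psd).sum() / w
    Y = ((L * (v - 1) + y_psd) * s_psd).sum() / w
    return X, Y

# A spot seen only by tile (u=2, v=1) at local position (2, 4) on a
# 2 x 2 array of 10-unit tiles maps to the global position (12, 4).
x = np.zeros((2, 2)); y = np.zeros((2, 2)); s = np.zeros((2, 2))
x[1, 0], y[1, 0], s[1, 0] = 2.0, 4.0, 1.0
X, Y = aggregate(x, y, s, L=10.0)            # -> (12.0, 4.0)
```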
\[ \begin{bmatrix} P_x \\ P_y \end{bmatrix} = \frac{|P|}{|V|} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix}. \]  (22)
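Eq. (22) maps raw PSD voltages into imager pixel coordinates through a rotation θ and scale |P|/|V| found during calibration (Appendix A); a sketch with a made-up angle and gain:

```python
import numpy as np

def volts_to_pixels(vx, vy, theta, gain):
    """Eq. (22): rotate the voltage vector by theta and scale it by
    gain = |P| / |V| to land in imager pixel coordinates."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return gain * R @ np.array([vx, vy])

# A 90-degree calibration rotation sends the x-axis voltage to the
# pixel y-axis; the gain of 2 is an arbitrary example scale.
px, py = volts_to_pixels(1.0, 0.0, np.pi / 2, 2.0)
```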
