Abstract

Reconstruction of a super-resolved image from multiple frames and extraction of the alpha matte are two popular problems that have traditionally been solved independently. In this paper, we advocate a unified framework that assimilates matting within the super-resolution model. We show that joint estimation is advantageous: super-resolved edge information helps in obtaining a sharp matte, while the matte in turn aids in resolving fine details. We propose a multiframe approach to increase the spatial resolution of the matte, foreground, and background, and validate it extensively on examples from standard matting datasets.

© 2013 Optical Society of America

Figures (8)

Fig. 1. (a) Original HR. (b) LR image. (c) Trimap. (d) True α. (e) Closed f. (f) Shared f. (g) Proposed SR f. (h) KNN α (0.2). (i) Learning α (0.25). (j) Shared α (0.171). (k) Proposed SR α (0.119).

Fig. 2. (a) HR image. (b) Ground-truth α. (c) Our SR (1.64). (d) Our α (0.07).

Fig. 3. (a) Original. (b) LR frame. (c) Trimap. (d) Shared f. (e) Proposed f. (f) True α. (g) KNN α (0.247). (h) Learning α (0.273). (i) Shared α (0.242). (j) Proposed α (0.193).

Fig. 4. (a) Original HR. (b) LR frame. (c) Trimap. (d) Softcuts SR. (e) ICM-based f. (f) Closed f. (g) Shared f. (h) Proposed SR f. (i) ICM-based α. (j) Learning α. (k) Shared α. (l) Proposed SR α.

Fig. 5. (a) LR frame. (b) Closed f. (c) Shared f. (d) Proposed SR f. (e) KNN α. (f) Learning α. (g) Shared α. (h) Proposed SR α.

Fig. 6. (a) LR frame. (b) Closed-form f. (c) Shared f. (d) Proposed f. (e) KNN α. (f) Learning α. (g) Shared α. (h) Proposed α.

Fig. 7. (a) LR frame. (b) Proposed SR α. (c) LR frame. (d) Proposed SR α.

Fig. 8. (a) LR frame. (b) Closed-form f. (c) Shared f. (d) Proposed f. (e) KNN α. (f) Learning α. (g) Shared α. (h) Proposed α.

Tables (4)

Algorithm 1. Simultaneous Matting and Super-Resolution

Table 1. Average Values for SR Factor Two

Table 2. Average Values for SR Factor Four

Table 3. Average Values for SR Video Frames

Equations (17)

x_k = \alpha_k f_k + (1 - \alpha_k) b_k, \quad 0 \le \alpha_k \le 1,

y_i = D W_i x,

x = A(\alpha) f + (I - A(\alpha)) b,

y_i = D W_i [\, A(\alpha) f + (I - A(\alpha)) b \,].

y_j = \sum_{l \in N(j)} d_l \sum_{q \in M(l)} w_q \cdot x_q = \sum_{l \in N(j)} d_l \sum_{q \in M(l)} w_q \cdot [\, \alpha_q f_q + (1 - \alpha_q) b_q \,],

v_j = \begin{cases} 1, & \text{if pixel } j \text{ is visible} \\ 0, & \text{if pixel } j \text{ is occluded.} \end{cases}

E_o = \sum_j v_j (y_j - o_j)^2 = \sum_j v_j e_j^2.

\frac{\partial E_o}{\partial \alpha_k} = \sum_{\Omega} v_j e_j \sum_{l \in N(j)} d_l w_k (f_k - b_k); \qquad \frac{\partial E_o}{\partial f_k} = \sum_{\Omega} v_j e_j \sum_{l \in N(j)} d_l w_k \alpha_k.

E_\alpha = \alpha^T L \alpha,

E = E_o + \lambda_\alpha E_\alpha + \lambda_f E_f + \lambda_b E_b.

\frac{\partial E_\alpha}{\partial \alpha} = \alpha L.

g_\alpha = \sum_i (F(f) - B(b))^T W_i^T D^T (v_i e_i) + \lambda_\alpha \alpha L,

g_f = \sum_i (A(\alpha))^T W_i^T D^T (v_i e_i) + \lambda_f f L,

g_b = \sum_i (I - A(\alpha))^T W_i^T D^T (v_i e_i) + \lambda_b b L,

\beta_\alpha^{(n)} = \frac{[g_\alpha^{(n)}]^T g_\alpha^{(n)}}{[p_\alpha^{(n-1)}]^T [g_\alpha^{(n)} - g_\alpha^{(n-1)}]}.

\alpha^{(n)} = \alpha^{(n-1)} + \mu_\alpha p_\alpha^{(n-1)}; \quad f^{(n)} = f^{(n-1)} + \mu_f p_f^{(n-1)}; \quad b^{(n)} = b^{(n-1)} + \mu_b p_b^{(n-1)},

p_\alpha^{(n)} = -g_\alpha^{(n)} + \beta_\alpha^{(n)} p_\alpha^{(n-1)}; \quad p_f^{(n)} = -g_f^{(n)} + \beta_f^{(n)} p_f^{(n-1)}; \quad p_b^{(n)} = -g_b^{(n)} + \beta_b^{(n)} p_b^{(n-1)}.
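
To make the equations above concrete, here is a minimal NumPy sketch, not the authors' implementation: it simulates the observation model y_i = D W_i [A(α)f + (I − A(α))b] and applies the Dai–Yuan conjugate-gradient update to the matte alone. The sketch assumes a single color channel, an integer circular shift for each warp W_i, block averaging for the decimation D, all pixels visible (v_j = 1), and it omits the Laplacian priors (λ_α = λ_f = λ_b = 0); the function names and the toy setup are illustrative.

```python
import numpy as np


def compose_lr(alpha, f, b, shift, scale):
    """One LR observation: y_i = D W_i [A(alpha) f + (I - A(alpha)) b].

    Assumptions for this sketch: single channel, W_i is an integer circular
    shift, D is block averaging, and the image size is divisible by `scale`.
    """
    x = alpha * f + (1.0 - alpha) * b                      # HR composite
    x = np.roll(x, shift, axis=(0, 1))                     # warp W_i
    H, W = x.shape
    return x.reshape(H // scale, scale, W // scale, scale).mean(axis=(1, 3))  # decimation D


def adjoint_lr(r, shift, scale):
    """Apply W_i^T D^T to an LR residual r (adjoint of the warp + decimation above)."""
    up = np.kron(r, np.ones((scale, scale))) / scale**2    # D^T: spread each LR value over its block
    return np.roll(up, (-shift[0], -shift[1]), axis=(0, 1))  # W_i^T: inverse shift


def dai_yuan_step(x, g, g_prev, p_prev, mu):
    """One nonlinear CG update with the Dai-Yuan beta:
    beta = g^T g / (p_prev^T (g - g_prev)),  p = -g + beta p_prev,  x <- x + mu p."""
    if p_prev is None:                                     # first iteration: steepest descent
        p = -g
    else:
        beta = (g.ravel() @ g.ravel()) / (p_prev.ravel() @ (g - g_prev).ravel() + 1e-12)
        p = -g + beta * p_prev
    return x + mu * p, p


# Toy usage: refine alpha alone from three shifted LR composites of known f and b.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f, b = rng.random((32, 32)), rng.random((32, 32))
    alpha_true = rng.random((32, 32))
    shifts, scale = [(0, 0), (1, 0), (0, 1)], 2
    obs = [compose_lr(alpha_true, f, b, s, scale) for s in shifts]

    alpha = np.full_like(alpha_true, 0.5)                  # flat initial matte
    g_prev, p_prev = None, None
    for _ in range(300):
        # data-term gradient: g_alpha = sum_i (f - b) * W_i^T D^T e_i, with e_i = y_i - o_i
        g = sum((f - b) * adjoint_lr(compose_lr(alpha, f, b, s, scale) - o, s, scale)
                for s, o in zip(shifts, obs))
        alpha, p_prev = dai_yuan_step(alpha, g, g_prev, p_prev, mu=0.5)
        alpha = np.clip(alpha, 0.0, 1.0)                   # project onto 0 <= alpha <= 1
        g_prev = g
    print("mean abs matte error:", np.abs(alpha - alpha_true).mean())
```

In the paper's setting, f and b are updated in the same way with the gradients g_f and g_b, and the Laplacian terms λ_α E_α, λ_f E_f, λ_b E_b regularize all three estimates.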
