Abstract

We use mechanical translation of a coded aperture to achieve code-division multiple-access (CDMA) compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of more than 10 frames of temporal data per coded snapshot.

© 2013 OSA


References


  1. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012).
  2. S. Kleinfelder, S. H. Lim, X. Liu, and A. El Gamal, “A 10000 frames/s CMOS digital pixel sensor,” IEEE J. Solid-St. Circ. 36(12), 2049–2059 (2001).
  3. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005).
  4. D. J. Brady, Optical Imaging and Spectroscopy (Wiley-Interscience, 2009).
  5. D. J. Brady, M. Feldman, N. Pitsianis, J. P. Guo, A. Portnoy, and M. Fiddy, “Compressive optical MONTAGE photography,” Photonic Devices and Algorithms for Computing VII 5907(1), 590708 (2005).
  6. M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49(10), B9–B17 (2010).
  7. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2011), pp. 287–294.
  8. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
  9. M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “Compressive imaging for video representation and coding,” in Proceedings of Picture Coding Symposium (2006).
  10. Y. Oike and A. El Gamal, “A 256×256 CMOS image sensor with ΔΣ-based single-shot compressed sensing,” in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2012), pp. 386–388.
  11. M. Zhang and A. Bermak, “CMOS image sensor with on-chip image compression: A review and performance analysis,” J. Sens. 2010, 1–17 (2010).
  12. A. Fish and O. Yadid-Pecht, “Low Power CMOS Imager Circuits,” in Circuits at the Nanoscale: Communications, Imaging, and Sensing, K. Iniewski, ed. (CRC Press, 2008), pp. 457–484.
  13. V. Treeaporn, A. Ashok, and M. A. Neifeld, “Space–time compressive imaging,” Appl. Opt. 51(4), A67–A79 (2012).
  14. M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. 42(17), 3379–3389 (2003).
  15. E. J. Candès and T. Tao, “Reflections on compressed sensing,” IEEE Information Theory Society Newsletter 58(4), 20–23 (2008).
  16. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
  17. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Transactions on Graphics 25(3), 795–804 (2006).
  18. D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 329–336.
  19. A. C. Sankaranarayanan, C. Studer, and R. G. Baraniuk, “CS-MUVI: Video compressive sensing for spatial-multiplexing cameras,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2012), pp. 1–10.
  20. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Optics Express 15(21), 14013–14027 (2007).
  21. X. Liao, H. Li, and L. Carin, “Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing,” SIAM Journal on Imaging Sciences (to be published).
  22. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing 16(12), 2992–3014 (2007).


Supplementary Material (15)

» Media 1: AVI (156 KB)     
» Media 2: AVI (1590 KB)     
» Media 3: AVI (359 KB)     
» Media 4: AVI (4299 KB)     
» Media 5: AVI (418 KB)     
» Media 6: AVI (4215 KB)     
» Media 7: AVI (330 KB)     
» Media 8: AVI (3562 KB)     
» Media 9: AVI (3964 KB)     
» Media 10: AVI (4078 KB)     
» Media 11: AVI (4253 KB)     
» Media 12: AVI (4312 KB)     
» Media 13: AVI (3944 KB)     
» Media 14: AVI (1528 KB)     
» Media 15: AVI (1658 KB)     



Figures (15)

Fig. 1

CACTI image acquisition process. (a) A discrete space-time source datacube is (b) multiplied at each of NF temporal channels with a shifted version of a coded aperture pattern. (c) Each detected frame g is the summation of the coded temporal channels and contains the object’s spatiotemporal-multiplexed information. The dark grey (red-outlined) and black detected pixels in (c) pictorially depict the code’s location at the beginning and the end of the camera’s integration window, respectively.
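The acquisition in Fig. 1 can be sketched numerically: each temporal channel of the datacube is multiplied by a shifted copy of the coded aperture, and the coded channels sum into one snapshot. This is a minimal sketch, not the authors' code; the array sizes and the circular shift standing in for the mechanical translation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, NF = 8, 4                      # N x N spatial frames, NF temporal channels

# Hypothetical space-time datacube f and a fixed binary coded aperture pattern.
f = rng.random((N, N, NF))
mask = rng.integers(0, 2, size=(N, N)).astype(float)

# Shift the mask by k pixels for temporal channel k (circular shift stands in
# for the mechanical translation), then sum the coded channels into a snapshot.
T = np.stack([np.roll(mask, k, axis=0) for k in range(NF)], axis=2)
g = (T * f).sum(axis=2)           # g_ij = sum_k T_ijk f_ijk  (noise omitted)
```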

Fig. 2

Linear system model. NF subframes of high-speed data f are estimated from a single snapshot g. The forward-model matrix H has many more columns than rows, with dimensions N × (N·NF).
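The underdetermined system in Fig. 2 can be made concrete by assembling H explicitly as a row of diagonal blocks, one per temporal channel, and checking that the matrix-vector product reproduces the per-pixel coded sum. A small, dense sketch under assumed sizes (a practical implementation would keep H implicit or sparse):

```python
import numpy as np

rng = np.random.default_rng(1)
N, NF = 4, 3
f = rng.random((N, N, NF))
T = rng.integers(0, 2, size=(N, N, NF)).astype(float)

# H = [H_1 ... H_NF] with H_k = diag(vec(T_k)); shape N^2 x (N^2 * NF).
H = np.hstack([np.diag(T[:, :, k].ravel()) for k in range(NF)])

# The vectorized forward model H f matches the elementwise coded sum.
f_vec = np.concatenate([f[:, :, k].ravel() for k in range(NF)])
g_vec = H @ f_vec
g = (T * f).sum(axis=2)
```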

Fig. 3

Waveform choices for s(t). Yellow: signal from the function generator. Blue: actual hardware motion. Note the poor mechanical response to the sharp rising/falling edges in (a) and (b). The sine wave (d) is not preferred because different Tk receive nonuniform exposure times.

Fig. 4

Continuous motion and discrete approximation to coded aperture movement during integration time. The discrete triangle function sk approximates the continuous triangle wave driving the mask more accurately for smaller values of d, but adds more columns to H.

Fig. 5

Temporal channels used for the reconstruction. Red lines indicate which subset of transverse mask positions sk were utilized to construct the forward matrix H. Blue lines represent the camera integration windows. (a) Calibrating with fewer sk results in a better-posed inverse problem but does not approximate the temporal motion s(t) as closely. (b) With d = 1, each pixel integrates several unique coding patterns with a temporal separation Δt = C−1. (c) Constructing H with large NF (d < 1) interpolates the motion occurring between the uniquely-coded image frames but retains most of the residual motion blur.

Fig. 6

CACTI prototype hardware setup. The coded aperture is 5.06 mm × 4.91 mm and spans 248 × 256 detector pixels. The function generator moves the coded aperture and triggers camera acquisition with signals from its function and SYNC outputs, respectively.

Fig. 7

Spatial and temporal modulation. (a) A stationary coded aperture spatially modulates the image. (b) Moving the coded aperture during the integration window applies local code structures to NF temporal channels, effectively shearing the coded space-time datacube and providing per-pixel flutter shutter. (c) Imaging the (stationary) mask at positions d pixels apart and storing them into the forward matrix H simulates the mask’s motion, thereby conditioning the inversion.

Fig. 8

Algorithm convergence times and relative residual reconstruction errors for various compression ratios. (a) GAP’s reconstruction time increases linearly with data size. Tests were performed on an ASUS U46E laptop (Intel quad-core i7 at 3.1 GHz). (b) Normalized ℓ2 reconstruction error vs. number of reconstructed frames. The residual error reaches a minimum at critical temporal sampling and gradually flattens out with finer temporal interpolation (lower d).
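The normalized ℓ2 reconstruction error plotted in (b) can be computed as below. This is a minimal sketch, not the authors' code; the array shapes and the uniform-offset test estimate are illustrative assumptions.

```python
import numpy as np

def relative_residual(f_true, f_est):
    """Normalized ell-2 reconstruction error: ||f_est - f_true||_2 / ||f_true||_2."""
    return np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true)

# Toy check: an estimate offset uniformly by 0.1 from a unit-valued datacube
# has a relative residual of exactly 0.1.
f_true = np.ones((8, 8, 14))
f_est = f_true + 0.1
err = relative_residual(f_true, f_est)
```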

Fig. 9

Illustration of the GAP algorithm.

Fig. 10

High-speed (C = 14) video of an eye blink, from closed to open, reconstructed from a single coded snapshot for NF = 14 ( Media 1) and NF = 148 ( Media 2). The numbers on the bottom-right of the pictures represent the frame number of the video sequence. Note that the eye is the only part of the scene that moves. The top left frame shows the sum of these reconstructed frames, which approximates the motion captured by a 30 fps camera without a coded aperture modulating the focal plane.

Fig. 11

Capture and reconstruction of a lens falling in front of a hand for NF = 14 ( Media 3) and NF = 148 ( Media 4). Notice the reconstructed frames capture the magnification effects of the lens as it passes in front of the hand.

Fig. 12

Capture and reconstruction of a letter ’D’ placed at the edge of a chopper wheel rotating at 15 Hz for NF = 14 ( Media 5) and NF = 148 ( Media 6). The white part of the letter exhibits ghosting effects in the reconstructions due to ambiguities in the solution. The TwIST algorithm with TV regularization was used to reconstruct this data [22].

Fig. 13

Capture and reconstructed video of a bottle pouring water for NF = 14 ( Media 7) and NF = 148 ( Media 8). Note the time-varying specularities in the video. The TwIST algorithm with TV regularization was used to reconstruct this data [22].

Fig. 14

Spatial resolution tests of (a),(b) an ISO 12233 resolution target and (c),(d) a soup can. These objects were kept stationary several feet away from the camera. (a),(c) show reconstructed results without temporally moving the mask; (b),(d) show the same objects when reconstructed with temporal mask motion.

Fig. 15

Simulated and actual reconstruction PSNR by frame. (a), (b), and (c) show PSNR by high-speed, reconstructed video frame for 14 eye blink frames ( Media 9), 14 chopper wheel frames ( Media 10), and 210 chopper wheel frames ( Media 11, Media 12, Media 13, Media 14, Media 15) respectively, from snapshots g. Implemented mechanically-translated masks (red curves), simulated translating masks (black curves), and simulated LCoS coding (blue curves) were applied to high-speed ground truth data and reconstructed using GAP. Reconstructions using the implemented mask have a PSNR that periodically drops every 14th reconstructed frame due to the mechanical deceleration time of the piezoelectric stage holding the mask; these frames correspond to the time when the mask changes direction.
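The per-frame PSNR curves above can be computed with the standard definition. A minimal sketch, not the authors' code; the peak value and the toy uniform-error frame are illustrative assumptions.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for frames scaled to [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 0.01 gives MSE = 1e-4, i.e. 40 dB at peak 1.
ref = np.zeros((4, 4))
est = np.full((4, 4), 0.01)
val = psnr(ref, est)
```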

Equations (20)


(1)  $g(x,t) = \int_{1}^{N_F}\!\!\int_{1}^{N} f(x',t')\, T\big(x' - s(t')\big)\, \mathrm{rect}\!\left(\frac{x - x'}{\Delta x}\right) \mathrm{rect}\!\left(\frac{t - t'}{\Delta t}\right) dx'\, dt'$
(2)  $\hat{g}(u,v) = \mathrm{sinc}(u \Delta x)\, \mathrm{sinc}(v \Delta t) \int \hat{f}(u - w,\, v - \nu w)\, \hat{T}(w)\, dw$
(3)  $g_{i,j} = \sum_{k=1}^{N_F} T_{i,j,k}\, f_{i,j,k} + n_{i,j}$
(4)  $\mathbf{g} = \mathbf{H}\mathbf{f} + \mathbf{n}$
(5)  $\mathbf{H}_k \stackrel{\mathrm{def}}{=} \mathrm{diag}\big[\, T_{1,1,k}\;\; T_{2,1,k}\;\cdots\; T_{N,N,k} \,\big], \quad k = 1, \ldots, N_F$
(6)  $\mathbf{H} \stackrel{\mathrm{def}}{=} \big[\, \mathbf{H}_1\;\; \mathbf{H}_2\;\cdots\;\mathbf{H}_{N_F} \,\big]$
(7)  $T_k = \mathrm{Rand}(N, N, s_k)$
(8)  $s_k = C\, \mathrm{Tri}\!\left[\frac{k}{2 \Delta t}\right]$
(9)  $N_F = \frac{C}{d}$
(10)  $f_e = \arg\min_f\; \| g - Hf \|^2 + \lambda\, \Omega(f)$
(11)  $\Omega(f) = \sum_{k}^{N_F} \sum_{i,j}^{N} \sqrt{(f_{i+1,j,k} - f_{i,j,k})^2 + (f_{i,j+1,k} - f_{i,j,k})^2}$
(12)  $w_{i,j,k} = [\Psi(f)]_{i,j,k} = \sum_{i',j',k'} \Psi_1(i,i')\, \Psi_2(j,j')\, \Psi_3(k,k')\, f_{i',j',k'}$
(13)  $\| w \|_{\mathcal{G}^{\beta}} = \sum_{l=1}^{m} \beta_l \| w_{G_l} \|_2$
(14)  $[P_{\Pi}(\tilde{f})]_{i,j,k} = \tilde{f}_{i,j,k} + \frac{T_{i,j,k}}{\sum_{k'=1}^{N_F} T_{i,j,k'}^2} \left( g_{i,j} - \sum_{k'=1}^{N_F} T_{i,j,k'}\, \tilde{f}_{i,j,k'} \right)$
(15)  $P_{\Lambda(C)}(f) = \Psi^{-1}\!\left( \arg\min_{\theta :\, \|\theta\|_{\mathcal{G}^{\beta}} \le C} \| \theta - \Psi(f) \|_2 \right)$
(16)  $f^{(t)} = P_{\Pi}\big(\tilde{f}^{(t-1)}\big), \quad t \ge 1$
(17)  $\tilde{f}^{(t)} = P_{\Lambda(C^{(t)})}\big(f^{(t)}\big) = \Psi^{-1}\big(\theta^{(t)}\big), \quad t \ge 1$
(18)  $\theta^{(t)}_{i,j,k} = w^{(t)}_{i,j,k}\, \max\!\left\{ 1 - \frac{\beta_l\, \big\| w^{(t)}_{G_{l_{m^*+1}}} \big\|_2}{\beta_{l_{m^*+1}}\, \big\| w^{(t)}_{G_l} \big\|_2},\; 0 \right\}, \quad (i,j,k) \in G_l,\; l = 1, \ldots, m$
(19)  $\frac{\big\| w^{(t)}_{G_{l_q}} \big\|_2}{\beta_{l_q}} \ge \frac{\big\| w^{(t)}_{G_{l_{q+1}}} \big\|_2}{\beta_{l_{q+1}}}$
(20)  $C^{(t)} = \sum_{q=1}^{m^*} \beta_{l_q}^2 \left( \frac{\big\| w^{(t)}_{G_{l_q}} \big\|_2}{\beta_{l_q}} - \frac{\big\| w^{(t)}_{G_{l_{m^*+1}}} \big\|_2}{\beta_{l_{m^*+1}}} \right)$
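The data-consistency step of GAP, the per-pixel Euclidean projection onto {f : g = Σk Tk·fk} given in Eq. (14), can be sketched as follows. This is a minimal illustration, not the authors' implementation; the shapes, the random mask, and forcing one all-ones channel (so the per-pixel denominator Σk T² never vanishes) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, NF = 8, 4
T = rng.integers(0, 2, size=(N, N, NF)).astype(float)
T[..., 0] = 1.0                      # ensure sum_k T^2 > 0 at every pixel
f_true = rng.random((N, N, NF))
g = (T * f_true).sum(axis=2)         # observed coded snapshot

def project_data_consistency(f_tilde, T, g):
    """Per-pixel Euclidean projection of f_tilde onto {f : g = sum_k T_k * f_k}."""
    denom = (T ** 2).sum(axis=2)                 # sum_k T_ijk^2
    residual = g - (T * f_tilde).sum(axis=2)     # g_ij - sum_k T_ijk f_ijk
    return f_tilde + T * (residual / denom)[:, :, None]

# Any starting estimate is mapped to a datacube that exactly explains g.
f_proj = project_data_consistency(rng.random((N, N, NF)), T, g)
```

After the projection, re-coding f_proj reproduces the snapshot g exactly, which is the affine constraint GAP alternates with the weighted group-sparsity projection.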
