Abstract

We report a parallel lensless compressive imaging system that achieves real-time reconstruction using deep convolutional neural networks. A prototype composed of a low-cost LCD, 16 photodiodes, and isolation chambers has been built. Each of the 16 channels captures a 16×16-pixel fraction of the scene, and all channels operate in parallel. An efficient inversion algorithm based on deep convolutional neural networks is developed to reconstruct the image. We demonstrate encouraging results using only 2% of measurements per sensor relative to the pixel count (e.g., 5 measurements for a 16×16-pixel block) for digits, and around 10% of measurements per sensor for facial images.
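To make the measurement model concrete, the sketch below simulates one of the 16 channels and a toy decoder. It is a minimal illustration, not the authors' implementation: Python with NumPy and PyTorch is an assumed toolchain, a random binary matrix stands in for the LCD patterns, and the BlockDecoder architecture (a linear lift followed by a small residual convolutional refinement) is hypothetical. Each channel observes a 16×16 block through M = 5 patterns, which corresponds to the roughly 2% sampling rate quoted above.

import numpy as np
import torch
import torch.nn as nn

BLOCK = 16            # each channel sees a 16x16 block of the scene
N = BLOCK * BLOCK     # 256 pixels per block
M = 5                 # 5 measurements per block, i.e. ~2% of N

# The LCD displays M binary patterns; the photodiode integrates the light
# passing each pattern. A random binary matrix is used here as a stand-in.
rng = np.random.default_rng(0)
phi = rng.integers(0, 2, size=(M, N)).astype(np.float32)

def measure(block):
    """Simulate one channel: y = phi @ x for a 16x16 scene block."""
    return phi @ block.reshape(N)

class BlockDecoder(nn.Module):
    """Toy CNN inverter: lift y to a 16x16 map, then refine with convolutions."""
    def __init__(self):
        super().__init__()
        self.lift = nn.Linear(M, N)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, y):                        # y: (batch, M)
        x0 = self.lift(y).view(-1, 1, BLOCK, BLOCK)
        return x0 + self.refine(x0)              # residual refinement

# Example: push one block's 5 simulated measurements through an (untrained) decoder.
block = rng.random((BLOCK, BLOCK), dtype=np.float32)
y = torch.from_numpy(measure(block)).unsqueeze(0)    # (1, M)
with torch.no_grad():
    x_hat = BlockDecoder()(y)                        # (1, 1, 16, 16)

Because the prototype's 16 channels acquire in parallel, 16 such decoders (or a single decoder batched over blocks) would tile the reconstructed blocks back into the full scene.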

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

[Crossref]

X. Yuan, “Compressive dynamic range imaging via Bayesian shrinkage dictionary learning,” Opt. Eng. 55, 123110 (2016).
[Crossref]

X. Yuan, H. Jiang, G. Huang, and P. Wilford, “SLOPE: Shrinkage of local overlapping patches estimator for lensless compressive imaging,” IEEE Sens. J. 16(22), 8091–8102 (2016).
[Crossref]

T.-H. Tsai, P. Llull, X. Yuan, D. J. Brady, and L. Carin, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015).
[Crossref] [PubMed]

T.-H. Tsai, X. Yuan, and D. J. Brady, “Spatial light modulator based color polarization imaging,” Opt. Express 23(9), 11912–11926 (2015).
[Crossref] [PubMed]

P. Llull, X. Yuan, L. Carin, and D. Brady, “Image translation for single-shot focal tomography,” Optica 2(9), 822–825 (2015).
[Crossref]

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Sig. Process. 9(6), 964–976 (2015).
[Crossref]

J. Yang, X. Yuan, X. Liao, P. Llull, G. Sapiro, D. J. Brady, and L. Carin, “Video compressive sensing using Gaussian mixture models,” IEEE Trans. Image Process. 23(11), 4863–4878 (2014).
[Crossref] [PubMed]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9) 10526–10545 (2013).
[Crossref] [PubMed]

X. Yuan, J. Yang, P. Llull, X. Liao, G. Sapiro, D. J. Brady, and L. Carin, “Adaptive temporal compressive sensing for video,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2013), pp. 14–18.

X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2016), pp. 2539–2543.

Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin, “Variational autoencoder for deep learning of images, labels and captions,” in Proceedings of Advances in Neural Information Processing Systems (NIPS2016), pp. 2352–2360.

X. Yuan, P. Llull, X. Liao, J. Yang, G. Sapiro, D. J. Brady, and L. Carin, “Low-cost compressive sensing for color video and depth,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3318–3325.

Yue, T.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Sig. Process. Mag. 33(5), 95–108 (2016).
[Crossref]

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

Zhu, R.

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Sig. Process. 9(6), 964–976 (2015).
[Crossref]

Appl. Opt. (3)

Biomed. Opt. Express (1)

Commun. Pure Appl. Math. (1)

D. L. Donoho, “For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution,” Commun. Pure Appl. Math. 59(7), 907–934 (2006).
[Crossref]

IEEE J. Sel. Top. Sig. Process. (1)

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Sig. Process. 9(6), 964–976 (2015).
[Crossref]

IEEE Sens. J. (1)

X. Yuan, H. Jiang, G. Huang, and P. Wilford, “SLOPE: Shrinkage of local overlapping patches estimator for lensless compressive imaging,” IEEE Sens. J. 16(22), 8091–8102 (2016).
[Crossref]

IEEE Sig. Process. Mag. (2)

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Sig. Process. Mag. 33(5), 95–108 (2016).
[Crossref]

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Process. Mag. 25(2), 83–91 (2008).
[Crossref]

IEEE Trans. Comput. Imag. (1)

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. Baraniuk, “Flatcam: Thin, bare-sensor cameras using coded aperture and computation,” IEEE Trans. Comput. Imag. 3(3), 384–397 (2017).
[Crossref]

IEEE Trans. Image Process. (2)

G. Yu, G. Sapiro, and S. Mallat, “Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity,” IEEE Trans. Image Process. 21(5), 2481–2499 (2012).
[Crossref]

J. Yang, X. Yuan, X. Liao, P. Llull, G. Sapiro, D. J. Brady, and L. Carin, “Video compressive sensing using Gaussian mixture models,” IEEE Trans. Image Process. 23(11), 4863–4878 (2014).
[Crossref] [PubMed]

IEEE Trans. Inf. Theory (2)

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
[Crossref]

E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
[Crossref]

IEEE Trans. Sig. Process. (2)

M. Chen, J. Silva, J. Paisley, C. Wang, D. Dunson, and L. Carin, “Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds,” IEEE Trans. Sig. Process. 58(12), 6140–6155 (2010).
[Crossref]

M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Sig. Process. 54(11), 4311–4322 (2006).
[Crossref]

Nature (3)

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533–536 (1986).
[Crossref]

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref] [PubMed]

D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref] [PubMed]

Opt. Eng. (1)

X. Yuan, “Compressive dynamic range imaging via Bayesian shrinkage dictionary learning,” Opt. Eng. 55, 123110 (2016).
[Crossref]

Opt. Express (5)

Opt. Lett. (1)

Optica (2)

Proc. IEEE (1)

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998).
[Crossref]

SIAM J. Imag. Sci. (1)

X. Liao, H. Li, and L. Carin, “Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing,” SIAM J. Imag. Sci. 7, 797–823 (2014).
[Crossref]

SIAM Journal on Imaging Sciences (1)

A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences 2, 183–202 (2009).
[Crossref]

Other (22)

X. Yuan, J. Yang, P. Llull, X. Liao, G. Sapiro, D. J. Brady, and L. Carin, “Adaptive temporal compressive sensing for video,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2013), pp. 14–18.

A. Mousavi and R. G. Baraniuk, “Learning to invert: Signal recovery via deep convolutional networks,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2017), pp. 2272–2276.

Y. Pu, Z. Gan, R. Henao, C. Li, S. Han, and L. Carin, “VAE Learning via Stein Variational Gradient Descent,” in Proceedings of Advances in Neural Information Processing Systems (NIPS, 2017), pp. 1–10.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of International Conference on Learning Representations Workshop (ICLR, 2015), pp. 1–15.

X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS, 2010), pp. 249–256.

F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio, “Theano: new features and speed improvements,” in Proceedings of Advances in Neural Information Processing Systems Workshop (NIPS, 2012), pp. 1–10.

G. Griffin, A. D. Holub, and P. Perona, “The Caltech 256,” in Caltech Technical Report (2006).

X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2016), pp. 2539–2543.

Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3730–3738.

K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed random measurements,” in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 2016), pp. 449–458.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” in Proceedings of International Conference on Learning Representations (ICLR, 2016), pp. 1–16.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the International Conference on Machine Learning (ICML, 2015), pp. 448–456.

A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proceedings of the International Conference on Machine Learning (ICML, 2013), pp. 1–6.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” in Proceedings of International Conference on Learning Representations Workshop (ICLR, 2015), pp. 1–15.

Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin, “Variational autoencoder for deep learning of images, labels and captions,” in Proceedings of Advances in Neural Information Processing Systems (NIPS2016), pp. 2352–2360.

M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry (MIT Press, 1969).

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).

D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 2011), pp. 329–336.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

X. Yuan, P. Llull, X. Liao, J. Yang, G. Sapiro, D. J. Brady, and L. Carin, “Low-cost compressive sensing for color video and depth,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3318–3325.

G. Huang, H. Jiang, K. Matthews, and P. Wilford, “Lensless imaging by compressive sensing,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2013), pp. 2101–2105.

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (8)

Fig. 1 Demonstration of the lensless camera: (a) single-sensor lensless compressive camera [4]. A ray (black line) starts from a point in the scene, passes through the point (x, y) on the aperture assembly, and ends at the sensor. (b) Proposed parallel (block-wise) lensless camera and its components (below). Four sensors are shown in this example; each sensor captures a fraction of the scene, and these fractions can overlap. The image is reconstructed by first performing block-wise inversion (reconstruction) and then stitching the blocks together. Every part can be built from off-the-shelf components.
Fig. 2 Photo of our prototype. From left to right: (a) transparent LCD, (b) isolation chamber, and (c) sensor board.
Fig. 3 (a) Demonstration of the cross-talk issue introduced by the configuration in Fig. 1(b). Two adjacent sensors {S1, S2} are used, and two corresponding rays, {R1,1, R1,2} and {R2,1, R2,2}, are plotted for each sensor. When the sensors are not close to the aperture, there is cross-talk between adjacent sensors (the red region). (b) Mitigating the cross-talk by placing the sensors next to each other: rays for adjacent sensors no longer overlap, so the cross-talk is mitigated. When the scene is infinitely far away, the sensing gap between adjacent sensors is negligible. (c–e) Cross-sectional views of different configurations of the “Concentration-Sensor Regime”, where the aperture assembly can be a plane (c–d) or a spherical surface (e). The sensors can be mounted on a plane (c) or a sphere (d–e). (f–h) Sensor layout of the “Concentration-Sensor Regime” in (e): each sensor covers a hexagon-like area (f), the sensor array forms a sphere (g), and the isolation chamber becomes a “trumpet” shape (h).
Fig. 4 Architecture of the deep CNN used in our imaging system (a). The input is the measurement $y \in \mathbb{R}^m$; for example, if the block size is 16 × 16 and CSr = 0.1, then m = round(0.1 × 16 × 16) = 26. The “output” row at the bottom of (a) lists the size of the data at each step; the final output on the right is the reconstructed image block of size 16 × 16. “Deconv.” denotes the deconvolutional (a.k.a. transposed convolutional) layers [33]. (b–d) The “Deconvolution + ReLU” units, where ×3 means the network in the dashed box is stacked three times; “BN” denotes batch normalization, and a layer without a “stride” label uses stride 1. (A minimal code sketch of this decoder is given after the figure list.)
Fig. 5 Simulation results of the proposed parallel lensless compressive imaging system using the “CelebA” face dataset, where 1000 face images (different from the training images) are resized to a spatial size of 64×64 and further divided into 16×16 blocks for parallel sampling. (a) 64 selected exemplar truth images, (b) reconstructed images using GMM at CSr = 0.05, (c) reconstructed images using GMM at CSr = 0.1, (d) PSNR curves of the reconstructed images compared with the ground truth using GMM and CNN at various CSr, (e) reconstructed images using CNN at CSr = 0.05, (f) reconstructed images using CNN at CSr = 0.1.
Fig. 6 Experimental results on the digit dataset using the CNN, with measurements taken by the 16-sensor prototype. (a) Truth, (b) CSr = 0.01, (c) CSr = 0.02, and (d) CSr = 0.05.
Fig. 7 Experimental results of our parallel lensless compressive imaging system sampling the “face easy” category of the “Caltech256” dataset, where 435 face images are used for testing. (a) 64 selected exemplar truth images, (b) reconstructed images using GMM at CSr = 0.1, (c) reconstructed images using GMM at CSr = 0.3, (d) PSNR curves of the reconstructed images compared with the (misaligned) ground truth using GMM and CNN at various compressive sensing ratios, (e) reconstructed images using CNN at CSr = 0.1, (f) reconstructed images using CNN at CSr = 0.3.
Fig. 8 Experimental results on the digit dataset reconstructed by CNNs trained on different datasets. (a) PSNR of the reconstructed digits compared with the ground truth using a CNN trained on digit data (different from the testing data; red line) and a CNN trained on face data (blue line). (b) Reconstructed digits using the CNN trained on digit data at CSr = 0.09. (c) Reconstructed digits using the CNN trained on face data at CSr = 0.09.
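Reading the Fig. 4 caption literally (an m-dimensional measurement mapped by a fully connected layer and three “Deconvolution + ReLU” units, with batch normalization, to a 16×16 block), a plausible decoder can be sketched as follows. This is a minimal PyTorch sketch under our own assumptions: the paper's implementation used Theano, and the channel counts, kernel sizes, and strides here are illustrative guesses; only the m-dimensional input, the fully connected layer, the three deconvolutional units with BN + ReLU, the Xavier initialization of Eq. (13), and the 16×16 output block are taken from the paper.

```python
import torch
import torch.nn as nn

class BlockDecoder(nn.Module):
    """Map an m-dimensional measurement y to a reconstructed 16x16 image block."""
    def __init__(self, m):
        super().__init__()
        self.fc = nn.Linear(m, 64 * 4 * 4)  # fully connected layer
        self.deconv = nn.Sequential(
            # three "Deconvolution + ReLU" units with batch normalization
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=1, padding=1),   # 16x16 -> 16x16
        )
        for p in self.parameters():  # Xavier uniform initialization, Eq. (13)
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, y):
        h = self.fc(y).view(-1, 64, 4, 4)
        return self.deconv(h)

# e.g. block size 16x16 at CSr = 0.1  ->  m = round(0.1 * 256) = 26
net = BlockDecoder(m=26)
x_hat = net(torch.randn(8, 26))  # 8 measurements -> (8, 1, 16, 16) blocks
```

Training such a decoder would pair it with the loss $E = \|X_{\mathrm{train}} - A^{\mathrm{inv}}_{\mathrm{CNN}}(Y_{\mathrm{train}})\|$ of Algorithm 1 below.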

Tables (1)

Algorithm 1 Train the CNN to learn $A^{\mathrm{inv}}_{\mathrm{CNN}}$ via Adam [50]. $A^{\mathrm{inv}}_{\mathrm{CNN}}$ is a function of the network weights, i.e., $A^{\mathrm{inv}}_{\mathrm{CNN}} = f(\{w^{(i)}\}_{i=0}^{3})$, where $w^{(0)}$ denotes the weights of the fully connected layer and $\{w^{(i)}\}_{i=1}^{3}$ denote the weights of the $i$th deconvolutional unit in Fig. 4. $E = \|X_{\mathrm{train}} - A^{\mathrm{inv}}_{\mathrm{CNN}}(Y_{\mathrm{train}})\|$ is the loss and $\alpha$ is the learning rate. The hyperparameters are set to $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\eta = 10^{-12}$; $\beta_1^t$ and $\beta_2^t$ denote the $t$th powers of $\beta_1$ and $\beta_2$, and $\odot$ denotes the Hadamard (element-wise) product.
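To make the update rule referenced in Algorithm 1 concrete, here is a minimal NumPy sketch of one standard Adam step using the caption's hyperparameters. The learning rate `alpha`, the gradient `grad`, and the moment buffers are placeholders supplied by the caller; $\eta$ plays the role of the small denominator stabilizer (commonly written $\epsilon$).

```python
import numpy as np

def adam_step(w, grad, m, v, t, alpha=1e-3,
              beta1=0.9, beta2=0.999, eta=1e-12):
    """One Adam update for a single weight array.

    m, v are the running first/second moment estimates;
    t is the (1-indexed) iteration counter.
    """
    m = beta1 * m + (1 - beta1) * grad              # first moment estimate
    v = beta2 * v + (1 - beta2) * (grad * grad)     # second moment (Hadamard product)
    m_hat = m / (1 - beta1 ** t)                    # bias correction with beta1^t
    v_hat = v / (1 - beta2 ** t)                    # bias correction with beta2^t
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eta)  # parameter update
    return w, m, v
```

The element-wise square `grad * grad` is the Hadamard product $\odot$ from the caption, and the $1/(1-\beta^t)$ factors are the bias corrections involving the $t$th powers of $\beta_1$ and $\beta_2$.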

Equations (14)


$$I(x, y) = \int_0^{\Delta t} r(x, y; t)\, dt. \tag{1}$$
$$I(i, j) = \int_{(i-1)\Delta x}^{i \Delta x} \int_{(j-1)\Delta y}^{j \Delta y} I(x, y)\, T(x, y)\, dx\, dy. \tag{2}$$
$$Y = AX + \epsilon, \tag{3}$$
$$X = DS, \tag{4}$$
$$Y = ADS + \epsilon, \tag{5}$$
$$\hat{S} = \arg\min_{S} \|Y - ADS\|_F^2 + \tau \|S\|_1, \tag{6}$$
$$x_i \sim \sum_{k=1}^{K} \pi_k\, \mathcal{N}(\mu_k, \Sigma_k), \tag{7}$$
$$p(x \mid y) = \sum_{k=1}^{K} \tilde{\pi}_k\, \mathcal{N}(x \mid \tilde{\mu}_k, \tilde{\Sigma}_k), \tag{8}$$
$$\tilde{\pi}_k = \frac{\pi_k\, \mathcal{N}(y \mid A \mu_k,\, R^{-1} + A \Sigma_k A^T)}{\sum_{l=1}^{K} \pi_l\, \mathcal{N}(y \mid A \mu_l,\, R^{-1} + A \Sigma_l A^T)}, \tag{9}$$
$$\tilde{\Sigma}_k = \left(A^T R A + \Sigma_k^{-1}\right)^{-1}, \tag{10}$$
$$\tilde{\mu}_k = \tilde{\Sigma}_k \left(A^T R y + \Sigma_k^{-1} \mu_k\right). \tag{11}$$
$$\mathbb{E}[\hat{x}] = \sum_{k=1}^{K} \tilde{\pi}_k\, \tilde{\mu}_k, \tag{12}$$
$$w \sim \mathrm{Uniform}\left(-\sqrt{\frac{6}{n_{in} + n_{out}}},\ \sqrt{\frac{6}{n_{in} + n_{out}}}\right), \tag{13}$$
$$\mathrm{CSr} = m/n, \tag{14}$$
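As a concrete reading of Eqs. (7)–(12), the following NumPy sketch computes the analytic GMM posterior mean $\mathbb{E}[\hat{x}]$ of an image block from its compressive measurement $y$. It is a self-contained toy: the sensing matrix $A$, noise precision $R$, and mixture parameters $\{\pi_k, \mu_k, \Sigma_k\}$ are synthetic placeholders here, whereas in the GMM baseline of the paper they would be learned from training patches.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_posterior_mean(y, A, R, pi, mu, Sigma):
    """Posterior mean E[x|y] under a GMM prior, Eqs. (7)-(12).

    y: (m,) measurement, A: (m, n) sensing matrix, R: (m, m) noise precision,
    pi: (K,), mu: (K, n), Sigma: (K, n, n) mixture parameters.
    """
    K = len(pi)
    Rinv = np.linalg.inv(R)
    log_w = np.empty(K)
    mu_t = np.empty_like(mu)
    for k in range(K):
        # component-k evidence: N(y | A mu_k, R^{-1} + A Sigma_k A^T), Eq. (9)
        cov_k = Rinv + A @ Sigma[k] @ A.T
        log_w[k] = np.log(pi[k]) + multivariate_normal.logpdf(y, A @ mu[k], cov_k)
        # component-k posterior covariance and mean, Eqs. (10)-(11)
        Sig_t = np.linalg.inv(A.T @ R @ A + np.linalg.inv(Sigma[k]))
        mu_t[k] = Sig_t @ (A.T @ R @ y + np.linalg.solve(Sigma[k], mu[k]))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()       # normalized posterior weights, Eq. (9)
    return w @ mu_t    # posterior mean, Eq. (12)

# synthetic example: n = 256 (16x16 block), CSr = 0.1 -> m = 26, K = 3 components
rng = np.random.default_rng(0)
n, m, K = 256, 26, 3
A = rng.standard_normal((m, n))
x_hat = gmm_posterior_mean(rng.standard_normal(m), A, np.eye(m),
                           np.ones(K) / K, rng.standard_normal((K, n)),
                           np.stack([np.eye(n)] * K))
```

The log-sum-exp normalization of the weights avoids underflow when the Gaussian evidences are tiny, which is routine for high-dimensional blocks.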
