Abstract

We review the impact of deep-learning technologies on camera architecture. The function of a camera is first to capture visual information and second to form an image. Conventionally, both functions are implemented in physical optics. Throughout the digital age, however, joint design of physical sampling and electronic processing, e.g., computational imaging, has been increasingly applied to improve these functions. Over the past five years, deep learning has radically improved the capacity of computational imaging. Here we briefly review the development of artificial neural networks and their recent intersection with computational imaging. We then consider in more detail how deep learning impacts the primary strategies of computational photography: focal plane modulation, lens design, and robotic control. With focal plane modulation, we show that deep learning improves signal inference to enable faster hyperspectral, polarization, and video capture while reducing the power per pixel by $10 {-} 100\times$. With lens design, deep learning improves multiple aperture image fusion to enable task-specific array cameras. With control, deep learning enables dynamic scene-specific control that may ultimately enable cameras that capture the entire optical data cube (the “light field”), rather than just a focal slice. Finally, we discuss how these three strategies impact the physical camera design as we seek to balance physical compactness and simplicity, information capacity, computational complexity, and visual fidelity.

© 2020 Optical Society of America

Full Article  |  PDF Article
More Like This
Off-axis digital holographic multiplexing for rapid wavefront acquisition and processing

Natan T. Shaked, Vicente Micó, Maciej Trusiak, Arkadiusz Kuś, and Simcha K. Mirsky
Adv. Opt. Photon. 12(3) 556-611 (2020)

Computational imaging

Joseph N. Mait, Gary W. Euliss, and Ravindra A. Athale
Adv. Opt. Photon. 10(2) 409-483 (2018)

Parallel cameras

David J. Brady, Wubin Pang, Han Li, Zhan Ma, Yue Tao, and Xun Cao
Optica 5(2) 127-137 (2018)

References

  • View by:
  • |
  • |
  • |

  1. E. R. Fossum, “CMOS image sensors: electronic camera-on-a-chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).
    [Crossref]
  2. R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
    [Crossref]
  3. P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
    [Crossref]
  4. H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
    [Crossref]
  5. F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
    [Crossref]
  6. M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006).
    [Crossref]
  7. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.
  8. M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
    [Crossref]
  9. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photon. 10, 409–483 (2018).
    [Crossref]
  10. W. T. Cathey, B. R. Frieden, W. T. Rhodes, and C. K. Rushforth, “Image gathering and processing for enhanced resolution,” J. Opt. Soc. Am. A 1, 241–250 (1984).
    [Crossref]
  11. S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
    [Crossref]
  12. J. Von Neumann, “First draft of a report on the EDVAC,” IEEE Ann. Hist. Comput. 15, 27–75 (1993).
    [Crossref]
  13. W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys. 5, 115–133 (1943).
    [Crossref]
  14. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
    [Crossref]
  15. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
  16. B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).
  17. J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
    [Crossref]
  18. H. A. Pierson and M. S. Gashler, “Deep learning in robotics: a review of recent research,” Adv. Robot. 31, 821–835 (2017).
    [Crossref]
  19. P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015).
  20. F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychol. Rev. 65, 386–408 (1958).
    [Crossref]
  21. M. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry (MIT, 2017).
  22. G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Math. Control Signals Syst. 2, 303–314 (1989).
    [Crossref]
  23. P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).
  24. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
    [Crossref]
  25. Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.
  26. Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in IEEE International Symposium on Circuits and Systems (IEEE, 2010), pp. 253–256.
  27. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
    [Crossref]
  28. G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
    [Crossref]
  29. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).
  30. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).
  31. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.
  32. G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: ultra-deep neural networks without residuals,” arXiv:1605.07648 (2016).
  33. M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).
  34. J. Misra and I. Saha, “Artificial neural networks in hardware: a survey of two decades of progress,” Neurocomputing 74, 239–255 (2010).
    [Crossref]
  35. D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
    [Crossref]
  36. J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
    [Crossref]
  37. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
    [Crossref]
  38. K.-S. Oh and K. Jung, “GPU implementation of neural networks,” Pattern Recogn. 37, 1311–1314 (2004).
    [Crossref]
  39. D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.
  40. M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.
  41. J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).
  42. P. Dempsey, “The teardown: Huawei mate 10 pro,” Eng. Technol. 13, 80–81 (2018).
    [Crossref]
  43. M. Buckler, S. Jayasuriya, and A. Sampson, “Reconfiguring the imaging pipeline for computer vision,” in IEEE International Conference on Computer Vision (ICCV) (2017).
  44. C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.
  45. P. Pirsch and H.-J. Stolberg, “VLSI implementations of image and video multimedia processing systems,” IEEE Trans. Circuits Syst. Video Technol. 8, 878–891 (1998).
    [Crossref]
  46. “Ambarella CV22 product brief,” 2018, https://www.ambarella.com/wp-content/uploads/CV22-product-brief-consumer.pdf .
  47. A. Gulli and S. Pal, Deep Learning with Keras (Packt, 2017).
  48. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.
  49. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.
  50. A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.
  51. “Kodak photocd dataset,” http://r0k.us/graphics/kodak/ .
  52. L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
    [Crossref]
  53. J. Mait, “A history of imaging: revisiting the past to chart the future,” Opt. Photon. News 17(2), 22–27 (2006).
    [Crossref]
  54. R. Raskar and J. Tumblin, Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors (AK Peters, 2009).
  55. A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
    [Crossref]
  56. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
    [Crossref]
  57. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).
    [Crossref]
  58. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
    [Crossref]
  59. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
    [Crossref]
  60. M. Harwit, Hadamard Transform Optics (Elsevier, 2012).
  61. M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
    [Crossref]
  62. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).
  63. A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
    [Crossref]
  64. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
    [Crossref]
  65. E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
    [Crossref]
  66. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Process Mag. 25(2), 21–30 (2008).
    [Crossref]
  67. Y. Sun, X. Yuan, and S. Pang, “High-speed compressive range imaging based on active illumination,” Opt. Express 24, 22836–22846 (2016).
    [Crossref]
  68. B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (20July1976).
  69. X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.
  70. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
    [Crossref]
  71. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
    [Crossref]
  72. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.
  73. D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: programmable pixel compressive camera for high speed imaging,” in Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2011), pp. 329–336.
  74. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
    [Crossref]
  75. T.-H. Tsai, X. Yuan, and D. J. Brady, “Spatial light modulator based color polarization imaging,” Opt. Express 23, 11912–11926 (2015).
    [Crossref]
  76. S. K. Nayar and T. Mitsunaga, “High dynamic range imaging: spatially varying pixel exposures,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2000), vol. 1, pp. 472–479.
  77. P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
    [Crossref]
  78. T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40, 4054–4057 (2015).
    [Crossref]
  79. M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
    [Crossref]
  80. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
    [Crossref]
  81. O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP) (2010), pp. 1–8.
  82. D. J. Brady and D. L. Marks, “Coding for compressive focal tomography,” Appl. Opt. 50, 4436–4449 (2011).
    [Crossref]
  83. D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
    [Crossref]
  84. Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
    [Crossref]
  85. M. Aittala and F. Durand, “Burst image deblurring using permutation invariant convolutional neural networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 731–747.
  86. D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
    [Crossref]
  87. R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express 12, 1643–1655 (2004).
    [Crossref]
  88. M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
    [Crossref]
  89. P. Sen, “Overview of state-of-the-art algorithms for stack-based high-dynamic range (hdr) imaging,” Electron. Imaging 2018, 311 (2018).
    [Crossref]
  90. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).
    [Crossref]
  91. M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. 42, 3379–3389 (2003).
    [Crossref]
  92. J. C. Kim and J. W. Chong, “Facial contour correcting method and device,” U.S.patent app.16/304,337 (4July2019).
  93. A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.
  94. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.
  95. Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018).
  96. E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
    [Crossref]
  97. C. Mead, “Neuromorphic electronic systems,” Proc. IEEE 78, 1629–1636 (1990).
    [Crossref]
  98. C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
    [Crossref]
  99. S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.
  100. D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
    [Crossref]
  101. C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.
  102. B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
    [Crossref]
  103. X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
    [Crossref]
  104. D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Process. Image Commun. 26, 518–533 (2011).
    [Crossref]
  105. N.-S. Syu, Y.-S. Chen, and Y.-Y. Chuang, “Learning deep convolutional networks for demosaicing,” arXiv:1802.03769 (2018).
  106. O. Kapah and H. Z. Hel-Or, “Demosaicking using artificial neural networks,” Proc. SPIE 3962, 112–120 (2000).
    [Crossref]
  107. Y.-Q. Wang, “A multilayer neural network for image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2014), pp. 1852–1856.
  108. F.-L. He, Y.-C. F. Wang, and K.-L. Hua, “Self-learning approach to color demosaicking via support vector regression,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2765–2768.
  109. J. Sun and M. F. Tappen, “Separable Markov random field model and its applications in low level vision,” IEEE Trans. Image Process. 22, 402–407 (2012).
    [Crossref]
  110. R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.
  111. K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14, 360–369 (2005).
    [Crossref]
  112. I. Pekkucuksen and Y. Altunbasak, “Gradient based threshold free color filter array interpolation,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 137–140.
  113. L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14, 2167–2178 (2005).
    [Crossref]
  114. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
    [Crossref]
  115. Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.
  116. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
    [Crossref]
  117. S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
    [Crossref]
  118. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
    [Crossref]
  119. W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
    [Crossref]
  120. X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.
  121. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2 (IEEE, 2005), pp. 60–65.
  122. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
    [Crossref]
  123. A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
    [Crossref]
  124. J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.
  125. U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 2774–2781.
  126. Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 1256–1272 (2016).
    [Crossref]
  127. U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
    [Crossref]
  128. V. Jain and S. Seung, “Natural image denoising with convolutional networks,” in Advances in Neural Information Processing Systems (2009), pp. 769–776.
  129. H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.
  130. J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 341–349.
  131. B. Ahn and N. I. Cho, “Block-matching convolutional neural network for image denoising,” arXiv:1704.00524 (2017).
  132. K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
    [Crossref]
  133. S. Lefkimmiatis, “Universal denoising networks: a novel CNN architecture for image denoising,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3204–3213.
  134. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.
  135. C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
    [Crossref]
  136. S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.
  137. A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.
  138. K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Trans. Image Process. 15, 2146–2157 (2006).
    [Crossref]
  139. L. Condat and S. Mosaddegh, “Joint demosaicking and denoising by total variation minimization,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2781–2784.
  140. T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.
  141. M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
    [Crossref]
  142. F. Kokkinos and S. Lefkimmiatis, “Deep image demosaicking using a cascade of convolutional residual denoising networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 303–319.
  143. F. Kokkinos and S. Lefkimmiatis, “Iterative residual network for deep joint image demosaicking and denoising,” arXiv:1807.06403 (2018).
  144. W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).
  145. K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
    [Crossref]
  146. C.-C. Weng, H. Chen, and C.-S. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE International Symposium on Circuits and Systems (IEEE, 2005), pp. 3801–3804.
  147. E. Y. Lam and G. S. Fung, “Automatic white balancing in digital photography,” in Single-Sensor Imaging (CRC Press, 2018), pp. 287–314.
  148. J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
    [Crossref]
  149. E. H. Land, “The retinex theory of color vision,” Sci. Am. 237, 108–129 (1977).
    [Crossref]
  150. E. Y. Lam, “Combining gray world and retinex theory for automatic white balance in digital photography,” in 9th International Symposium on Consumer Electronics (ISCE) (IEEE, 2005), pp. 134–139.
  151. Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
    [Crossref]
  152. S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.
  153. S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.
  154. V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
    [Crossref]
  155. S. Bianco, C. Cusano, and R. Schettini, “Color constancy using CNNS,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2015), pp. 81–89.
  156. W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (Springer, 2016), pp. 371–387.
  157. Y. Hu, B. Wang, and S. Lin, “FC4: fully convolutional color constancy with confidence-weighted pooling,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4085–4094.
  158. S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
    [Crossref]
  159. M. Afifi, “Semantic white balance: semantic color constancy using convolutional neural network,” arXiv:1802.00153 (2018).
  160. S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
    [Crossref]
  161. J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
    [Crossref]
  162. B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
    [Crossref]
  163. G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. Graph. 30, 12 (2011).
    [Crossref]
  164. D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 349–356.
  165. J. Yang, Z. Lin, and S. Cohen, “Fast image super-resolution based on in-place example regression,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1059–1066.
  166. C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.
  167. J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
  168. J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
    [Crossref]
  169. J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.
  170. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).
  171. Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.
  172. X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).
  173. Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.
  174. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.
  175. J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1637–1645.
  176. Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3147–3155.
  177. M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1664–1673.
  178. T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.
  179. X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.
  180. K. Zhang, L. Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 3214–3223.
  181. C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.
  182. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.
  183. K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1671–1681.
  184. K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).
  185. W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).
  186. D. J. Brady, A. Mrozack, K. MacCabe, and P. Llull, “Compressive tomography,” Adv. Opt. Photon. 7, 756–813 (2015).
    [Crossref]
  187. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
    [Crossref]
  188. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23, 15992–16007 (2015).
    [Crossref]
  189. M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45, 1659–1662 (2020).
    [Crossref]
  190. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
    [Crossref]
  191. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
    [Crossref]
  192. X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.
  193. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
    [Crossref]
  194. N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
    [Crossref]
  195. A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
    [Crossref]
  196. J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
    [Crossref]
  197. X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
    [Crossref]
  198. J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).
  199. O. Rippel and L. Bourdev, “Real-time adaptive image compression,” in International Conference on Machine Learning (ICML, 2017), pp. 2922–2930.
  200. F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).
  201. M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.
  202. H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.
  203. G. J. Sullivan and T. Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Process Mag. 15(6), 74–90 (1998).
    [Crossref]
  204. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in 37th Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), Vol. 2, pp. 1398–1402.
  205. H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).
  206. H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).
  207. T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.
  208. G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.
  209. H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).
  210. Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.
  211. X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).
  212. J. Ojeda-Castañeda and C. M. Gómez-Sarabia, “Tuning field depth at high resolution by pupil engineering,” Adv. Opt. Photon. 7, 814–880 (2015).
    [Crossref]
  213. J. Ojeda-Castaneda, R. Ramos, and A. Noyola-Isgleas, “High focal depth by apodization and digital restoration,” Appl. Opt. 27, 2583–2586 (1988).
    [Crossref]
  214. S. Bradburn, W. T. Cathey, and E. R. Dowski, “Realizations of focus invariance in optical–digital systems with wave-front coding,” Appl. Opt. 36, 9157–9166 (1997).
    [Crossref]
  215. W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875–877 (2001).
    [Crossref]
  216. A. Greengard, Y. Y. Schechner, and R. Piestun, “Depth from diffracted rotation,” Opt. Lett. 31, 181–183 (2006).
    [Crossref]
  217. S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
    [Crossref]
  218. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
    [Crossref]
  219. G. Muyo and A. R. Harvey, “Decomposition of the optical transfer function: wavefront coding imaging systems,” Opt. Lett. 30, 2715–2717 (2005).
    [Crossref]
  220. H. B. Wach, E. R. Dowski, and W. T. Cathey, “Control of chromatic focal shift through wave-front coding,” Appl. Opt. 37, 5359–5367 (1998).
    [Crossref]
  221. R. Cicala, “Sony FE 135mm f1.8 GM early MTF results,” 2019, https://www.lensrentals.com/blog/2019/03/sony-fe-135mm-f1-8-gm-early-mtf-results/ .
  222. Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.
  223. G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).
  224. E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
    [Crossref]
  225. D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
    [Crossref]
  226. R. Gutierrez, E. Fossum, and T. Tang, “Auto-focus technology,” in International Image Sensor Workshop (2007), pp. 20–25.
  227. W. Pang and D. J. Brady, “Distributed focus and digital zoom,” Eng. Res. Express 2, 035019 (2020).
    [Crossref]
  228. P. M. Shankar, W. C. Hasenplaugh, R. L. Morrison, R. A. Stack, and M. A. Neifeld, “Multiaperture imaging,” Appl. Opt. 45, 2871–2883 (2006).
    [Crossref]
  229. V. R. Bhakta, M. Somayaji, S. C. Douglas, and M. P. Christensen, “Experimentally validated computational imaging with adaptive multiaperture folded architecture,” Appl. Opt. 49, B51–B58 (2010).
    [Crossref]
  230. R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
    [Crossref]
  231. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
    [Crossref]
  232. D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).
  233. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
    [Crossref]
  234. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
    [Crossref]
  235. K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
    [Crossref]
  236. M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
    [Crossref]
  237. G. Druart, N. Guérineau, R. Hadar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48, 3368–3374 (2009).
    [Crossref]
  238. A. Portnoy, N. Pitsianis, X. Sun, D. Brady, R. Gibbons, A. Silver, R. Te Kolste, C. Chen, T. Dillon, and D. Prather, “Design and characterization of thin multiple aperture infrared cameras,” Appl. Opt. 48, 2115–2126 (2009).
    [Crossref]
  239. B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
    [Crossref]
  240. O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8.
  241. D. G. Lowe, “Object recognition from local scale-invariant features,” in 7th IEEE International Conference on Computer Vision (1999), Vol. 2, pp. 1150–1157.
  242. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer,2006), pp. 404–417.
  243. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.
  244. E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
    [Crossref]
  245. B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
    [Crossref]
  246. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in 7th International Joint Conference on Artificial Intelligence (IJCAI) (Morgan Kaufmann, 1981), Vol. 2, pp. 674–679.
  247. M. J. Black and P. Anandan, “The robust estimation of multiple motions: parametric and piecewise-smooth flow fields,” Comput. Vision Image Understanding 63, 75–104 (1996).
    [Crossref]
  248. T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2010).
    [Crossref]
  249. A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.
  250. E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.
  251. A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4161–4170.
  252. D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.
  253. K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” Proc. SPIE 4387, 95–102 (2001).
    [Crossref]
  254. R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (Springer, 2016), pp. 649–666.
  255. D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (2014), pp. 2366–2374.
  256. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.
  257. L. A. Teodosio and M. Mills, “Panoramic overviews for navigating real-world scenes,” in 1st ACM International Conference on Multimedia (1993), pp. 359–364.
  258. S. Mann and R. W. Picard, “Virtual bellows: constructing high quality stills from video,” in 1st International Conference on Image Processing (1994), Vol. 1, pp. 363–367.
  259. R. Szeliski, “Image mosaicing for tele-reality applications,” in IEEE Workshop on Applications of Computer Vision (1994), pp. 44–53.
  260. D. Capel and A. Zisserman, “Automated mosaicing with super-resolution zoom,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1998), Vol. 98, pp. 885–891.
  261. M. Brown and D. G. Lowe, “Recognising panoramas,” in IEEE International Conference on Computer Vision (ICCV) (2003), Vol. 3, p. 1218.
  262. M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vision 74, 59–73 (2007).
    [Crossref]
  263. P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983).
    [Crossref]
  264. J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.
  265. C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.
  266. V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
    [Crossref]
  267. X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.
  268. J.-C. Yoo and T. H. Han, “Fast normalized cross-correlation,” Circuits Syst. Signal Process. 28, 819 (2009).
    [Crossref]
  269. J. Zbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1592–1599.
  270. S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4353–4361.
  271. P. Fischer, A. Dosovitskiy, and T. Brox, “Descriptor matching with convolutional neural networks: a comparison to sift,” arXiv:1405.5769 (2014).
  272. D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv:1606.03798 (2016).
  273. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.
  274. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
    [Crossref]
  275. V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
    [Crossref]
  276. H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).
  277. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.
  278. Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).
  279. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2005), Vol. 2, pp. 60–65.
  280. J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.
  281. H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.
  282. H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.
  283. P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.
  284. M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).
  285. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
    [Crossref]
  286. Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
    [Crossref]
  287. Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
    [Crossref]
  288. O. Baltag, “History of automatic focusing reflected by patents,” Sci. Innov. 3,1–17 (2015).
    [Crossref]
  289. P. Śliwiński and P. Wachel, “A simple model for on-sensor phase-detection autofocusing algorithm,” J. Comput. Commun. 1, 11–17 (2013).
    [Crossref]
  290. A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
    [Crossref]
  291. S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
    [Crossref]
  292. C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57, F44–F49 (2018).
    [Crossref]
  293. E. Krotkov, “Focusing,” Int. J. Comput. Vision 1, 223–237 (1988).
    [Crossref]
  294. N. Kehtarnavaz and H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging 9, 197–203 (2003).
    [Crossref]
  295. J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
    [Crossref]
  296. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16, 8670–8677 (2008).
    [Crossref]
  297. Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
    [Crossref]
  298. Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
    [Crossref]
  299. J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
    [Crossref]
  300. C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
    [Crossref]
  301. B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
    [Crossref]
  302. S. Jiang, J. Liao, Z. Bian, K. Guo, Y. Zhang, and G. Zheng, “Transform-and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging,” Biomed. Opt. Express 9, 1601–1612 (2018).
    [Crossref]
  303. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).
    [Crossref]
  304. L. Wei and E. Roberts, “Neural network control of focal position during time-lapse microscopy of cells,” Sci. Rep. 8, 7313 (2018).
    [Crossref]
  305. M. T. Rahman and N. Kehtarnavaz, “Real-time face-priority auto focus for digital and cell-phone cameras,” IEEE Trans. Consum. Electron. 54, 1506–1513 (2008).
    [Crossref]
  306. S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
    [Crossref]
  307. R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
    [Crossref]
  308. Z. Li and J. Tang, “Weakly supervised deep matrix factorization for social image understanding,” IEEE Trans. Image Process. 26, 276–288 (2016).
    [Crossref]
  309. P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.
  310. C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
    [Crossref]
  311. C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.
  312. M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).
  313. M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).
  314. H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.
  315. J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
    [Crossref]
  316. S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” arXiv:1610.03295 (2016).
  317. A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
    [Crossref]
  318. D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.
  319. K. Makantasis, M. Kontorinaki, and I. Nikolos, “A deep reinforcement learning driving policy for autonomous road vehicles,” arXiv:1905.09046 (2019).
  320. C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.
  321. M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
    [Crossref]
  322. D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
    [Crossref]
  323. “Bayer area scan color cameras compared to 3-CCD color cameras,” 2020, https://www.adimec.com/bayer-area-scan-color-cameras-compared-to-3-ccd-color-cameras-part-1/ .
  324. D. Güera and E. J. Delp, “Deepfake video detection using recurrent neural networks,” in 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (IEEE, 2018), pp. 1–6.

2020 (6)

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45, 1659–1662 (2020).
[Crossref]

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
[Crossref]

W. Pang and D. J. Brady, “Distributed focus and digital zoom,” Eng. Res. Express 2, 035019 (2020).
[Crossref]

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

2019 (4)

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).
[Crossref]

2018 (16)

E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
[Crossref]

P. Sen, “Overview of state-of-the-art algorithms for stack-based high-dynamic range (hdr) imaging,” Electron. Imaging 2018, 311 (2018).
[Crossref]

D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
[Crossref]

J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photon. 10, 409–483 (2018).
[Crossref]

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

P. Dempsey, “The teardown: Huawei mate 10 pro,” Eng. Technol. 13, 80–81 (2018).
[Crossref]

K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
[Crossref]

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
[Crossref]

C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57, F44–F49 (2018).
[Crossref]

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

S. Jiang, J. Liao, Z. Bian, K. Guo, Y. Zhang, and G. Zheng, “Transform-and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging,” Biomed. Opt. Express 9, 1601–1612 (2018).
[Crossref]

Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).
[Crossref]

L. Wei and E. Roberts, “Neural network control of focal position during time-lapse microscopy of cells,” Sci. Rep. 8, 7313 (2018).
[Crossref]

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
[Crossref]

2017 (10)

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
[Crossref]

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
[Crossref]

H. A. Pierson and M. S. Gashler, “Deep learning in robotics: a review of recent research,” Adv. Robot. 31, 821–835 (2017).
[Crossref]

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

2016 (8)

Y. Sun, X. Yuan, and S. Pang, “High-speed compressive range imaging based on active illumination,” Opt. Express 24, 22836–22846 (2016).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 1256–1272 (2016).
[Crossref]

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Z. Li and J. Tang, “Weakly supervised deep matrix factorization for social image understanding,” IEEE Trans. Image Process. 26, 276–288 (2016).
[Crossref]

2015 (9)

2014 (6)

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

2013 (4)

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

P. Śliwiński and P. Wachel, “A simple model for on-sensor phase-detection autofocusing algorithm,” J. Comput. Commun. 1, 11–17 (2013).
[Crossref]

J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
[Crossref]

2012 (8)

D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
[Crossref]

Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
[Crossref]

J. Sun and M. F. Tappen, “Separable Markov random field model and its applications in low level vision,” IEEE Trans. Image Process. 22, 402–407 (2012).
[Crossref]

M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
[Crossref]

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

2011 (8)

G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. Graph. 30, 12 (2011).
[Crossref]

G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Process. Image Commun. 26, 518–533 (2011).
[Crossref]

D. J. Brady and D. L. Marks, “Coding for compressive focal tomography,” Appl. Opt. 50, 4436–4449 (2011).
[Crossref]

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
[Crossref]

S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
[Crossref]

2010 (8)

C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
[Crossref]

M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
[Crossref]

J. Misra and I. Saha, “Artificial neural networks in hardware: a survey of two decades of progress,” Neurocomputing 74, 239–255 (2010).
[Crossref]

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

V. R. Bhakta, M. Somayaji, S. C. Douglas, and M. P. Christensen, “Experimentally validated computational imaging with adaptive multiaperture folded architecture,” Appl. Opt. 49, B51–B58 (2010).
[Crossref]

E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
[Crossref]

T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2010).
[Crossref]

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

2009 (4)

2008 (9)

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16, 8670–8677 (2008).
[Crossref]

M. T. Rahman and N. Kehtarnavaz, “Real-time face-priority auto focus for digital and cell-phone cameras,” IEEE Trans. Consum. Electron. 54, 1506–1513 (2008).
[Crossref]

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Process Mag. 25(2), 21–30 (2008).
[Crossref]

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
[Crossref]

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
[Crossref]

2007 (4)

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
[Crossref]

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vision 74, 59–73 (2007).
[Crossref]

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

2006 (14)

A. Greengard, Y. Y. Schechner, and R. Piestun, “Depth from diffracted rotation,” Opt. Lett. 31, 181–183 (2006).
[Crossref]

P. M. Shankar, W. C. Hasenplaugh, R. L. Morrison, R. A. Stack, and M. A. Neifeld, “Multiaperture imaging,” Appl. Opt. 45, 2871–2883 (2006).
[Crossref]

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[Crossref]

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Trans. Image Process. 15, 2146–2157 (2006).
[Crossref]

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[Crossref]

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
[Crossref]

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
[Crossref]

J. Mait, “A history of imaging: revisiting the past to chart the future,” Opt. Photon. News 17(2), 22–27 (2006).
[Crossref]

M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006).
[Crossref]

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref]

G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
[Crossref]

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

2005 (9)

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[Crossref]

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14, 360–369 (2005).
[Crossref]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14, 2167–2178 (2005).
[Crossref]

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

G. Muyo and A. R. Harvey, “Decomposition of the optical transfer function: wavefront coding imaging systems,” Opt. Lett. 30, 2715–2717 (2005).
[Crossref]

2004 (2)

2003 (5)

M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. 42, 3379–3389 (2003).
[Crossref]

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

N. Kehtarnavaz and H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging 9, 197–203 (2003).
[Crossref]

J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
[Crossref]

2002 (1)

K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
[Crossref]

2001 (5)

A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
[Crossref]

K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” Proc. SPIE 4387, 95–102 (2001).
[Crossref]

W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875–877 (2001).
[Crossref]

J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
[Crossref]

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

2000 (1)

O. Kapah and H. Z. Hel-Or, “Demosaicking using artificial neural networks,” Proc. SPIE 3962, 112–120 (2000).
[Crossref]

1998 (3)

P. Pirsch and H.-J. Stolberg, “VLSI implementations of image and video multimedia processing systems,” IEEE Trans. Circuits Syst. Video Technol. 8, 878–891 (1998).
[Crossref]

H. B. Wach, E. R. Dowski, and W. T. Cathey, “Control of chromatic focal shift through wave-front coding,” Appl. Opt. 37, 5359–5367 (1998).
[Crossref]

G. J. Sullivan and T. Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Process Mag. 15(6), 74–90 (1998).
[Crossref]

1997 (3)

S. Bradburn, W. T. Cathey, and E. R. Dowski, “Realizations of focus invariance in optical–digital systems with wave-front coding,” Appl. Opt. 36, 9157–9166 (1997).
[Crossref]

E. R. Fossum, “CMOS image sensors: electronic camera-on-a-chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).
[Crossref]

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

1996 (1)

M. J. Black and P. Anandan, “The robust estimation of multiple motions: parametric and piecewise-smooth flow fields,” Comput. Vision Image Understanding 63, 75–104 (1996).
[Crossref]

1995 (3)

Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
[Crossref]

P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
[Crossref]

E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
[Crossref]

1993 (1)

J. Von Neumann, “First draft of a report on the EDVAC,” IEEE Ann. Hist. Comput. 15, 27–75 (1993).
[Crossref]

1992 (1)

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

1990 (2)

D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
[Crossref]

C. Mead, “Neuromorphic electronic systems,” Proc. IEEE 78, 1629–1636 (1990).
[Crossref]

1989 (1)

G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Math. Control Signals Syst. 2, 303–314 (1989).
[Crossref]

1988 (2)

1986 (1)

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
[Crossref]

1984 (1)

1983 (2)

J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
[Crossref]

P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983).
[Crossref]

1981 (1)

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
[Crossref]

1977 (2)

E. H. Land, “The retinex theory of color vision,” Sci. Am. 237, 108–129 (1977).
[Crossref]

A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
[Crossref]

1974 (2)

L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
[Crossref]

N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
[Crossref]

1972 (1)

1958 (1)

F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychol. Rev. 65, 386–408 (1958).
[Crossref]

1943 (1)

W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys. 5, 115–133 (1943).
[Crossref]

Abadi, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Abdou, M.

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

Abdullah-Al-Wadud, M.

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

Abidi, B.

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

Abidi, M.

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

Abidi, M. A.

V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
[Crossref]

Acosta, A.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Adams, A.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Adelson, E. H.

P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983).
[Crossref]

Afifi, M.

M. Afifi, “Semantic white balance: semantic color constancy using convolutional neural network,” arXiv:1802.00153 (2018).

Agarwal, V.

V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
[Crossref]

Agrawal, A.

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[Crossref]

Agustsson, E.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

Aharon, M.

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[Crossref]

Ahmed, N.

N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
[Crossref]

Ahn, B.

B. Ahn and N. I. Cho, “Block-matching convolutional neural network for image denoising,” arXiv:1704.00524 (2017).

Ain-kedem, L.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Aitken, A.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Aittala, M.

M. Aittala and F. Durand, “Burst image deblurring using permutation invariant convolutional neural networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 731–747.

Ajdin, B.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Alom, M. Z.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Al-Quaderi, G. D.

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

Altunbasak, Y.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

I. Pekkucuksen and Y. Altunbasak, “Gradient based threshold free color filter array interpolation,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 137–140.

Anandan, P.

M. J. Black and P. Anandan, “The robust estimation of multiple motions: parametric and piecewise-smooth flow fields,” Comput. Vision Image Understanding 63, 75–104 (1996).
[Crossref]

Anderson, P.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Andriluka, M.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Anguelov, D.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Antoniades, J.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Antonoglou, I.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Antunez, E.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Aravkin, A. Y.

C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.

Arce, G. R.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

Architecture, T.

M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.

Arguello, H.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

Asari, V. K.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Asif, S.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

Athale, R. A.

Athitsos, V.

X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.

Awwal, A. A. S.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Bach, F. R.

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

Badrinarayanan, V.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

Baek, J.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Bagnell, J. A.

J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
[Crossref]

Bagon, S.

D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 349–356.

Baker, L.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Ballé, J.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

Baltag, O.

O. Baltag, “History of automatic focusing reflected by patents,” Sci. Innov. 3,1–17 (2015).
[Crossref]

Bando, Y.

Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
[Crossref]

Bao, W.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

Baraniuk, R. G.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Barbastathis, G.

Barham, P.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Barnard, K.

K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
[Crossref]

Baron, D.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Barron, J. T.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

Barry, B.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Barth, A.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Bay, H.

H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer,2006), pp. 404–417.

Bayer, B. E.

B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (20July1976).

Bazrafkan, S.

J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
[Crossref]

Beister, M.

M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
[Crossref]

Berner, R.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

Bhakta, V. R.

Bian, Z.

Bianco, S.

S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
[Crossref]

S. Bianco, C. Cusano, and R. Schettini, “Color constancy using CNNS,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2015), pp. 81–89.

Black, M. J.

M. J. Black and P. Anandan, “The robust estimation of multiple motions: parametric and piecewise-smooth flow fields,” Comput. Vision Image Understanding 63, 75–104 (1996).
[Crossref]

A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4161–4170.

X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.

Blau, Y.

Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018).

Bobick, A.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

Bojarski, M.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Bölcskei, H.

P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).

Bolton, A.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Boominathan, V.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Bora, A.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.

Boser, B. E.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Bossen, F.

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

Bourdev, L.

O. Rippel and L. Bourdev, “Real-time adaptive image compression,” in International Conference on Machine Learning (ICML, 2017), pp. 2922–2930.

Bovik, A. C.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in 37th Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), Vol. 2, pp. 1398–1402.

Bradburn, S.

Bradski, G.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.

Brady, D.

Brady, D. J.

W. Pang and D. J. Brady, “Distributed focus and digital zoom,” Eng. Res. Express 2, 035019 (2020).
[Crossref]

D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
[Crossref]

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

T.-H. Tsai, X. Yuan, and D. J. Brady, “Spatial light modulator based color polarization imaging,” Opt. Express 23, 11912–11926 (2015).
[Crossref]

P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
[Crossref]

T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40, 4054–4057 (2015).
[Crossref]

D. J. Brady, A. Mrozack, K. MacCabe, and P. Llull, “Compressive tomography,” Adv. Opt. Photon. 7, 756–813 (2015).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
[Crossref]

D. J. Brady and D. L. Marks, “Coding for compressive focal tomography,” Appl. Opt. 50, 4436–4449 (2011).
[Crossref]

M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
[Crossref]

D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
[Crossref]

D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

Brandli, C.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

Bräuer, A.

Braun, M.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Brédif, M.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

Brewer, P.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Brick, C.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Briechle, K.

K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” Proc. SPIE 4387, 95–102 (2001).
[Crossref]

Bronstein, A. M.

E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
[Crossref]

Brown, M.

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vision 74, 59–73 (2007).
[Crossref]

M. Brown and D. G. Lowe, “Recognising panoramas,” in IEEE International Conference on Computer Vision (ICCV) (2003), Vol. 3, p. 1218.

Brown, M. S.

J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.

Brox, T.

T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2010).
[Crossref]

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

P. Fischer, A. Dosovitskiy, and T. Brox, “Descriptor matching with convolutional neural networks: a comparison to sift,” arXiv:1405.5769 (2014).

Buades, A.

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2 (IEEE, 2005), pp. 60–65.

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2005), Vol. 2, pp. 60–65.

Buckler, M.

M. Buckler, S. Jayasuriya, and A. Sampson, “Reconfiguring the imaging pipeline for computer vision,” in IEEE International Conference on Computer Vision (ICCV) (2017).

Buehler, C.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Burger, H. C.

H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.

Burger, M.

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

Burt, P. J.

P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983).
[Crossref]

Caballero, J.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Cai, C.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

Cai, J.

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

Calvagno, G.

D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Process. Image Commun. 26, 518–533 (2011).
[Crossref]

Candes, E. J.

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
[Crossref]

Candès, E. J.

E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Process Mag. 25(2), 21–30 (2008).
[Crossref]

Cao, X.

D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
[Crossref]

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

Capel, D.

D. Capel and A. Zisserman, “Automated mosaicing with super-resolution zoom,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1998), Vol. 98, pp. 885–891.

Cardei, V.

K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
[Crossref]

Carin, L.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
[Crossref]

T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40, 4054–4057 (2015).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

Carriere, J.

Cathey, W. T.

Cen, M.

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

Chan, W.-H.

Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
[Crossref]

Chanan, G.

A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.

Chang, J.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Chang, Y.-L.

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

Chang, Y.-Y.

S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.

Chatterjee, P.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Chaurasia, G.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

Chellappa, R.

D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: programmable pixel compressive camera for high speed imaging,” in Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2011), pp. 329–336.

Chen, B.-Y.

Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
[Crossref]

Chen, C.

A. Portnoy, N. Pitsianis, X. Sun, D. Brady, R. Gibbons, A. Silver, R. Te Kolste, C. Chen, T. Dillon, and D. Prather, “Design and characterization of thin multiple aperture infrared cameras,” Appl. Opt. 48, 2115–2126 (2009).
[Crossref]

M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
[Crossref]

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.

C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.

Chen, C.-Y.

C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
[Crossref]

Chen, E.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 341–349.

Chen, H.

C.-C. Weng, H. Chen, and C.-S. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE International Symposium on Circuits and Systems (IEEE, 2005), pp. 3801–3804.

Chen, J.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Chen, L.

Chen, L.-C.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

Chen, Q.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.

Chen, T.

H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

Chen, Y.

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 1256–1272 (2016).
[Crossref]

Chen, Y.-J.

C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
[Crossref]

Chen, Y.-Q.

Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
[Crossref]

Chen, Y.-S.

N.-S. Syu, Y.-S. Chen, and Y.-Y. Chuang, “Learning deep convolutional networks for demosaicing,” arXiv:1802.03769 (2018).

Chen, Z.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Cheng, M.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

Cheng, M.-M.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Cheng-Yue, R.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Chester, D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Chi, W.

Chin, T.-J.

J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.

Chintala, S.

A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.

Cho, J.-M.

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

Cho, N. I.

B. Ahn and N. I. Cho, “Block-matching convolutional neural network for image denoising,” arXiv:1704.00524 (2017).

Chong, J. W.

J. C. Kim and J. W. Chong, “Facial contour correcting method and device,” U.S.patent app.16/304,337 (4July2019).

Choromanska, A.

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Choromanski, K.

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Christensen, M. P.

Christopoulos, C.

A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
[Crossref]

Chuang, T.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Chuang, Y.-Y.

N.-S. Syu, Y.-S. Chen, and Y.-Y. Chuang, “Learning deep convolutional networks for demosaicing,” arXiv:1802.03769 (2018).

Chun, I. Y.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Chung, D.-S.

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Cipolla, R.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

Coates, A.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Cohen, S.

J. Yang, Z. Lin, and S. Cohen, “Fast image super-resolution based on in-place example regression,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1059–1066.

Coll, B.

A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2 (IEEE, 2005), pp. 60–65.

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2005), Vol. 2, pp. 60–65.

Collins, R. T.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Condat, L.

L. Condat and S. Mosaddegh, “Joint demosaicking and denoising by total variation minimization,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2781–2784.

Cong, R.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Connor, F.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Corcoran, P.

J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
[Crossref]

Corwin, A. D.

Cosgun, A.

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

Cossairt, O.

R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23, 15992–16007 (2015).
[Crossref]

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP) (2010), pp. 1–8.

Cossairt, O. S.

O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8.

Cremers, D.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Cunningham, A.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Cusano, C.

S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
[Crossref]

S. Bianco, C. Cusano, and R. Schettini, “Color constancy using CNNS,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2015), pp. 81–89.

Cybenko, G.

G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Math. Control Signals Syst. 2, 303–314 (1989).
[Crossref]

Dabov, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

Dai, Q.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

Danielyan, A.

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

Dannberg, P.

Darrell, T.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.

Davis, A.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

de Solórzano, C. O.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Dean, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Del Pozo, F.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Delbruck, T.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

Delp, E. J.

D. Güera and E. J. Delp, “Deepfake video detection using recurrent neural networks,” in 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (IEEE, 2018), pp. 1–6.

Demassieux, N.

P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
[Crossref]

Dempsey, P.

P. Dempsey, “The teardown: Huawei mate 10 pro,” Eng. Technol. 13, 80–81 (2018).
[Crossref]

Dempster, A. P.

A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
[Crossref]

Deng, P.

S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.

Denker, J. S.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

DeTone, D.

D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv:1606.03798 (2016).

Devin, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Dillon, T.

Dimakis, A. G.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.

Ditty, M.

M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.

Dixon, E. L.

Doggaz, N.

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

Dolson, J.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Domingos, P.

P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015).

Donahue, J.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

Dong, C.

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Dong, W.

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).

Donoho, D. L.

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
[Crossref]

Donohoe, D.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Dosovitskiy, A.

P. Fischer, A. Dosovitskiy, and T. Brox, “Descriptor matching with convolutional neural networks: a comparison to sift,” arXiv:1405.5769 (2014).

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Douglas, S. C.

Dowski, E. R.

Drew, M. S.

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

Druart, G.

Duarte, M. F.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Dun, X.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Duparré, J.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
[Crossref]

Durand, F.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

M. Aittala and F. Durand, “Burst image deblurring using permutation invariant convolutional neural networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 731–747.

Duval, G.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

Dworakowski, D.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Ebrahimi, T.

A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
[Crossref]

Edwards, J.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Efros, A. A.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (Springer, 2016), pp. 649–666.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

Egiazarian, K.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

Egnal, G.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Eigen, D.

D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (2014), pp. 2366–2374.

Elad, M.

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[Crossref]

Elbrächter, D.

P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).

Erhan, D.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Ernst, M.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Essa, I.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

Esterle, L.

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

Euliss, G. W.

Fang, L.

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

Farabet, C.

Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in IEEE International Symposium on Circuits and Systems (IEEE, 2010), pp. 253–256.

Fatemi, E.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Fattal, R.

G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. Graph. 30, 12 (2011).
[Crossref]

Fei, L.

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Fei-Bing, X.

Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.

Feller, S. D.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Fendler, M.

Feng, H.

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

Fergus, R.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (2014), pp. 2366–2374.

Fessler, J. A.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Filkins, R. J.

Firner, B.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Fischer, P.

P. Fischer, A. Dosovitskiy, and T. Brox, “Descriptor matching with convolutional neural networks: a comparison to sift,” arXiv:1405.5769 (2014).

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Fitzgibbon, A. W.

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

Flepp, B.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Foi, A.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

Ford, J. E.

Foresti, G. L.

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

Fossum, E.

R. Gutierrez, E. Fossum, and T. Tang, “Auto-focus technology,” in International Image Sensor Workshop (2007), pp. 20–25.

Fossum, E. R.

E. R. Fossum, “CMOS image sensors: electronic camera-on-a-chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).
[Crossref]

Freedman, G.

G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. Graph. 30, 12 (2011).
[Crossref]

Freeman, W. T.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

Frieden, B. R.

Fu, H.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Fu, Y.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

Fua, P.

E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
[Crossref]

Fuh, C.-S.

C.-C. Weng, H. Chen, and C.-S. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE International Symposium on Circuits and Systems (IEEE, 2005), pp. 3801–3804.

Fujimura, K.

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

Fujiyoshi, H.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Fukumoto, K.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Fukushima, N.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Fung, G. S.

E. Y. Lam and G. S. Fung, “Automatic white balancing in digital photography,” in Single-Sensor Imaging (CRC Press, 2018), pp. 287–314.

Funt, B.

K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
[Crossref]

Gallo, O.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Gao, L.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Gao, Q.

T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

Gao, Y.

H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.

Gao, Z.

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

Garcia-Dorado, I.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Gashler, M. S.

H. A. Pierson and M. S. Gashler, “Deep learning in robotics: a review of recent research,” Adv. Robot. 31, 821–835 (2017).
[Crossref]

Gehm, M.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Gehrke, W.

P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
[Crossref]

Geiss, R.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

Gelfand, N.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

George, N.

Gerasimow, T.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Gershfield, C.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Gharbi, M.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

Ghemawat, S.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Gibbons, R.

Girshick, R.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

Giryes, R.

E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
[Crossref]

Glasner, D.

D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 349–356.

Glotzbach, J.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

Goldfarb, D.

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

Golish, D. R.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Golkov, V.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Gómez-Sarabia, C. M.

Gool, L.

K. Zhang, L. Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 3214–3223.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

Gould, S.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Goulding-Hotta, N.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

Goyal, P.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Graepel, T.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Greengard, A.

Gribok, A. V.

V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
[Crossref]

Grohs, P.

P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).

Gross, S.

A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.

Gu, J.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Gu, S.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.

Gu, X.-G.

D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
[Crossref]

Guadarrama, S.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

Güera, D.

D. Güera and E. J. Delp, “Deepfake video detection using recurrent neural networks,” in 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (IEEE, 2018), pp. 1–6.

Guérineau, N.

Guez, A.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Gulli, A.

A. Gulli and S. Pal, Deep Learning with Keras (Packt, 2017).

Gunturk, B.

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
[Crossref]

Gunturk, B. K.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

Guo, C.

Guo, K.

Guo, M.

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

Guo, P.

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

Guo, X.

Gupta, M.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

Gutierrez, R.

R. Gutierrez, E. Fossum, and T. Tang, “Auto-focus technology,” in International Image Sensor Workshop (2007), pp. 20–25.

Haas, D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Hadar, R.

Hagen, N.

Hager, G. D.

C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.

Hammernik, K.

T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

Han, J.-W.

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

Han, T. H.

J.-C. Yoo and T. H. Han, “Fast normalized cross-correlation,” Circuits Syst. Signal Process. 28, 819 (2009).
[Crossref]

Han, W.-J.

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

Hanebeck, U. D.

K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” Proc. SPIE 4387, 95–102 (2001).
[Crossref]

Hanrahan, P.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

Haris, M.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1664–1673.

Harmeling, S.

H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.

Haruta, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Harvey, A. R.

Harwit, M.

M. Harwit, Hadamard Transform Optics (Elsevier, 2012).

Hasenplaugh, W. C.

Hasinoff, S. W.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

Hassabis, D.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Hausser, P.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Hazirbas, C.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

He, F.-L.

F.-L. He, Y.-C. F. Wang, and K.-L. Hua, “Self-learning approach to color demosaicking via support vector regression,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2765–2768.

He, J.

J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
[Crossref]

He, K.

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.

He, X.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Heide, F.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Heidrich, W.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).

Hel-Or, H. Z.

O. Kapah and H. Z. Hel-Or, “Demosaicking using artificial neural networks,” Proc. SPIE 3962, 112–120 (2000).
[Crossref]

Henderson, D.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Hinton, G. E.

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref]

G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
[Crossref]

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
[Crossref]

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Hirakawa, K.

K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Trans. Image Process. 15, 2146–2157 (2006).
[Crossref]

K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14, 360–369 (2005).
[Crossref]

Hirayama, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Hirota, I.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Hitomi, Y.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

Hong, Z.

J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
[Crossref]

Horn, B. K.

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
[Crossref]

Horowitz, M.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

Howard, R. E.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Hu, X.

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

Hu, Y.

Y. Hu, B. Wang, and S. Lin, “FC4: fully convolutional color constancy with confidence-weighted pooling,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4085–4094.

Hua, K.-L.

F.-L. He, Y.-C. F. Wang, and K.-L. Hua, “Self-learning approach to color demosaicking via support vector regression,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2765–2768.

Huang, A.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Huang, C.

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Huang, Q.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Huang, T.

J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

Huang, T. S.

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

Huang, Y.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Hubbard, W. E.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Hubert, T.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Hui, F.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Huo, J.-Y.

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

Huszár, F.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Huttenlocher, D.

X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.

Huval, B.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Hwang, R.-C.

C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
[Crossref]

Hwang, S. J.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

Ichioka, Y.

Ilg, E.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

Iliadis, M.

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
[Crossref]

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
[Crossref]

Inoue, K.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Ioffe, S.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

Irani, M.

D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 349–356.

Irving, G.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Isard, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Isele, D.

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

Ishida, K.

Isikdogan, L. F.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Isola, P.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (Springer, 2016), pp. 649–666.

Jackel, L.

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Jackel, L. D.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Jacobs, D. E.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Jain, V.

V. Jain and S. Seung, “Natural image denoising with convolutional networks,” in Advances in Neural Information Processing Systems (2009), pp. 769–776.

Jalal, A.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.

Jancsary, J.

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

Jarrahi, M.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Jayasuriya, S.

M. Buckler, S. Jayasuriya, and A. Sampson, “Reconfiguring the imaging pipeline for computer vision,” in IEEE International Conference on Computer Vision (ICCV) (2017).

Ji, M.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

Jia, H.

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

Jia, Y.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

Jiang, S.

Ji-Yan, Z.

Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.

Johnson, M.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Johnston, N.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

Joshi, N.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Jung, K.

K.-S. Oh and K. Jung, “GPU implementation of neural networks,” Pattern Recogn. 37, 1311–1314 (2004).
[Crossref]

Kainz, F.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

Kalender, W. A.

M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
[Crossref]

Kanade, T.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in 7th International Joint Conference on Artificial Intelligence (IJCAI) (Morgan Kaufmann, 1981), Vol. 2, pp. 674–679.

Kapah, O.

O. Kapah and H. Z. Hel-Or, “Demosaicking using artificial neural networks,” Proc. SPIE 3962, 112–120 (2000).
[Crossref]

Karayev, S.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

Karczewicz, M.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Kasai, M.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Katkovnik, V.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

Katsaggelos, A. K.

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
[Crossref]

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
[Crossref]

R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23, 15992–16007 (2015).
[Crossref]

Kautz, J.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.

Kavukcuoglu, K.

Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in IEEE International Symposium on Circuits and Systems (IEEE, 2010), pp. 253–256.

Kawanobe, H.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Kehtarnavaz, N.

S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
[Crossref]

M. T. Rahman and N. Kehtarnavaz, “Real-time face-priority auto focus for digital and cell-phone cameras,” IEEE Trans. Consum. Electron. 54, 1506–1513 (2008).
[Crossref]

N. Kehtarnavaz and H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging 9, 197–203 (2003).
[Crossref]

Kelly, D.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Kelly, K. F.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Kendall, A.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

Kenny, K. B.

Kenyon, R. V.

J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
[Crossref]

Keuper, M.

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

Khan, A.

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

Khashabi, D.

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

Kiku, D.

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.

Kim, C.-Y.

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Kim, H.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Kim, H. S.

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

Kim, J.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.

J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1637–1645.

Kim, J. C.

J. C. Kim and J. W. Chong, “Facial contour correcting method and device,” U.S.patent app.16/304,337 (4July2019).

Kim, J.-H.

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

Kim, S.-S.

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Kim, S.-W.

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

Kiske, J.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Kitamura, Y.

Kittle, D.

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Kittle, D. S.

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
[Crossref]

Klatzer, T.

T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

Knobelreiter, P.

T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

Ko, S.-J.

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

Kober, J.

J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
[Crossref]

Kobilarov, M.

C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.

Kokkinos, F.

F. Kokkinos and S. Lefkimmiatis, “Deep image demosaicking using a cascade of convolutional residual denoising networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 303–319.

F. Kokkinos and S. Lefkimmiatis, “Iterative residual network for deep joint image demosaicking and denoising,” arXiv:1807.06403 (2018).

Kokkinos, I.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

Kolditz, D.

M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
[Crossref]

Koller, R.

Koltun, V.

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.

Komodakis, N.

S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4353–4361.

Kondou, N.

Kong, Y.

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

Konolige, K.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.

Kontorinaki, M.

K. Makantasis, M. Kontorinaki, and I. Nikolos, “A deep reinforcement learning driving policy for autonomous road vehicles,” arXiv:1905.09046 (2019).

Kornhauser, A.

C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.

Koseki, K.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Krainin, M.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Krotkov, E.

E. Krotkov, “Focusing,” Int. J. Comput. Vision 1, 223–237 (1988).
[Crossref]

Kudlur, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Kumagai, T.

Kumar, Y.

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

Kwatra, V.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

Lai, M.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Lainema, J.

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

Laird, N. M.

A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
[Crossref]

Lam, E. Y.

Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).
[Crossref]

E. Y. Lam and G. S. Fung, “Automatic white balancing in digital photography,” in Single-Sensor Imaging (CRC Press, 2018), pp. 287–314.

E. Y. Lam, “Combining gray world and retinex theory for automatic white balance in digital photography,” in 9th International Symposium on Consumer Electronics (ISCE) (IEEE, 2005), pp. 134–139.

Lan, X.

X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.

Land, E. H.

E. H. Land, “The retinex theory of color vision,” Sci. Am. 237, 108–129 (1977).
[Crossref]

Lansel, S.

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

Larsson, G.

G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: ultra-deep neural networks without residuals,” arXiv:1605.07648 (2016).

Laska, J. N.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

LeCun, Y.

J. Zbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1592–1599.

Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in IEEE International Symposium on Circuits and Systems (IEEE, 2010), pp. 253–256.

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Ledig, C.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Lee, H.-T.

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

Lee, J. K.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.

J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1637–1645.

Lee, K. M.

J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1637–1645.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Lee, S.-D.

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Lee, S.-W.

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

Lee, S.-Y.

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

Lefkimmiatis, S.

F. Kokkinos and S. Lefkimmiatis, “Iterative residual network for deep joint image demosaicking and denoising,” arXiv:1807.06403 (2018).

F. Kokkinos and S. Lefkimmiatis, “Deep image demosaicking using a cascade of convolutional residual denoising networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 303–319.

S. Lefkimmiatis, “Universal denoising networks: a novel CNN architecture for image denoising,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3204–3213.

Lei, J.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Lei, S.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Leininger, B.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Lelescu, D.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Lemley, J.

J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
[Crossref]

Lempitsky, V.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.

Lensch, H. P. A.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Lepetit, V.

E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
[Crossref]

Levenberg, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Levin, A.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

Levoy, M.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

Li, C.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Li, G.

G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).

T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

Li, H.

Li, K.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

Li, M.

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

Li, Q.

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

Li, W.

Li, X.

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
[Crossref]

W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).

Li, Y.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

Li, Z.

Z. Li and J. Tang, “Weakly supervised deep matrix factorization for social image understanding,” IEEE Trans. Image Process. 26, 276–288 (2016).
[Crossref]

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Liang, C.-K.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Liang, J.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Liao, J.

Liao, Q.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

Liao, T.-W.

S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.

Liao, X.

Lien, M.-B.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Lillicrap, T.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Lim, B.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Lin, C.-C.

C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.

Lin, S.

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
[Crossref]

Y. Hu, B. Wang, and S. Lin, “FC4: fully convolutional color constancy with confidence-weighted pooling,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4085–4094.

Lin, W.

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

Lin, X.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

Lin, Z.

J. Yang, Z. Lin, and S. Cohen, “Fast image super-resolution based on in-place example regression,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1059–1066.

Lipton, A. J.

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Liu, C.-H.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Liu, D.

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Liu, E.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Liu, H.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

Liu, J.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Liu, L.

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

Liu, M.-Y.

D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.

Liu, S.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Liu, S.-C.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

Liu, W.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Liu, X.

M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45, 1659–1662 (2020).
[Crossref]

T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3147–3155.

Liu, Y.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.

Liu, Y.-C.

Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
[Crossref]

Llull, P.

Long, J.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

Lowe, D. G.

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vision 74, 59–73 (2007).
[Crossref]

M. Brown and D. G. Lowe, “Recognising panoramas,” in IEEE International Conference on Computer Vision (ICCV) (2003), Vol. 3, p. 1218.

D. G. Lowe, “Object recognition from local scale-invariant features,” in 7th IEEE International Conference on Computer Vision (1999), Vol. 2, pp. 1150–1157.

Loy, C. C.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.

W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (Springer, 2016), pp. 371–387.

Lu, G.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

Lu, M.

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

Lucas, B. D.

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in 7th International Joint Conference on Artificial Intelligence (IJCAI) (Morgan Kaufmann, 1981), Vol. 2, pp. 674–679.

Lucy, L. B.

L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
[Crossref]

Luo, N.

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Luo, Y.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Ma, J.

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

Ma, Y.

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

Ma, Z.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57, F44–F49 (2018).
[Crossref]

D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
[Crossref]

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

MacCabe, K.

Madden, D. G.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Mairal, J.

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

Maire, M.

G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: ultra-deep neural networks without residuals,” arXiv:1605.07648 (2016).

Mait, J.

J. Mait, “A history of imaging: revisiting the past to chart the future,” Opt. Photon. News 17(2), 22–27 (2006).
[Crossref]

Mait, J. N.

Makantasis, K.

K. Makantasis, M. Kontorinaki, and I. Nikolos, “A deep reinforcement learning driving policy for autonomous road vehicles,” arXiv:1905.09046 (2019).

Malik, J.

T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2010).
[Crossref]

Malisiewicz, T.

D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv:1606.03798 (2016).

Malpica, N.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Mann, S.

S. Mann and R. W. Picard, “Virtual bellows: constructing high quality stills from video,” in 1st International Conference on Image Processing (1994), Vol. 1, pp. 363–367.

Marks, D.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Marks, D. L.

Matsuda, N.

Matusik, W.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Mayer, N.

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

McCulloch, W. S.

W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys. 5, 115–133 (1943).
[Crossref]

McMahon, A.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Mead, C.

C. Mead, “Neuromorphic electronic systems,” Proc. IEEE 78, 1629–1636 (1990).
[Crossref]

Meixner, A.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

Meng, D.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

Meng, Z.

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

Menon, D.

D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Process. Image Commun. 26, 518–533 (2011).
[Crossref]

Mentzer, F.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

Mersereau, R. M.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

Miao, X.

X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.

Miau, D.

O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8.

Michael, G.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Michaeli, T.

Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018).

Migimatsu, T.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Milanfar, P.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Mills, M.

L. A. Teodosio and M. Mills, “Panoramic overviews for navigating real-world scenes,” in 1st ACM International Conference on Multimedia (1993), pp. 359–364.

Min, J.

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

Minnen, D.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

Minsky, M.

M. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry (MIT, 2017).

Misra, J.

J. Misra and I. Saha, “Artificial neural networks in hardware: a survey of two decades of progress,” Neurocomputing 74, 239–255 (2010).
[Crossref]

Mitra, K.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Mitsunaga, T.

S. K. Nayar and T. Mitsunaga, “High dynamic range imaging: spatially varying pixel exposures,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2000), vol. 1, pp. 472–479.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

Miyatake, S.

Miyazaki, D.

Molina, G.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Moloney, D.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Monfort, M.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Monga, R.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Monno, Y.

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.

Montrym, J.

M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.

Moore, S.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Morel, J.-M.

A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[Crossref]

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2 (IEEE, 2005), pp. 60–65.

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2005), Vol. 2, pp. 60–65.

Morimoto, T.

Morrison, R. L.

Mosaddegh, S.

L. Condat and S. Mosaddegh, “Joint demosaicking and denoising by total variation minimization,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2781–2784.

Mrozack, A.

Mu, H.

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

Mujica, F.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Muller, U.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Mullis, R.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Murphy, K.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

Murray, D. G.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Muyo, G.

Nagano, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Nah, S.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Nakajima, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Nasrin, M. S.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Natarajan, T.

N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
[Crossref]

Nayak, B.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Nayar, S.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP) (2010), pp. 1–8.

Nayar, S. K.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

S. K. Nayar and T. Mitsunaga, “High dynamic range imaging: spatially varying pixel exposures,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2000), vol. 1, pp. 472–479.

O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8.

Neifeld, M. A.

Ng, A. Y.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Ng, R.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

Niederberger, T.

Nien, H.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Nikolos, I.

K. Makantasis, M. Kontorinaki, and I. Nikolos, “A deep reinforcement learning driving policy for autonomous road vehicles,” arXiv:1905.09046 (2019).

Nishita, T.

Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
[Crossref]

Nitta, Y.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Norris, T. B.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Nowozin, S.

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

Noyola-Isgleas, A.

Oh, H.-J.

N. Kehtarnavaz and H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging 9, 197–203 (2003).
[Crossref]

Oh, K.-S.

K.-S. Oh and K. Jung, “GPU implementation of neural networks,” Pattern Recogn. 37, 1311–1314 (2004).
[Crossref]

Ojeda-Castaneda, J.

Ojeda-Castañeda, J.

Okutomi, M.

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.

Osher, S.

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Osindero, S.

G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
[Crossref]

Ouyang, W.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

Ozcan, A.

G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).
[Crossref]

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Pajak, D.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Pal, S.

A. Gulli and S. Pal, Deep Learning with Keras (Packt, 2017).

Pang, C.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Pang, S.

Pang, W.

W. Pang and D. J. Brady, “Distributed focus and digital zoom,” Eng. Res. Express 2, 035019 (2020).
[Crossref]

D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127–137 (2018).
[Crossref]

Pankanti, S. U.

C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.

Papandreou, G.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

Papert, S. A.

M. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry (MIT, 2017).

Paris, S.

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

Park, B.-K.

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Park, S. H.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

Parker, J. A.

J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
[Crossref]

Parks, T. W.

K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Trans. Image Process. 15, 2146–2157 (2006).
[Crossref]

K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14, 360–369 (2005).
[Crossref]

Parmar, M.

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

Paszke, A.

A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.

Pauca, V. P.

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

Paxton, C.

C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.

Pazhayampallil, J.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Pekkucuksen, I.

I. Pekkucuksen and Y. Altunbasak, “Gradient based threshold free color filter array interpolation,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 137–140.

Pena, J.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Perekrestenko, D.

P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).

Perot, E.

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

Peters, J.

J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
[Crossref]

Picard, R. W.

S. Mann and R. W. Picard, “Virtual bellows: constructing high quality stills from video,” in 1st International Conference on Image Processing (1994), Vol. 1, pp. 363–367.

Piciarelli, C.

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

Pierson, H. A.

H. A. Pierson and M. S. Gashler, “Deep learning in robotics: a review of recent research,” Adv. Robot. 31, 821–835 (2017).
[Crossref]

Piestun, R.

Pirsch, P.

P. Pirsch and H.-J. Stolberg, “VLSI implementations of image and video multimedia processing systems,” IEEE Trans. Circuits Syst. Video Technol. 8, 878–891 (1998).
[Crossref]

P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
[Crossref]

Pitalo, S.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Pitsianis, N.

Pitsianis, N. P.

Pitts, W.

W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys. 5, 115–133 (1943).
[Crossref]

Plemmons, R. J.

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

Pock, T.

Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 1256–1272 (2016).
[Crossref]

T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

Pollock, D.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Ponce, J.

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

Portnoy, A.

Prasad, S.

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

Prather, D.

Price, E.

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.

Primot, J.

Psaltis, D.

D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
[Crossref]

Pu, Y.

X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.

Puhrsch, C.

D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (2014), pp. 2366–2374.

Pulli, K.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Pulli, K. A.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Qi, X.

Qiao, M.

M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45, 1659–1662 (2020).
[Crossref]

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

Qiao, Y.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Rabaud, V.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.

Rabinovich, A.

D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv:1606.03798 (2016).

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Rahimi, R.

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

Rahman, M.

S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
[Crossref]

Rahman, M. M.

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

Rahman, M. T.

M. T. Rahman and N. Kehtarnavaz, “Real-time face-priority auto focus for digital and cell-phone cameras,” IEEE Trans. Consum. Electron. 54, 1506–1513 (2008).
[Crossref]

Rahman, S.

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

Rajpurkar, P.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Ramamoorthi, R.

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

Ramamurthy, K. N.

C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.

Raman, V.

C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.

Ramanath, R.

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

Ramos, R.

Ranjan, A.

A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4161–4170.

Rao, K. R.

N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
[Crossref]

Rao, S.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Rapaka, K.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Raskar, R.

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[Crossref]

R. Raskar and J. Tumblin, Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors (AK Peters, 2009).

Ravishankar, S.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Reardon, P.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Reddy, D.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: programmable pixel compressive camera for high speed imaging,” in Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2011), pp. 329–336.

Redgrave, J.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

Reed, S.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Ren, Z.

Rhodes, W. T.

Richardson, W. H.

Richmond, R.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

Rinner, B.

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

Rippel, O.

O. Rippel and L. Bourdev, “Real-time adaptive image compression,” in International Conference on Machine Learning (ICML, 2017), pp. 2922–2930.

Rivenson, Y.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Roberts, E.

L. Wei and E. Roberts, “Neural network control of focal position during time-lapse microscopy of cells,” Sci. Rep. 8, 7313 (2018).
[Crossref]

Rogers, T.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Romberg, J. K.

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
[Crossref]

Rommeluère, S.

Rosenblatt, F.

F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychol. Rev. 65, 386–408 (1958).
[Crossref]

Roth, S.

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 2774–2781.

X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.

Rother, C.

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

Rouf, M.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Rubin, D. B.

A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
[Crossref]

Rublee, E.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.

Rudin, L. I.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Rumelhart, D. E.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
[Crossref]

Rushforth, C. K.

Saha, I.

J. Misra and I. Saha, “Artificial neural networks in hardware: a survey of two decades of progress,” Neurocomputing 74, 239–255 (2010).
[Crossref]

Saikia, T.

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

Salakhutdinov, R. R.

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref]

Sallab, A. E.

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

Sampson, A.

M. Buckler, S. Jayasuriya, and A. Sampson, “Reconfiguring the imaging pipeline for computer vision,” in IEEE International Conference on Computer Vision (ICCV) (2017).

Santos, A.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Sapiro, G.

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

Sarvotham, S.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Schafer, R. W.

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

Schechner, Y. Y.

Schettini, R.

S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
[Crossref]

S. Bianco, C. Cusano, and R. Schettini, “Color constancy using CNNS,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2015), pp. 81–89.

Schmid, L.

Schmidt, U.

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 2774–2781.

Schödl, A.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

Schreiber, P.

Schrittwieser, J.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Schuler, C. J.

H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.

Schulz, T.

Schunck, B. G.

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
[Crossref]

Schuster, G.

Schwartz, E.

E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
[Crossref]

Seff, A.

C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.

Sen, P.

P. Sen, “Overview of state-of-the-art algorithms for stack-based high-dynamic range (hdr) imaging,” Electron. Imaging 2018, 311 (2018).
[Crossref]

Seregin, V.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Sermanet, P.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Seung, S.

V. Jain and S. Seung, “Natural image denoising with convolutional networks,” in Advances in Neural Information Processing Systems (2009), pp. 769–776.

Shacham, O.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

Shafique, K. H.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Shakhnarovich, G.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1664–1673.

G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: ultra-deep neural networks without residuals,” arXiv:1605.07648 (2016).

Shalev-Shwartz, S.

S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” arXiv:1610.03295 (2016).

Shammah, S.

S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” arXiv:1610.03295 (2016).

Shankar, M.

Shankar, P.

Shankar, P. M.

Sharlet, D.

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

Shashua, A.

S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” arXiv:1610.03295 (2016).

Shelhamer, E.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

Shen, Q.

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

Shi, G.

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).

Shi, W.

W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (Springer, 2016), pp. 371–387.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Shogenji, R.

Shoyaib, M.

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

Sidike, P.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Sifre, L.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Silver, A.

Silver, D.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

Simoncelli, E. P.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in 37th Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), Vol. 2, pp. 1398–1402.

Simonyan, K.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

Singh, S.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

Situ, G.

Sitzmann, V.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Skodras, A.

A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
[Crossref]

Sliwinski, P.

P. Śliwiński and P. Wachel, “A simple model for on-sensor phase-detection autofocusing algorithm,” J. Comput. Commun. 1, 11–17 (2013).
[Crossref]

Snyder, W. E.

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

Somayaji, M.

Son, S.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Song, W.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Spinoulas, L.

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
[Crossref]

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
[Crossref]

R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23, 15992–16007 (2015).
[Crossref]

Sreelal, A.

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

Srinivasan, P. P.

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

Stack, R.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Stack, R. A.

Steinberger, M.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Steiner, B.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Stevens, M.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Stolberg, H.-J.

P. Pirsch and H.-J. Stolberg, “VLSI implementations of image and video multimedia processing systems,” IEEE Trans. Circuits Syst. Video Technol. 8, 878–891 (1998).
[Crossref]

Subramanian, K.

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

Sukegawa, S.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Sullivan, G. J.

G. J. Sullivan and T. Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Process Mag. 15(6), 74–90 (1998).
[Crossref]

Sun, D.

D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.

Sun, J.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

J. Sun and M. F. Tappen, “Separable Markov random field model and its applications in low level vision,” IEEE Trans. Image Process. 22, 402–407 (2012).
[Crossref]

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

Sun, X.

Sun, Y.

Suo, J.

X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.

Suter, D.

J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.

Sutic, A.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Syu, N.-S.

N.-S. Syu, Y.-S. Chen, and Y.-Y. Chuang, “Learning deep convolutional networks for demosaicing,” arXiv:1802.03769 (2018).

Szegedy, C.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Szeliski, R.

R. Szeliski, “Image mosaicing for tele-reality applications,” in IEEE Workshop on Applications of Computer Vision (1994), pp. 44–53.

Taboury, J.

Taha, T. M.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Tai, S.-C.

S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.

Tai, Y.

Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3147–3155.

Takahashi, H.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Takhar, D.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Talvala, E.-V.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Tan, R.

R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.

Tan, T.

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

Tanaka, M.

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.

Tandon, S.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

Tang, J.

Z. Li and J. Tang, “Weakly supervised deep matrix factorization for social image understanding,” IEEE Trans. Image Process. 26, 276–288 (2016).
[Crossref]

Tang, T.

R. Gutierrez, E. Fossum, and T. Tang, “Auto-focus technology,” in International Image Sensor Workshop (2007), pp. 20–25.

Tang, X.

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.

W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (Springer, 2016), pp. 371–387.

Tanida, J.

Tao, T.

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
[Crossref]

Tao, Y.

Tappen, M. F.

J. Sun and M. F. Tappen, “Separable Markov random field model and its applications in low level vision,” IEEE Trans. Image Process. 22, 402–407 (2012).
[Crossref]

Targove, J. D.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Tasimi, K.

Te Kolste, R.

Teh, Y.-W.

G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
[Crossref]

Tejani, A.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Teney, D.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Teodosio, L. A.

L. A. Teodosio and M. Mills, “Panoramic overviews for navigating real-world scenes,” in 1st ACM International Conference on Multimedia (1993), pp. 359–364.

Testa, D. D.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Theis, L.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Thétas, S.

Tian, C.

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Tian, X.

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

Tian, Y.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

Tico, M.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Timofte, R.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

K. Zhang, L. Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 3214–3223.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

Tola, E.

E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
[Crossref]

Tong, T.

T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

Torgersen, T. C.

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

Totz, J.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

Tremblay, E. J.

Troxel, D. E.

J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
[Crossref]

Tsai, T.-H.

Tsai, Y.-T.

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

Tschannen, M.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

Tucker, P.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Tumblin, J.

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[Crossref]

R. Raskar and J. Tumblin, Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors (AK Peters, 2009).

Tünnermann, A.

Turk, G.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

Tuytelaars, T.

H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer,2006), pp. 404–417.

Ugur, K.

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

Ukita, N.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1664–1673.

Ulyanov, D.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.

Umebayashi, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Underwood, C.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Vaish, V.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

van den Driessche, G.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

van der Gracht, J.

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

Van Der Smagt, P.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

Van Esesn, B. C.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Van Gool, L.

H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer,2006), pp. 404–417.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

Vanhoucke, V.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

Vaquero, D.

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

Vaquero, J. J.

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Vasilyev, A.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

Vasudevan, V.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Vedaldi, A.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.

Veeraraghavan, A.

D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: programmable pixel compressive camera for high speed imaging,” in Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2011), pp. 329–336.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Vehvilainen, M.

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

Veli, M.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Venkataraman, K.

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

Vera, E.

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Von Neumann, J.

J. Von Neumann, “First draft of a report on the EDVAC,” IEEE Ann. Hist. Comput. 15, 27–75 (1993).
[Crossref]

Wach, H. B.

Wachel, P.

P. Śliwiński and P. Wachel, “A simple model for on-sensor phase-detection autofocusing algorithm,” J. Comput. Commun. 1, 11–17 (2013).
[Crossref]

Wakano, T.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Wakin, M. B.

E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Process Mag. 25(2), 21–30 (2008).
[Crossref]

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

Wandell, B. A.

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

Wang, B.

Y. Hu, B. Wang, and S. Lin, “FC4: fully convolutional color constancy with confidence-weighted pooling,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4085–4094.

Wang, D.

Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
[Crossref]

Wang, H.

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

Wang, J.

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Wang, L.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

Wang, L. V.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

Wang, S.

S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.

Wang, T.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

Wang, W.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

Wang, X.

M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
[Crossref]

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.

Wang, Y.

M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
[Crossref]

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).

Wang, Y.-C. F.

F.-L. He, Y.-C. F. Wang, and K.-L. Hua, “Self-learning approach to color demosaicking via support vector regression,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2765–2768.

Wang, Y.-Q.

Y.-Q. Wang, “A multilayer neural network for image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2014), pp. 1852–1856.

Wang, Z.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in 37th Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), Vol. 2, pp. 1398–1402.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

Warden, P.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Wei, L.

L. Wei and E. Roberts, “Neural network control of focal position during time-lapse microscopy of cells,” Sci. Rep. 8, 7313 (2018).
[Crossref]

Wei, X.-X.

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

Wein, S.

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

Wen, J.

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Weng, C.-C.

C.-C. Weng, H. Chen, and C.-S. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE International Symposium on Circuits and Systems (IEEE, 2005), pp. 3801–3804.

Westberg, S.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Wetzstein, G.

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Wicke, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Wiegand, T.

G. J. Sullivan and T. Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Process Mag. 15(6), 74–90 (1998).
[Crossref]

Wilburn, B.

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

Willett, R.

Williams, R. J.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
[Crossref]

Wittenbrink, C.

M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.

Wright, J.

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

Wronski, B.

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

Wu, C.

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

Wu, F.

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

Wu, J.

J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.

Wu, S.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Wu, X.

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14, 2167–2178 (2005).
[Crossref]

Wu, Z.

Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
[Crossref]

Xian-Guo, M.

Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.

Xiao, J.

C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.

Xie, J.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 341–349.

Xiong, Z.

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

Xu, D.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

Xu, H.

H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.

Xu, J.

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.

Xu, L.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 341–349.

Xu, X.

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

Xu, Y.

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Xu, Z.

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).
[Crossref]

Xue, J.-H.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

Yakopcic, C.

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

Yamada, K.

Yan, S.

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Yan, X.

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Yang, J.

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

J. Yang, Z. Lin, and S. Cohen, “Fast image super-resolution based on in-place example regression,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1059–1066.

Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3147–3155.

Yang, M.

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

Yang, W.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

Yang, X.

D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.

Yao, Y.

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

Yardimci, N. T.

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

Yazdanfar, S.

Ye, L.

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

Yeh, C.-P.

S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.

Yeres, P.

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

Yin, H.

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

Yin, W.

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

Yogamani, S.

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

Yoo, J.-C.

J.-C. Yoo and T. H. Han, “Fast normalized cross-correlation,” Circuits Syst. Signal Process. 28, 819 (2009).
[Crossref]

Yoo, Y.

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

Yousefi, S.

S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
[Crossref]

Yu, F.

H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.

Yu, K.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Yu, Y.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Yuan, M.

W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).

Yuan, X.

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45, 1659–1662 (2020).
[Crossref]

Y. Sun, X. Yuan, and S. Pang, “High-speed compressive range imaging based on active illumination,” Opt. Express 24, 22836–22846 (2016).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

T.-H. Tsai, X. Yuan, and D. J. Brady, “Spatial light modulator based color polarization imaging,” Opt. Express 23, 11912–11926 (2015).
[Crossref]

P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).
[Crossref]

T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40, 4054–4057 (2015).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.

X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

Yuan-Qing, H.

Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.

Yue, T.

Y. Zhao, T. Yue, L. Chen, H. Wang, Z. Ma, D. J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25, 14008–14022 (2017).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

Yuille, A. L.

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

Zagoruyko, S.

S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4353–4361.

Zaragoza, J.

J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.

Zbontar, J.

J. Zbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1592–1599.

Zha, Z.

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

Zhang, D.

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

Zhang, J.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Zhang, K.

K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.

R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.

K. Zhang, L. Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 3214–3223.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1671–1681.

Zhang, L.

K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
[Crossref]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14, 2167–2178 (2005).
[Crossref]

R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.

K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1671–1681.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

Zhang, R.

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (Springer, 2016), pp. 649–666.

Zhang, X.

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Zhang, Y.

S. Jiang, J. Liao, Z. Bian, K. Guo, Y. Zhang, and G. Zheng, “Transform-and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging,” Biomed. Opt. Express 9, 1601–1612 (2018).
[Crossref]

G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.

S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.

Zhao, D.

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

Zhao, J.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Zhao, Q.

Zhao, Y.

Zheng, G.

Zheng, H.

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

Zheng, X.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Zhong, B.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

Zhong, Z.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Zhou, F.

Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
[Crossref]

S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.

Zhou, J.

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

Zhou, M.

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Zhou, R.

J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
[Crossref]

Zhou, T.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

Zhu, J.-Y.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

Zhu, M.

M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
[Crossref]

Zieba, K.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

Zisserman, A.

D. Capel and A. Zisserman, “Automated mosaicing with super-resolution zoom,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1998), Vol. 98, pp. 885–891.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

Zuo, W.

K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.

R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1671–1681.

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

ACM Trans. Graph. (11)

F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. A. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Trans. Graph. 33, 231 (2014).
[Crossref]

S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Trans. Graph. 35, 192 (2016).
[Crossref]

A. Adams, E.-V. Talvala, S. H. Park, D. E. Jacobs, B. Ajdin, N. Gelfand, J. Dolson, D. Vaquero, J. Baek, M. Tico, H. P. A. Lensch, W. Matusik, K. Pulli, M. Horowitz, and M. Levoy, “The Frankencamera: an experimental platform for computational photography,” ACM Trans. Graph. 29, 1–12 (2010).
[Crossref]

M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Trans. Graph. 35, 191 (2016).
[Crossref]

B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.-K. Liang, M. Levoy, and P. Milanfar, “Handheld multi-frame super-resolution,” ACM Trans. Graph. 38, 1–18 (2019).
[Crossref]

G. Freedman and R. Fattal, “Image and video upscaling from local self-examples,” ACM Trans. Graph. 30, 12 (2011).
[Crossref]

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).
[Crossref]

K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” ACM Trans. Graph. 32, 166 (2013).
[Crossref]

B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures: image and video synthesis using graph cuts,” ACM Trans. Graph. 22, 277–286 (2003).
[Crossref]

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25, 795–804 (2006).
[Crossref]

ACM Trans. Graphics (1)

P. J. Burt and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983).
[Crossref]

Adv. Opt. Photon. (3)

Adv. Robot. (1)

H. A. Pierson and M. S. Gashler, “Deep learning in robotics: a review of recent research,” Adv. Robot. 31, 821–835 (2017).
[Crossref]

APL Photon. (1)

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photon. 5, 030801 (2020).
[Crossref]

Appl. Opt. (16)

E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
[Crossref]

D. J. Brady and D. L. Marks, “Coding for compressive focal tomography,” Appl. Opt. 50, 4436–4449 (2011).
[Crossref]

M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. 42, 3379–3389 (2003).
[Crossref]

M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
[Crossref]

J. Ojeda-Castaneda, R. Ramos, and A. Noyola-Isgleas, “High focal depth by apodization and digital restoration,” Appl. Opt. 27, 2583–2586 (1988).
[Crossref]

S. Bradburn, W. T. Cathey, and E. R. Dowski, “Realizations of focus invariance in optical–digital systems with wave-front coding,” Appl. Opt. 36, 9157–9166 (1997).
[Crossref]

H. B. Wach, E. R. Dowski, and W. T. Cathey, “Control of chromatic focal shift through wave-front coding,” Appl. Opt. 37, 5359–5367 (1998).
[Crossref]

E. J. Tremblay, D. L. Marks, D. J. Brady, and J. E. Ford, “Design and scaling of monocentric multiscale imagers,” Appl. Opt. 51, 4691–4702 (2012).
[Crossref]

P. M. Shankar, W. C. Hasenplaugh, R. L. Morrison, R. A. Stack, and M. A. Neifeld, “Multiaperture imaging,” Appl. Opt. 45, 2871–2883 (2006).
[Crossref]

V. R. Bhakta, M. Somayaji, S. C. Douglas, and M. P. Christensen, “Experimentally validated computational imaging with adaptive multiaperture folded architecture,” Appl. Opt. 49, B51–B58 (2010).
[Crossref]

J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
[Crossref]

J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
[Crossref]

M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, “Thin infrared imaging systems through multichannel sampling,” Appl. Opt. 47, B1–B10 (2008).
[Crossref]

G. Druart, N. Guérineau, R. Hadar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48, 3368–3374 (2009).
[Crossref]

A. Portnoy, N. Pitsianis, X. Sun, D. Brady, R. Gibbons, A. Silver, R. Te Kolste, C. Chen, T. Dillon, and D. Prather, “Design and characterization of thin multiple aperture infrared cameras,” Appl. Opt. 48, 2115–2126 (2009).
[Crossref]

C. Guo, Z. Ma, X. Guo, W. Li, X. Qi, and Q. Zhao, “Fast auto-focusing search algorithm for a high-speed and high-resolution camera based on the image histogram feature function,” Appl. Opt. 57, F44–F49 (2018).
[Crossref]

Appl. Soft Comput. (1)

C.-Y. Chen, R.-C. Hwang, and Y.-J. Chen, “A passive auto-focus camera control system,” Appl. Soft Comput. 10, 296–303 (2010).
[Crossref]

Artif. Intell. (1)

B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).
[Crossref]

Astron. J. (1)

L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
[Crossref]

Biomed. Opt. Express (1)

Bull. Math. Biophys. (1)

W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys. 5, 115–133 (1943).
[Crossref]

CAAI Trans. Intell. Technol. (1)

C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, and N. Luo, “Enhanced CNN for image denoising,” CAAI Trans. Intell. Technol. 4, 17–23 (2019).
[Crossref]

Circuits Syst. Signal Process. (1)

J.-C. Yoo and T. H. Han, “Fast normalized cross-correlation,” Circuits Syst. Signal Process. 28, 819 (2009).
[Crossref]

Commun. Pure Appl. Math. (1)

E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).
[Crossref]

Comput. Graph. Forum (1)

Y. Bando, B.-Y. Chen, and T. Nishita, “Motion deblurring from a single image using circular sensor motion,” Comput. Graph. Forum 30, 1869–1878 (2011).
[Crossref]

Comput. Vision Image Understanding (1)

M. J. Black and P. Anandan, “The robust estimation of multiple motions: parametric and piecewise-smooth flow fields,” Comput. Vision Image Understanding 63, 75–104 (1996).
[Crossref]

Digital Signal Process. (1)

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digital Signal Process. 72, 9–18 (2018).
[Crossref]

Digital Signal Processing (1)

M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “DeepBinaryMask: learning a binary mask for video compressive sensing,” Digital Signal Processing 96, 102591 (2020).
[Crossref]

Electron. Imaging (2)

P. Sen, “Overview of state-of-the-art algorithms for stack-based high-dynamic range (hdr) imaging,” Electron. Imaging 2018, 311 (2018).
[Crossref]

A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” Electron. Imaging 2017, 70–76 (2017).
[Crossref]

Eng. Res. Express (1)

W. Pang and D. J. Brady, “Distributed focus and digital zoom,” Eng. Res. Express 2, 035019 (2020).
[Crossref]

Eng. Technol. (1)

P. Dempsey, “The teardown: Huawei mate 10 pro,” Eng. Technol. 13, 80–81 (2018).
[Crossref]

EURASIP J. Image Video Process. (1)

S. Rahman, M. M. Rahman, M. Abdullah-Al-Wadud, G. D. Al-Quaderi, and M. Shoyaib, “An adaptive gamma correction for image enhancement,” EURASIP J. Image Video Process. 2016, 35 (2016).
[Crossref]

IEEE Ann. Hist. Comput. (1)

J. Von Neumann, “First draft of a report on the EDVAC,” IEEE Ann. Hist. Comput. 15, 27–75 (1993).
[Crossref]

IEEE Comp. Arch. Lett. (1)

M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” IEEE Comp. Arch. Lett. 1, 1 (2020).

IEEE Computer (1)

M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006).
[Crossref]

IEEE Consum. Electron. Mag. (1)

J. Lemley, S. Bazrafkan, and P. Corcoran, “Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision,” IEEE Consum. Electron. Mag. 6(2), 48–56 (2017).
[Crossref]

IEEE J. Emerg. Sel. Top. Circuits Syst. (1)

X. Xu, S. Liu, T. Chuang, Y. Huang, S. Lei, K. Rapaka, C. Pang, V. Seregin, Y. Wang, and M. Karczewicz, “Intra block copy in HEVC screen content coding extensions,” IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 409–419 (2016).
[Crossref]

IEEE J. Solid-State Circuits (1)

C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240× 80 130 db 3 µs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49, 2333–2341 (2014).
[Crossref]

IEEE Signal Process Mag. (7)

B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau, “Demosaicking: color filter array interpolation,” IEEE Signal Process Mag. 22(1), 44–54 (2005).
[Crossref]

R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process Mag. 22(1), 34–43 (2005).
[Crossref]

E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Process Mag. 25(2), 21–30 (2008).
[Crossref]

X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: toward dynamic capture of the spectral world,” IEEE Signal Process Mag. 33(5), 95–108 (2016).
[Crossref]

G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2014).
[Crossref]

G. J. Sullivan and T. Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Process Mag. 15(6), 74–90 (1998).
[Crossref]

A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Process Mag. 18(5), 36–58 (2001).
[Crossref]

IEEE Trans. Circuits Syst. Video Technol. (5)

J. Lainema, F. Bossen, W.-J. Han, J. Min, and K. Ugur, “Intra coding of the HEVC standard,” IEEE Trans. Circuits Syst. Video Technol. 22, 1792–1801 (2012).
[Crossref]

P. Pirsch and H.-J. Stolberg, “VLSI implementations of image and video multimedia processing systems,” IEEE Trans. Circuits Syst. Video Technol. 8, 878–891 (1998).
[Crossref]

S.-Y. Lee, Y. Kumar, J.-M. Cho, S.-W. Lee, and S.-W. Kim, “Enhanced autofocus algorithm using robust focus measure and fuzzy reasoning,” IEEE Trans. Circuits Syst. Video Technol. 18, 983–1246 (2008).
[Crossref]

R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang, “Review of visual saliency detection with comprehensive information,” IEEE Trans. Circuits Syst. Video Technol. 29, 2941–2959 (2018).
[Crossref]

C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, “Dynamic reconfiguration in camera networks: a short survey,” IEEE Trans. Circuits Syst. Video Technol. 26, 965–977 (2015).
[Crossref]

IEEE Trans. Comput. (1)

N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C- 23, 90–93 (1974).
[Crossref]

IEEE Trans. Consum. Electron. (6)

Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, “Automatic white balance for digital still camera,” IEEE Trans. Consum. Electron. 41, 460–466 (1995).
[Crossref]

J. He, R. Zhou, and Z. Hong, “Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera,” IEEE Trans. Consum. Electron. 49, 257–262 (2003).
[Crossref]

S. Yousefi, M. Rahman, and N. Kehtarnavaz, “A new auto-focus sharpness function for digital and smart-phone cameras,” IEEE Trans. Consum. Electron. 57, 1003–1009 (2011).
[Crossref]

J.-W. Han, J.-H. Kim, H.-T. Lee, and S.-J. Ko, “A novel training based auto-focus for mobile-phone cameras,” IEEE Trans. Consum. Electron. 57, 232–238 (2011).
[Crossref]

J.-Y. Huo, Y.-L. Chang, J. Wang, and X.-X. Wei, “Robust automatic white balance algorithm using gray color points in images,” IEEE Trans. Consum. Electron. 52, 541–546 (2006).
[Crossref]

M. T. Rahman and N. Kehtarnavaz, “Real-time face-priority auto focus for digital and cell-phone cameras,” IEEE Trans. Consum. Electron. 54, 1506–1513 (2008).
[Crossref]

IEEE Trans. Electron Devices (1)

E. R. Fossum, “CMOS image sensors: electronic camera-on-a-chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).
[Crossref]

IEEE Trans. Image Process. (15)

K. Barnard, V. Cardei, and B. Funt, “A comparison of computational color constancy algorithms. I: methodology and experiments with synthesized data,” IEEE Trans. Image Process. 11, 972–984 (2002).
[Crossref]

K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Trans. Image Process. 15, 2146–2157 (2006).
[Crossref]

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[Crossref]

W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22, 1620–1630 (2012).
[Crossref]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26, 3142–3155 (2017).
[Crossref]

K. Zhang, W. Zuo, and L. Zhang, “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27, 4608–4622 (2018).
[Crossref]

D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon, “Joint demosaicing and denoising via learned nonparametric random fields,” IEEE Trans. Image Process. 23, 4968–4981 (2014).
[Crossref]

J. Sun and M. F. Tappen, “Separable Markov random field model and its applications in low level vision,” IEEE Trans. Image Process. 22, 402–407 (2012).
[Crossref]

K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14, 360–369 (2005).
[Crossref]

L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Trans. Image Process. 14, 2167–2178 (2005).
[Crossref]

E. Schwartz, R. Giryes, and A. M. Bronstein, “DeepISP: toward learning an end-to-end image processing pipeline,” IEEE Trans. Image Process. 28, 912–923 (2018).
[Crossref]

S. Bianco, C. Cusano, and R. Schettini, “Single and multiple illuminant estimation using convolutional neural networks,” IEEE Trans. Image Process. 26, 4347–4362 (2017).
[Crossref]

J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process. 19, 2861–2873 (2010).
[Crossref]

Z. Li and J. Tang, “Weakly supervised deep matrix factorization for social image understanding,” IEEE Trans. Image Process. 26, 276–288 (2016).
[Crossref]

IEEE Trans. Inf. Theory (1)

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
[Crossref]

IEEE Trans. Med. Imaging (1)

J. A. Parker, R. V. Kenyon, and D. E. Troxel, “Comparison of interpolating methods for image resampling,” IEEE Trans. Med. Imaging 2, 31–39 (1983).
[Crossref]

IEEE Trans. Multimedia (1)

W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Trans. Multimedia 21, 3106–3121 (2019).

IEEE Trans. Pattern Anal. Mach. Intell. (6)

L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFS,” IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).
[Crossref]

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

E. Tola, V. Lepetit, and P. Fua, “Daisy: an efficient dense descriptor applied to wide-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 815–830 (2010).
[Crossref]

T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 500–513 (2010).
[Crossref]

Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 1256–1272 (2016).
[Crossref]

U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 38, 677–689 (2015).
[Crossref]

Int. J. Comput. Vision (3)

A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” Int. J. Comput. Vision 76, 123–139 (2008).
[Crossref]

E. Krotkov, “Focusing,” Int. J. Comput. Vision 1, 223–237 (1988).
[Crossref]

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vision 74, 59–73 (2007).
[Crossref]

Int. J. Rob. Res. (1)

J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: a survey,” Int. J. Rob. Res. 32, 1238–1274 (2013).
[Crossref]

J. Appl. Opt. (1)

G. Li, J. Wang, and Y. Zhang, “Design of 8 mega-pixel mobile phone camera,” J. Appl. Opt. 32, 420–425 (2011).

J. Comput. Commun. (1)

P. Śliwiński and P. Wachel, “A simple model for on-sensor phase-detection autofocusing algorithm,” J. Comput. Commun. 1, 11–17 (2013).
[Crossref]

J. Electron. Imaging (1)

L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging 20, 043010 (2011).
[Crossref]

J. Microsc. (2)

A. Santos, C. O. de Solórzano, J. J. Vaquero, J. Pena, N. Malpica, and F. Del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188, 264–272 (1997).
[Crossref]

Z. Wu, D. Wang, and F. Zhou, “Bilateral prediction and intersection calculation autofocus method for automated microscopy,” J. Microsc. 248, 271–280 (2012).
[Crossref]

J. Mod. Opt. (1)

Y. Wang, H. Feng, Z. Xu, Q. Li, Y. Chen, and M. Cen, “Fast auto-focus scheme based on optical defocus fitting model,” J. Mod. Opt. 65, 858–868 (2018).
[Crossref]

J. Opt. Soc. Am. (1)

J. Opt. Soc. Am. A (1)

J. Royal Stat. Soc. B (1)

A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Soc. B 39, 1–22 (1977).
[Crossref]

J. Signal Process. Syst. (1)

H. Yin, H. Jia, J. Zhou, and Z. Gao, “Survey on algorithms and VLSI architectures for MPEG-like video coders,” J. Signal Process. Syst. 88, 357–410 (2017).
[Crossref]

Math. Control Signals Syst. (1)

G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Math. Control Signals Syst. 2, 303–314 (1989).
[Crossref]

Multiscale Model. Simul. (2)

A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[Crossref]

S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Model. Simul. 4, 460–489 (2005).
[Crossref]

Nat. Photonics (1)

M.-B. Lien, C.-H. Liu, I. Y. Chun, S. Ravishankar, H. Nien, M. Zhou, J. A. Fessler, Z. Zhong, and T. B. Norris, “Ranging and light field imaging with transparent photodetectors,” Nat. Photonics 14, 143–148 (2020).
[Crossref]

Nature (5)

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of Go without human knowledge,” Nature 550, 354–359 (2017).
[Crossref]

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533 (1986).
[Crossref]

D. Psaltis, D. Brady, X.-G. Gu, and S. Lin, “Holography in artificial neural networks,” Nature 343, 325–330 (1990).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref]

D. J. Brady, M. Gehm, R. Stack, D. Marks, D. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486, 386–389 (2012).
[Crossref]

Neural Comput. (1)

G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput. 18, 1527–1554 (2006).
[Crossref]

Neural Netw. (1)

V. Agarwal, A. V. Gribok, and M. A. Abidi, “Machine learning approach to color constancy,” Neural Netw. 20, 559–563 (2007).
[Crossref]

Neurocomputing (1)

J. Misra and I. Saha, “Artificial neural networks in hardware: a survey of two decades of progress,” Neurocomputing 74, 239–255 (2010).
[Crossref]

Opt. Eng. (1)

D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
[Crossref]

Opt. Express (8)

Opt. Lett. (5)

Opt. Photon. News (1)

J. Mait, “A history of imaging: revisiting the past to chart the future,” Opt. Photon. News 17(2), 22–27 (2006).
[Crossref]

Optica (4)

Pattern Recogn. (1)

K.-S. Oh and K. Jung, “GPU implementation of neural networks,” Pattern Recogn. 37, 1311–1314 (2004).
[Crossref]

Phys. Med. (1)

M. Beister, D. Kolditz, and W. A. Kalender, “Iterative reconstruction methods in X-ray CT,” Phys. Med. 28, 94–108 (2012).
[Crossref]

Physica D (1)

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Proc. IEEE (3)

P. Pirsch, N. Demassieux, and W. Gehrke, “VLSI architectures for video compression—a survey,” Proc. IEEE 83, 220–246 (1995).
[Crossref]

C. Mead, “Neuromorphic electronic systems,” Proc. IEEE 78, 1629–1636 (1990).
[Crossref]

R. T. Collins, A. J. Lipton, H. Fujiyoshi, and T. Kanade, “Algorithms for cooperative multisensor surveillance,” Proc. IEEE 89, 1456–1477 (2001).
[Crossref]

Proc. SPIE (9)

S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” Proc. SPIE 5108, 1–12 (2003).
[Crossref]

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE 6065, 606509 (2006).
[Crossref]

B.-K. Park, S.-S. Kim, D.-S. Chung, S.-D. Lee, and C.-Y. Kim, “Fast and accurate auto focusing algorithm based on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
[Crossref]

Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, “Evaluation of sharpness measures and search algorithms for the auto focusing of high-magnification images,” Proc. SPIE 6246, 62460G (2006).
[Crossref]

K. Briechle and U. D. Hanebeck, “Template matching using fast normalized cross correlation,” Proc. SPIE 4387, 95–102 (2001).
[Crossref]

B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008).
[Crossref]

D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Minimized-Laplacian residual interpolation for color image demosaicking,” Proc. SPIE 9023, 90230L (2014).
[Crossref]

O. Kapah and H. Z. Hel-Or, “Demosaicking using artificial neural networks,” Proc. SPIE 3962, 112–120 (2000).
[Crossref]

X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: a systematic survey,” Proc. SPIE 6822, 68221J (2008).
[Crossref]

Psychol. Rev. (1)

F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychol. Rev. 65, 386–408 (1958).
[Crossref]

Real-Time Imaging (1)

N. Kehtarnavaz and H.-J. Oh, “Development and real-time implementation of a rule-based auto-focus algorithm,” Real-Time Imaging 9, 197–203 (2003).
[Crossref]

Sci. Am. (1)

E. H. Land, “The retinex theory of color vision,” Sci. Am. 237, 108–129 (1977).
[Crossref]

Sci. Innov. (1)

O. Baltag, “History of automatic focusing reflected by patents,” Sci. Innov. 3,1–17 (2015).
[Crossref]

Sci. Rep. (2)

L. Wei and E. Roberts, “Neural network control of focal position during time-lapse microscopy of cells,” Sci. Rep. 8, 7313 (2018).
[Crossref]

J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8, 12324 (2018).
[Crossref]

Science (2)

X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361, 1004–1008 (2018).
[Crossref]

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science 313, 504–507 (2006).
[Crossref]

Signal Process. Image Commun. (1)

D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Process. Image Commun. 26, 518–533 (2011).
[Crossref]

Transp. Res. C (1)

M. Zhu, X. Wang, and Y. Wang, “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. C 97, 348–368 (2018).
[Crossref]

Other (154)

S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” arXiv:1610.03295 (2016).

D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2018), pp. 2034–2039.

K. Makantasis, M. Kontorinaki, and I. Nikolos, “A deep reinforcement learning driving policy for autonomous road vehicles,” arXiv:1905.09046 (2019).

C. Paxton, V. Raman, G. D. Hager, and M. Kobilarov, “Combining neural networks and tree search for task and motion planning in challenging environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2017), pp. 6059–6066.

P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6077–6086.

C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “DeepDriving: learning affordance for direct perception in autonomous driving,” in IEEE International Conference on Computer Vision (2015), pp. 2722–2730.

M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to end learning for self-driving cars,” arXiv:1604.07316 (2016).

M. Bojarski, P. Yeres, A. Choromanska, K. Choromanski, B. Firner, L. Jackel, and U. Muller, “Explaining how a deep neural network trained with end-to-end learning steers a car,” arXiv:1704.07911 (2017).

H. Xu, Y. Gao, F. Yu, and T. Darrell, “End-to-end learning of driving models from large-scale video datasets,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2174–2182.

“Bayer area scan color cameras compared to 3-CCD color cameras,” 2020, https://www.adimec.com/bayer-area-scan-color-cameras-compared-to-3-ccd-color-cameras-part-1/ .

D. Güera and E. J. Delp, “Deepfake video detection using recurrent neural networks,” in 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (IEEE, 2018), pp. 1–6.

N.-S. Syu, Y.-S. Chen, and Y.-Y. Chuang, “Learning deep convolutional networks for demosaicing,” arXiv:1802.03769 (2018).

C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3291–3300.

S. Sukegawa, T. Umebayashi, T. Nakajima, H. Kawanobe, K. Koseki, I. Hirota, T. Haruta, M. Kasai, K. Fukumoto, T. Wakano, K. Inoue, H. Takahashi, T. Nagano, Y. Nitta, T. Hirayama, and N. Fukushima, “A 1/4-inch 8 mpixel back-illuminated stacked CMOS image sensor,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2013), pp. 484–485.

Y.-Q. Wang, “A multilayer neural network for image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2014), pp. 1852–1856.

F.-L. He, Y.-C. F. Wang, and K.-L. Hua, “Self-learning approach to color demosaicking via support vector regression,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2765–2768.

R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in IEEE International Conference on Multimedia and Expo (ICME) (IEEE, 2017), pp. 793–798.

Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), pp. 3861–3865.

I. Pekkucuksen and Y. Altunbasak, “Gradient based threshold free color filter array interpolation,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 137–140.

J. C. Kim and J. W. Chong, “Facial contour correcting method and device,” U.S.patent app.16/304,337 (4July2019).

A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in 34th International Conference on Machine Learning (JMLR, 2017), Vol. 70, pp. 537–546.

D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.

Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018).

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in IEEE International Conference on Computational Photography (ICCP) (2010), pp. 1–8.

M. Aittala and F. Durand, “Burst image deblurring using permutation invariant convolutional neural networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 731–747.

E. Y. Lam, “Combining gray world and retinex theory for automatic white balance in digital photography,” in 9th International Symposium on Consumer Electronics (ISCE) (IEEE, 2005), pp. 134–139.

C.-C. Weng, H. Chen, and C.-S. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE International Symposium on Circuits and Systems (IEEE, 2005), pp. 3801–3804.

E. Y. Lam and G. S. Fung, “Automatic white balancing in digital photography,” in Single-Sensor Imaging (CRC Press, 2018), pp. 287–314.

S. Bianco, C. Cusano, and R. Schettini, “Color constancy using CNNS,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2015), pp. 81–89.

W. Shi, C. C. Loy, and X. Tang, “Deep specialized network for illuminant estimation,” in European Conference on Computer Vision (Springer, 2016), pp. 371–387.

Y. Hu, B. Wang, and S. Lin, “FC4: fully convolutional color constancy with confidence-weighted pooling,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4085–4094.

L. Condat and S. Mosaddegh, “Joint demosaicking and denoising by total variation minimization,” in 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 2781–2784.

T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock, “Learning joint demosaicing and denoising based on sequential energy minimization,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

S. H. Park, H. S. Kim, S. Lansel, M. Parmar, and B. A. Wandell, “A case for denoising before demosaicking color filter array data,” in Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (IEEE, 2009), pp. 860–864.

A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian, “Cross-color BM3D filtering of noisy raw data,” in International Workshop on Local and Non-local Approximation in Image Processing (IEEE, 2009), pp. 125–129.

F. Kokkinos and S. Lefkimmiatis, “Deep image demosaicking using a cascade of convolutional residual denoising networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 303–319.

F. Kokkinos and S. Lefkimmiatis, “Iterative residual network for deep joint image demosaicking and denoising,” arXiv:1807.06403 (2018).

W. Dong, M. Yuan, X. Li, and G. Shi, “Joint demosaicing and denoising with perceptual optimization on a generative adversarial network,” arXiv:1802.04723 (2018).

S. Lefkimmiatis, “Universal denoising networks: a novel CNN architecture for image denoising,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 3204–3213.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3929–3938.

V. Jain and S. Seung, “Natural image denoising with convolutional networks,” in Advances in Neural Information Processing Systems (2009), pp. 769–776.

H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 341–349.

B. Ahn and N. I. Cho, “Block-matching convolutional neural network for image denoising,” arXiv:1704.00524 (2017).

J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in International Conference on Computer Vision (ICCV) (2009), Vol. 29, pp. 54–62.

U. Schmidt and S. Roth, “Shrinkage fields for effective image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 2774–2781.

X. Lan, S. Roth, D. Huttenlocher, and M. J. Black, “Efficient belief propagation with learned higher-order Markov random fields,” in European Conference on Computer Vision (Springer, 2006), pp. 269–282.

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2 (IEEE, 2005), pp. 60–65.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.

G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: ultra-deep neural networks without residuals,” arXiv:1605.07648 (2016).

M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from AlexNet: a comprehensive survey on deep learning approaches,” arXiv:1803.01164 (2018).

M. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry (MIT, 2017).

P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015).

Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems (1990), pp. 396–404.

Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in IEEE International Symposium on Circuits and Systems (IEEE, 2010), pp. 253–256.

P. Grohs, D. Perekrestenko, D. Elbrächter, and H. Bölcskei, “Deep neural network approximation theory,” arXiv:1901.02220 (2019).

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005), pp. 1–11.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, F. Mujica, A. Coates, and A. Y. Ng, “An empirical evaluation of deep learning on highway driving,” arXiv:1504.01716 (2015).

M. Harwit, Hadamard Transform Optics (Elsevier, 2012).

R. Raskar and J. Tumblin, Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors (AK Peters, 2009).

M. Buckler, S. Jayasuriya, and A. Sampson, “Reconfiguring the imaging pipeline for computer vision,” in IEEE International Conference on Computer Vision (ICCV) (2017).

C. Wu, L. F. Isikdogan, S. Rao, B. Nayak, T. Gerasimow, A. Sutic, L. Ain-kedem, and G. Michael, “VisionISP: Repurposing the image signal processor for computer vision applications,” in IEEE International Conference on Image Processing (ICIP) (2019), pp. 4624–4628.

D. Moloney, B. Barry, R. Richmond, F. Connor, C. Brick, and D. Donohoe, “Myriad 2: eye of the computational vision storm,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–18.

M. Ditty, T. Architecture, J. Montrym, and C. Wittenbrink, “NVIDIA’s Tegra K1 system-on-chip,” in IEEE Hot Chips 26 Symposium (HCS) (IEEE, 2014), pp. 1–26.

J. Redgrave, A. Meixner, N. Goulding-Hotta, A. Vasilyev, and O. Shacham, “Pixel visual core: Google’s fully programmable image vision and AI processor for mobile devices,” in Hot Chips: A Symposium on High Performance Chips (2018).

“Ambarella CV22 product brief,” 2018, https://www.ambarella.com/wp-content/uploads/CV22-product-brief-consumer.pdf .

A. Gulli and S. Pal, Deep Learning with Keras (Packt, 2017).

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016), pp. 265–283.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” in 22nd ACM International Conference on Multimedia (ACM, 2014), pp. 675–678.

A. Paszke, S. Gross, S. Chintala, and G. Chanan, PyTorch: Tensors and Dynamic Neural Networks in Python with Strong GPU Acceleration (2017), Vol. 6.

“Kodak photocd dataset,” http://r0k.us/graphics/kodak/ .

D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).

B. E. Bayer, “Color imaging array,” U.S. patent3,971,065 (20July1976).

X. Yuan, Y. Liu, J. Suo, and Q. Dai, “Plug-and-play algorithms for large-scale snapshot compressive imaging,”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 1447–1457.

S. K. Nayar and T. Mitsunaga, “High dynamic range imaging: spatially varying pixel exposures,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2000), vol. 1, pp. 472–479.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in International Conference on Computer Vision (IEEE, 2011), pp. 287–294.

D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: programmable pixel compressive camera for high speed imaging,” in Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2011), pp. 329–336.

J. Zbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1592–1599.

S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4353–4361.

P. Fischer, A. Dosovitskiy, and T. Brox, “Descriptor matching with convolutional neural networks: a comparison to sift,” arXiv:1405.5769 (2014).

D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv:1606.03798 (2016).

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution,” in British Machine Vision Conference (2017).

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in IEEE International Conference on Computational Photography (IEEE, 2014), pp. 1–10.

Y. Wang, Y. Liu, W. Heidrich, and Q. Dai, “The light field attachment: turning a DSLR into a light field camera using a low budget camera ring,” in IEEE Transactions on Visualization and Computer Graphics (IEEE, 2016).

A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE Computer Society, 2005), Vol. 2, pp. 60–65.

J. Wu, H. Wang, X. Wang, and Y. Zhang, “A novel light field super-resolution framework based on hybrid imaging system,” in Visual Communications and Image Processing (IEEE, 2015), pp. 1–4.

H. Zheng, M. Guo, H. Wang, Y. Liu, and L. Fang, “Combining exemplar-based approach and learning-based approach for light field super-resolution using a hybrid imaging system,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2481–2486.

H. Zheng, M. Ji, H. Wang, Y. Liu, and L. Fang, “CrossNet: an end-to-end reference-based super resolution network using cross-scale warping,” in European Conference on Computer Vision (ECCV) (ECCV, 2018), pp. 87–104.

P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, “Learning to synthesize a 4D RGBD light field from a single image,” IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October22–29, 2017, pp. 2262–2270.

O. S. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2011), pp. 1–8.

D. G. Lowe, “Object recognition from local scale-invariant features,” in 7th IEEE International Conference on Computer Vision (1999), Vol. 2, pp. 1150–1157.

H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” in European Conference on Computer Vision (ECCV), A. Leonardis, H. Bischof, and A. Pinz, eds. (Springer,2006), pp. 404–417.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to sift or surf,” in International Conference on Computer Vision (2011), pp. 2564–2571.

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in 7th International Joint Conference on Artificial Intelligence (IJCAI) (Morgan Kaufmann, 1981), Vol. 2, pp. 674–679.

D. Pollock, P. Reardon, T. Rogers, C. Underwood, G. Egnal, B. Wilburn, and S. Pitalo, “Multi-lens array system and method,” U.S. patent9,182,228 (10November2015).

R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision (Springer, 2016), pp. 649–666.

D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Advances in Neural Information Processing Systems (2014), pp. 2366–2374.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.

L. A. Teodosio and M. Mills, “Panoramic overviews for navigating real-world scenes,” in 1st ACM International Conference on Multimedia (1993), pp. 359–364.

S. Mann and R. W. Picard, “Virtual bellows: constructing high quality stills from video,” in 1st International Conference on Image Processing (1994), Vol. 1, pp. 363–367.

R. Szeliski, “Image mosaicing for tele-reality applications,” in IEEE Workshop on Applications of Computer Vision (1994), pp. 44–53.

D. Capel and A. Zisserman, “Automated mosaicing with super-resolution zoom,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1998), Vol. 98, pp. 885–891.

M. Brown and D. G. Lowe, “Recognising panoramas,” in IEEE International Conference on Computer Vision (ICCV) (2003), Vol. 3, p. 1218.

A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: learning optical flow with convolutional networks,” in IEEE International Conference on Computer Vision (2015), pp. 2758–2766.

E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: evolution of optical flow estimation with deep networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 2462–2470.

A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4161–4170.

D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-net: CNNS for optical flow using pyramid, warping, and cost volume,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8934–8943.

J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible image stitching with moving DLT,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2339–2346.

C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive as-natural-as-possible image stitching,” in IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1155–1163.

X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, “Multiscale gigapixel video: a cross resolution image matching and warping approach,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2017), pp. 1–9.

M. Afifi, “Semantic white balance: semantic color constancy using convolutional neural network,” arXiv:1802.00153 (2018).

S. Wang, Y. Zhang, P. Deng, and F. Zhou, “Fast automatic white balancing method by color histogram stretching,” in 4th International Congress on Image and Signal Processing (IEEE, 2011), Vol. 2, pp. 979–983.

S.-C. Tai, T.-W. Liao, Y.-Y. Chang, and C.-P. Yeh, “Automatic white balance algorithm through the average equalization and threshold,” in 8th International Conference on Information Science and Digital Content Technology (ICIDT) (IEEE, 2012), Vol. 3, pp. 571–576.

D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 349–356.

J. Yang, Z. Lin, and S. Cohen, “Fast image super-resolution based on in-place example regression,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 1059–1066.

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision (Springer, 2014), pp. 184–199.

J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017).

Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2472–2481.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in European Conference on Computer Vision (ECCV) (2018).

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in European Conference on Computer Vision (ECCV) (2018), pp. 286–301.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4681–4690.

J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1637–1645.

Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3147–3155.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1664–1673.

T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

X. Hu, H. Mu, X. Zhang, Z. Wang, T. Tan, and J. Sun, “Meta-SR: a magnification-arbitrary network for super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1575–1584.

K. Zhang, L. Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 3214–3223.

C. Chen, Z. Xiong, X. Tian, Z. Zha, and F. Wu, “Camera lens super-resolution,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1652–1660.

K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2808–2817.

K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1671–1681.

K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” arXiv abs/2008.13751 (2020).

R. Gutierrez, E. Fossum, and T. Tang, “Auto-focus technology,” in International Image Sensor Workshop (2007), pp. 20–25.

R. Cicala, “Sony FE 135mm f1.8 GM early MTF results,” 2019, https://www.lensrentals.com/blog/2019/03/sony-fe-135mm-f1-8-gm-early-mtf-results/ .

Z. Ji-Yan, H. Yuan-Qing, X. Fei-Bing, and M. Xian-Guo, “Design of 10 mega-pixel mobile phone lens,” in 3rd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IEEE, 2013), pp. 569–573.

X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-net: reconstruct hyperspectral images from a snapshot measurement,” in IEEE/CVF Conference on Computer Vision (ICCV) (2019), Vol. 1.

J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv:1802.01436 (2018).

O. Rippel and L. Bourdev, “Real-time adaptive image compression,” in International Conference on Machine Learning (ICML, 2017), pp. 2922–2930.

F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).

M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 3214–3223.

H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018), pp. 2575–2578.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in 37th Asilomar Conference on Signals, Systems & Computers (IEEE, 2003), Vol. 2, pp. 1398–1402.

H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” arXiv:1904.09757 (2019).

H. Liu, T. Chen, Q. Shen, and Z. Ma, “Practical stacked non-local attention modules for image compression,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019).

T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “DeepCoder: A deep neural network based video compression,” in IEEE Visual Communications and Image Processing (VCIP) (IEEE, 2017), pp. 1–4.

G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: an end-to-end deep video compression framework,” in IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 11006–11015.

H. Liu, T. Chen, M. Lu, Q. Shen, and Z. Ma, “Neural video compression using spatio-temporal priors,” arXiv:1902.07383 (2019).

Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “CodedVision: towards joint image understanding and compression via end-to-end learning,” in Pacific Rim Conference on Multimedia (Springer, 2018), pp. 3–14.

X. Yan, D. J. Brady, J. Wang, C. Huang, Z. Li, S. Yan, D. Liu, and Z. Ma, “Compressive sampling for array cameras,” arXiv:1908.10903 (2019).

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (19)

Figure 1.
Figure 1. Demosaicing example. (a) Demosaicing code. (b) Illustration of the sample network.
Figure 2.
Figure 2. Camera pipeline [100]. The blue bounding box illustrates the functions of a typical ISP.
Figure 3.
Figure 3. Results of representative demosaicing methods, including adaptive homogeneity-directed demosaicing (AHD) [111], gradient-based threshold free color filter array interpolation (GBTF) [112], directional linear minimum mean square-error estimation (DLMMSE) [113], local directional interpolation and nonlocal adaptive thresholding (LDI-NAT) [52], minimized-Laplacian residual interpolation (MLRI) [114], adaptive residual interpolation (ARI) [115], and deep residual learning (DRL) [110]. The zoomed region of “railings” demonstrates the success of deep learning in [110] significantly reducing artifacts.
Figure 4.
Figure 4. Comparisons among (a) noisy image, (b) TLSdemosaic [138], (c) TV [139], and (d) learning-based method [141] in joint demosaicing and denoising.
Figure 5.
Figure 5. Singe image super-resolution using the generative adversarial network described in [174].
Figure 6.
Figure 6. Reconstruction of snapshot compressed video using plug and play generalized alternating projections (PnP-GAP) and the deep-learning denoiser FFDNet. Forty-eight video frames are reconstructed from a single coded aperture modulated frame [69]. The image at the upper left shows the raw coded data captured to produce the image.
Figure 7.
Figure 7. Illustration of neural image and video compression. (a) Neural image compression (NIC). (b) Neural video compression (NVC).
Figure 8.
Figure 8. Example 4K RGB images from noncompressed raw Bayer data with ISPs (left column), DLACS-only compressed raw Bayer data followed by decompression and ISPs (middle column), and DLACS-JPEG-hybrid compressed raw Bayer data followed by decompression and ISPs (right column). Figures from [211].
Figure 9.
Figure 9. Image correspondence methods, including (a) template matching [253], (b) speeded up robust features [242], (c) variational optical flow [248], and (d) PWC-net optical flow [252]. In (c) and (d) the left/upper image is the transparent overlay of input frames, and the right/bottom image is the estimated flow.
Figure 10.
Figure 10. Illustration of a representative stitching pipeline based on the overlap of adjacent images [262]. (a) Input images, (b) image with features, (c) matched features between adjacent images, (d) warped and aligned images, and (e) blended result.
Figure 11.
Figure 11. Illustration of cross-scale stitching scheme [267]. The input images consist of a low-res large-FoV global image and a set of high-res small-FoV local images. The local images are warped and corrected according to its corresponding part in the global image.
Figure 12.
Figure 12. Illustration of reference-based super-resolution (RefSR) in CrossNet, which performs multiscale warping for feature alignment and synthesis in an end-to-end fashion.
Figure 13.
Figure 13. Qualitative comparisons between SISR and RefSR methods. SISR methods include SRCNN [166], VDSR [169], and MDSR [170]. RefSR methods cover the patch-based PatchMatch [277], SSNet [276], and the flow-based CrossNet [282].
Figure 14.
Figure 14. Demonstration of applying RefSR method in image stitching pipeline. AWnet [284] is used as the RefSR method. (a) Low-res global image, (b) RefSR stitching result, (c) de-parallax capability, (d) recovered details, and (e) defects.
Figure 15.
Figure 15. Overview of the CNN-based AF system. (a) Input to the discriminator or the estimator is a block of size $512 \times 512$ from the image. Focus discriminator: the filter size/number of filters/stride for the two convolutional layers are $8 \times 8/1/8$ and $8 \times 8/1/8$. The dimensions of the fully connected layers are 10 and 2. The dropout rate for the dropout layer is 0.5. Step estimator: the filter size/number of filters/stride for the three convolutional layers are $8 \times 8/4/8$, $4 \times 4/8/4$, and $4 \times 4/8/4$. The dimensions of the fully connected layers are 1024, 512, 10, and 1. (b) The network configuration is tested on a camera module with an Evetar lens (25 mm, F/2.4) and a CMOS image sensor (Sony IMX274 4 K).
Figure 16.
Figure 16. Dynamic focus control system. $I$ represents the measurement from the camera, $J$ represents the fused all-in-focus frame, $F$ represents the focus position, and $R$ represents the reward. The subscript $t$ denotes the time stamp. The fusing algorithm takes in the current measurement and the previous output frame and generates the new all-in-focus frame. In this system, the processing unit acts as the agent, which infers the focus adjustment from the state $\{{F_t},{I_t},{J_t}\}$.
Figure 17.
Figure 17. Distributed control processing architecture for array cameras. Each blue bounding box represents a camera, and a centralized unit (red bounding box) is required to collect and process the captured data from each camera.
Figure 18.
Figure 18. Spectral image sampling strategies. (a) Bayer sampling strategy on the spectral image data cube and (b) multiscale spectral sampling strategy.
Figure 19.
Figure 19. Foveated spatial sampling.

Tables (2)

Tables Icon

Table 1. Quantitative Comparisons between SISR and RefSR on Light-Field Dataset Flower [283], under Different Viewpoints (1,1) and (7,7)a

Tables Icon

Table 2. Efficiency Comparison between the Fibonacci Search [293], the Rule-Based Search [294], and the CNN-Based Methoda

Equations (6)

Equations on this page are rendered with MathJax. Learn more.

y = f ( w x + b ) ,
g = H f + n ,
f est = H ^ g ,
f est = argmin [ | H f est g | 2 + σ ( f est ) ] ,
E [ t = 0 T R t ]
E [ t = 0 γ t R t ] .

Metrics