Abstract

We propose the use of multiplexed illumination to enable the classification of visually similar objects that cannot normally be distinguished. We construct a compact red, green, blue, and near-infrared light stage and develop a method to jointly select informative illumination patterns and train a classifier that uses the resulting images. We use the light stage to model training samples and to synthesize noise-accurate images, which drive both the training process and a time-efficient greedy pattern-selection scheme. The system delivers fast, accurate classification of previously indistinguishable samples, outperforming fixed-illuminant and conventional noise-optimal patterns. This work has potential applications spanning forgery detection and quality control in agriculture and manufacturing.

© 2021 Optical Society of America






Figures (8)

Fig. 1. Visually similar objects, like real and synthetic fruit, fool even the most sophisticated image classifiers. Our active multiplexed illumination scheme allows conventional classifiers to distinguish these challenging examples, and our method for selecting illumination patterns yields fast and accurate classification results.
Fig. 2. (a) We form high-quality models of training objects using the compact light stage, and use these in a physically accurate rendering pipeline to simulate different illumination conditions. This allows us to jointly train a classifier and optimize illumination patterns in simulation. (b) At inference time, only the optimized illumination patterns are used, yielding fast and accurate classification.
Fig. 3. Compact light stage (a) in profile and (b) from above; (c) typical images obtained with a single illumination site active at a time.
Fig. 4. Characterizing imaging noise. The Basler acA1920-150um at 15 dB gain and 30 ms exposure time yields a characteristic (a) that is well described by the affine noise model in Eq. (1). (b) In Section 4.A, we generalize this model following Eq. (2) to three noise characteristics $\Sigma_{1\ldots3}$ corresponding to the typical camera gain and exposure settings in Eqs. (8)–(10). The measured (dark) points conform closely to the predicted noise curves (solid lines). We validate noise synthesis by rendering scenes for each camera setting, yielding the lighter points that agree well with the models.
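As an illustration of the calibration behind Fig. 4, the following is a minimal sketch of fitting the affine noise model of Eq. (1) by least squares on per-pixel mean and variance statistics from repeated captures of a static scene. The routine and its synthetic test data (seeded with the coefficients of Eq. (7)) are our own illustration, not the authors' calibration code.

```python
import numpy as np

def fit_affine_noise_model(frames):
    """Estimate (sigma_p^2, sigma_r^2) of Eq. (1) from repeated frames.

    frames: (num_shots, H, W) captures of a static scene; the per-pixel
    mean approximates I_bar and the per-pixel variance approximates
    sigma^2 at that intensity level.
    """
    mean = frames.mean(axis=0).ravel()        # I_bar per pixel
    var = frames.var(axis=0, ddof=1).ravel()  # sigma^2 per pixel
    # Least-squares fit of the line var = sigma_p^2 * mean + sigma_r^2
    A = np.stack([mean, np.ones_like(mean)], axis=1)
    (sigma_p2, sigma_r2), *_ = np.linalg.lstsq(A, var, rcond=None)
    return sigma_p2, sigma_r2

# Synthetic check using the coefficients of Eq. (7): sigma^2 = 0.7*I_bar + 66
rng = np.random.default_rng(0)
I_bar = rng.uniform(10.0, 200.0, size=(64, 64))
frames = rng.normal(I_bar, np.sqrt(0.7 * I_bar + 66.0), size=(200, 64, 64))
print(fit_affine_noise_model(frames))  # approximately (0.7, 66.0)
```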
Fig. 5. Greedy pattern selection: the illumination matrix (top left) is grown incrementally, one pattern at a time. For each new column, we perform an exhaustive search over all $2^N$ possible new patterns; for each candidate we render the corresponding relit imagery, train and test a classifier, and select the pattern that yields the highest overall accuracy. Training and testing are repeated to make best use of the dataset.
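The selection loop of Fig. 5 reduces to a few lines. The sketch below assumes hypothetical relight(patterns) -> dataset and train_and_test(dataset) -> accuracy callables standing in for the paper's rendering and classification pipeline, and it terminates early once accuracy stops improving, as the captions of Figs. 5 and 7 describe.

```python
from itertools import product

def greedy_select(num_lights, max_patterns, relight, train_and_test):
    """Grow the illumination matrix one pattern at a time (Fig. 5)."""
    selected = []                 # chosen on/off patterns (matrix columns)
    best_overall = 0.0
    for _ in range(max_patterns):
        best_pattern, best_acc = None, -1.0
        # Exhaustive search over all 2^N candidate on/off patterns
        for candidate in product((0, 1), repeat=num_lights):
            accuracy = train_and_test(relight(selected + [candidate]))
            if accuracy > best_acc:
                best_pattern, best_acc = candidate, accuracy
        if best_acc <= best_overall:
            break                 # early termination once accuracy peaks
        selected.append(best_pattern)
        best_overall = best_acc
    return selected, best_overall
```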
Fig. 6. We validate rendering by generating imagery at different illumination conditions and camera settings: (a) SIM, exposure time 16 ms, 3 lights; (c) SIM, exposure time 8 ms, 4 lights; and (e) SIM, exposure time 4.7 ms, 2 lights. We then compare these with corresponding captured imagery: (b) CAPT, exposure time 16 ms, 3 lights; (d) CAPT, exposure time 8 ms, 4 lights; and (f) CAPT, exposure time 4.7 ms, 2 lights. Both the change in appearance under different lighting patterns and the appearance of noise are a very close match.
Fig. 7. Multiplexing patterns optimized for the three noise characteristics $\Sigma_{1\ldots3}$, with white corresponding to a light being on: (a)–(c) patterns optimized with the proposed greedy method for $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$; (d)–(f) SNR-optimal patterns for the same noise characteristics. The proposed method supports early termination of training when performance peaks; the shaded columns indicate patterns that did not improve performance.
Fig. 8. Two experiments on different samples: SIM uses noise-accurate relit images for both training and testing, while CAPT is evaluated on previously unseen samples captured using the trained illumination patterns. Accuracy versus image count shows the proposed method (solid) outperforming the SNR-optimal approach (dashed) across all three noise characteristics $\Sigma_{1\ldots3}$ on synthetic imagery, with similar performance on captured imagery. The proposed approach performs better with fewer images, allowing faster operation at a given accuracy.

Tables (2)


Table 1. Peak Performance for Proposed, SNR-Optimal, and Naive Fixed-Illumination Approaches, Tested on Simulated and Captured Data


Table 2. Training and Inference Time for Proposed and SNR-Optimal Approaches, Tested on Simulated and Captured Data

Equations (10)


$\sigma^2 = \sigma_p^2\,\bar{I} + \sigma_r^2,$ (1)
$\Sigma = \frac{G^2}{G_0^2}\left(\frac{E}{E_0}\,\sigma_p^2\,\bar{I} + \sigma_r^2\right).$ (2)
$\bar{I} = Lw,$ (3)
$I \sim \mathcal{N}(\bar{I}, \Sigma),$ (4)
$\Sigma_W = \mathrm{diag}\!\left(\sigma_p^2\,\bar{R}_{\mathrm{col}}\,W + \sigma_r^2\right),$ (5)
$\mathrm{MSE} = \frac{1}{n}\,\mathrm{tr}\!\left[\left(W^T \Sigma_W^{-1} W\right)^{-1}\right].$ (6)
$\sigma^2 = 0.7\,\bar{I} + 66.$ (7)
$\Sigma_1:\ \sigma_p^2 = 0.25,\ \sigma_r^2 = 8.35,$ (8)
$\Sigma_2:\ \sigma_p^2 = 0.50,\ \sigma_r^2 = 33.23,$ and (9)
$\Sigma_3:\ \sigma_p^2 = 0.94,\ \sigma_r^2 = 117.37.$ (10)
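
To show how Eqs. (3)–(6) fit together, this sketch simulates one noisy multiplexed capture and scores a pattern matrix by its demultiplexing MSE. The array shapes, the rows-as-patterns convention for W, and the treatment of $\bar{R}_{\mathrm{col}}$ as a vector of mean single-light intensities are our assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_capture(L, w, sigma_p2, sigma_r2, rng):
    """Eqs. (3) and (4): noisy image under light transport L and pattern w.

    L: (num_pixels, num_lights) light transport; w: (num_lights,) pattern.
    """
    I_bar = L @ w                                 # Eq. (3): mean image
    sigma = np.sqrt(sigma_p2 * I_bar + sigma_r2)  # affine noise, Eq. (1)
    return rng.normal(I_bar, sigma)               # Eq. (4): sampled image

def demux_mse(W, r_bar, sigma_p2, sigma_r2):
    """Eqs. (5) and (6): demultiplexing MSE for a pattern matrix W.

    W: (num_patterns, num_lights), rows are on/off patterns (Fig. 5 grows
    patterns as columns, so transpose to match that convention).
    r_bar: (num_lights,) mean single-light intensities (R_bar_col).
    """
    Sigma_W = np.diag(sigma_p2 * (W @ r_bar) + sigma_r2)  # Eq. (5)
    M = W.T @ np.linalg.inv(Sigma_W) @ W
    return np.trace(np.linalg.inv(M)) / W.shape[1]        # Eq. (6)
```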
