Abstract

High-throughput imaging provides the observations required for accurate statements on biological phenomena and has been applied successfully in the domain of cells, i.e. cytomics. In the domain of whole organisms, hurdles remain before imaging can be accomplished with sufficient throughput and reproducibility. For vertebrate biology, zebrafish is a popular model system for high-throughput applications. The development of the Vertebrate Automated Screening Technology (VAST BioImager), a microscope-mounted system, enables high-throughput screening of zebrafish. The VAST BioImager holds a zebrafish in a capillary for imaging; by rotating the capillary, multiple axial views of a specimen can be acquired. The VAST BioImager is used in combination with fluorescence and/or confocal microscopes. Quantitation of a specific signal derived from a label in one fluorescent channel requires insight into the zebrafish volume, so that the quantitation can be normalized to volume units. However, from the setup of the VAST BioImager, a specimen's volume cannot be straightforwardly derived. We present a high-throughput axial-view imaging architecture based on the VAST BioImager and propose a profile-based 3D reconstruction that produces volumetric representations of zebrafish larvae from the axial views. Volume and surface area are then derived from the 3D reconstruction to obtain shape characteristics in high-throughput measurements. In addition, we develop a calibration and a validation of our methodology. Our measurements show that accurate volume and surface area measurements for zebrafish larvae can be obtained from a limited number of views. We have applied the proposed method to a range of zebrafish developmental stages and produced metrical references for the volume and surface area of each stage.
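The profile-based reconstruction summarized above can be illustrated with a space-carving sketch: each axial view contributes a silhouette, a voxel grid is carved so that only voxels projecting inside every silhouette survive, and the volume follows from the surviving voxel count. The snippet below is a minimal illustration under simplifying assumptions (orthographic projection, rotation about the capillary's long axis); the function and parameter names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def visual_hull_volume(silhouettes, angles, voxel_size=1.0):
    """Estimate specimen volume from axial-view silhouettes by space carving.

    silhouettes: list of 2D boolean masks (rows along the capillary axis,
                 columns in the lateral direction), one per rotation angle.
    angles: capillary rotation angle (radians) for each view.
    Assumes orthographic projection and rotation about the long axis.
    """
    h, w = silhouettes[0].shape
    # Voxel grid: x runs along the capillary axis, (y, z) span the cross-section.
    y, z = np.meshgrid(np.arange(w) - w / 2, np.arange(w) - w / 2, indexing="ij")
    inside = np.ones((h, w, w), dtype=bool)
    for mask, theta in zip(silhouettes, angles):
        # Rotate cross-section coordinates into this view's projection frame.
        u = y * np.cos(theta) - z * np.sin(theta) + w / 2
        cols = np.clip(np.round(u).astype(int), 0, w - 1)
        # A voxel survives only if it projects inside the silhouette in every view.
        inside &= mask[:, cols]
    # Volume in physical units; surface area could be measured on a mesh
    # extracted from `inside` (e.g. by marching cubes).
    return inside.sum() * voxel_size ** 3
```

With a handful of evenly spaced angles the carved hull already approximates a convex cross-section closely, consistent with the paper's observation that a limited number of views suffices for accurate volume measurement.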

© 2017 Optical Society of America



2014 (1)

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

2013 (2)

L. Cao and F. J. Verbeek, “Analytical evaluation of algorithms for point cloud surface reconstruction using shape features,” J. Electronic Imaging 22(4), 043008 (2013).
[Crossref]

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

2012 (1)

K. Kolev, T. Brox, and D. Cremers, “Fast joint estimation of silhouettes and dense 3D geometry from multiple images,” IEEE Trans. Pattern Analysis and Machine Intelligence 34(3), 493–505 (2012).
[Crossref]

2011 (1)

D. Cremers and K. Kolev, “Multiview stereo and silhouette consistency through convex functionals on convex domains,” IEEE Trans. Pattern Analysis and Machine Intelligence 33(6), 1161–1174 (2011).
[Crossref]

2010 (1)

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

2009 (1)

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

2007 (3)

C. Hernández, F. Schmitt, and R. Cipolla, “Silhouette coherence for camera calibration under circular motion,” IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 343–349 (2007).
[Crossref]

S. Lazebnik, Y. Furukawa, and J. Ponce, “Projective visual hulls,” International Journal of Computer Vision 74(2), 137–165 (2007).
[Crossref]

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision 74(1), 59–73 (2007).
[Crossref]

2002 (1)

D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence 24(5), 603–619 (2002).
[Crossref]

2001 (2)

N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation 9(2), 159–195 (2001).
[Crossref] [PubMed]

H. P. Lensch, W. Heidrich, and H. P. Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models 63(4), 245–262 (2001).
[Crossref]

2000 (2)

K. N. Kutulakos and S. M. Seitz, “A theory of shape by space carving,” International Journal of Computer Vision 38(3), 199–218 (2000).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. pattern analysis and machine intelligence,  22(11), 1330–1334 (2000).
[Crossref]

1998 (1)

J. C. Lagarias, A. James, M. H. Reeds, H. Wright, and E. Paul, “Wright Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

1997 (1)

V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision 22(1), 61–79 (1997).
[Crossref]

1995 (1)

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

1994 (1)

A. Laurentini, “The visual hull concept for silhouette-based image understanding,” IEEE Trans. Pattern Analysis and Machine Intelligence 16(2), 150–162 (1994).
[Crossref]

1993 (1)

R. Szeliski, “Rapid octree construction from image sequences,” CVGIP: Image understanding,  58(1), 23–32 (1993).
[Crossref]

1983 (1)

W. N. Martin and J. K. Aggarwal, “Volumetric descriptions of objects from multiple views,” IEEE Trans. Pattern Analysis and Machine Intelligence 5(2), 150 (1983).
[Crossref]

Aggarwal, J. K.

W. N. Martin and J. K. Aggarwal, “Volumetric descriptions of objects from multiple views,” IEEE Trans. Pattern Analysis and Machine Intelligence 5(2), 150 (1983).
[Crossref]

Allalou, A.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

Barr, A. H.

M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” in the 26th Annual Conference on Computer Graphics and Interactive Ttechniques (ACM Press/Addison-Wesley Publishing Co.1999), pp. 317–324.

Boyer, E.

J. S. Franco and E. Boyer, “Exact polyhedral visual hulls,” in British Machine Vision Conference (BMVA2003), pp. 329–338.

Brown, M.

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision 74(1), 59–73 (2007).
[Crossref]

Brox, T.

K. Kolev, T. Brox, and D. Cremers, “Fast joint estimation of silhouettes and dense 3D geometry from multiple images,” IEEE Trans. Pattern Analysis and Machine Intelligence 34(3), 493–505 (2012).
[Crossref]

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

K. Kolev, T. Brox, and D. Cremers, “Robust variational segmentation of 3D objects from multiple views,” in Joint Pattern Recognition Symposium (SpringerBerlin Heidelberg2006), pp. 688–697.
[Crossref]

Buehler, C.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer graphics and interactive techniques (ACM2000), pp. 369–374.

Cai, F.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE2016).

Cao, L.

L. Cao and F. J. Verbeek, “Analytical evaluation of algorithms for point cloud surface reconstruction using shape features,” J. Electronic Imaging 22(4), 043008 (2013).
[Crossref]

Caselles, V.

V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision 22(1), 61–79 (1997).
[Crossref]

Chang, T. Y.

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Cipolla, R.

C. Hernández, F. Schmitt, and R. Cipolla, “Silhouette coherence for camera calibration under circular motion,” IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 343–349 (2007).
[Crossref]

G. Vogiatzis, C. Hernandez, and R. Cipolla, “Reconstruction in the round using photometric normals and silhouettes,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE2006), pp. 1847–1854.

Comaniciu, D.

D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence 24(5), 603–619 (2002).
[Crossref]

Cremers, D.

K. Kolev, T. Brox, and D. Cremers, “Fast joint estimation of silhouettes and dense 3D geometry from multiple images,” IEEE Trans. Pattern Analysis and Machine Intelligence 34(3), 493–505 (2012).
[Crossref]

D. Cremers and K. Kolev, “Multiview stereo and silhouette consistency through convex functionals on convex domains,” IEEE Trans. Pattern Analysis and Machine Intelligence 33(6), 1161–1174 (2011).
[Crossref]

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

K. Kolev, T. Brox, and D. Cremers, “Robust variational segmentation of 3D objects from multiple views,” in Joint Pattern Recognition Symposium (SpringerBerlin Heidelberg2006), pp. 688–697.
[Crossref]

Cross, G.

A. W. Fitzgibbon, G. Cross, and A. Zisserman, “Automatic 3D model construction for turn-table sequences,” in European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (SpringerBerlin Heidelberg1998), pp. 155–170.

de Sonneville, J.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Deriche, R.

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

Desbrun, M.

M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” in the 26th Annual Conference on Computer Graphics and Interactive Ttechniques (ACM Press/Addison-Wesley Publishing Co.1999), pp. 317–324.

Eimon, P. M.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

Faugeras, O.

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

Fitzgibbon, A. W.

A. W. Fitzgibbon, G. Cross, and A. Zisserman, “Automatic 3D model construction for turn-table sequences,” in European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (SpringerBerlin Heidelberg1998), pp. 155–170.

Franco, J. S.

J. S. Franco and E. Boyer, “Exact polyhedral visual hulls,” in British Machine Vision Conference (BMVA2003), pp. 329–338.

Furukawa, Y.

S. Lazebnik, Y. Furukawa, and J. Ponce, “Projective visual hulls,” International Journal of Computer Vision 74(2), 137–165 (2007).
[Crossref]

Y. Furukawa and J. Ponce, “Carved visual hulls for image-based modelling,” in European Conference on Computer Vision (SpringerBerlin Heidelberg2006), pp. 564–577.

Galliani, S.

S. Galliani, K. Lasinger, and K. Schindler, “Massively parallel multiview stereopsis by surface normal diffusion,” in IEEE International Conference on Computer Vision (IEEE2015), pp. 873–881.

Gilleland, C. L.

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Gortler, S. J.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer graphics and interactive techniques (ACM2000), pp. 369–374.

Guo, Y.

Y. Guo, W. J. Veneman, H. P. Spaink, and V. J. Verbeek, “Silhouette-based 3D model for zebrafish High-throughput imaging,” in IEEE International Conference on Image Processing Theory, Tools and Applications (IEEE2015), pp. 403–408.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE2016).

Häne, C.

N. Savinov, C. Häne, and M. Pollefeys, “Discrete optimization of ray potentials for semantic 3d reconstruction,” in IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2015), pp. 5511–5518.

Hansen, N.

N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation 9(2), 159–195 (2001).
[Crossref] [PubMed]

N. Hansen and S. Kern, “Evaluating the CMA evolution strategy on multi-modal test functions,” in International Conference on Parallel Problem Solving from Nature (SpringerBerlin Heidelberg2004), pp. 282–291.

Heidrich, W.

H. P. Lensch, W. Heidrich, and H. P. Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models 63(4), 245–262 (2001).
[Crossref]

Hernandez, C.

G. Vogiatzis, C. Hernandez, and R. Cipolla, “Reconstruction in the round using photometric normals and silhouettes,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE2006), pp. 1847–1854.

Hernández, C.

C. Hernández, F. Schmitt, and R. Cipolla, “Silhouette coherence for camera calibration under circular motion,” IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 343–349 (2007).
[Crossref]

Hoogenboom, J.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE2016).

James, A.

J. C. Lagarias, A. James, M. H. Reeds, H. Wright, and E. Paul, “Wright Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Jong-Raadsen, S.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Kern, S.

N. Hansen and S. Kern, “Evaluating the CMA evolution strategy on multi-modal test functions,” in International Conference on Parallel Problem Solving from Nature (SpringerBerlin Heidelberg2004), pp. 282–291.

Kimmel, R.

V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision 22(1), 61–79 (1997).
[Crossref]

Klodt, M.

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

Kolev, K.

K. Kolev, T. Brox, and D. Cremers, “Fast joint estimation of silhouettes and dense 3D geometry from multiple images,” IEEE Trans. Pattern Analysis and Machine Intelligence 34(3), 493–505 (2012).
[Crossref]

D. Cremers and K. Kolev, “Multiview stereo and silhouette consistency through convex functionals on convex domains,” IEEE Trans. Pattern Analysis and Machine Intelligence 33(6), 1161–1174 (2011).
[Crossref]

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

K. Kolev, T. Brox, and D. Cremers, “Robust variational segmentation of 3D objects from multiple views,” in Joint Pattern Recognition Symposium (SpringerBerlin Heidelberg2006), pp. 688–697.
[Crossref]

Koo, B. K.

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Kutulakos, K. N.

K. N. Kutulakos and S. M. Seitz, “A theory of shape by space carving,” International Journal of Computer Vision 38(3), 199–218 (2000).
[Crossref]

Lagarias, J. C.

J. C. Lagarias, A. James, M. H. Reeds, H. Wright, and E. Paul, “Wright Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Lamers, G.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE2016).

Lasinger, K.

S. Galliani, K. Lasinger, and K. Schindler, “Massively parallel multiview stereopsis by surface normal diffusion,” in IEEE International Conference on Computer Vision (IEEE2015), pp. 873–881.

Laurentini, A.

A. Laurentini, “The visual hull concept for silhouette-based image understanding,” IEEE Trans. Pattern Analysis and Machine Intelligence 16(2), 150–162 (1994).
[Crossref]

Lazebnik, S.

S. Lazebnik, Y. Furukawa, and J. Ponce, “Projective visual hulls,” International Journal of Computer Vision 74(2), 137–165 (2007).
[Crossref]

Lensch, H. P.

H. P. Lensch, W. Heidrich, and H. P. Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models 63(4), 245–262 (2001).
[Crossref]

Lowe, D. G.

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision 74(1), 59–73 (2007).
[Crossref]

Luong, Q. T.

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

Marin-Juez, R.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Martin, W. N.

W. N. Martin and J. K. Aggarwal, “Volumetric descriptions of objects from multiple views,” IEEE Trans. Pattern Analysis and Machine Intelligence 5(2), 150 (1983).
[Crossref]

Matusik, W.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer graphics and interactive techniques (ACM2000), pp. 369–374.

Matuszewski, B. J.

Y. Zhang, B. J. Matuszewski, L. K. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in IEEE International Conference on BioMedical Visualization (IEEE2008), pp. 71–76.

McMillan, L.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer graphics and interactive techniques (ACM2000), pp. 369–374.

Medina, J.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

Meer, P.

D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence 24(5), 603–619 (2002).
[Crossref]

Meijer, A. H.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Meyer, M.

M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” in the 26th Annual Conference on Computer Graphics and Interactive Ttechniques (ACM Press/Addison-Wesley Publishing Co.1999), pp. 317–324.

Moore, C. J.

Y. Zhang, B. J. Matuszewski, L. K. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in IEEE International Conference on BioMedical Visualization (IEEE2008), pp. 71–76.

Ordas, A.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study staphylococcus epidermidis and mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Ostermeier, A.

N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation 9(2), 159–195 (2001).
[Crossref] [PubMed]

Pardo-Martin, C.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Paul, E.

J. C. Lagarias, A. James, M. H. Reeds, H. Wright, and E. Paul, “Wright Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Pollefeys, M.

N. Savinov, C. Häne, and M. Pollefeys, “Discrete optimization of ray potentials for semantic 3d reconstruction,” in IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2015), pp. 5511–5518.

Ponce, J.

S. Lazebnik, Y. Furukawa, and J. Ponce, “Projective visual hulls,” International Journal of Computer Vision 74(2), 137–165 (2007).
[Crossref]

Y. Furukawa and J. Ponce, “Carved visual hulls for image-based modelling,” in European Conference on Computer Vision (SpringerBerlin Heidelberg2006), pp. 564–577.

Raskar, R.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer graphics and interactive techniques (ACM2000), pp. 369–374.

Reeds, M. H.

J. C. Lagarias, A. James, M. H. Reeds, H. Wright, and E. Paul, “Wright Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Sapiro, G.

V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision 22(1), 61–79 (1997).
[Crossref]

Savinov, N.

N. Savinov, C. Häne, and M. Pollefeys, “Discrete optimization of ray potentials for semantic 3d reconstruction,” in IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2015), pp. 5511–5518.

Schindler, K.

S. Galliani, K. Lasinger, and K. Schindler, “Massively parallel multiview stereopsis by surface normal diffusion,” in IEEE International Conference on Computer Vision (IEEE2015), pp. 873–881.

Schmitt, F.

C. Hernández, F. Schmitt, and R. Cipolla, “Silhouette coherence for camera calibration under circular motion,” IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 343–349 (2007).
[Crossref]

Schröder, P.

M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” in the 26th Annual Conference on Computer Graphics and Interactive Ttechniques (ACM Press/Addison-Wesley Publishing Co.1999), pp. 317–324.

Seidel, H. P.

H. P. Lensch, W. Heidrich, and H. P. Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models 63(4), 245–262 (2001).
[Crossref]

Seitz, S. M.

K. N. Kutulakos and S. M. Seitz, “A theory of shape by space carving,” International Journal of Computer Vision 38(3), 199–218 (2000).
[Crossref]

Shark, L. K.

Y. Zhang, B. J. Matuszewski, L. K. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in IEEE International Conference on BioMedical Visualization (IEEE, 2008), pp. 71–76.

Spaink, H. P.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study Staphylococcus epidermidis and Mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Y. Guo, W. J. Veneman, H. P. Spaink, and F. J. Verbeek, “Silhouette-based 3D model for zebrafish high-throughput imaging,” in IEEE International Conference on Image Processing Theory, Tools and Applications (IEEE, 2015), pp. 403–408.

Szeliski, R.

R. Szeliski, “Rapid octree construction from image sequences,” CVGIP: Image Understanding 58(1), 23–32 (1993).
[Crossref]

Tang, X.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE, 2016).

van’t Hoff, M.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE, 2016).

Veneman, W. J.

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study Staphylococcus epidermidis and Mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Y. Guo, W. J. Veneman, H. P. Spaink, and F. J. Verbeek, “Silhouette-based 3D model for zebrafish high-throughput imaging,” in IEEE International Conference on Image Processing Theory, Tools and Applications (IEEE, 2015), pp. 403–408.

Verbeek, F.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE, 2016).

Verbeek, F. J.

L. Cao and F. J. Verbeek, “Analytical evaluation of algorithms for point cloud surface reconstruction using shape features,” J. Electronic Imaging 22(4), 043008 (2013).
[Crossref]

Y. Guo, W. J. Veneman, H. P. Spaink, and F. J. Verbeek, “Silhouette-based 3D model for zebrafish high-throughput imaging,” in IEEE International Conference on Image Processing Theory, Tools and Applications (IEEE, 2015), pp. 403–408.

Vogiatzis, G.

G. Vogiatzis, C. Hernandez, and R. Cipolla, “Reconstruction in the round using photometric normals and silhouettes,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 1847–1854.

Wählby, C.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

Wasserman, S. C.

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature Methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Wright, M. H.

J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Xu, G.

G. Xu and Z. Zhang, Epipolar Geometry in Stereo, Motion and Object Recognition: A Unified Approach (Springer Science & Business Media, 2013).

Yanik, M. F.

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature Methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

Zhang, Y.

Y. Zhang, B. J. Matuszewski, L. K. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in IEEE International Conference on BioMedical Visualization (IEEE, 2008), pp. 71–76.

Zhang, Z.

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000).
[Crossref]

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

G. Xu and Z. Zhang, Epipolar Geometry in Stereo, Motion and Object Recognition: A Unified Approach (Springer Science & Business Media, 2013).

Zisserman, A.

A. W. Fitzgibbon, G. Cross, and A. Zisserman, “Automatic 3D model construction for turn-table sequences,” in European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (Springer Berlin Heidelberg, 1998), pp. 155–170.

Artificial Intelligence (1)

Z. Zhang, R. Deriche, O. Faugeras, and Q. T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Artificial Intelligence 78(1), 87–119 (1995).
[Crossref]

CVGIP: Image understanding (1)

R. Szeliski, “Rapid octree construction from image sequences,” CVGIP: Image Understanding 58(1), 23–32 (1993).
[Crossref]

Evolutionary Computation (1)

N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation 9(2), 159–195 (2001).
[Crossref] [PubMed]

Graphical Models (1)

H. P. Lensch, W. Heidrich, and H. P. Seidel, “A silhouette-based algorithm for texture registration and stitching,” Graphical Models 63(4), 245–262 (2001).
[Crossref]

IEEE Trans. Pattern Analysis and Machine Intelligence (7)

C. Hernández, F. Schmitt, and R. Cipolla, “Silhouette coherence for camera calibration under circular motion,” IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 343–349 (2007).
[Crossref]

D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence 24(5), 603–619 (2002).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000).
[Crossref]

A. Laurentini, “The visual hull concept for silhouette-based image understanding,” IEEE Trans. Pattern Analysis and Machine Intelligence 16(2), 150–162 (1994).
[Crossref]

W. N. Martin and J. K. Aggarwal, “Volumetric descriptions of objects from multiple views,” IEEE Trans. Pattern Analysis and Machine Intelligence 5(2), 150 (1983).
[Crossref]

D. Cremers and K. Kolev, “Multiview stereo and silhouette consistency through convex functionals on convex domains,” IEEE Trans. Pattern Analysis and Machine Intelligence 33(6), 1161–1174 (2011).
[Crossref]

K. Kolev, T. Brox, and D. Cremers, “Fast joint estimation of silhouettes and dense 3D geometry from multiple images,” IEEE Trans. Pattern Analysis and Machine Intelligence 34(3), 493–505 (2012).
[Crossref]

International Journal of Computer Vision (5)

K. N. Kutulakos and S. M. Seitz, “A theory of shape by space carving,” International Journal of Computer Vision 38(3), 199–218 (2000).
[Crossref]

S. Lazebnik, Y. Furukawa, and J. Ponce, “Projective visual hulls,” International Journal of Computer Vision 74(2), 137–165 (2007).
[Crossref]

M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision 74(1), 59–73 (2007).
[Crossref]

V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision 22(1), 61–79 (1997).
[Crossref]

K. Kolev, M. Klodt, T. Brox, and D. Cremers, “Continuous global optimization in multiview 3d reconstruction,” International Journal of Computer Vision 84(1), 80–96 (2009).
[Crossref]

J. Electronic Imaging (1)

L. Cao and F. J. Verbeek, “Analytical evaluation of algorithms for point cloud surface reconstruction using shape features,” J. Electronic Imaging 22(4), 043008 (2013).
[Crossref]

Journal of Visualized Experiments (1)

W. J. Veneman, R. Marin-Juez, J. de Sonneville, A. Ordas, S. Jong-Raadsen, A. H. Meijer, and H. P. Spaink, “Establishment and optimization of a high-throughput setup to study Staphylococcus epidermidis and Mycobacterium marinum infection as a model for drug discovery,” Journal of Visualized Experiments 88, e51649 (2014).

Nature Communications (1)

C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. F. Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nature Communications 4, 1467 (2013).
[Crossref] [PubMed]

Nature methods (1)

C. Pardo-Martin, T. Y. Chang, B. K. Koo, C. L. Gilleland, S. C. Wasserman, and M. F. Yanik, “High-throughput in vivo vertebrate screening,” Nature Methods 7(8), 634–636 (2010).
[Crossref] [PubMed]

SIAM J. Optimization (1)

J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optimization 9(1), 112–147 (1998).
[Crossref]

Other (14)

S. Galliani, K. Lasinger, and K. Schindler, “Massively parallel multiview stereopsis by surface normal diffusion,” in IEEE International Conference on Computer Vision (IEEE, 2015), pp. 873–881.

N. Savinov, C. Häne, and M. Pollefeys, “Discrete optimization of ray potentials for semantic 3D reconstruction,” in IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 5511–5518.

Y. Zhang, B. J. Matuszewski, L. K. Shark, and C. J. Moore, “Medical image segmentation using new hybrid level-set method,” in IEEE International Conference on BioMedical Visualization (IEEE, 2008), pp. 71–76.

M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” in the 26th Annual Conference on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley Publishing Co., 1999), pp. 317–324.

X. Tang, M. van’t Hoff, J. Hoogenboom, Y. Guo, F. Cai, G. Lamers, and F. Verbeek, “Fluorescence and bright-field 3D image fusion based on sinogram unification for optical projection tomography,” in IEEE International Conference on Bioinformatics and Biomedicine (IEEE, 2016).

N. Hansen and S. Kern, “Evaluating the CMA evolution strategy on multi-modal test functions,” in International Conference on Parallel Problem Solving from Nature (Springer Berlin Heidelberg, 2004), pp. 282–291.

Y. Guo, W. J. Veneman, H. P. Spaink, and F. J. Verbeek, “Silhouette-based 3D model for zebrafish high-throughput imaging,” in IEEE International Conference on Image Processing Theory, Tools and Applications (IEEE, 2015), pp. 403–408.

G. Xu and Z. Zhang, Epipolar Geometry in Stereo, Motion and Object Recognition: A Unified Approach (Springer Science & Business Media, 2013).

A. W. Fitzgibbon, G. Cross, and A. Zisserman, “Automatic 3D model construction for turn-table sequences,” in European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (Springer Berlin Heidelberg, 1998), pp. 155–170.

W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in ACM Conference on Computer Graphics and Interactive Techniques (ACM, 2000), pp. 369–374.

J. S. Franco and E. Boyer, “Exact polyhedral visual hulls,” in British Machine Vision Conference (BMVA, 2003), pp. 329–338.

Y. Furukawa and J. Ponce, “Carved visual hulls for image-based modelling,” in European Conference on Computer Vision (Springer Berlin Heidelberg, 2006), pp. 564–577.

G. Vogiatzis, C. Hernandez, and R. Cipolla, “Reconstruction in the round using photometric normals and silhouettes,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 1847–1854.

K. Kolev, T. Brox, and D. Cremers, “Robust variational segmentation of 3D objects from multiple views,” in Joint Pattern Recognition Symposium (Springer Berlin Heidelberg, 2006), pp. 688–697.
[Crossref]



Figures (13)

Fig. 1
Fig. 1 A schematic illustration of the axial-view imaging system based on the VAST BioImager. Zebrafish larvae are loaded from a reservoir. The VAST system detects the position and orientation of the object in the capillary. The stepper motors manipulate the position and view of the specimen. Images of a given axial-view are acquired by the associated VAST-camera #1. The VAST-camera is particularly suitable for obtaining overview images of the specimen; originally this camera serves to control the system, but we have adapted it for axial image acquisition. In addition, the whole system is mounted on a microscope equipped with a high-resolution camera #2. The microscope camera enables both organ- and cellular-level imaging. The blue dashed curves represent the communication between the imaging components and the control platform.
Fig. 2
Fig. 2 Flowchart of the 3D reconstruction and measurements of zebrafish larvae for the high-throughput axial-view imaging system. (A) Axial-view images of zebrafish as acquired from the VAST-camera. (B) Profiles obtained from the segmentation of the axial views. (C) Initialization of the camera configurations estimated from the VAST BioImager. (D) Visualization of the parameterization and calibration of the axial-view camera system. The orange dots represent the camera lens centers; the blue lines represent the principal axes of the cameras. (E) Reconstructed 3D zebrafish models, with a volumetric representation shown on the left and a texture-mapped model on the right. The ⊞ symbol indicates the integration of different computational modules. For further details see section 3.
Fig. 3
Fig. 3 Results of various segmentation methods for zebrafish in VAST. The blue bounding box indicates an accurate part of the segmentation; the red bounding box indicates an inaccurate part. (A) The segmentation obtained by the mean shift algorithm. This method produces a whole-shape representation of the zebrafish but lacks sensitivity to the edges. (B) The segmentation obtained by the improved level-set method. This method fails to detect the transparent regions found in the zebrafish tail area. (C) An accurate segmentation obtained from a hybrid approach combining the previous two methods with additional refinements.
Fig. 4
Fig. 4 A visualization of the parameterization of the 3D transformation from the camera center to the object center. We assume that the object is positioned in the focal plane and a good quality image is acquired, so the distance from the camera center to the object center along the projection line is given by the focal length. The angles ∠α and ∠γ represent the 3D rotation angles of the camera center around the Y and Z axes. The angle ∠φ is defined as the “translation” angle from the object center to the image center; it models the 3D translations of the camera center along the Y and Z axes.
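The angular parameterization of Fig. 4 can be sketched in a few lines. Below is a minimal numpy sketch, not the authors' implementation: the function name, the axis conventions, and the choice of placing the un-rotated camera center on the X axis at distance f are illustrative assumptions.

```python
import numpy as np

def camera_center_pose(f, alpha, gamma, phi):
    """Hypothetical sketch: place the camera center at distance f (the
    focal length) from the object center, rotated by alpha about the Y
    axis and gamma about the Z axis; phi models the residual
    'translation' angle between the object center and the image center.
    All angles are in radians."""
    # Rotation about the Y axis by alpha.
    Ry = np.array([[ np.cos(alpha), 0.0, np.sin(alpha)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(alpha), 0.0, np.cos(alpha)]])
    # Rotation about the Z axis by gamma.
    Rz = np.array([[np.cos(gamma), -np.sin(gamma), 0.0],
                   [np.sin(gamma),  np.cos(gamma), 0.0],
                   [0.0,            0.0,           1.0]])
    R = Rz @ Ry
    # Camera center: start on the X axis at distance f, then rotate.
    center = R @ np.array([f, 0.0, 0.0])
    # In-plane offset of the principal point induced by the angle phi.
    offset = f * np.tan(phi)
    return center, offset
```

With all angles zero the camera center sits at distance f along the X axis; rotations preserve that distance, matching the assumption that the object lies in the focal plane.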
Fig. 5
Fig. 5 Illustration of the importance of camera system calibration. 3D reconstructions of one zebrafish, without (A) and with (B) camera calibration, are depicted. The first row of (A) and (B) shows the reconstruction from three different viewpoints. The overlap between the image projected from the 3D model and the original image is shown in red in the second row of (A) and (B). (A) Using an uncalibrated camera configuration results in a poor 3D reconstruction and thus a relatively small overlap. (B) Using a calibrated camera system generates an accurate and natural 3D shape. It can be appreciated that projecting the 3D shape onto the original axial view results in an almost perfect overlap with the original object.
Fig. 6
Fig. 6 Example images of 3 calibration particles and their 3D reconstructions. Each row shows one calibration particle. (A) Original RGB images. (B) Extracted profiles. (C) 3D models reconstructed with an ASD=7 (left column) and an ASD=21 (right column). An ASD=7 already produces acceptable 3D shapes for the calibration particles, although some carving artifacts are still visible. A better result is obtained for the 3D models reconstructed with an ASD=21.
Fig. 7
Fig. 7 Voxel residual volume of zebrafish larval stages against ASD. The graph clearly shows a descending trend of the voxel residual volume for all 3 groups of zebrafish larvae with increasing ASD. However, after an ASD of 7 the decrease tends to be asymptotic, especially after an ASD of 21. This means that dense axial-view sampling (e.g. 42, 84, etc.) is unnecessary for our 3D zebrafish reconstruction and measurements. Consistent differences in volume between the three larval stages can be observed.
Fig. 8
Fig. 8 Surface area of zebrafish larval stages against ASD. The trend in surface area is similar to that of the volume depicted in Fig. 7. The surface area is computed from a mesh; to suppress noise, mesh smoothing with 10 iterations, i.e. implicit fairing [33], was applied to all meshes.
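The surface-area measurement relies on a triangle mesh plus a smoothing step. The sketch below computes the mesh surface area from per-triangle cross products and applies a very simple explicit Laplacian (umbrella-operator) smoothing; note that the paper uses implicit fairing [33], which is unconditionally stable, so the explicit variant here is only an illustrative stand-in, and both function names are our own.

```python
import numpy as np
from collections import defaultdict

def mesh_surface_area(verts, faces):
    """Surface area of a triangle mesh: sum of per-triangle areas
    computed via the cross product of two edge vectors."""
    tri = verts[faces]                      # shape (F, 3, 3)
    a = tri[:, 1] - tri[:, 0]
    b = tri[:, 2] - tri[:, 0]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def laplacian_smooth(verts, faces, iterations=10, lam=0.5):
    """Explicit umbrella-operator smoothing: each vertex moves a
    fraction lam toward the mean of its neighbors per iteration."""
    # Build vertex adjacency from the face list.
    nbrs = defaultdict(set)
    for f in faces:
        for i in range(3):
            nbrs[f[i]].add(f[(i + 1) % 3])
            nbrs[f[i]].add(f[(i + 2) % 3])
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, ns in nbrs.items():
            ns = list(ns)
            new_v[i] = v[i] + lam * (v[ns].mean(axis=0) - v[i])
        v = new_v
    return v
```

Smoothing a closed mesh this way shrinks it slightly, which is why the number of iterations (10 in Fig. 8) is kept small.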
Fig. 9
Fig. 9 Visualizations of 3D reconstructions at a range of different ASD values for the same zebrafish larva. An ASD=4 (A) or an ASD=7 (B) results in 3D reconstructions with many sharp and flat surface elements, generated by the carving effects of the profile-based 3D reconstruction method. An ASD=12 (C) or an ASD=14 (D) improves the 3D models, but some carving artifacts remain. Using an ASD=21 (E) up to an ASD=84 (H) results in accurate and natural-shaped 3D models. For our particular problem domain, this suggests that neither sparse nor dense axial-view sampling is the optimal configuration for zebrafish larva 3D reconstruction: sparse axial-view sampling produces poor 3D models, while dense axial-view sampling requires more computation time without a corresponding improvement in accuracy.
Fig. 10
Fig. 10 Visualization of 3D models of 3 zebrafish larval stages (3 dpf, 4 dpf and 5 dpf). Each box shows a reconstructed 3D model of one specific zebrafish larva visualized from three different viewpoints. The 3D volumetric representations are shown in green on the left side of each box; the texture-mapped models are on the right side. (A) 3D models of two selected 3 dpf zebrafish larvae. (B) 3D models of two selected 4 dpf zebrafish larvae. (C) 3D models of two selected 5 dpf zebrafish larvae. Variation in size and shape between stages and within stages (interclass and intraclass) can be appreciated from the visualizations. A remarkable intraclass difference originates from the size and color of the yolk. In addition, animations of the 3D zebrafish models are available at: http://bio-imaging.liacs.nl/galleries/VAST-3Dimg/.
Fig. 11
Fig. 11 Distribution of volume of zebrafish larval stages (3 – 5 dpf). X axis: volume; Y axis: normalized probability density. The color-filled triangles on the volume axis indicate the location of the mean for each of the 3 distributions. The numerical values of the mean and standard deviation are indicated with corresponding double-sided colored arrows. The volume of older zebrafish larvae is consistently larger than that of younger ones. The growth from 4 dpf to 5 dpf is larger than that from 3 dpf to 4 dpf.
Fig. 12
Fig. 12 Distribution of surface area of zebrafish larval stages (3 – 5 dpf). X axis: surface area; Y axis: normalized probability density.
Fig. 13
Fig. 13 Joint distribution of volume and surface area of zebrafish larval stages (3 – 5 dpf). X axis: volume; Y axis: surface area. The 3 joint distributions overlap with each other, especially for the 4 dpf zebrafish larvae. Nevertheless, the individual distribution centers can still be separated. The color scheme is similar to that of Fig. 11 and Fig. 12.

Tables (6)

Table 1 Diameter statistics of calibration particles (M) measured from 3D reconstructed models (μm) and the corresponding (T) T-test scores.

Table 2 Volume statistics of (M) measurements from the 3D reconstructed models and (A) analytic calculation from the fitted diameter for calibration particles (×107 μm3).

Table 3 Surface area statistics of (M) measurements from the 3D reconstructed models and (A) analytic calculation from the fitted diameter for calibration particles (×105 μm2).

Table 4 Volume statistics of zebrafish of the 3 larval stages under study as derived from the 3D reconstructed models (×108 μm3).

Table 5 Surface area statistics of zebrafish of the 3 larval stages under study as derived from the 3D reconstructed models (×106 μm2).

Table 6 Statistics of volume (×108 μm3) and surface area (×106 μm2) for living zebrafish larvae.

Equations (9)

\[\tilde{x} = \begin{pmatrix} f k_x & s & u_x \\ 0 & f k_y & u_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} X,\]

\[\psi = (\,\underbrace{f, k_x, k_y, u_x, u_y, s}_{\text{intrinsic}},\ \underbrace{\alpha, \gamma, \varphi, \omega_{1:N-1}}_{\text{extrinsic}}\,)^T.\]

\[\mathcal{X}_i = \{\, X \mid S_i(x_{ij}) > 0,\ x_{ij} = P_i X_j,\ X_j \in \mathcal{X} \,\},\]

\[\mathcal{X}^* = \bigcap_{i=1,\dots,N} \mathcal{X}_i\]

\[V_j = \sum_{i=1}^{N} \mathbf{1}\,[\, S_i(x_{ij}) > 0 \,]\]

\[\mathcal{X}^* = \{\, X \mid V_j \ge (1-\epsilon)N,\ X_j \in \mathcal{X} \,\}, \quad \epsilon \in [0,1).\]

\[f(\psi) = \frac{1}{N} \sum_{i=1}^{N} C(S_i,\ P_i(\psi)\,\mathcal{X}),\]

\[\psi^* = \arg\max_{\psi \in \Psi} f(\psi).\]

\[f(\psi) = |\mathcal{X}^*(\psi)|.\]
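The profile-based reconstruction admits a compact sketch: every voxel X_j collects one vote per view in which its projection x_ij falls inside the silhouette S_i, and is kept when V_j ≥ (1 − ε)N. The following numpy sketch is illustrative only: the function name, the data layout, and the callable projections standing in for the P_i are our own assumptions, not the authors' code.

```python
import numpy as np

def profile_based_hull(silhouettes, projections, grid, eps=0.0):
    """Voxel-voting visual hull: keep voxel X_j when it projects inside
    at least (1 - eps) * N of the N view silhouettes. `silhouettes` is
    a list of binary masks S_i; `projections` is a list of callables
    mapping a 3D point to integer pixel coordinates (stand-ins for the
    projection matrices P_i); `grid` is an (M, 3) array of voxel
    centers."""
    N = len(silhouettes)
    votes = np.zeros(len(grid), dtype=int)           # V_j per voxel
    for S, P in zip(silhouettes, projections):
        for j, X in enumerate(grid):
            u, v = P(X)
            # Indicator 1[S_i(x_ij) > 0], with an image-bounds check.
            inside = (0 <= u < S.shape[1] and 0 <= v < S.shape[0]
                      and S[v, u] > 0)
            votes[j] += int(inside)
    # Relaxed intersection: V_j >= (1 - eps) * N.
    keep = votes >= (1.0 - eps) * N
    return grid[keep], votes
```

With eps = 0 this reduces to the strict silhouette intersection; a small eps > 0 tolerates segmentation errors in a few views, which is also what makes the voxel count |X*(ψ)| usable as a calibration score.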
