Abstract

The field of view of high-resolution ophthalmoscopes that require adaptive optics (AO) wavefront correction is limited by the isoplanatic patch of the eye, which varies across individual eyes and with the portion of the pupil used for illumination and/or imaging. Therefore, all current AO ophthalmoscopes have small fields of view comparable to, or smaller than, the isoplanatic patch, and the resulting images have to be stitched off-line to create larger montages. These montages are currently assembled either manually, by expert human graders, or automatically, often requiring several hours per montage. This arguably limits the applicability of AO ophthalmoscopy to studies with small cohorts and, moreover, prevents reviewing a montage of all captured locations in real time during image acquisition to further direct targeted imaging. In this work, we propose stitching the images with our novel algorithm, which uses Oriented FAST and Rotated BRIEF (ORB) descriptors and locality-sensitive hashing, and which searches for a 'good enough' transformation rather than the best possible one, achieving processing times of 1–2 minutes per montage of 250 images. Moreover, the proposed method produces montages that are as accurate as those from previous methods, as measured by two image similarity metrics: normalised mutual information (NMI) and normalised cross correlation (NCC).
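To make the proposed pipeline concrete, the following is a minimal Python/OpenCV sketch of the pairwise alignment step: ORB keypoints and descriptors, an approximate locality-sensitive-hashing (LSH) matcher, and a RANSAC fit that accepts a 'good enough' transform. It is an illustration only, not the released Auto-Montager code (Code 1); the function name, the partial affine transform model, and all parameter values are assumptions.

```python
# Sketch of ORB + LSH matching + RANSAC pairwise alignment using OpenCV.
# NOT the authors' released Auto-Montager; names and parameters are illustrative.
import cv2
import numpy as np

def align_pair(tile_a, tile_b, min_inliers=10):
    """Estimate a transform mapping tile_b onto tile_a, or return None."""
    orb = cv2.ORB_create(nfeatures=2000)  # oriented FAST keypoints, rotated BRIEF descriptors
    kp_a, des_a = orb.detectAndCompute(tile_a, None)
    kp_b, des_b = orb.detectAndCompute(tile_b, None)
    if des_a is None or des_b is None:
        return None

    # FLANN with an LSH index gives approximate nearest neighbours for binary descriptors.
    index_params = dict(algorithm=6, table_number=12, key_size=20, multi_probe_level=2)
    matcher = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = matcher.knnMatch(des_b, des_a, k=2)

    # Ratio test: keep a match only if it is clearly better than the second-best candidate.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < min_inliers:
        return None

    src = np.float32([kp_b[m.queryIdx].pt for m in good])
    dst = np.float32([kp_a[m.trainIdx].pt for m in good])

    # RANSAC discards the remaining incorrect matches while fitting the transform.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None or inliers.sum() < min_inliers:
        return None
    return M  # 2x3 matrix: rotation, uniform scale, and translation
```

In the full method, such pairwise transforms would be chained across neighbouring tiles to place every image in a common montage coordinate system.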

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.




Supplementary Material (1)

Code 1: Auto-Montager



Figures (6)

Fig. 1 The standard image processing pipeline for cone photoreceptor AO retinal imaging. (a) Image acquisition typically produces low-quality images that are (b) co-registered to improve the signal-to-noise ratio. (c) The registered images are then montaged to create a larger field of view of the retina. (d) Finally, the locations of cones are marked in images extracted from the montage.

Fig. 2 A retinal montage. (a) The three simultaneously acquired images from the same location; from top to bottom: confocal, split-detection, and dark field. These are collected together into a tile; in Photoshop these layers are linked, so that moving one image moves the other two. (b) Completed retinal montage highlighting the position of the tile from (a). Each of the three montages is a layer in Photoshop, allowing image analysts to easily move between the montages.

Fig. 3 Stitching images using features. (a) Keypoints in each image (circles) and their corresponding matches (connected by a line). Note that there are two incorrect matches in red, which will be excluded after applying random sample consensus (RANSAC). (b) Result of aligning the images using the calculated transformation.

Fig. 4 Computing ORB features. (a) There are 11 contiguous pixels on the circular arc (white dashed) around pixel (a, b) that are lighter than it (pixels with bold edges). (b) An intensity centroid (grey circle) is calculated, giving the keypoint an orientation, together with an example pixel pair used to calculate the BRIEF descriptor. (c) The locations of the pixel pairs after aligning to the orientation. Note that the changing colours of the pixel pairs are only to ensure visibility of the points within the figure. (A code sketch of the orientation computation follows the figure list.)

Fig. 5 Montages constructed with both methods: (a) built using the proposed method; (b) built using the SIFT method.

Fig. 6 Proportion of tile pairs correctly matched with a given overlap.
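Fig. 4(b) describes assigning each keypoint an orientation from the intensity centroid of the surrounding patch. Below is a minimal sketch of that computation, assuming a 31 × 31 square patch (ORB itself uses a circular patch) and a hypothetical function name.

```python
# Sketch of the intensity-centroid orientation from Fig. 4(b).
# The 31x31 square patch and the function name are illustrative assumptions.
import numpy as np

def keypoint_orientation(image, x, y, half_size=15):
    """Orientation (radians) of the patch centred on pixel (x, y).

    Assumes the keypoint lies at least half_size pixels from the image border.
    """
    patch = image[y - half_size:y + half_size + 1,
                  x - half_size:x + half_size + 1].astype(np.float64)
    ys, xs = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    m01 = np.sum(ys * patch)  # first-order moment in y
    m10 = np.sum(xs * patch)  # first-order moment in x
    # The vector from the patch centre to the intensity centroid defines the angle.
    return np.arctan2(m01, m10)
```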

Tables (6)

Table 1 Description of data used to validate the proposed method.

Algorithm 1 Auto-Montaging with ORB.

Table 3 Characterisation of algorithm performance. For each dataset, this table shows the time each method took to construct the registrations, as well as how accurate each output montage was, measured with normalised cross correlation (NCC) and normalised mutual information (NMI). (A code sketch of both metrics follows the table list.)

Table 4 Size in pixels (p) of the row or column overlaps required to ensure 100% matching of all tile pairs considered. Retinitis Pigmentosa (RPGR), Stargardt Disease (STGD), Achromatopsia (ACHM), Central Serous Chorioretinopathy (CSCR). NA indicates that these conditions were not present in the data considered.

Table 5 Times, in seconds, to complete each montage by hand, when ORB produced a different number of disjoint pieces than SIFT. Not completed (NC).
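Table 3 reports montage accuracy with normalised cross correlation (NCC) and normalised mutual information (NMI). The sketch below computes both metrics for a pair of equally sized overlap crops; the 64-bin joint histogram and the particular NMI normalisation, (H(A) + H(B)) / H(A, B), are assumptions about the exact definitions used in the paper.

```python
# Sketch of the two similarity metrics in Table 3, for two equally sized crops.
# The bin count and NMI normalisation are assumptions.
import numpy as np

def ncc(a, b):
    """Normalised cross correlation (Pearson correlation of pixel intensities)."""
    a = a.astype(np.float64).ravel() - a.mean()
    b = b.astype(np.float64).ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nmi(a, b, bins=64):
    """Normalised mutual information: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return float((entropy(px) + entropy(py)) / entropy(pxy.ravel()))
```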

Equations (5)


$$\mathrm{distance}(d_1, d_2) < \lambda, \quad \lambda > 0$$
$$I(a_i, b_i) > I(a, b) + \lambda, \quad i \in \{1, 2, \ldots, \eta\},$$
$$I(a_i, b_i) < I(a, b) - \lambda, \quad i \in \{1, 2, \ldots, \eta\},$$
$$\tau_i\big((a_{1i}, b_{1i}), (a_{2i}, b_{2i})\big) = \begin{cases} 1 & I(a_{1i}, b_{1i}) < I(a_{2i}, b_{2i}) \\ 0 & I(a_{1i}, b_{1i}) \geq I(a_{2i}, b_{2i}), \end{cases}$$
$$\hat{V} = \{ v_i \in g_i(q) \}.$$
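The first three equations express the keypoint criteria: two descriptors match if their distance is below λ, and a pixel (a, b) passes the FAST segment test if η contiguous pixels on the surrounding circle are all brighter, or all darker, than it by at least λ. The fourth equation is the BRIEF binary test τ, and the fifth collects the LSH candidate set for a query q. Below is a small sketch of the segment test and the binary test; the threshold value, η = 11 (taken from Fig. 4), and the image indexing convention are illustrative assumptions.

```python
# Sketch of the FAST segment test (second and third equations) and the BRIEF
# binary test tau (fourth equation). Threshold and eta values are illustrative.
import numpy as np

# Pixel offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(I, a, b, lam=20, eta=11):
    """True if eta contiguous circle pixels are all brighter or all darker than I[b, a] by lam."""
    centre = int(I[b, a])
    ring = np.array([int(I[b + db, a + da]) for da, db in CIRCLE])
    for test in (ring > centre + lam, ring < centre - lam):
        run = np.concatenate([test, test])  # doubled so runs may wrap around the circle
        best = current = 0
        for flag in run:
            current = current + 1 if flag else 0
            best = max(best, current)
        if best >= eta:
            return True
    return False

def brief_bit(I, p1, p2):
    """tau: 1 if the first sampled pixel is darker than the second, else 0."""
    (a1, b1), (a2, b2) = p1, p2
    return 1 if I[b1, a1] < I[b2, a2] else 0
```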
