Abstract

Misleading depth perception can severely hinder the correct identification of complex structures in image-guided surgery. In this study, we propose a novel importance-driven hybrid rendering method to enhance depth perception in navigated endoscopic surgery. First, volume structures are enhanced with gradient-based shading, which reduces color information in low-priority regions and sharpens the distinction between complicated structures. Second, an importance sorting method based on order-independent transparency rendering is introduced to strengthen the perception of multiple overlapping surfaces. Third, volume data are adaptively truncated and emphasized according to the viewing orientation and the illustration of critical information, thereby extending the effective viewing range. Experimental results show that, by combining volume and surface rendering, our method effectively improves the depth distinction of multiple objects in both simulated and clinical scenes. Our importance-driven surface rendering achieved higher average ratings, with statistical significance, from 15 participants (five clinicians and ten non-clinicians) on a five-point Likert scale. Furthermore, hybrid rendering with thin-layer sectioning reaches an average frame rate of 42 fps. Because the hybrid rendering process is fully automatic, it can be used in real-time surgical navigation to improve rendering efficiency and the validity of the displayed information.
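To make the first two steps concrete, the sketch below pairs gradient-based shading with a weighted order-independent transparency (OIT) resolve in plain Python. It is a minimal illustration under stated assumptions, not the paper's implementation: the helper names, the desaturation rule in shade_sample, and the use of importance values as OIT blend weights are our own constructions. The compositing follows standard front-to-back alpha blending, and the resolve follows the weighted blended OIT formulation of McGuire and Bavoil.

```python
# Minimal sketch (assumed design, not the authors' code): gradient-based
# shading in a front-to-back ray-casting loop, plus a weighted blended
# order-independent transparency (OIT) resolve.
import numpy as np

def gradient(volume, p):
    """Central-difference gradient at interior voxel p = (x, y, z)."""
    x, y, z = p
    return np.array([
        (volume[x + 1, y, z] - volume[x - 1, y, z]) / 2.0,
        (volume[x, y + 1, z] - volume[x, y - 1, z]) / 2.0,
        (volume[x, y, z + 1] - volume[x, y, z - 1]) / 2.0,
    ])

def shade_sample(color, grad, importance):
    """Desaturate low-priority samples; keep high-gradient (boundary)
    samples vivid. The saliency rule here is a hypothetical choice."""
    saliency = np.clip(importance * np.linalg.norm(grad), 0.0, 1.0)
    gray = np.full(3, color.mean())  # luminance-only fallback color
    return saliency * color + (1.0 - saliency) * gray

def composite_ray(samples):
    """Front-to-back compositing of (color, alpha, grad, importance)
    samples along one ray, with early ray termination."""
    acc_rgb, acc_alpha = np.zeros(3), 0.0
    for color, alpha, grad, importance in samples:
        shaded = shade_sample(np.asarray(color, float), grad, importance)
        acc_rgb += (1.0 - acc_alpha) * alpha * shaded
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha > 0.99:  # remaining samples are nearly invisible
            break
    return acc_rgb, acc_alpha

def resolve_weighted_oit(fragments, background):
    """Weighted blended OIT resolve: accumulate weighted premultiplied
    colors and a revealage product, then recombine with the background.
    Using importance as the per-fragment weight is an assumption."""
    accum, weight_sum, revealage = np.zeros(3), 0.0, 1.0
    for color, alpha, weight in fragments:
        accum += np.asarray(color, float) * alpha * weight
        weight_sum += alpha * weight
        revealage *= 1.0 - alpha
    blended = accum / max(weight_sum, 1e-6)
    return blended * (1.0 - revealage) + np.asarray(background, float) * revealage
```

In a real renderer both loops would run per pixel on the GPU; they are written over explicit sample lists here only to keep the logic self-contained and testable.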

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




ACM Trans. Graph. (1)

M. Levoy, “Efficient Ray Tracing of Volume Data,” ACM Trans. Graph. 9(3), 245–261 (1990).
[Crossref]

Comput. Graph. (1)

T. Porter and T. Duff, “Compositing digital images,” Comput. Graph. 18(3), 253–259 (1984).
[Crossref]

Comput. Graph. Forum (3)

B. Preim, A. Baer, D. Cunningham, T. Isenberg, and T. Ropinski, “A Survey of Perceptually Motivated 3D Visualization of Medical Image Data,” Comput. Graph. Forum 35(3), 501–525 (2016).
[Crossref]

S. S. Snibbe, “A Direct Manipulation Interface for 3D Computer Animation,” Comput. Graph. Forum 14(3), 271–283 (1995).
[Crossref]

K. Lawonn, I. Viola, B. Preim, and T. Isenberg, “A survey of surface-based illustrative rendering for visualization,” Comput. Graph. Forum 37(6), 205 (2018).

Comput. Med. Imaging Graph. (1)

M. Kersten-Oertel, P. Jannin, and D. L. Collins, “The state of the art of visualization in mixed reality image guided surgery,” Comput. Med. Imaging Graph. 37(2), 98–112 (2013).
[Crossref] [PubMed]

Computers and Graphics (Pergamon) (1)

M. Maule, J. L. D. Comba, R. P. Torchelsen, and R. Bastos, “A survey of raster-based transparency techniques,” Computers and Graphics (Pergamon) 35(6), 1023–1034 (2011).
[Crossref]

IEEE Trans. Biomed. Eng. (3)

H. Liao, T. Inomata, I. Sakuma, and T. Dohi, “3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay,” IEEE Trans. Biomed. Eng. 57(6), 1476–1486 (2010).
[Crossref] [PubMed]

J. Wang, H. Suenaga, K. Hoshi, L. Yang, E. Kobayashi, I. Sakuma, and H. Liao, “Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery,” IEEE Trans. Biomed. Eng. 61(4), 1295–1304 (2014).
[Crossref] [PubMed]

K. A. Gavaghan, M. Peterhans, T. Oliveira-Santos, and S. Weber, “A portable image overlay projection device for computer-aided open liver surgery,” IEEE Trans. Biomed. Eng. 58(6), 1855–1864 (2011).
[Crossref] [PubMed]

IEEE Trans. Image Process. (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref] [PubMed]

IEEE Trans. Vis. Comput. Graph. (6)

Y. Zhang and K. L. Ma, “Lighting design for globally illuminated volume rendering,” IEEE Trans. Vis. Comput. Graph. 19(12), 2946–2955 (2013).
[Crossref] [PubMed]

M. Tuceryan, D. S. Greer, R. T. Whitaker, D. Breen, C. Crampton, E. Rose, and K. H. Ahlers, “Calibration Requirements and Procedures for Augmented Reality,” IEEE Trans. Vis. Comput. Graph. 31(3), 255–273 (1995).
[Crossref]

K. Mühler, C. Tietjen, F. Ritter, and B. Preim, “The medical exploration toolkit: An efficient support for visual computing in surgical planning and training,” IEEE Trans. Vis. Comput. Graph. 16(1), 133–146 (2010).
[Crossref] [PubMed]

P. Rheingans and D. Ebert, “Volume illustration: Nonphotorealistic rendering of volume models,” IEEE Trans. Vis. Comput. Graph. 7(3), 253–264 (2001).
[Crossref]

S. Bruckner and E. Gröller, “Enhancing depth-perception with flexible volumetric halos,” IEEE Trans. Vis. Comput. Graph. 13(6), 1344–1351 (2007).
[Crossref] [PubMed]

I. Viola, A. Kanitsar, and M. E. Gröller, “Importance-driven feature enhancement in volume visualization,” IEEE Trans. Vis. Comput. Graph. 11(4), 408–418 (2005).
[Crossref] [PubMed]

Image Rochester NY (1)

L. Bavoil and K. Myers, “Order Independent Transparency with Dual Depth Peeling,” Image Rochester NY 107(February), 22020–22025 (2008).

Int. Forum Allergy Rhinol. (1)

V. R. Ramakrishnan, R. R. Orlandi, M. J. Citardi, T. L. Smith, M. P. Fried, and T. T. Kingdom, “The use of image-guided surgery in endoscopic sinus surgery: an evidence-based review with recommendations,” Int. Forum Allergy Rhinol. 3(3), 236–241 (2013).
[Crossref] [PubMed]

Int. J. CARS (2)

Y. Hayashi, K. Misawa, M. Oda, D. J. Hawkes, and K. Mori, “Clinical application of a surgical navigation system based on virtual laparoscopy in laparoscopic gastrectomy for gastric cancer,” Int. J. CARS 11(5), 827–836 (2016).
[Crossref] [PubMed]

C. Hansen, J. Wieferich, F. Ritter, C. Rieder, and H. O. Peitgen, “Illustrative visualization of 3D planning models for augmented reality in liver surgery,” Int. J. CARS 5(2), 133–141 (2010).
[Crossref] [PubMed]

J. Biomed. Inform. (1)

X. Chen, L. Xu, Y. Wang, H. Wang, F. Wang, X. Zeng, Q. Wang, and J. Egger, “Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display,” J. Biomed. Inform. 55, 124–131 (2015).
[Crossref] [PubMed]

J. Diabetes Sci. Technol. (1)

D. G. Armstrong, T. M. Rankin, N. A. Giovinco, J. L. Mills, and Y. Matsuoka, “A heads-up display for diabetic limb salvage surgery: a view through the google looking glass,” J. Diabetes Sci. Technol. 8(5), 951–956 (2014).
[Crossref] [PubMed]

J. Hepatobiliary Pancreat. Sci. (2)

M. Sugimoto, “Recent advances in visualization, imaging, and navigation in hepatobiliary and pancreatic sciences,” J. Hepatobiliary Pancreat. Sci. 17(5), 574–576 (2010).
[Crossref] [PubMed]

F. Volonté, F. Pugin, P. Bucher, M. Sugimoto, O. Ratib, and P. Morel, “Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion,” J. Hepatobiliary Pancreat. Sci. 18(4), 506–509 (2011).
[Crossref] [PubMed]

J. Neurol. Surg. A Cent. Eur. Neurosurg. (1)

N. McLaughlin, R. L. Carrau, A. B. Kassam, and D. F. Kelly, “Neuronavigation in endonasal pituitary and skull base surgery using an autoregistration mask without head fixation: An assessment of accuracy and practicality,” J. Neurol. Surg. A Cent. Eur. Neurosurg. 73(6), 351–357 (2012).
[Crossref] [PubMed]

J. Neurosurg. (1)

L. Besharati Tabrizi and M. Mahvash, “Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique,” J. Neurosurg. 123(1), 206–211 (2015).
[Crossref] [PubMed]

J. Pediatr. Surg. (1)

R. Souzaki, S. Ieiri, M. Uemura, K. Ohuchida, M. Tomikawa, Y. Kinoshita, Y. Koga, A. Suminoe, K. Kohashi, Y. Oda, T. Hara, M. Hashizume, and T. Taguchi, “An augmented reality navigation system for pediatric oncologic surgery based on preoperative CT and MRI images,” J. Pediatr. Surg. 48(12), 2479–2483 (2013).
[Crossref] [PubMed]

Journal of Computer Graphics Techniques (1)

M. Mcguire and L. Bavoil, “Weighted Blended Order-Independent Transparency,” Journal of Computer Graphics Techniques 2(2), 122–141 (2013).

Laryngoscope (1)

B. J. Dixon, M. J. Daly, H. Chan, A. Vescan, I. J. Witterick, and J. C. Irish, “Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery,” Laryngoscope 124(4), 853–859 (2014).
[Crossref] [PubMed]

Med. Image Anal. (2)

S. A. Nicolau, X. Pennec, L. Soler, X. Buy, A. Gangi, N. Ayache, and J. Marescaux, “An augmented reality system for liver thermal ablation: Design and evaluation on clinical cases,” Med. Image Anal. 13(3), 494–506 (2009).
[Crossref] [PubMed]

Y. Chu, J. Yang, S. Ma, D. Ai, W. Li, H. Song, L. Li, D. Chen, L. Chen, and Y. Wang, “Registration and fusion quantification of augmented reality based nasal endoscopic surgery,” Med. Image Anal. 42, 241–256 (2017).
[Crossref] [PubMed]

Surg. Oncol. (1)

S. Nicolau, L. Soler, D. Mutter, and J. Marescaux, “Augmented reality in laparoscopic surgical oncology,” Surg. Oncol. 20(3), 189–201 (2011).
[Crossref] [PubMed]

Surg. Today (1)

T. Okamoto, S. Onda, K. Yanaga, N. Suzuki, and A. Hattori, “Clinical application of navigation surgery using augmented reality in the abdominal field,” Surg. Today 45(4), 397–406 (2015).
[Crossref] [PubMed]

Virtual Real. (1)

D. Drascic and P. Milgram, “Perceptual issues in augmented reality reality-virtuality continuum,” Virtual Real. 2351, 197321 (1996).

Other (10)

E. Kruijff, J. E. Swan, and S. Feiner, “Perceptual issues in augmented reality revisited,” in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings (2010).
[Crossref]

C. Bichlmeier, F. Wimmer, S. M. Heining, and N. Navab, “Contextual anatomic mimesis: Hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality,” in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR (2007).

C. Bichlmeier and N. Navab, “Virtual Window for Improved Depth Perception in Medical AR,” International Workshop on Augmented Reality environments for Medical Imaging and Computer-aided Surgery (AMI-ARCS) (2006).

E. J. Lorenzo, R. Centeno, and M. Rodríguez-Artacho, “A framework for helping developers in the integration of external tools into virtual learning environments,” in Proceedings of the First International Conference on Technological Ecosystem for Enhancing Multiculturality - TEEM ’13 (2013), pp. 127–132.
[Crossref]

B. Preim and D. Bartz, Visualization in Medicine: Theory, Algorithms, and Applications (Moegan Kauffman, 2007).

C. Tietjen, T. Isenberg, and B. Preim, “Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning,” EUROGRAPHICS - IEEE VGTC Symposium on Visualization (2005).

R. Brecheisen, A. Vilanova, B. Platel, and H. Romeny, “Flexible GPU-Based Multi-Volume Ray-Casting,” Proceedings of Vision, Modeling and Visualization (2008), pp. 303–312.

T. Gustafsson, “Concepts of Hybrid Data Rendering,” in SIGRAD 2017 (2017).

R. Marques, L. P. Santos, and P. Leškovský, “GPU Ray Casting,” in 17° Encontro Português de Computaçao Gráfica (En Anexo, 2009).

H. Meshkin, “Sort-independent alpha blending,” Perpetual Entertainment, GDC Talk (2007).

Supplementary Material (1)

Visualization 1: Hybrid rendering of clinical data.



Figures (12)

Fig. 1 The AR rendering scene of skull base tumor resection surgery. Red regions represent the internal carotid artery (ICA). The endoscopic image (EI) is at the center of the screenshot. The yellow rectangles mark areas with large aliasing artifacts.
Fig. 2 Flowchart of the importance-driven hybrid rendering method. The major steps of the proposed method are enclosed in the four dashed rectangles.
Fig. 3 Layer peeling based on importance sorting. (a) As the ray of sight traverses the scene from left to right, the scene depth increases from 0 to 1. (b) Surface targets with different importance factors are labeled with different line styles. (c) The first (leftmost) layer touched by the ray of sight. During peeling, the i-th layer to be peeled is drawn with bold lines, whereas the other surfaces are drawn with thin lines in (d), (e), and (f). Peeled layers are shown in light gray.
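By way of illustration, the peeling order of Fig. 3 can be sketched in a few lines: fragments along a ray are sorted by importance first and depth second, then blended front to back. The fragment values and importance factors below are invented for the example and are not taken from the paper.

import numpy as np

# Hypothetical fragments along one ray of sight: (depth, RGB, alpha,
# importance). Values and importance factors are invented for the demo.
fragments = [
    (0.2, (0.1, 0.8, 0.1), 0.5, 1.0),  # ordinary anatomy
    (0.4, (0.8, 0.1, 0.1), 0.5, 2.0),  # critical vessel lying behind it
    (0.7, (0.1, 0.1, 0.8), 0.5, 1.0),
]

# Peel by descending importance first, then by ascending depth, so that
# high-priority surfaces dominate the blend even when they are occluded.
order = sorted(fragments, key=lambda f: (-f[3], f[0]))

color, trans = np.zeros(3), 1.0  # accumulated color and transmittance
for depth, rgb, alpha, imp in order:
    color += trans * alpha * np.array(rgb)
    trans *= 1.0 - alpha

print(color, 1.0 - trans)  # blended RGB and overall opacity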
Fig. 4 Volume sectioning at an arbitrary perspective view. (a) Two orthogonal planes, UW and VW, are built at the tip of the endoscope using the virtual endoscopic view. (b) The UW plane viewed along the U direction and the VW plane viewed along the V direction after 3D sectioning. The section range (m, n, d) can be customized as needed.
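A minimal sketch of the sectioning test implied by the figure, assuming that U, V, and W form the orthonormal frame of the virtual endoscope at tip position P0 and that (m, n, d) are the extents of the section box; all names are illustrative, not taken from the paper.

import numpy as np

def in_section(p, p0, U, V, W, m, n, d):
    """True if sample point p lies within the (m, n, d) box around p0."""
    q = p - p0
    return (abs(np.dot(q, U)) <= m / 2 and
            abs(np.dot(q, V)) <= n / 2 and
            abs(np.dot(q, W)) <= d / 2)

p0 = np.zeros(3)
U, V, W = np.eye(3)  # endoscope frame; identity axes for the demo
print(in_section(np.array([0.1, 0.0, 0.02]), p0, U, V, W, 1.0, 1.0, 0.1))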
Fig. 5 Compositing process of hybrid rendering. (a) The cube represents the CT data, which together with the two surface datasets, Surfaces 1 and 2, forms the hybrid data space. (b) The data space is sectioned to obtain the region of interest. (c) The depth buffer is synchronized from volume-data rendering to the surfaces. (d) The color and transparency information of the volume data and the surface data are synthesized at each sampling point in the hybrid space. Finally, the rendering is fused with the endoscopic image in (e).
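As a toy version of step (d), volume samples and surface fragments along one ray can be merged by depth and composited together; the sample values below are fabricated purely for illustration.

import numpy as np

# (depth, RGB, alpha) tuples along one ray; values are made up.
volume_samples = [(0.15, (0.9, 0.8, 0.7), 0.05),
                  (0.45, (0.9, 0.8, 0.7), 0.05),
                  (0.75, (0.9, 0.8, 0.7), 0.05)]
surface_frags  = [(0.30, (0.8, 0.1, 0.1), 0.60)]  # e.g. a vessel surface

# Merge both data types by depth, then blend front to back.
ray = sorted(volume_samples + surface_frags, key=lambda s: s[0])
color, trans = np.zeros(3), 1.0
for depth, rgb, alpha in ray:
    color += trans * alpha * np.array(rgb)
    trans *= 1.0 - alpha
print(color, 1.0 - trans)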
Fig. 6 Comparison of gradient-enhanced volume rendering. (a) Lighting settings of the global illumination scene: a main light and a fill light are used to shade the volume dataset and enhance depth perception. (b) Direct volume rendering and (c) linear shading serve as comparisons. (d), (e), and (f) show the proposed volume shading method with different parameters. Areas marked with red solid and blue dotted lines are enlarged on the right. The SSIM differences between our method and direct volume rendering or linear gradient shading are shown as a color map, in which smaller differences appear lighter.
Fig. 7 SSIM between our approach (d, e, f) and volume rendering (b) and linear gradient shading (c), respectively.
Fig. 8 Importance-sorting rendering results. The first row shows the top view of the simulated scene setup (a) and the rendering result of the weighted average method (b). The second row shows the renderings of the OIT blend (c) and depth peeling (d) methods. The third row, (e) and (f), shows the results of our method under different exponential factors. The last row, (g) and (h), is obtained using the importance sorting method.
Fig. 9 Results of the questionnaire rated on a five-point Likert scale. (a) Histogram of respondents' scores: the vertical axis shows the number of responses at each level, each set of bars represents scores decreasing from 5 to 1, and each method was sampled 75 times. (b) Boxplot of the Likert scores: asterisks mark the maximum and minimum for each method, and within the boxes, small squares mark the mean while lines mark the median.
Fig. 10 Surface and hybrid rendering of segmented clinical data. The first column (a) shows the result of the depth peeling method, and the second column (b) shows our method. (c) and (d) are the hybrid rendering results (see Visualization 1).
Fig. 11 Real-time sectioning and fusion. (a), (b) Real-time cube sectioning; (c), (d) thin-layer sectioning. On clinical data, (e) and (f) show full-size sectioning fusion, while (g) and (h) show thin-layer sectioning fused with (e) and (f).
Fig. 12 Rendering frame rates for different rendering modes. (a) Frame rate with respect to the rotation angle. (b) Frame rates of the different rendering methods.

Tables (4)

Table 1 SSIM values under different shading methods.

Table 2 Surface data settings of the simulation scene.

Table 3 Likert scale results for depth-enhanced surface rendering.

Table 4 Rendering frame rates for different types of data.

Equations (12)


$$v_i = f(P_i) = f(x_i, y_i, z_i).$$
$$O_b = O_v\left(k_{bc} + k_{bw}\,\lVert\nabla f\rVert^{\,k_e}\right),$$
$$C_g = C_v\left(k_c + k_s\,e^{\,k_e\lVert\nabla f\rVert}\right),$$
$$C_f = \frac{1}{\sum_{i=1}^{n}\alpha_i}\sum_{i=1}^{n} C_i\left[S_i + \left(1 - \prod_{i=1}^{n}(1-\alpha_i)\right)\right] + C_0\left(\prod_{i=1}^{n}(1-\alpha_i)\right)^{k_i},$$
$$T_{\mathrm{ERT}}^{\mathrm{PFT}} = T_{\mathrm{ERT}}^{\mathrm{OTS}}\left(T_{\mathrm{PFT}}^{\mathrm{OTS}}\right)^{-1}.$$
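Read as matrix algebra, this relation chains the tracked endoscope pose into the patient frame through the tracking system. The sketch below mirrors the formula with placeholder translation-only poses; the frame names (ERT, PFT, and OTS for the endoscope tool, patient tool, and optical tracking system) are assumptions about the abbreviations, not definitions from the paper.

import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform containing only a translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_ERT_OTS = translation(10.0, 0.0, 0.0)  # endoscope tool in tracker frame
T_PFT_OTS = translation(0.0, 5.0, 0.0)   # patient tool in tracker frame

# Chain through the tracking system, mirroring the equation above.
T_ERT_PFT = T_ERT_OTS @ np.linalg.inv(T_PFT_OTS)
print(T_ERT_PFT[:3, 3])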
$$\begin{cases}\left(\pi_W^{\,j} - P_0\left(x_0 \pm \frac{m}{2},\, y_0,\, z_0\right)\right)\cdot W = 0,\\ \left(\pi_V^{\,j} - P_0\left(x_0,\, y_0 \pm \frac{n}{2},\, z_0\right)\right)\cdot V = 0,\\ \left(\pi_U^{\,j} - P_0\left(x_0,\, y_0,\, z_0 \pm \frac{d}{2}\right)\right)\cdot U = 0,\end{cases}\qquad j \in \{1, 2\},$$
$$A(z) = \left(1 - A(z-1)\right)\alpha(z) + A(z-1),$$
$$C_{r,g,b}(z) = \left(1 - A(z-1)\right)c_{r,g,b}(z) + C_{r,g,b}(z-1),$$
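The two recurrences above are the standard front-to-back accumulation of opacity and color. A direct transcription, treating c(z) as alpha-premultiplied single-channel color (an assumption made here for brevity), is a minimal sketch:

def composite(layers):
    """Front-to-back accumulation; layers = [(c, a), ...], nearest first,
    with c treated as alpha-premultiplied single-channel color."""
    A, C = 0.0, 0.0
    for c, a in layers:
        C = (1.0 - A) * c + C  # C(z) = (1 - A(z-1)) c(z) + C(z-1)
        A = (1.0 - A) * a + A  # A(z) = (1 - A(z-1)) alpha(z) + A(z-1)
    return C, A

print(composite([(0.45, 0.5), (0.10, 0.5), (0.30, 0.5)]))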
$$P_{\mathrm{Endo}}(i,j) = \begin{bmatrix} i \\ j \\ l \end{bmatrix} = \begin{bmatrix} u/w \\ v/w \\ 1 \end{bmatrix}\times l = P_{\mathrm{Cam}}(u,v,w)\times l/w,$$
$$\rho = \begin{cases}\alpha, & dis \in [0, t)\\ \alpha \times e^{-\tan\left(\frac{dis - t}{R - t}\cdot\frac{\pi}{2}\right)}, & dis \in [t, R)\\ 0, & dis \in [R, \min(M, N)/2),\end{cases}$$
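Assuming dis denotes a pixel's distance from the center of an M × N frame, and assuming the decaying sign of the exponent (which makes the weight fall continuously from α at dis = t toward 0 as dis approaches R), the mask can be sketched as follows; all names are illustrative:

import numpy as np

def radial_mask(M, N, t, R, alpha=1.0):
    """Blending weight per pixel of an M x N image, by distance from center."""
    i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    dis = np.hypot(i - M / 2.0, j - N / 2.0)
    rho = np.zeros((M, N))
    rho[dis < t] = alpha
    mid = (dis >= t) & (dis < R)
    rho[mid] = alpha * np.exp(-np.tan((dis[mid] - t) / (R - t) * np.pi / 2))
    return rho  # pixels with dis >= R keep weight 0

mask = radial_mask(64, 64, t=10.0, R=28.0)
print(mask[32, 32], mask[32, 60])  # center vs. near the rim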
$$P_{\mathrm{Fusion}}(i,j) = \frac{n\,P_{\mathrm{GED}}(i,j) + P_{\mathrm{Hybrid}}(i,j) + P_n(i,j)}{n}.$$
$$\begin{cases} X_1Y_1Z_1 = \begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{bmatrix}XYZ,\\[4pt] X_2Y_2Z_2 = \begin{bmatrix}1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{bmatrix}XYZ.\end{cases}$$
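With the minus signs restored on the usual off-diagonal sine terms, the two chains compose standard rotations about the z and y axes and about the x and y axes, respectively. A minimal NumPy sketch under that assumption:

import numpy as np

def Rx(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Ry(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def Rz(t):
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

theta = np.deg2rad(30.0)
xyz = np.array([1.0, 0.0, 0.0])
x1y1z1 = Rz(theta) @ Ry(theta) @ xyz  # first chain
x2y2z2 = Rx(theta) @ Ry(theta) @ xyz  # second chain
print(x1y1z1, x2y2z2)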