Abstract

Using near-infrared (NIR) light with 700–1200 nm wavelength, transillumination images of small animals and thin parts of a human body such as a hand or foot can be obtained. They are two-dimensional (2D) images of internal absorbing structures in a turbid medium. A three-dimensional (3D) see-through image is obtainable if one can identify the depth of each part of the structure in the 2D image. Nevertheless, the obtained transillumination images are severely blurred because of the strong scattering in the turbid medium. Moreover, ascertaining the structure depth from a 2D transillumination image is difficult. To overcome these shortcomings, we have developed a new technique using deep learning principles. A fully convolutional network (FCN) was trained with 5,000 training pairs of clear and blurred images. Also, a convolutional neural network (CNN) was trained with 42,000 training pairs of blurred images and corresponding depths in a turbid medium. Numerous training images were generated by convolution with a point spread function derived from the diffusion approximation to the radiative transport equation. The validity of the proposed technique was confirmed through simulation. Experiments demonstrated its applicability. This technique can provide a new tool for the NIR imaging of animal bodies and biometric authentication of a human body.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In medical and biometric applications, the three-dimensional (3D) structure of blood vessel networks provides crucial information for diagnosis, treatment evaluation, and personal authentication. For example, this information is extremely helpful for evaluating cancer invasion depth, raising the precision of robotic surgery, and stepping up vein authentication from 2D to 3D. Imaging techniques available in medical fields such as X-ray CT, MRI, and PET can provide high-quality 3D images, but they require hazardous radiation or large-scale equipment. Recently, acousto-optic imaging techniques have been used to visualize the 3D blood vessel structure of a human body [1–6]. These techniques are safe and useful, but they require both light and ultrasound. This requirement not only complicates the system; it also makes contact with the lesion unavoidable.

Optical transillumination imaging techniques are other candidates for visualizing blood vessel networks. Using these techniques, non-contact measurements can be taken with simple, compact, and safe equipment. Major veins become visible under visible-light illumination if the subcutaneous vein lies at a depth of a few millimeters under the skin [7–11]. Nevertheless, the image is not clear, and deeper blood vessels are not visible. With near-infrared (NIR) light, veins can be visualized better because less scattering and absorption occurs than with visible light. Some instruments are available that provide a vessel network pattern using NIR light [12–18]. However, the captured vein image before image processing is severely blurred because of light scattering in the interstitial tissue between the vein and the skin. Diffuse scattering at the body surface further degrades the vein image in non-contact measurement. These effects degrade the transillumination image quality and make visualization of the 3D structure difficult.

Humans can imagine a clear image from a blurred one. Similarly, one can estimate an object's depth in a turbid medium, much as one infers the approximate depth of a fish in muddy water from its blurred appearance. This ability seems to derive from many earlier experiences. Therefore, a clear image might be obtained from a blurred one, and the depth of an absorber might be estimated, using a well-trained neural network on a computer. Similar ideas for deblurring and depth estimation have been reported [19–21]. For example, Afifi et al. modified a CNN for depth estimation in clear air from a single RGB image [19]. Lo et al. proposed a technique that uses monocular color images to estimate depth information in clear air with a CNN [20]. However, these techniques cannot be applied to a blurred image in a turbid medium. Sabir et al. presented a CNN-based technique to estimate the bulk optical properties (absorption and scattering coefficients) of a highly scattering medium such as biological tissue in diffuse optical tomography (DOT) [21]. Similarly, Yoo et al. proposed a technique that uses a CNN to obtain the distribution of optical anomalies for DOT [22]. They used a conventional DOT system, which requires many optical fibers to obtain a large number of input and output signals and extensive calculation to solve inverse problems. Our technique requires only a wide illumination device and a single camera with relatively simple video-capture software. In addition, few reports have addressed the combination of deblurring and depth estimation using deep learning, particularly for transillumination images. With a view toward better visualization of the blood vessel network, we propose a new technique to obtain a clear 3D structure from a blurred 2D transillumination image. The validity of the proposed technique was examined in simulation. Its applicability was tested through experimentation.

2. Methods

2.1 Training data generation

The proposed technique is based on deblurring and depth estimation using a neural network trained for blurred images. A neural network (NN) designed for deep learning was used for this study. To train the NN for deblurring, we feed many pairs of clear and corresponding blurred images to a computer system. To train the NN for depth estimation, we feed many pairs of the depth of the absorber in a turbid medium and the blurred image of the absorber.

Generally, better NN performance can be expected with a greater number of training pairs, up to the overfitting limit. In our system, the number of training pairs ranged from several hundred to a few thousand in a single epoch. Preparing such numerous training pairs in a practical measurement is unrealistic. Therefore, we generated blurred images by convolving original images with a point spread function (PSF). The PSF, based on the model presented in Fig. 1, is given as [23]:

$$PSF(\rho ) = C\left[ ({\mu_s}^{\prime} + {\mu_a}) + \left( {\kappa_d} + \frac{1}{\sqrt{\rho^2 + d^2}} \right) \frac{d}{\sqrt{\rho^2 + d^2}} \right] \frac{\exp \left( -{\kappa_d}\sqrt{\rho^2 + d^2} \right)}{\sqrt{\rho^2 + d^2}},$$
where ${\kappa_d} = {[3{\mu_a}({\mu_s}^{\prime} + {\mu_a})]^{1/2}}$. Here, C, µs′, µa, and d respectively represent a constant with respect to ρ and d, the reduced scattering coefficient, the absorption coefficient, and the absorber depth.
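As an illustration, the PSF of Eq. (1) can be evaluated numerically. The following is a minimal sketch, assuming C = 1 and the tissue-like parameters used later in the paper (µs′ = 1.0 /mm, µa = 0.01 /mm); the function name and defaults are ours, not from the original implementation:

```python
import numpy as np

# Sketch of the PSF in Eq. (1) from the diffusion approximation.
# mu_sp is the reduced scattering coefficient mu_s' (/mm), mu_a the
# absorption coefficient (/mm), d the absorber depth (mm), rho the
# radial distance on the surface (mm). C only scales the profile,
# so C = 1 is assumed here.
def psf(rho, d, mu_sp=1.0, mu_a=0.01, C=1.0):
    kappa_d = np.sqrt(3.0 * mu_a * (mu_sp + mu_a))  # effective attenuation coefficient
    r = np.sqrt(rho**2 + d**2)                      # distance from the point source
    term = (mu_sp + mu_a) + (kappa_d + 1.0 / r) * (d / r)
    return C * term * np.exp(-kappa_d * r) / r
```

As expected for a point-source surface distribution, the profile is positive and decays monotonically with ρ for a fixed depth.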

Fig. 1. Geometry for PSF as light distribution observed at the scattering medium surface.

This PSF was originally derived for the light intensity distribution on the surface of a turbid medium for a point light source, as presented in Fig. 1 [23]. In contrast, a transillumination image is a blurred shadow of an absorber in a turbid medium. It has been verified that this PSF is applicable to the blur in transillumination imaging when the image is regarded as a collection of point absorbers [24]. Using this PSF, we can obtain blurred images for a specific depth in a turbid medium. The calculated images were well suited for training the neural network. In this calculation, the background of an image is assumed to be homogeneous. In practice, however, the medium is often inhomogeneous in scattering and absorption coefficients. The image blur depends much more on scattering than on absorption. In macroscopic imaging of animals, the target is often the absorption distribution, and the scattering coefficient does not vary much in the viewing area. Therefore, this PSF can be used to simulate the blur under practical variations. The applicability of this PSF in practice was reported previously [23]. Figure 2 presents examples of training pairs obtained by PSF convolution with different depths. The original and blurred images were used as a training pair for the fully convolutional network (FCN) that deblurs the image. Depth d and the blurred image were used as a training pair for the convolutional neural network (CNN) that estimates the absorber depth.
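The blurred-image generation described above can be sketched as follows. This is a minimal example under assumptions of ours (a square image, a pixel pitch of 0.1 mm, circular FFT convolution, and a normalized kernel); the paper does not specify these implementation details:

```python
import numpy as np

def psf_kernel(size, d, pitch=0.1, mu_sp=1.0, mu_a=0.01):
    """Sampled PSF kernel of Eq. (1), normalized to unit sum.
    pitch is the assumed pixel size in mm (hypothetical value)."""
    ax = (np.arange(size) - size // 2) * pitch
    xx, yy = np.meshgrid(ax, ax)
    kappa_d = np.sqrt(3.0 * mu_a * (mu_sp + mu_a))
    r = np.sqrt(xx**2 + yy**2 + d**2)
    k = ((mu_sp + mu_a) + (kappa_d + 1.0 / r) * d / r) * np.exp(-kappa_d * r) / r
    return k / k.sum()  # normalization keeps total image intensity

def blur(image, d, **kw):
    """Blurred transillumination image: convolution with the depth-d PSF."""
    k = psf_kernel(image.shape[0], d, **kw)
    K = np.fft.fft2(np.fft.ifftshift(k))  # move kernel center to (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))
```

A point absorber blurred this way spreads into the PSF profile while the total intensity is preserved, which is how the clear/blurred training pairs in Fig. 2 can be produced.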

Fig. 2. Training pairs generated by PSF convolution.

2.2 Deblurring with FCN

We can expect to obtain a blur-free image as the output of an NN for a blurred input image if we train the NN with many pairs of images before and after blurring. We used an NN developed for deep learning. In deep learning, a CNN is commonly used for classification, detection, and segmentation of images. In our application, the NN output should be a modified image. Therefore, we used an FCN, in which the last fully connected layer of the NN is replaced by a convolutional layer. For the FCN, we used an NN based on U-net with skip connections [25,26] to improve the image-processing accuracy. Figure 3 presents the concept of deblurring with the FCN.

Fig. 3. Concept of deblurring with FCN: (a) training process and (b) image deblurring process.

2.3 Depth estimation with CNN

In transillumination imaging of animal bodies, images of absorbers such as blood vessel networks are blurred by strong light scattering in the body tissue. The degree of blur depends on the absorber depth in the turbid medium. As the depth increases, the transillumination image of the absorber becomes more blurred. Therefore, if we train an NN with many pairs of a blurred image and the corresponding depth, the NN will output the depth for a new blurred input image. This is a common classification task in deep learning. Figure 4 portrays the concept of depth estimation with the CNN.

Fig. 4. Depth estimation with CNN: (a) training process and (b) depth estimation process.

2.4 Clear 3D image from blurred 2D image

From transillumination imaging of an animal body, one obtains a 2D blurred image of an absorbing structure in the body. With the FCN and CNN, one can deblur the 2D image and obtain the absorber depth. After dividing the image into small parts and obtaining the depth for each part, one can reconstruct a clear three-dimensional image of the absorbing structure. If part of an absorber overlaps another absorber in a single 2D image, the depth of the rear absorber cannot be obtained in that part. In such a case, 2D images should be taken from a few orientations, and the processes presented above repeated.
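The patch-wise depth estimation described above can be sketched as follows. This is a hypothetical outline: `estimate_depth` stands in for the trained CNN (the paper's actual network is ResNet-based), and the non-overlapping square tiling is our assumption:

```python
import numpy as np

# Divide the (deblurred) 2D image into small patches and estimate one
# depth per patch; the resulting depth map plus the 2D image gives the
# 3D structure. estimate_depth is a placeholder for the trained CNN.
def depth_map(image, patch, estimate_depth):
    h, w = image.shape
    out = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = estimate_depth(tile)  # one depth value per patch
    return out
```

With a real estimator, each entry of the returned map would be the absorber depth of that image region, which is what the 3D reconstruction in Sec. 3.3 and Sec. 4.5 uses.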

3. Feasibility test in simulation

3.1 Deblurring with FCN

As described in Sec. 2.2, we can expect to obtain a deblurred image as the output of the FCN. The feasibility of this technique was examined through simulation. For the FCN, we used the U-net with skip connections [25,26]. A skip connection in the U-net links the encoding network for a blurred image with the decoding network for a clear image, so that the features of a sampling layer in the encoder are transmitted directly to the corresponding sampling layer in the decoder, which makes pixel localization in the network more accurate.

To train the FCN, we generated 5,000 pairs of clear and blurred images. The original images were 10 patterns made artificially to simulate images of the subcutaneous blood vessel network. The blurred images were generated from the original clear images by convolution with the PSF given in Eq. (1). The optical parameters were those of general human body tissue, i.e., µs′ = 1.0 /mm and µa = 0.01 /mm. These parameters were used in all simulations described hereinafter. PSFs with 10 depths were applied to the 10 patterns, and the images were rotated in 50 orientations to produce 5,000 training pairs. Subsequently, the 5,000 training pairs were fed into the FCN for training with a batch size of 10, a filter size of 3 × 3, and 100 epochs.
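The pattern × orientation × depth expansion of the training set can be sketched as below. This is a simplified stand-in: it uses 8 orientations (90-degree rotations and mirror flips) instead of the paper's 50 finer rotations, and `blur_at_depth` is a hypothetical hook for the PSF convolution:

```python
import numpy as np

# Build (clear, blurred, depth) triples from a few base patterns by
# augmenting orientations and applying depth-dependent blur.
def make_pairs(patterns, depths, blur_at_depth):
    pairs = []
    for img in patterns:
        for k in range(4):                # 4 right-angle rotations
            for flip in (False, True):    # x 2 mirror images = 8 orientations
                o = np.rot90(img, k)
                o = np.fliplr(o) if flip else o
                for d in depths:
                    pairs.append((o, blur_at_depth(o, d), d))
    return pairs
```

With 10 patterns, 50 orientations, and 10 depths this scheme yields the 5,000 pairs used for the FCN; the sketch below the paper's scale, but the combinatorics are the same.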

To test the FCN, 400 test images were generated from four original patterns that differed from the 10 original patterns used for training. The training was done on a workstation (Intel Core i7-7700K CPU; 3.00 GHz; 32 GB memory). The FCN was implemented in Python on a workstation equipped with a graphics processing unit (GeForce GTX 1080 Ti).

Figure 5 presents examples of a training pair, an input test image, an output image from the FCN, and an original image before blurring. Using the trained FCN, we were able to restore the clear original image from the badly blurred image. The trained FCN accommodated new blurred patterns well at different absorber depths. To analyze the deblurring effect, the quality of the output image from the FCN was evaluated by correlation analysis. Figure 6 presents the correlation between the output image and the original image before blurring. As the absorber depth increases, the blur becomes severe and the deblurred image quality becomes worse. In Fig. 6, the FCN performance is compared with that obtained by training of different types. The decrease of image quality with absorber depth was considerable with fewer training data: with one fifth of the training data, the image quality decreased rapidly with absorber depth. For training, more depths seemed to produce better results than more image orientations; however, this difference was much smaller than that attributable to the number of training pairs. These results show that we can obtain a clear image for an absorber as deep as several to 10 mm in a turbid medium. In this study, we did not add noise to the training data. Nevertheless, using this trained system, we could obtain a clear, noise-free image with few defects even for an input image with random noise. We also confirmed that an image free of such defects is obtained when the system is trained with Gaussian noise. These analyses verified the FCN capability to deblur an image given sufficient training and an appropriate choice of training data. If images captured in a practical environment are used for training, further improvement in performance can be expected, but it is often not easy to capture a sufficient number of training images.

Fig. 5. Typical examples of images for FCN: (a) training pair, (b) input and output of FCN, and (c) original image before blurring.

Fig. 6. Correlation analysis between original and deblurred images.

3.2 Depth estimation by CNN

As described in Sec. 2.3, we can expect to obtain the absorber depth as the output of the CNN. The feasibility of this technique was examined through simulation. For the CNN, we used a network based on ResNet, first introduced by He et al. in 2015 and ranked among the top networks in top-5 accuracy [27]. ResNet is a classification model that uses a very deep neural network. High accuracy can be expected with a sufficient number of training data; the accuracy drops as the training data decrease. To overcome this training difficulty and to make the CNN applicable to our specific task, we used the PSF given in Eq. (1) to generate training data. For the training and test data, we generated 60,000 blurred images with known absorber depths: 10 kinds of original absorber patterns, rotated in 60 orientations, at depths of 0.1–10.0 mm (0.1 mm step). Each image was blurred by convolution using Eq. (1) with the specified depth. Training used stochastic gradient descent with momentum for better optimization. The typical values for the batch size, the learning rate, and the number of epochs were, respectively, 32, 10−4, and 10. We tried different values and found this condition appropriate for the computational time and stability of our system. To reduce the learning rate gradually, we set the learning-rate schedule to "piecewise" and shuffled the data every epoch, which helps ResNet learn representative features effectively.
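Because the CNN treats depth estimation as classification, each depth must map to a discrete class. The following sketch shows one plausible encoding for the 0.1–10.0 mm range with a 0.1 mm step (100 classes); the exact encoding used by the authors is not stated, so this is an assumption:

```python
import numpy as np

# Assumed mapping between physical depth (mm) and CNN class index:
# depth 0.1 mm -> class 0, ..., depth 10.0 mm -> class 99.
DEPTHS = np.round(np.arange(0.1, 10.05, 0.1), 1)  # the 100 candidate depths

def depth_to_class(d):
    return int(round(d * 10)) - 1

def class_to_depth(c):
    return (c + 1) / 10.0
```

A classifier output (a class index) is converted back to a depth with `class_to_depth`, and the round trip is exact on the 0.1 mm grid.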

These images were split randomly into two subsets: 70% for training and 30% for testing. The training was performed on the same workstation as described in Sec. 3.1. After training the CNN, we fed it test data to which it had never been exposed and obtained the estimated depth as the CNN output. Figure 7 portrays examples of input images and output depths of the trained CNN. Figure 8 presents a comparison between the given and estimated depths. Error bars show the mean and standard deviation of the estimated depths for 10 images. They agreed well, within 2% average error up to 10 mm depth. From this result, we can expect a depth resolution of about 1 mm at 5 mm depth and about 2 mm at 10 mm depth. The lateral resolution and the signal-to-noise ratio of the output image are close to those of the original image because of the high correlation coefficient between the original and output images of the FCN. This result suggests the feasibility of estimating the depth of an absorber in a blurred transillumination image using the CNN.

Fig. 7. Examples of input image and estimated depth of CNN: (given depth) → (estimated depth)

Fig. 8. Correlation analysis between given and estimated depths: Error bars show mean ± standard deviation of N = 10 estimations.

3.3 Clear 3D image from a blurred 2D image

Figure 9(a) portrays the simulation model of the absorber in a turbid medium. Uniform light illuminates the rectangular turbid medium from the left. A transillumination image of a slanted bar absorber was observed through the right surface of the medium. The depth of the bar, measured from the right surface, varied from 8.65 mm to 13.0 mm from top to bottom. The blurred image was obtained by applying the PSF of Eq. (1) to each part of the model with the corresponding absorber depth. Figures 9(b) and 9(c) present the absorber image in clear water and a transillumination image of the slanted bar, respectively. Figure 9(d) portrays the output from the trained FCN. Figure 9(e) shows the 3D image reconstructed using Fig. 9(d) and the depth distribution obtained from the trained CNN. This result suggests the feasibility of obtaining a clear 3D structure from a blurred 2D transillumination image.

Fig. 9. Imaging of clear 3D structure from blurred 2D transillumination image in simulation: (a) structure of simulation model, (b) original image obtained with transparent medium, (c) blurred 2D transillumination image, (d) output image from trained FCN, and (e) 3D image reconstructed with depths from trained CNN.

4. Verification by experimentation

4.1 Transillumination imaging system

The applicability of the proposed technique was examined in experiments. Figure 10 shows the outline of the transillumination imaging system. The light source was an array of 50 LEDs (810 nm wavelength, 50 × 1 mW optical power, OSLUX IR, PowerStar; EMSc UK Ltd.). A black-painted Y-shaped absorber (3.0 mm diameter, 75 mm height) was fixed in a rectangular acrylic container (40 × 100 × 60 mm3 internal size) filled with a turbid medium with tissue-equivalent optical parameters. Intralipos suspension (Otsuka Pharmaceutical Co. Ltd.) was mixed with pure water to produce the turbid medium (µs′ = 1.0 /mm, µa = 0.00536 /mm) [28–31]. One side of the container was illuminated with the light source. A transillumination image was recorded with a cooled CCD camera (ORCA-R2 C10600; Hamamatsu Photonics KK) from the opposite side of the container. The absorber depth was varied from 1.00 to 10.0 mm from the observation surface of the container using a mechanical translation system.

Fig. 10. Outline of transillumination system.

4.2 Suppression of background inhomogeneity

Training data for the FCN and CNN were generated on the assumption that the illumination was uniform over a sufficient area around the absorbing object. In practical transillumination imaging, this assumption can hardly be satisfied because of the finite size of the light source. Figure 11(a) shows a typical transillumination image obtained in the experiment with a bar absorber in a turbid medium. The effect of the non-uniform illumination appears in the background of the absorber image. If the background intensity distribution without the absorber is available, we can eliminate the effect of the non-uniform illumination by dividing the transillumination image by the background distribution.

Fig. 11. Background elimination in transillumination image: (a) transillumination image through turbid medium, (b) measured background (BG) without absorber in medium, (c) calculated background (BG) with Eq. (2), (d) result of image division (a)/(b), and (e) result of image division (a)/(c).

In an experiment with a model phantom, it is not difficult to obtain the background image without the target absorber. However, in practical applications such as transillumination imaging of animal bodies, we cannot take the target absorber out of the body. In such a case, we therefore calculate the background image as the convolution of the light distribution at the illumination side of the turbid medium and the point spread function of Eq. (1), with the depth set to the total thickness of the medium:

$${I_b}({x,y}) = {I_s}({x,y}) \ast PSF({x,y;d = t}),$$
where Ib, Is, d, and t respectively denote the background light distribution, the source light distribution at the illuminated surface of the turbid medium, the depth of the absorber in the turbid medium, and the turbid medium thickness. Because Is(x,y) and t are measurable from outside the body, the background light distribution Ib(x,y) can be obtained irrespective of the target absorber at unknown depth in the body.
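The background correction of Eq. (2) can be sketched as below: the source distribution is convolved with the PSF at d = t, and the observed image is divided by the result. The pixel pitch and the circular FFT convolution are our assumptions; the optical parameters are those of the experimental medium:

```python
import numpy as np

def psf_kernel(size, d, pitch=0.5, mu_sp=1.0, mu_a=0.00536):
    """Normalized PSF kernel of Eq. (1); pitch (mm/pixel) is assumed."""
    ax = (np.arange(size) - size // 2) * pitch
    xx, yy = np.meshgrid(ax, ax)
    kappa_d = np.sqrt(3.0 * mu_a * (mu_sp + mu_a))
    r = np.sqrt(xx**2 + yy**2 + d**2)
    k = ((mu_sp + mu_a) + (kappa_d + 1.0 / r) * d / r) * np.exp(-kappa_d * r) / r
    return k / k.sum()

def remove_background(image, source, thickness):
    """Eq. (2): I_b = I_s * PSF(d = t); then divide the image by I_b."""
    k = psf_kernel(image.shape[0], thickness)
    bg = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(np.fft.ifftshift(k))))
    return image / np.maximum(bg, 1e-12)  # guard against division by zero
```

For a uniform source the calculated background is flat and the division leaves the image unchanged, which is the expected limiting case.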

The validity of this technique was tested using measured transillumination images. Figure 11 presents the results of background elimination. Figures 11(a)–11(e) respectively depict an observed transillumination image of a bar-shaped absorber in a turbid medium, a background measured by removing the absorber from the medium, a background calculated with Eq. (2), the result of image division (a)/(b), and the result of image division (a)/(c). Figure 12 presents a comparison of the intensity profiles along the central horizontal lines in Figs. 11(b) and 11(c), and in Figs. 11(d) and 11(e). They agreed well. These results suggest that we can eliminate the effect of inhomogeneous illumination by calculating the background image from the light-source distribution and the outer thickness of the turbid medium. The calculation requires the reduced scattering coefficient and the absorption coefficient of the medium. These values are available from the literature or from separate measurement. The reduced scattering coefficient does not change much under normal physiological variation, and the PSF depends much less on the absorption coefficient than on the scattering coefficient. For the following experiments, this background-elimination technique was applied to the measured transillumination images.

Fig. 12. Background elimination with measured and calculated background images: (a) intensity profiles along central horizontal lines in Figs. 11(b) and 11(c), and (b) intensity profiles in Figs. 11(d) and 11(e).

4.3. Deblurring by FCN

The applicability of the proposed technique to obtain a clear transillumination image from a blurred image using the FCN was tested in experiments. In the container presented in Fig. 10, we placed an absorber made of 3-mm-diameter black plastic wire. For reference, a transillumination image was taken after the container was filled with clear water. Then the water was replaced by the tissue-simulating turbid medium (µs′ = 1.0 /mm, µa = 0.00536 /mm), and a transillumination image was taken. After background elimination, the blurred image was fed to the FCN, which had been trained with the simulated images described in Sec. 3.1. For further analysis, the output from the FCN was compared with the reference image taken through clear water.

Figure 13 presents examples of these images. The effectiveness of this technique was evaluated using correlation analysis. Figure 14 presents the correlation between the output image from the FCN and the corresponding image through clear water for 30 absorbers. As the absorber depth increased, the transillumination image became more blurred and image recovery became more difficult. However, with our technique using the skip connections of the FCN, a correlation coefficient of more than 0.94 was attained even at 10.0 mm depth. This result verified the applicability of the FCN to deblur transillumination images as deep as 10.0 mm. For comparison, the correlation of the output images from the FCN without skip connections was analyzed. The correlation coefficient decreased, and its variation increased, with depth; the fluctuation of the variation at depths greater than 5 mm was irregular. The difference in the correlation coefficients with and without the skip connections was apparent. This result demonstrates the effectiveness of the skip connections.
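The correlation scores above compare the FCN output with the clear-water reference pixel by pixel. A minimal sketch of such an image-correlation metric (Pearson correlation over flattened pixel values; the paper does not specify its exact computation):

```python
import numpy as np

# Pearson correlation coefficient between two images of equal shape,
# computed over their flattened pixel values.
def image_correlation(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```

The coefficient is 1 for images related by any positive affine intensity change and −1 for inverted contrast, so it rewards structural agreement regardless of overall brightness or gain.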

Fig. 13. Result of scattering suppression by FCN at depth d = 6.00 mm: (a) image taken through clear water; (b) transillumination image through turbid medium; (c) transillumination image after background elimination; and (d) output image from trained FCN.

Fig. 14. Correlation of deblurred images from FCN to original blurless images, N = 30.

4.4. Depth estimation by CNN

The applicability of the proposed technique to obtain the absorber depth from a 2D transillumination image using the CNN was tested in experiments. Figure 15 shows the structure of the absorber with varying depth and its transillumination image. The background inhomogeneity was removed using Eq. (2). The blurred image was fed to the CNN trained with the simulated images described in Sec. 3.2. Figure 16 presents the result of depth estimation. As the depth increased, the estimation error increased. However, the average error was within 3.5%. High correlation between the given and estimated depths was confirmed. This result demonstrated the applicability of the CNN to estimate the absorber depth down to at least 10 mm.

Fig. 15. Absorber used for depth estimation: (a) structure in turbid medium, (b) transillumination image, and (c) image after background elimination.

Fig. 16. Correlation between estimated depth by CNN and given depth during experimentation.

4.5. Clear 3D imaging from blurred 2D image

The applicability of the proposed technique to obtain a clear 3D structure in a turbid medium from a blurred 2D transillumination image was tested through experimentation. Figure 17 presents the absorber structure in the turbid medium and its transillumination image. The absorber depth varied from one place to another. Figure 18 shows the 3D transillumination image obtained using the clear image from the FCN and the depth distribution from the CNN. This result verified the applicability of the proposed technique to obtain a clear 3D image from a single blurred 2D transillumination image.

Fig. 17. Absorber used for 3D imaging: (a) structure in turbid medium and (b) transillumination image.

Fig. 18. 3D images obtained from clear image from FCN and depth distribution from CNN: (a) ground-truth image in clear water and (b) 3D images viewed from different angles.

5. Conclusions

To expand the usefulness of transillumination imaging through a turbid medium with NIR light, a technique was developed to obtain a clear 3D image from a blurred 2D transillumination image. The severe blur caused by the turbid medium was removed using an FCN trained with 5,000 training images in deep learning. The absorber depth in a turbid medium was estimated using a CNN trained with 42,000 training pairs. The difficulty of obtaining numerous training data was solved by convolution with a point spread function derived from the diffusion approximation to the radiative transport equation. The problem posed by inhomogeneous illumination was resolved through background elimination using quantities measurable from outside the turbid medium. The feasibility of the proposed technique was confirmed in simulation, and its validity was verified through experimentation. The effectiveness of the proposed technique was demonstrated for absorbing structures at depths from several millimeters to 10 mm in a tissue-simulating turbid medium of 40 mm thickness. There have been many attempts to sharpen blurred images, but few have made transillumination imaging useful in medical practice; poor flexibility for different depths of multiple targets in a turbid medium is one of the reasons. The proposed technique can solve this problem. Its tradeoff compared with other techniques is the requirement for a large amount of training data and large computational power; however, these requirements can be met with an appropriate PSF and the current progress of computers. In this study, we examined the feasibility of deep learning to clarify the blurred image and to estimate the absorber depth with an FCN and a CNN separately. Combining them into a single network would be useful; implementing these functions in one network is a future task.

Results suggest that this technique is useful for observing the subcutaneous structure of the blood vessel network and identifying its depth distribution down to several millimeters. Because this technique requires only optics, with no contact, ultrasound, or other supplements, it can provide a new tool for diagnosis in dermatology and for the assessment of various cancers, vascular diseases, and tissue metabolism. It can also step up vein authentication from 2D to 3D. The application of the proposed technique to animal tissue should be pursued further.

Funding

Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (17H02112, 18K18865, 20K20537).

Acknowledgments

The authors are grateful to Mr. Xuan Wang and Mr. Ken Akasaka of the Graduate School of Information, Production and Systems, Waseda University for their help in developing deep learning systems. This study was supported by Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science.

Disclosures

The authors declare that they have no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006). [CrossRef]  

2. E. Maeva, F. Severin, C. Miyasaka, B. R. Tittmann, and R. G. Maev, “Acoustic imaging of thick biological tissue,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 56(7), 1352–1358 (2009). [CrossRef]  

3. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus. 1(4), 602–631 (2011). [CrossRef]  

4. R. G. Maev, “Advances in acoustic microscopy and high resolution ultrasonic imaging: From principles to new applications,” Proc. SPIE 9040, 904007 (2014). [CrossRef]  

5. A. B. E. Attia, G. Balasundaram, M. Moothanchery, U. S. Dinish, R. Bi, V. Ntziachristos, and M. Olivo, “A review of clinical photoacoustic imaging: Current and future trends,” Photoacoustics 16, 100144 (2019). [CrossRef]  

6. M. A. L. Bell, “Photoacoustic imaging for surgical guidance: principles, applications, and outlook,” J. Appl. Phys. 128(6), 060904 (2020). [CrossRef]  

7. E. C. Lee, H. Jung, and D. Kim, “New finger biometric method using near infrared imaging,” Sensors 11(3), 2319–2333 (2011). [CrossRef]  

8. N. J. Cuper, J. H. Klaessens, J. E. Jaspers, R. de Roode, H. J. Noordmans, J. C. de Graaff, and R. M. Verdaasdonk, “The use of near-infrared light for safe and effective visualization of subsurface blood vessels to facilitate blood withdrawal in children,” Med. Eng. & Phys. 35(4), 433–440 (2013). [CrossRef]  

9. A. M. García and P. R. Horche, “Light source optimizing in a biphotonic vein finder device: experimental and theoretical analysis,” Results Phys. 11, 975–983 (2018). [CrossRef]  

10. C. A. Mela, D. P. Lemmer, F. S. Bao, F. Papay, T. Hicks, and Y. Liu, “Real-time dual-modal vein imaging system,” Int. J. CARS 14(2), 203–213 (2019). [CrossRef]  

11. K. Efendiev, P. Grachev, A. Moskalev, and V. Loschenov, “Non-invasive high-contrast infrared imaging of blood vessels in biological tissues by the backscattered laser radiation method,” Infrared Phys. Technol. 111, 103562 (2020). [CrossRef]  

12. M. Kono, H. Ueki, and S. Umemura, “Near-infrared finger vein patterns for personal identification,” Appl. Opt. 41(35), 7429–7436 (2002). [CrossRef]  

13. F. Leblond, S. C. Davis, P. A. Valdés, and B. W. Pogue, “Pre-clinical whole-body fluorescence imaging: Review of instruments, methods and applications,” J. Photochem. Photobiol., B 98(1), 77–94 (2010). [CrossRef]  

14. J. Yang and Y. Shi, “Towards finger-vein image restoration and enhancement for finger-vein recognition,” Inf. Sci. 268, 33–52 (2014). [CrossRef]  

15. L. A. Sordillo, Y. Pu, S. Pratavieira, Y. Budansky, and R. R. Alfano, “Deep optical imaging of tissue using the second and third near-infrared spectral windows,” J. Biomed. Opt. 19(5), 056004 (2014). [CrossRef]  

16. D. Kim, Y. Kim, S. Yoon, and D. Lee, “Preliminary study for designing a novel vein-visualizing device,” Sensors 17(2), 304 (2017). [CrossRef]  

17. S. Merlo, V. Bello, E. Bodo, and S. Pizzurro, “A VCSEL-Based NIR transillumination system for morpho-functional imaging,” Sensors 19(4), 851 (2019). [CrossRef]  

18. C. T. Pan, M. D. Francisco, C. K. Yen, S. Y. Wang, and Y. L. Shiue, “Vein pattern locating technology for cannulation: a review of the low-cost vein finder prototypes utilizing near infrared (nir) light to improve peripheral subcutaneous vein selection for phlebotomy,” Sensors 19(16), 3573 (2019). [CrossRef]  

19. A. J. Afifi and O. Hellwich, “Object depth estimation from a single image using fully convolutional neural network,” 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1–7 (2016).

20. F. P. W. Lo, Y. Sun, and B. Lo, “Depth estimation based on a single close-up image with volumetric annotations in the wild: a pilot study,” 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 513–518 (2019).

21. S. Sabir, S. Cho, Y. Kim, R. Pua, D. Heo, K. H. Kim, Y. Choi, and S. Cho, “Convolutional neural network-based approach to estimate bulk optical properties in diffuse optical tomography,” Appl. Opt. 59(5), 1461–1470 (2020). [CrossRef]  

22. J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020). [CrossRef]  

23. K. Shimizu, K. Tochio, and Y. Kato, “Improvement of transcutaneous fluorescent images with a depth-dependent point-spread function,” Appl. Opt. 44(11), 2154–2161 (2005). [CrossRef]  

24. T. N. Tran, K. Yamamoto, T. Namita, Y. Kato, and K. Shimizu, “Three-dimensional transillumination image reconstruction for small animal with new scattering suppression technique,” Biomed. Opt. Express 5(5), 1321–1335 (2014). [CrossRef]  

25. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3431–3440 (2015).

26. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention – MICCAI, 234–241 (2015).

27. L. He, G. Wang, and Z. Hu, “Learning depth from single images with deep neural network embedding focal length,” IEEE Trans. on Image Process. 27(9), 4676–4689 (2018). [CrossRef]  

28. A. Shahin, M. S. El-Daher, and W. Bachir, “Determination of the optical properties of Intralipid 20% over a broadband spectrum,” Photon.Lett.PL 10(4), 124–126 (2018). [CrossRef]  

29. E. Ohmae, N. Yoshizawa, K. Yoshimoto, M. Hayashi, H. Wada, T. Mimura, H. Suzuki, S. Homma, N. Suzuki, H. Ogura, H. Nasu, H. Sakahara, Y. Yamashita, and Y. Ueda, “Stable tissue-simulating phantoms with various water and lipid contents for diffuse optical spectroscopy,” Biomed. Opt. Express 9(11), 5792–5808 (2018). [CrossRef]  

30. A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, “Optical properties of skin, subcutaneous, and muscle tissues: a review,” J. Innovative Opt. Health Sci. 04(01), 9–38 (2011). [CrossRef]  

31. S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]  

[Crossref]

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020).
[Crossref]

Sakahara, H.

Severin, F.

E. Maeva, F. Severin, C. Miyasaka, B. R. Tittmann, and R. G. Maev, “Acoustic imaging of thick biological tissue,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 56(7), 1352–1358 (2009).
[Crossref]

Shahin, A.

A. Shahin, M. S. El-Daher, and W. Bachir, “Determination of the optical properties of Intralipid 20% over a broadband spectrum,” Photon.Lett.PL 10(4), 124–126 (2018).
[Crossref]

Shelhamer, E.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3431–3440 (2015).

Shi, Y.

J. Yang and Y. Shi, “Towards finger-vein image restoration and enhancement for finger-vein recognition,” Inf. Sci. 268, 33–52 (2014).
[Crossref]

Shimizu, K.

Shiue, Y. L.

C. T. Pan, M. D. Francisco, C. K. Yen, S. Y. Wang, and Y. L. Shiue, “Vein pattern locating technology for cannulation: a review of the low-cost vein finder prototypes utilizing near infrared (nir) light to improve peripheral subcutaneous vein selection for phlebotomy,” Sensors 19(16), 3573 (2019).
[Crossref]

Sordillo, L. A.

L. A. Sordillo, Y. Pu, S. Pratavieira, Y. Budansky, and R. R. Alfano, “Deep optical imaging of tissue using the second and third near-infrared spectral windows,” J. Biomed. Opt. 19(5), 056004 (2014).
[Crossref]

Sun, Y.

F. P. W. Lo, Y. Sun, and B. Lo, “Depth estimation based on a single close-up image with volumetric annotations in the wild: a pilot study,” 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 513–518 (2019).

Suzuki, H.

Suzuki, N.

Tittmann, B. R.

E. Maeva, F. Severin, C. Miyasaka, B. R. Tittmann, and R. G. Maev, “Acoustic imaging of thick biological tissue,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 56(7), 1352–1358 (2009).
[Crossref]

Tochio, K.

Tran, T. N.

Tuchin, V. V.

A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, “Optical properties of skin, subcutaneous, and muscle tissues: a review,” J. Innovative Opt. Health Sci. 04(01), 9–38 (2011).
[Crossref]

Ueda, Y.

Ueki, H.

Umemura, S.

Valdés, P. A.

F. Leblond, S. C. Davis, P. A. Valdés, and B. W. Pogue, “Pre-clinical whole-body fluorescence imaging: Review of instruments, methods and applications,” J. Photochem. Photobiol., B 98(1), 77–94 (2010).
[Crossref]

Verdaasdonk, R. M.

N. J. Cuper, J. H. Klaessens, J. E. Jaspers, R. de Roode, H. J. Noordmans, J. C. de Graaff, and R. M. Verdaasdonk, “The use of near-infrared light for safe and effective visualization of subsurface blood vessels to facilitate blood withdrawal in children,” Med. Eng. & Phys. 35(4), 433–440 (2013).
[Crossref]

Wada, H.

Wahab, A.

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020).
[Crossref]

Wang, G.

L. He, G. Wang, and Z. Hu, “Learning depth from single images with deep neural network embedding focal length,” IEEE Trans. on Image Process. 27(9), 4676–4689 (2018).
[Crossref]

Wang, L. V.

M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).
[Crossref]

Wang, S. Y.

C. T. Pan, M. D. Francisco, C. K. Yen, S. Y. Wang, and Y. L. Shiue, “Vein pattern locating technology for cannulation: a review of the low-cost vein finder prototypes utilizing near infrared (nir) light to improve peripheral subcutaneous vein selection for phlebotomy,” Sensors 19(16), 3573 (2019).
[Crossref]

Xu, M.

M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).
[Crossref]

Yamamoto, K.

Yamashita, Y.

Yang, J.

J. Yang and Y. Shi, “Towards finger-vein image restoration and enhancement for finger-vein recognition,” Inf. Sci. 268, 33–52 (2014).
[Crossref]

Ye, J. C.

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020).
[Crossref]

Yen, C. K.

C. T. Pan, M. D. Francisco, C. K. Yen, S. Y. Wang, and Y. L. Shiue, “Vein pattern locating technology for cannulation: a review of the low-cost vein finder prototypes utilizing near infrared (nir) light to improve peripheral subcutaneous vein selection for phlebotomy,” Sensors 19(16), 3573 (2019).
[Crossref]

Yoo, J.

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020).
[Crossref]

Yoon, S.

D. Kim, Y. Kim, S. Yoon, and D. Lee, “Preliminary study for designing a novel vein-visualizing device,” Sensors 17(2), 304 (2017).
[Crossref]

Yoshimoto, K.

Yoshizawa, N.

Appl. Opt. (3)

Biomed. Opt. Express (2)

IEEE Trans. Med. Imaging (1)

J. Yoo, S. Sabir, D. Heo, K. H. Kim, A. Wahab, Y. Choi, S. Lee, E. Y. Chae, H. H. Kim, Y. M. Bae, Y. W. Choi, S. Cho, and J. C. Ye, “Deep Learning Diffuse Optical Tomography,” IEEE Trans. Med. Imaging 39(4), 877–887 (2020).
[Crossref]

IEEE Trans. on Image Process. (1)

L. He, G. Wang, and Z. Hu, “Learning depth from single images with deep neural network embedding focal length,” IEEE Trans. on Image Process. 27(9), 4676–4689 (2018).
[Crossref]

IEEE Trans. Ultrason., Ferroelect., Freq. Contr. (1)

E. Maeva, F. Severin, C. Miyasaka, B. R. Tittmann, and R. G. Maev, “Acoustic imaging of thick biological tissue,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 56(7), 1352–1358 (2009).
[Crossref]

Inf. Sci. (1)

J. Yang and Y. Shi, “Towards finger-vein image restoration and enhancement for finger-vein recognition,” Inf. Sci. 268, 33–52 (2014).
[Crossref]

Infrared Phys. Technol. (1)

K. Efendiev, P. Grachev, A. Moskalev, and V. Loschenov, “Non-invasive high-contrast infrared imaging of blood vessels in biological tissues by the backscattered laser radiation method,” Infrared Phys. Technol. 111, 103562 (2020).
[Crossref]

Int. J. CARS (1)

C. A. Mela, D. P. Lemmer, F. S. Bao, F. Papay, T. Hicks, and Y. Liu, “Real-time dual-modal vein imaging system,” Int. J. CARS 14(2), 203–213 (2019).
[Crossref]

Interface Focus. (1)

P. Beard, “Biomedical photoacoustic imaging,” Interface Focus. 1(4), 602–631 (2011).
[Crossref]

J. Appl. Phys. (1)

M. A. L. Bell, “Photoacoustic imaging for surgical guidance: principles, applications, and outlook,” J. Appl. Phys. 128(6), 060904 (2020).
[Crossref]

J. Biomed. Opt. (1)

L. A. Sordillo, Y. Pu, S. Pratavieira, Y. Budansky, and R. R. Alfano, “Deep optical imaging of tissue using the second and third near-infrared spectral windows,” J. Biomed. Opt. 19(5), 056004 (2014).
[Crossref]

J. Innovative Opt. Health Sci. (1)

A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, “Optical properties of skin, subcutaneous, and muscle tissues: a review,” J. Innovative Opt. Health Sci. 04(01), 9–38 (2011).
[Crossref]

J. Photochem. Photobiol., B (1)

F. Leblond, S. C. Davis, P. A. Valdés, and B. W. Pogue, “Pre-clinical whole-body fluorescence imaging: Review of instruments, methods and applications,” J. Photochem. Photobiol., B 98(1), 77–94 (2010).
[Crossref]

Med. Eng. & Phys. (1)

N. J. Cuper, J. H. Klaessens, J. E. Jaspers, R. de Roode, H. J. Noordmans, J. C. de Graaff, and R. M. Verdaasdonk, “The use of near-infrared light for safe and effective visualization of subsurface blood vessels to facilitate blood withdrawal in children,” Med. Eng. & Phys. 35(4), 433–440 (2013).
[Crossref]

Photoacoustics (1)

A. B. E. Attia, G. Balasundaram, M. Moothanchery, U. S. Dinish, R. Bi, V. Ntziachristos, and M. Olivo, “A review of clinical photoacoustic imaging: Current and future trends,” Photoacoustics 16, 100144 (2019).
[Crossref]

Photon.Lett.PL (1)

A. Shahin, M. S. El-Daher, and W. Bachir, “Determination of the optical properties of Intralipid 20% over a broadband spectrum,” Photon.Lett.PL 10(4), 124–126 (2018).
[Crossref]

Phys. Med. Biol. (1)

S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013).
[Crossref]

Proc. SPIE (1)

R. G. Maev, “Advances in acoustic microscopy and high resolution ultrasonic imaging: From principles to new applications,” Proc. SPIE 9040, 904007 (2014).
[Crossref]

Results Phys. (1)

A. M. García and P. R. Horche, “Light source optimizing in a biphotonic vein finder device: experimental and theoretical analysis,” Results Phys. 11, 975–983 (2018).
[Crossref]

Rev. Sci. Instrum. (1)

M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).
[Crossref]

Sensors (4)

E. C. Lee, H. Jung, and D. Kim, “New finger biometric method using near infrared imaging,” Sensors 11(3), 2319–2333 (2011).
[Crossref]

D. Kim, Y. Kim, S. Yoon, and D. Lee, “Preliminary study for designing a novel vein-visualizing device,” Sensors 17(2), 304 (2017).
[Crossref]

S. Merlo, V. Bello, E. Bodo, and S. Pizzurro, “A VCSEL-Based NIR transillumination system for morpho-functional imaging,” Sensors 19(4), 851 (2019).
[Crossref]

C. T. Pan, M. D. Francisco, C. K. Yen, S. Y. Wang, and Y. L. Shiue, “Vein pattern locating technology for cannulation: a review of the low-cost vein finder prototypes utilizing near infrared (nir) light to improve peripheral subcutaneous vein selection for phlebotomy,” Sensors 19(16), 3573 (2019).
[Crossref]

Other (4)

A. J. Afifi and O. Hellwich, “Object depth estimation from a single image using fully convolutional neural network,” 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1–7 (2016).

F. P. W. Lo, Y. Sun, and B. Lo, “Depth estimation based on a single close-up image with volumetric annotations in the wild: a pilot study,” 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 513–518 (2019).

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3431–3440 (2015).

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention – MICCAI, 234–241 (2015).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (18)

Fig. 1. Geometry for PSF as light distribution observed at the scattering medium surface.
Fig. 2. Training pairs generated by PSF convolution.
Fig. 3. Concept of deblurring with FCN: (a) training process and (b) image deblurring process.
Fig. 4. Depth estimation with CNN: (a) training process and (b) depth estimation process.
Fig. 5. Typical examples of images for FCN: (a) training pair, (b) input and output of FCN, and (c) original image before blurring.
Fig. 6. Correlation analysis between original and deblurred images.
Fig. 7. Examples of input image and estimated depth of CNN: (given depth) → (estimated depth).
Fig. 8. Correlation analysis between given and estimated depths: error bars show mean ± standard deviation of N = 10 estimations.
Fig. 9. Imaging of clear 3D structure from blurred 2D transillumination image in simulation: (a) structure of simulation model, (b) original image obtained with transparent medium, (c) blurred 2D transillumination image, (d) output image from trained FCN, and (e) 3D image reconstructed with depths from trained CNN.
Fig. 10. Outline of transillumination system.
Fig. 11. Background elimination in transillumination image: (a) transillumination image through turbid medium, (b) measured background (BG) without absorber in medium, (c) calculated background (BG) with Eq. (2), (d) result of image division (a)/(b), and (e) result of image division (a)/(c).
Fig. 12. Background elimination with measured and calculated background images: (a) intensity profiles along central horizontal lines in Figs. 11(b) and 11(c), and (b) intensity profiles in Figs. 11(d) and 11(e).
Fig. 13. Result of scattering suppression by FCN at depth d = 6.00 mm: (a) image taken through clear water, (b) transillumination image through turbid medium, (c) transillumination image after background elimination, and (d) output image from trained FCN.
Fig. 14. Correlation of deblurred images from FCN to original blurless images, N = 30.
Fig. 15. Absorber used for depth estimation: (a) structure in turbid medium, (b) transillumination image, and (c) image after background elimination.
Fig. 16. Correlation between depth estimated by CNN and given depth in the experiment.
Fig. 17. Absorber used for 3D imaging: (a) structure in turbid medium and (b) transillumination image.
Fig. 18. 3D images obtained from the clear image from the FCN and the depth distribution from the CNN: (a) ground-truth image in clear water and (b) 3D images viewed from different angles.

Equations (2)


$$\mathrm{PSF}(\rho) = C\left[\left(\mu_s' + \mu_a\right) + \left(\kappa_d + \frac{1}{\sqrt{\rho^2 + d^2}}\right)\frac{d}{\sqrt{\rho^2 + d^2}}\right]\frac{\exp\left(-\kappa_d\sqrt{\rho^2 + d^2}\right)}{\sqrt{\rho^2 + d^2}},$$
$$I_b(x,y) = I_s(x,y) \ast \mathrm{PSF}(x,y;\, d = t),$$
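The two equations above can be exercised numerically. The sketch below evaluates the diffusion-approximation PSF of Eq. (1) on a pixel grid and blurs a clear shadow image by 2D convolution with it, as in Eq. (2). The optical parameters, grid size, and pixel pitch are illustrative assumptions rather than the paper's values, and $\kappa_d$ is taken here as the standard effective attenuation coefficient $\sqrt{3\mu_a(\mu_a + \mu_s')}$ of diffusion theory.

```python
import numpy as np

def psf(rho, d, mu_s_prime, mu_a, C=1.0):
    """Surface light distribution from an absorber at depth d (cf. Eq. (1)).

    rho: radial distance on the surface [mm]; d: depth [mm];
    mu_s_prime, mu_a: reduced scattering / absorption coefficients [1/mm].
    kappa_d is assumed to be the effective attenuation coefficient.
    """
    kappa_d = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    r = np.sqrt(rho**2 + d**2)
    return C * ((mu_s_prime + mu_a) + (kappa_d + 1.0 / r) * d / r) * np.exp(-kappa_d * r) / r

def blur(I_s, d, mu_s_prime, mu_a, pixel_mm=0.1):
    """Blur a clear shadow image I_s via 2D convolution with the PSF (cf. Eq. (2))."""
    n = I_s.shape[0]
    ax = (np.arange(n) - n // 2) * pixel_mm          # surface coordinates [mm]
    xx, yy = np.meshgrid(ax, ax)
    kernel = psf(np.sqrt(xx**2 + yy**2), d, mu_s_prime, mu_a)
    kernel /= kernel.sum()                           # preserve total intensity
    # circular FFT convolution; adequate when the structure sits near the center
    return np.real(np.fft.ifft2(np.fft.fft2(I_s) * np.fft.fft2(np.fft.ifftshift(kernel))))
```

A point absorber (a single bright pixel) blurred this way spreads into the PSF itself, which is how the numerous blurred training images for the FCN and CNN can be generated from clear ones.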