Optical tomography has been widely investigated for biomedical imaging applications. In recent years optical tomography has been combined with digital holography and has been employed to produce high-quality images of phase objects such as cells. In this paper we describe a method for imaging 3D phase objects in a tomographic configuration implemented by training an artificial neural network to reproduce the complex amplitude of the experimentally measured scattered light. The network is designed such that the voxel values of the refractive index of the 3D object are the variables that are adapted during the training process. We demonstrate the method experimentally by forming images of the 3D refractive index distribution of HeLa cells.
© 2015 Optical Society of America
The learning approach to imaging we describe in this paper is related to adaptive techniques in phased antenna arrays [1], iterative imaging schemes [2,3], and inverse scattering [4,5]. In the optical domain an iterative approach was demonstrated by the Sentenac group [6,7], who used the coupled dipole approximation [8] for modeling light propagation in inhomogeneous media (a very accurate method but computationally intensive) to simulate light scattering from small objects in a point scanning microscope configuration. Maleki and Devaney in 1993 [9] demonstrated diffraction tomography using intensity measurements and iterative phase retrieval [10]. Very recently an iterative optimization method was demonstrated [11] for imaging 3D objects using incoherent illumination and intensity detection. There are similarities but also complementary differences between our method and [11]. Our method uses coherent light and relies on digital holography [12,13] to record the complex amplitude of the field, whereas direct intensity detection is used in [11]. Both use the beam propagation method (BPM) [14,15] to model the scattering process and the error backpropagation method [16] to train the system. At the end of the training process the network discovers a 3D index distribution that is consistent with the experimental observations. We experimentally demonstrate the technique by imaging polystyrene beads and HeLa and hTERT-RPE1 cells.
The holographic recording employed in the method presented in this paper is advantageous for imaging phase objects such as the cells in the experiments. Moreover we included in this optimization algorithm sparsity constraints that significantly improve the quality of the reconstructions. We also compared our method with other coherent tomographic reconstruction techniques. The learning approach improved the quality of the images produced by all the direct (noniterative) tomographic reconstruction methods we tried.
2. EXPERIMENTAL SETUP
A schematic diagram of the experimental setup is shown in Fig. 1. It is a holographic tomography system [17], in which the sample is illuminated at multiple angles and the scattered light is holographically recorded. Several variations of the holographic tomography system have been demonstrated before [18–21]. The optical arrangement we used is most similar to the one described by Choi et al. [18]. The first beam splitter divides the laser beam into the reference and signal arms. In the signal arm a rotating mirror varies the angle of illumination of the sample using the 4F system created by L1 and OB1. The sample is imaged onto the CMOS camera using the 4F system created by OB2 and L2. The reference beam is combined with the signal beam via the beam splitter (BS2) to form a hologram. Phase stability is maintained by using a differential measurement between the phase on a portion of the field of view on the detector that does not include the cell and the cell itself. In this way the system is insensitive to drifts in the relative phase between the reference and signal beams. The NAs are 1.45 and 1.4 for the illumination and imaging portions of the system (OB1 and OB2), respectively.
The samples to be measured were prepared by placing polystyrene beads and cells between two glass cover slides. The samples were illuminated with a continuous collimated wave at 561 nm at 80 different angles. The amplitude and phase of the light transmitted through the sample were imaged onto a 2D detector, where they were holographically recorded by introducing a reference beam. The recordings constitute the training set with which we train the computational model that simulates the experimental setup. We construct the network using the BPM. The inhomogeneous medium (beads or cells) is divided into thin slices along the propagation direction (z). The propagation through each slice is calculated as a phase modulation due to the local transverse index variation, followed by propagation through a thin slice of a homogeneous medium having the average value of the index of refraction of the sample.
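The split-step scheme just described can be sketched in a few lines of numpy. This is a minimal illustration rather than the paper's implementation: the function names, the paraxial (Fresnel) transfer function, and the background index n0 are our own assumptions.

```python
import numpy as np

def bpm_step(field, dn, delta, dz, wavelength, n0=1.33):
    """One BPM slice: a thin phase screen for the local index
    perturbation dn, followed by angular-spectrum propagation over
    dz in the homogeneous background medium (index n0)."""
    k0 = 2 * np.pi / wavelength
    # 1) phase modulation by the transverse index variation
    field = field * np.exp(1j * k0 * dn * dz)
    # 2) diffraction over dz, applied in the Fourier domain
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=delta)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    H = np.exp(-1j * np.pi * wavelength * dz * fx2 / n0)  # paraxial kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

def bpm_forward(field, dn_volume, delta, dz, wavelength):
    """Propagate an input field through a stack of thin slices.
    dn_volume has shape (n_slices, ny, nx)."""
    for dn in dn_volume:
        field = bpm_step(field, dn, delta, dz, wavelength)
    return field
```

Because the phase screen and the transfer function both have unit magnitude, this forward model conserves energy and reduces to free-space propagation when the index perturbation is zero.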
The transverse (x–y) resolution is limited by the numerical aperture of the imaging system composed of the objective OB2 and lens L2 in Fig. 1. This limit can in principle be exceeded because the illumination is not a single plane wave. This idea was explored for conventional tomography in [22], and it could also be used in conjunction with the learning approach we describe in this paper. The longitudinal (z) resolution is limited by the numerical aperture of the illuminating beam [23].
A schematic description of the BPM simulation is shown in Fig. 2. The straight lines connecting any two circles represent multiplication of the output of the unit located in the $n$th layer of the network at $(x, y) = (m\delta, p\delta)$, where $m$ and $p$ are integers, by the discretized Fresnel diffraction kernel, with $\lambda$ the wavelength of light. $\delta$ is the sampling interval in the transverse coordinates $(x, y)$, whereas $\delta_z$ is the sampling interval along the propagation direction $z$. The circles in the diagram of Fig. 2 perform a summation of the complex amplitude of the signals converging to each circle and also multiplication of this sum by the phase factor $\exp[i k_0\, \Delta n(x, y, z)\, \delta_z]$, where $k_0 = 2\pi/\lambda$ and $\Delta n(x, y, z)$ is the unknown 3D index perturbation of the object.
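The discretized Fresnel kernel connecting units in adjacent layers can be written out explicitly. The sketch below uses the standard paraxial Fresnel point response weighted by the sampling area; the paper's exact normalization may differ.

```python
import numpy as np

def fresnel_kernel(n, delta, dz, wavelength):
    """Discretized paraxial Fresnel kernel w(m, p) linking a unit at
    (m*delta, p*delta) in one layer to the origin of the next layer,
    a distance dz away. Standard form h(x, y) =
    exp(i k dz) / (i lambda dz) * exp(i pi (x^2 + y^2) / (lambda dz)),
    multiplied by the quadrature weight delta^2."""
    m = np.arange(n) - n // 2            # integer offsets m, p
    mm, pp = np.meshgrid(m, m, indexing="ij")
    r2 = (mm ** 2 + pp ** 2) * delta ** 2
    k = 2 * np.pi / wavelength
    return (np.exp(1j * k * dz) / (1j * wavelength * dz)
            * np.exp(1j * np.pi * r2 / (wavelength * dz)) * delta ** 2)
```

The kernel is a pure phase factor of constant magnitude, symmetric under inversion of the transverse offsets, which is what makes the layer-to-layer connections in Fig. 2 a discrete convolution.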
In the experiments the network has 420 layers, with $\Delta n$ being the adaptable variable. In contrast with a conventional neural network, the output of the layered structure in Fig. 2 is a linear function of the input complex field amplitude. However, the output is nonlinearly related to $\Delta n$. The BPM can be trained using steepest descent exactly as the backpropagation algorithm in neural networks [24–26]. Specifically, the learning algorithm carries out the following minimization:
$$\widehat{\Delta n} = \arg\min_{\Delta n \ge 0} \left\{ \sum_{l=1}^{L} \big\| y_l - S_l(\Delta n) \big\|^2 + \tau\, \mathcal{R}(\Delta n) \right\},$$
where $y_l$ is the complex field recorded for the $l$th illumination angle, $S_l(\Delta n)$ is the corresponding BPM prediction, and $\mathcal{R}$ is a sparsity-promoting regularizer [27–29] to enhance the contrast, while $\tau$ is a parameter that can be tuned to maximize image quality by systematic search. We assessed the significance of the sparsity constraint by running the learning algorithm with and without this constraint. An example of the results is shown in Fig. S4 in Supplement 1. The positivity constraint takes advantage of the assumption that the index perturbation is real and positive. The optimization is carried out iteratively by taking the derivative of the error with respect to each of the adaptable parameters following steepest descent:
$$\Delta n^{(k+1)} = \Delta n^{(k)} - \gamma\, \frac{\partial E}{\partial \Delta n}\bigg|_{\Delta n^{(k)}},$$
where $\gamma$ is the step size and $E$ is the error between the measured and predicted fields.
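A toy version of this constrained steepest-descent loop is sketched below, assuming a generic linear forward model in place of the BPM network and soft-thresholding as the sparsity step; the paper's actual regularizer, step size, and stopping rule are not reproduced here, so these choices are illustrative.

```python
import numpy as np

def learn_index(y, forward, adjoint, shape, tau=0.01, step=0.5, iters=100):
    """Steepest descent on E = ||y - S(dn)||^2 with a sparsity
    (soft-threshold) step and a positivity projection after each
    update. `forward`/`adjoint` stand in for the BPM network and
    its error backpropagation."""
    dn = np.zeros(shape)
    for _ in range(iters):
        residual = forward(dn) - y
        grad = adjoint(residual)            # dE/d(dn), up to a factor of 2
        dn = dn - step * grad               # steepest-descent update
        dn = np.sign(dn) * np.maximum(np.abs(dn) - tau * step, 0)  # sparsity
        dn = np.maximum(dn, 0)              # positivity: index perturbation >= 0
    return dn
```

With the identity as the forward model, the loop converges to a soft-thresholded, nonnegative version of the data, which is the expected behavior of the combined sparsity and positivity constraints.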
We first tested the system with polystyrene beads encapsulated between two glass slides in immersion oil. The sample was inserted in the optical system of Fig. 1, and 80 holograms were recorded by illuminating the sample at 80 distinct angles uniformly distributed in angle. The collected data make up the training set for the 420-layer BPM network, which simulates a physical propagation distance of 30 μm. The network was initialized with the standard filtered backprojection reconstruction algorithm (Radon transform) [30], and the resulting 3D images before and after 100 iterations are shown in Fig. 3. The final image produced by the learning algorithm is an accurate reproduction of the bead shape.
The power of the learning tomography method presented in this paper is that the reconstruction of the refractive index is not based on the Born approximation. The BPM does not account for reflections, but it allows multiple forward scattering events. In the case of multiple inhomogeneities, the Born approximation is no longer valid and the reconstruction based on conventional tomographic techniques becomes inaccurate. To demonstrate this effect, we simulate a refractive index inhomogeneity that comprises two spherical beads on the optical axis at two different planes, with the centers of the beads placed 6 μm away from each other. Figure 4 shows the results of the two different reconstruction schemes. Since the Born approximation cannot describe the physical behavior of light propagation through this sample, the optical diffraction tomography method is not capable of reconstructing the object. In contrast, the learning tomography method presented in this paper is capable of dealing with multiple scattering and therefore correctly reconstructs the object.
A sample of a HeLa cell was also prepared, and the same procedure was followed to obtain a 3D image. The results are shown in Fig. 5, where the error function is plotted as a function of iteration number. In this instance, the system was initialized with a constant but nonzero value of the index perturbation. Also shown in Fig. 5 are the results obtained when the system was initialized with the Radon reconstruction from the same data. After 100 iterations both runs yield essentially identical results. Notice that the error in the final image (after 100 iterations) is significantly lower than the error of the Radon reconstruction. This is also evident by visual inspection of the images in Fig. 5, where the artifacts due to the missing cone and diffraction are removed by the learning process.
We use the results of tomographic reconstructions to initialize the learning algorithm. The results are included in Fig. 6, showing that diffraction tomography and iterative Radon reconstruction give a smaller initial error than the simple Radon reconstruction, but the learning algorithm in all cases reduces the error further and improves the quality of the reconstructed image. The four runs corresponding to the four different initial conditions converge to the same final reconstruction. The images corresponding to the three tomographic reconstructions used as initial conditions are presented in Supplement 1. Results from an experiment with a reduced range of illumination angles are also presented in Supplement 1.
As discussed earlier, conventional optical 3D imaging techniques rely on the assumption that the object being imaged does not significantly distort the illuminating beam. This is assumed, for example, in Radon or diffraction tomography. In other words, these 3D reconstruction methods rely on the assumption that the measured scattered light consists of photons that have only been scattered once before they reach the detector. The BPM, on the other hand, allows for multiple forward scattering events. The only simplification is that reflections are not taken into account; these could eventually be incorporated into the network equation without fundamentally altering the approach described in this paper. Since biological tissue is generally forward scattering, the BPM can be a good candidate for modeling propagation through thick biological samples, and this may be the most significant advantage of the learning approach. To demonstrate this point, we prepared two glass slides with a random distribution of hTERT-RPE1 cells (immortalized epithelial cells from retina) on each slide. When we attach the two slides together, we can find locations where two cells are aligned in z, one on top of the other. Figures 7(a)–7(e) show the images of such a stack of two cells produced with a direct inversion using the Radon transform. Figures 7(f)–7(j) show the same object imaged with the proposed learning algorithm. The learning method was able to distinguish the two cells, whereas the Radon reconstruction merged the two into a single pattern due to the blurring in z, which is a consequence of the missing cone. We believe the ringing artifacts in the Radon image originate from the multiple scattering of light from one cell to the other (as explained earlier).
5. DISCUSSION AND CONCLUSIONS
In conclusion, we have demonstrated a neural-network-based algorithm to solve the optical phase tomography problem and have applied it to biological (HeLa and hTERT-RPE1 cells) and synthetic (polystyrene beads) samples. The experimental measurements were performed with a conventional collimated illumination phase tomography setup, with coherent light and holograms recorded off-axis. The sample scattering potential was modeled as a neural network implementing a forward BPM. The network is organized in layers of neurons, each one of them representing an x–y plane in the BPM. The output of the network is compared to the experimental measurements, and the error is used to correct the weights (representing the refractive index contrast) in the neurons using standard error backpropagation techniques. The algorithm yields images of better quality than other tomographic reconstruction methods. In particular, the missing cone artifact is efficiently removed, as well as parasitic granular structures. We have shown that, whether starting from a constant initial guess for the refractive index or from a conventional Radon tomographic image, the method converges to essentially the same result after 100 iterations. This approach opens rich perspectives for active correction of scattering in biological samples; in particular, it has the potential of increasing the resolution and the contrast in fluorescent and two-photon imaging.
European Research Council (ERC) (267439, FP7/2007-2013).
The work of U. S. Kamilov and M. Unser was supported by the European Research Council under the European Union’s Seventh Framework Programme. The authors thank Phelps Edward Allen and Valérian CR Dormoy for sample preparation and Nicolino Stasio, Donald Conkey, and Ye Pu for their helpful suggestions. We also thank YongKeun (Paul) Park and Kyoohyun Kim for providing the code and assistance for optical diffraction tomography.
See Supplement 1 for supporting content.
1. B. Widrow, P. E. Mantey, L. Griffiths, and B. Goode, “Adaptive antenna systems,” J. Acoust. Soc. Am. 42, 1175–1176 (1967). [CrossRef]
2. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32, 1737–1746 (1993). [CrossRef]
3. W. Van den Broek and C. T. Koch, “Method for retrieval of the three-dimensional object potential by inversion of dynamical electron scattering,” Phys. Rev. Lett. 109, 245502 (2012). [CrossRef]
4. N. Joachimowicz, C. Pichot, and J.-P. Hugonin, “Inverse scattering: an iterative numerical method for electromagnetic imaging,” IEEE Trans. Antennas Propag. 39, 1742–1753 (1991). [CrossRef]
5. A. Abubakar, P. M. Van den Berg, and J. J. Mallorqui, “Imaging of biomedical data using a multiplicative regularized contrast source inversion method,” IEEE Trans. Microwave Theor. Tech. 50, 1761–1771 (2002). [CrossRef]
6. G. Maire, F. Drsek, J. Girard, H. Giovannini, A. Talneau, D. Konan, K. Belkebir, P. C. Chaumet, and A. Sentenac, “Experimental demonstration of quantitative imaging beyond Abbe’s limit with optical diffraction tomography,” Phys. Rev. Lett. 102, 213905 (2009). [CrossRef]
7. O. Haeberlé, K. Belkebir, H. Giovaninni, and A. Sentenac, “Tomographic diffractive microscopy: basics, techniques and perspectives,” J. Mod. Opt. 57, 686–699 (2010). [CrossRef]
8. B. T. Draine and P. J. Flatau, “Discrete-dipole approximation for scattering calculations,” J. Opt. Soc. Am. A 11, 1491–1499 (1994). [CrossRef]
9. M. H. Maleki and A. J. Devaney, “Phase-retrieval and intensity-only reconstruction algorithms for optical diffraction tomography,” J. Opt. Soc. Am. A 10, 1086–1092 (1993). [CrossRef]
10. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]
11. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]
12. U. Schnars and W. Jueptner, Digital Holography (Springer, 2005).
13. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]
14. J. V. Roey, J. V. Donk, and P. E. Lagasse, “Beam-propagation method: analysis and assessment,” J. Opt. Soc. Am. 71, 803–810 (1981). [CrossRef]
15. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
16. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533–536 (1986). [CrossRef]
17. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]
18. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4, 717–719 (2007). [CrossRef]
19. W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Extended depth of focus in tomographic phase microscopy using a propagation algorithm,” Opt. Lett. 33, 171–173 (2008). [CrossRef]
20. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]
21. F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, “Cell refractive index tomography by digital holographic microscopy,” Opt. Lett. 31, 178–180 (2006). [CrossRef]
22. Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nat. Photonics 7, 113–117 (2013). [CrossRef]
23. V. Lauer, “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. 205, 165–176 (2002). [CrossRef]
24. A. Beck and M. Teboulle, “Gradient-based algorithms with applications to signal recovery problems,” in Convex Optimization in Signal Processing and Communications, D. Palomar and Y. Eldar, eds. (Cambridge University, 2010), pp. 42–88.
25. L. Bottou, “Stochastic gradient descent tricks,” in Neural Networks: Tricks of the Trade, 2nd ed. (Springer, 2012), pp. 421–437.
26. C. M. Bishop, Neural Networks for Pattern Recognition (Oxford, 1995).
27. E. J. Candes, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Appl. 14, 877–905 (2008).
28. E. Y. Sidky, M. A. Anastasio, and X. Pan, “Image reconstruction exploiting object sparsity in boundary-enhanced X-ray phase-contrast tomography,” Opt. Express 18, 10404–10422 (2010). [CrossRef]
29. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. 58, 1182–1195 (2007). [CrossRef]
30. R. M. Lewitt, “Reconstruction algorithms: transform methods,” Proc. IEEE 71, 390–408 (1983). [CrossRef]
31. K. Kim, H. Yoon, M. Diez-Silva, M. Dao, R. R. Dasari, and Y. Park, “High-resolution three-dimensional imaging of red blood cells parasitized by plasmodium falciparum and in situ hemozoin crystals using optical diffraction tomography,” J. Biomed. Opt. 19, 011005 (2014). [CrossRef]