We present a wide-field imaging approach to optically sense underwater sediment resuspension events. It uses wide-field multi-directional views and diffuse backlighting. Our approach algorithmically quantifies the amount of resuspended material and its spatiotemporal distribution. The suspended particles affect the radiation that reaches the cameras, hence the captured images. By measuring the radiance during and prior to resuspension, we extract the optical depth along the line of sight of each pixel. Using computed tomography (CT) principles, the optical depths yield an estimate of the extinction coefficient of the suspension, per voxel. The suspended mass density is then derived from the reconstructed extinction coefficient.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
We propose an approach for underwater optical computed tomography (CT) for the study of marine sedimentation. Sedimentation affects physical, chemical, and biological processes in the sea [1–3]. Sediment resuspension is the transport of previously settled particles from the seafloor back into the overlying water. Such an event occurs when near-seafloor currents exceed a threshold velocity. The currents are induced by physical forces, such as waves, winds, or tides; human activities, such as fishing, dredging, and trawling; and biological activity. Biological resuspension occurs when fish and other animals search for food and shelter at the seafloor.
There is a knowledge gap regarding the rate and extent of biological sediment resuspension in the ocean [4]. Moreover, the relevance of biological sediment resuspension to biochemical underwater processes is not fully known. Recent publications [5–8] suggest that due to its frequent occurrence, biological activity may be a more significant contributor to sediment resuspension than physical forces and human activity. For these reasons, it is important to devise methods for measuring these events. However, this is challenging [4, 9, 10]. Understanding resuspension requires a wide set of methods [4, 11–13]. Existing in-situ approaches for quantifying sediment resuspension or fluid flow are highly localized and limited to the cm-scale [14–18]. We seek methods that least disrupt resuspension events; hence we seek measurements from a distance. In aquatic environments, a sensing range of several meters is practical. In addition, the evolving sediment clouds have a meter scale as well. Therefore, we seek multi-meter-scale far-field measurements of these events using cameras. The presented imaging approach (a) observes from a distance the water medium above the seafloor, (b) senses sediment resuspension events, and (c) algorithmically quantifies the resuspension.
The spatial distribution of the particles is three-dimensional (3D). Hence, we develop a 3D tomographic imaging system. To achieve this, the evolving sediment plume is imaged against a diffuse backlight. Imaging is done simultaneously from multiple directions, as illustrated in Fig. 1. The resuspended particles affect the light that reaches the cameras. Image analysis uses a CT principle, motivated by medical CT. At a small scale, optical CT was used in laboratory studies on the dynamics of fluids and particles [19–21], and in a recent in-situ study of marine microscopic and mesoscopic organisms [22]. Recently, CT has expanded to large-scale 3D atmospheric sensing of aerosols and clouds [23–25]. This work proposes a meter-scale CT to optically quantify underwater resuspension. We develop optical and algorithmic techniques that function despite the challenges posed by the underwater environment [26–29]. Initial partial results of our work were presented in [30].
2.1. Image formation model
When a scene is only under ambient illumination, the image measures radiance i_p^ambient per pixel p. An active illumination screen has radiance i^screen. Pixel p corresponds to a line of sight denoted LOSp, see Fig. 1. The extinction coefficient of bulk water at the site is assumed to be spatially homogeneous. It is denoted βWater, in units of [m−1]. For sediment-free water, according to the Beer-Lambert law, the transmitted radiance reaching p is

i_p^water = i^screen exp(−βWater l_p) + i_p^ambient,   (1)

where l_p [m] is the length of LOSp between the screen and the camera.
We note that off-axis scattering affects the images. In preliminary lab experiments, these off-axis scattering components were significantly lower than the transmitted component modeled in Eq. (1). The attenuation of transmitted radiance is induced by absorption and scattering of light. Therefore, the water extinction coefficient satisfies

βWater = aWater + bWater,   (2)

where aWater and bWater are the absorption and scattering coefficients of the water, respectively.
Let βSed(X) be the volumetric extinction coefficient of suspended sediment particles at 3D location X. The total extinction coefficient of the suspension is then

β(X) = βWater + βSed(X).   (3)

The radiance measured through the suspension is

i_p = i^screen exp(−∫_LOSp β(X) dX) + i_p^ambient.   (4)
The unitless sediment optical depth at pixel p is

τp = ∫_LOSp βSed(X) dX.   (5)

From Eqs. (1) and (4), it is extracted from the measurements by

τp = ln[(i_p^water − i_p^ambient) / (i_p − i_p^ambient)].   (6)
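The per-pixel extraction of Eq. (6) reduces to arithmetic on three co-registered images. A minimal sketch, in which the function name and the clipping guard against near-zero denominators are our own additions:

```python
import numpy as np

def optical_depth(i_water, i_ambient, i_event, eps=1e-6):
    """Per-pixel sediment optical depth, following Eq. (6).

    i_water:   image of the active screen through sediment-free water
    i_ambient: image under ambient light only (screen off)
    i_event:   image of the screen during the resuspension event
    """
    num = np.clip(i_water - i_ambient, eps, None)
    den = np.clip(i_event - i_ambient, eps, None)
    return np.log(num / den)

# Synthetic check: a uniform suspension with optical depth 0.7 everywhere.
i_amb = np.full((8, 8), 0.05)
i_wat = i_amb + 0.8                 # screen signal through clear water
i_evt = i_amb + 0.8 * np.exp(-0.7)  # extra attenuation by the sediment
tau = optical_depth(i_wat, i_amb, i_evt)
```

On the synthetic images the recovered map is uniformly 0.7, as expected from the forward model.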
2.2. Tomographic reconstruction: inverse model
Let ap,v be the length [m] of the ray segment of LOSp in voxel v, as noted in Fig. 1. Let βSed(v) be the sediment extinction coefficient of voxel v. The vector βSed ∈ ℝn×1 represents the extinction coefficients of all voxels v ∈ 1..n, in column-stack form. A finite-sum approximation of Eq. (5) is

τp ≈ Σv ap,v βSed(v).   (7)
Tomographic setups have multidirectional LOSs through the scene, Fig. 1. Let Nviews cameras observe the scene, each having a resolution of Nwidth × Nheight pixels. The total number of pixels observing the scene is m = Nviews × Nwidth × Nheight. Then, τ ∈ ℝm×1 represents the sampled optical depths in all pixels p ∈ 1..m. Define A ∈ ℝm×n as a projection matrix, whose elements are ap,v. Then,

τ = A βSed.   (8)
Let α ≥ 0 be a regularization parameter, and L be a 3D Laplacian operator which defines the smoothness term of βSed. Then the volumetric extinction coefficient can be estimated by

β̂Sed = argmin over βSed ≥ 0 of ‖A βSed − τ‖² + α ‖L βSed‖².   (9)
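A problem of the form of Eq. (9) can be approached by absorbing the smoothness term into an augmented least-squares system [A; √α·L] with a zero-padded right-hand side, solved under a nonnegativity bound. The sketch below uses scipy.optimize.lsq_linear on a random toy volume; this is an illustrative substitute, not the SART solver used in the paper:

```python
import numpy as np
from scipy.optimize import lsq_linear
from scipy.sparse import csr_matrix, diags, identity, kron, vstack

rng = np.random.default_rng(0)

# Toy volume: 4 x 4 x 4 voxels, column-stacked.
nx = ny = nz = 4
n = nx * ny * nz

def lap1d(k):
    # 1D second-difference operator.
    return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k))

# 3D Laplacian L of Eq. (9), built from Kronecker products.
Ix, Iy, Iz = identity(nx), identity(ny), identity(nz)
L = (kron(kron(lap1d(nx), Iy), Iz)
     + kron(kron(Ix, lap1d(ny)), Iz)
     + kron(kron(Ix, Iy), lap1d(nz)))

# Random sparse matrix standing in for the ray-length projection A.
m = 200
A = csr_matrix(rng.random((m, n)) * (rng.random((m, n)) < 0.1))

beta_true = np.abs(rng.normal(1.0, 0.3, n))  # ground-truth extinction
tau = A @ beta_true                          # noiseless optical depths, Eq. (8)

# Augmented system: data term stacked over the weighted smoothness term.
alpha = 0.45
A_aug = vstack([A, np.sqrt(alpha) * L]).tocsr()
b_aug = np.concatenate([tau, np.zeros(n)])

res = lsq_linear(A_aug, b_aug, bounds=(0, np.inf))
beta_hat = res.x
```

The nonnegativity bound enforces the physical constraint βSed ≥ 0, while α trades data fidelity against spatial smoothness.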
The extinction of light depends on the density and optical properties of the suspended particles. Sediment particles in the medium have an extinction cross section σ, in units of [m2]. Per voxel v, the number and mass densities of sediment particles are η(v), in units of [particles·m−3], and ρ(v), in units of [g·m−3], respectively. Each voxel has volume ϑ, in units of [m3]. The mass of suspended particles in v is then

M(v) = ρ(v) ϑ.   (10)
The extinction coefficient is

βSed(v) = σ η(v).   (11)
The particle mass density ρ(v) is linearly proportional to the particle number density η(v). Thus, from Eq. (11), there is a linear relation between βSed(v) and ρ(v):

βSed(v) = b ρ(v),   (12)

where b is a calibration coefficient.
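Given a calibrated slope b, the relations above turn a reconstructed extinction volume into a mass estimate: invert Eq. (12) per voxel and sum the voxel masses of Eq. (10). A sketch with illustrative numbers (the value of b and the uniform extinction field are assumptions, not measured values):

```python
import numpy as np

def cloud_mass(beta_sed, b, voxel_vol):
    """Total suspended mass from a reconstructed extinction volume.

    beta_sed:  reconstructed extinction per voxel [1/m]
    b:         calibration slope relating extinction to mass density [m^2/g]
    voxel_vol: voxel volume [m^3]
    """
    rho = beta_sed / b            # mass density per voxel, inverting Eq. (12)
    return voxel_vol * rho.sum()  # summing voxel masses, Eq. (10)

beta = np.full(1000, 3.3)                        # illustrative uniform cloud [1/m]
mass = cloud_mass(beta, b=0.1, voxel_vol=8e-6)   # 2 [cm] voxels -> mass in [g]
```

Here 1000 voxels at ρ = 33 [g/m3] and ϑ = 8 × 10−6 [m3] give a total of 0.264 [g].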
We tested this concept using both lab experiments and simulations. The simulation environment contained an underwater 3D scene, a submerged diffuse illuminating screen, machine vision cameras, and a 3D sediment cloud. Using a radiative transfer solver [31, 32], we synthesized the observed underwater images. Then, we performed 3D tomographic reconstruction. The simulated ground truth helped design the imaging configuration, by considering how camera specifications (type, number, and poses) affect the reconstruction quality.
The volumetric domain has n = 128 × 50 × 128 voxels. Similarly to [23, 25, 33], the radiative transfer simulations relied on volumetric optical parameters, which are the extinction coefficient β(X), the single scattering albedo ϖ(X), and the anisotropy parameter gHG of a Henyey-Greenstein scattering phase function. We specifically used typical clear ocean water optical properties [34, 35]. In these waters, corresponding to the RGB channels, βWater ≜ (0.583, 0.16, 0.15) [m−1] and ϖWater(X) ≜ (0.228, 0.625, 0.667); gHG was set accordingly. The sediment extinction coefficient βSed(X) is spatially heterogeneous and spectrally uniform. As a proxy for a sediment cloud, we used an open-source smoke phantom. We aimed to simulate a dense sediment cloud whose average extinction corresponds to a 30 [cm] visibility range. Thus, we scaled the phantom’s range of extinction coefficients to β(X) ∈ [0, 12.2] [m−1]. In the simulations, we set ϖSed(X) = 0.
The imaging sensor follows a perspective camera model, with a set field of view, image resolution, and Bayer pattern. These parameters are set by the specifications of an off-the-shelf machine vision camera, e.g., the IDS UI3260xCP-C. In particular, these specifications enabled us to render realistic noise in the simulated images. Let ne be the noiseless photon signal, in photo–electron counts, generated by Monte-Carlo simulations of light propagation. The maximum photon signal generates Nwell photo–electrons in a saturated pixel. Let γe be the number of photo–electrons per camera gray level. To induce noise we took the following steps:
- An effect similar to photon noise is induced by adding zero–mean Gaussian noise εphoton, whose variance equals ne, the expected photo–electron count.
- To emulate readout noise, zero–mean Gaussian noise εread with a fixed standard deviation σread_e is added. Thus the pixel intensity of a noisy image, in photo–electron counts, is ñe = ne + εphoton + εread.
- We introduced quantization noise by converting photo–electron counts to gray levels by rounding, i = round(ñe/γe).
For example, for the IDS UI3260xCP-C camera, the applied specifications [36], in photo–electron counts, are: Nwell = 32870, σread_e = 6.2, γe ≈ 128.4. The renderings were performed on an M4.16xlarge machine of the Amazon Elastic Compute Cloud [37].
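The three noise steps, with the photo–electron specifications quoted above, can be sketched as follows (the normalized input range and the clipping of the output are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Camera specifications from the text (IDS UI3260xCP-C).
N_WELL = 32870     # full-well capacity [photo-electrons]
SIGMA_READ = 6.2   # readout noise std [photo-electrons]
GAMMA_E = 128.4    # photo-electrons per gray level

def add_camera_noise(signal):
    """Apply the three noise steps to a photon signal normalized to [0, 1]."""
    n_e = signal * N_WELL                                     # photo-electron counts
    n_e = n_e + rng.normal(0.0, np.sqrt(np.maximum(n_e, 0)))  # photon (shot) noise
    n_e = n_e + rng.normal(0.0, SIGMA_READ, n_e.shape)        # readout noise
    gray = np.round(n_e / GAMMA_E)                            # quantization
    return np.clip(gray, 0, N_WELL // GAMMA_E)

img = np.full((64, 64), 0.5)   # flat synthetic radiance field, half saturation
noisy = add_camera_noise(img)
```

A half-saturation signal lands near gray level 128, with roughly one gray level of shot noise per pixel.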
3.2. Simulated tomographic reconstructions
Based on the camera poses and the sediment phantom position, we calculated a sparse projection matrix A ∈ ℝm×n using ray tracing [38]. We used the AIRtools implementation [39] of the Simultaneous Algebraic Reconstruction Technique (SART). Reconstruction quality compares the estimated β̂Sed to the original phantom βSed, in terms of a unitless local measure ϵ2 = ‖β̂Sed − βSed‖₂ / ‖βSed‖₂ and a unitless global measure δ = (Σv β̂Sed(v) − Σv βSed(v)) / Σv βSed(v).
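One common choice of unitless quality measures, consistent with the local error ϵ2 and global error δ reported below, is an L2 relative error and a relative bias of the total extinction; the exact formulas here are our reading, a sketch:

```python
import numpy as np

def local_error(beta_hat, beta):
    """Unitless local reconstruction error (L2-relative)."""
    return np.linalg.norm(beta_hat - beta) / np.linalg.norm(beta)

def global_error(beta_hat, beta):
    """Unitless global error: relative bias of the total extinction."""
    return (beta_hat.sum() - beta.sum()) / beta.sum()

# Tiny worked example.
beta = np.array([1.0, 2.0, 3.0, 4.0])
beta_hat = np.array([1.1, 1.9, 3.2, 3.8])
```

For this example the local error is sqrt(0.1/30) ≈ 0.058, while the global error vanishes because the over- and under-estimates cancel in the total.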
Here we describe representative results of the simulation process for the scenario illustrated in Fig. 2(a). The cameras are uniformly spaced on a 125° horizontal arc at height 0.5 [m], facing the cloud from a 3 [m] distance. The phantom is of size 1 × 0.39 × 1 [m3], having a voxel resolution of 0.78 [cm], and the camera type is IDS UI3260xCP-C. Each column in Fig. 2(b) shows rendered images and optical depths for three different views. The reconstructed volumetric extinction coefficient of the cloud, retrieved from eight cameras, is presented in Fig. 2(c). The reconstruction error ϵ2 as a function of the number of cameras is presented in Fig. 2(d). We obtained similar results for other camera types and positions. The local error ϵ2 dropped as the number of cameras increased. The global error δ saturated at δ = 0.08.
4.1. System and method
We performed experiments in a research seawater pool of dimensions 6 × 3 × 3 [m3], at The Leon H. Charney School of Marine Sciences, University of Haifa, Israel. Inspired by the communicating-vessels principle, we built an injection system, illustrated in Fig. 3(a), connected at its top to a 10 [L] source tank. The source tank contained MP SILICA particles suspended in water. The particle size range is 12–26 [µm]. This range suits particles of silt, clay, and fine sand, which exist along the Israeli Mediterranean shelf [42] at sites deeper than 30 [m]. The source tank was partially drained, creating a resuspended cloud emanating from the middle of the lighting screen.
The optical system contained eight machine vision cameras having a linear radiometric response. We used IDS UI3260xCP-C cameras with Tamron M112FM12 12 [mm] lenses, sealed inside designated housings having flat ports (windows), as shown in Fig. 3(b). According to [27], when a perspective camera resides in an air chamber having a flat port and is embedded in a water medium, refraction causes the imaging system to have a non-single viewpoint. We note that dome ports, which we did not use here, can mitigate refraction distortions if the dome center aligns with the lens’ center of projection. Nonetheless, it is possible to approximate flat-port systems as having a single viewpoint [27] by setting a tight lens-port distance. In such conditions, refractions induce two-dimensional image distortions, which can be accommodated digitally using camera calibration. Therefore, each camera was placed inside the housing while keeping the port close to the lens. The cameras are mounted on a frame above a lighting screen, Fig. 4(a). Each camera was directed to the volume of interest and set to have a ∼2.7 [m] working distance from the middle of the screen. The illumination screen is composed of sealed LEDs mounted between two diffuse white PVC boards, emitting a total of 24000 lumens, Fig. 4.
We used a calibration board, markers on the screen as in Fig. 4(c), OpenCV [43], and Agisoft software [44] to calibrate the system geometry. This led to the sparse projection matrix A used in Eqs. (8) and (9). Before each resuspension event, we imaged the lighting screen when active and when inactive, to acquire measurements of i_p^water and i_p^ambient, respectively. Throughout each event, we imaged the evolving cloud to acquire measurements of i_p, at 10 frames per second.
4.2. Tomography reconstructions
Using Eq. (6), we retrieved the optical depths of the suspended sediment cloud through time. Representative images are shown in Fig. 5(a). We performed reconstruction similarly to Section 3.2. The following steps, as shown in Fig. 5, improved the quality and runtime: (a) Pruning pixels by segmenting [45] and cropping the normalized optical-depth images. (b) Reconstructing an initial solution, using the un-pruned pixels and α = 0, then deriving its visual hull [46]. (c) Reconstructing within the visual hull, using α = 0.45. The 3D results in Fig. 5 are for the green channel at a 2 [cm] voxel resolution, thus having voxel volume ϑ = 8 [cm3].
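Step (b) above can be sketched as conservative voxel carving: keep a voxel only if, in every view, at least one ray intersecting it falls on the segmented silhouette. The row grouping and the function below are our illustrative assumptions, not the implementation of [46]:

```python
import numpy as np
from scipy.sparse import csr_matrix

def visual_hull(A, silhouettes, n_views):
    """Keep voxels covered by at least one silhouette ray in every view.

    A:           (m, n) sparse ray-length matrix, rows grouped by view
    silhouettes: (m,) boolean pixel mask (True = inside the 2D silhouette)
    """
    m, n = A.shape
    per_view = m // n_views
    hull = np.ones(n, dtype=bool)
    for k in range(n_views):
        rows = slice(k * per_view, (k + 1) * per_view)
        sil = silhouettes[rows].astype(float)
        # Voxels touched by any silhouette ray of view k.
        covered = np.asarray(A[rows].T @ sil).ravel() > 0
        hull &= covered
    return hull

# Toy scene: 4 voxels, 2 views with 2 pixels each.
A = csr_matrix(np.array([[1.0, 1, 0, 0],   # view 1, pixel 0
                         [0, 0, 1, 1],     # view 1, pixel 1
                         [1, 0, 1, 0],     # view 2, pixel 0
                         [0, 1, 0, 1]]))   # view 2, pixel 1
silhouettes = np.array([True, False, True, False])
hull = visual_hull(A, silhouettes, n_views=2)
```

In the toy scene, only the voxel seen by a silhouette pixel in both views survives the carving.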
Using an independent lab experiment, we calibrated the coefficient b, which relates βSed(v) to ρ(v) in Eq. (12). This experiment is described in Appendix A. Then, using Eqs. (12)–(14), we calculated the sediment cloud mass density and mass. This yielded an estimate of the evolving sediment cloud mass. We compared sediment mass estimation between two different experiments, each having a different particle density in the source tank, with densities at a ratio of 30:22.5. The estimated mass of the clouds through 5.5 [sec] from the cloud’s initiation is plotted in Fig. 6(a). Each curve averages two experiment repetitions. Values at corresponding times are scatter-plotted in Fig. 6(b). The linear fit is consistent with the source density ratio 30:22.5 = 1.33.
Ultimately, as in any active system, the system size limits the measurement domain. We noticed that 5.5 [sec] after the cloud’s initiation, the cloud expanded beyond the screen area. Beyond the screen borders, naive use of Eq. (6) then erroneously yields negative values of τp. These pixels were pruned in our algorithm. This phenomenon is emphasized in a figure which represents τp using a false-color palette; Fig. 7 shows τp in the optical green channel, 46 [sec] after the cloud’s initiation.
Our approach adapts optical CT principles to a multi-meter-scale underwater domain. Our work goes beyond proposing the concept: it includes a theoretical formulation, computer simulations, engineering of an optical tomographic system, an algorithm, and empirical validation. We envision future developments enabling field work in deep natural waters, which strive to minimize disturbance to nature. Future advancement may obviate active lighting in tomographic setups, for example by relying on scattering of natural light. It may be beneficial to incorporate measurements of turbidity sensors in conjunction with our approach. Such systems open the door to quantitative in-situ research of marine sedimentation and other underwater phenomena.
The algorithm we used assumes the resuspended cloud is diluted enough to suit the single-scattering approximation. In a dense and wide cloud, this approximation may bias results. In the calibration described in Fig. 8, when reaching higher sediment densities, the relation between optical density and particle density becomes non-linear. Therefore, for large optical thickness, we expect biased results. We believe that this bias can be largely mitigated using full 3D radiative transfer scattering tomography, as in [23, 33, 48]. While this requires complex reconstruction algorithms [33, 48], the imaging system would still be similar to ours.
Appendix A: Sediment density calibration
Sediment density vs. extinction calibration was done in a small water tank, in a dark room, see Fig. 9. A glassware beaker is fixed above a stirring device inside the tank. The beaker contains a suspension of particles in 1 [L] of water. A magnetic stirring stick is used to maintain a uniform suspension. We used the imaging sensor described in Section 4.1. The lighting array includes: a white LED (1 [W], 6500 [K], OPA733WD, Optek Technology), a 47 Ω resistor, a 3.335 [V] power supply (Horizon Electronics DHR), a DVM (34401A Digital Multimeter), an optical mirror, and shutters.
The camera sampled the intensity of the light passing through the beaker. We first took a clear-water image IWater. Following this, we gradually added particles to the water beaker in roughly constant doses, until a final weight of 600 [mg]. Here too, we used MP SILICA of size range 12–26 [µm]. For each session, we averaged the intensity inside a square measuring area of 10 × 10 pixels, in the center of the beam and of the imaging sensor, over one second. Denote by iWater and irec the measurements of clear water and suspension, respectively, similarly to Eqs. (1) and (4). As the density of the suspension increases, the image intensity drops. Thus, for higher suspension concentrations we used longer exposures, then normalized the measurements accordingly. The results are plotted in Fig. 8. Following [49], a linear fit should rely only on low concentrations, for which multiple scattering is negligible. We included only measurements within the range of linear response. From the linear fit shown in Fig. 8, we estimated the per-channel (RGB) coefficient b used in Eq. (12).
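The calibration fit can be sketched as a linear regression restricted to the low-concentration range. All data below are synthetic stand-ins, and the path-length normalization of the optical density is omitted:

```python
import numpy as np

# Illustrative calibration data: suspended-particle mass density [g/L] vs.
# measured optical density ln(i_water / i_rec); values are synthetic.
rho = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.30])
od = 2.0 * rho + np.array([0.0, 0.001, -0.002, 0.001, 0.0, -0.001])

# Keep only the low-concentration, linear-response range before fitting,
# since multiple scattering bends the curve at high densities.
linear = rho <= 0.25
slope, intercept = np.polyfit(rho[linear], od[linear], 1)
```

The fitted slope plays the role of b (up to the path-length normalization); the near-zero intercept is a sanity check on the clear-water reference.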
The Israeli Ministry of Science, Technology, and Space (MOST) (3-12478).
We thank M. Fisher, I. Czerninski, Y. Goldfracht, L. Dehter, B. Herzberg, A. Levy, J. Fischer, S. Cohen, M. Groper, S. Farber for assisting in system construction and experiments. We thank V. Holodovsky, A. Levis, A. Aides, T. Katz, U. Shavit, G. Yahel, S. Grossbard for fruitful discussions, and I. Talmon and J. Erez for technical support. YYS is a Landau Fellow, supported by the Taub Foundation. His work was conducted in the Ollendorff Minerva Center. Minerva is funded through the BMBF. TT is supported by The Leona M. and Harry B. Helmsley Charitable Trust and The Maurice Hatter Foundation.
1. J. R. Valeur, A. Jensen, and M. Pejrup, “Turbidity, particle fluxes and mineralisation of carbon and nitrogen in a shallow coastal area,” Mar. Freshw. Res. 46, 409–418 (1995).
2. A. Tengberg, E. Almroth, and P. Hall, “Resuspension and its effects on organic carbon recycling and nutrient exchange in coastal sediments: in situ measurements using new experimental technology,” J. Exp. Mar. Biol. Ecol. 285, 119–142 (2003).
4. G. Yahel, M. Gilboa, S. Grossbard, A. Vainiger, T. Treibitz, Y. Schechner, U. Shavit, and T. Katz, “Biological activity: an overlooked mechanism for sediment resuspension, transport, and modification in the ocean,” in Proceedings of Particles in Europe Conference (SEQUOIA, 2018).
5. R. Yahel, G. Yahel, and A. Genin, “Daily cycles of suspended sand at coral reefs: a biological control,” Limnol. Oceanogr. 47, 1071–1083 (2002).
6. G. Yahel, R. Yahel, T. Katz, B. Lazar, B. Herut, and V. Tunnicliffe, “Fish activity: a major mechanism for sediment resuspension and organic matter remineralization in coastal marine sediments,” Mar. Ecol. Prog. Ser. 372, 195–209 (2008).
7. T. Katz, G. Yahel, M. Reidenbach, V. Tunnicliffe, B. Herut, J. Crusius, F. Whitney, P. V. Snelgrove, and B. Lazar, “Resuspension by fish facilitates the transport and redistribution of coastal sediments,” Limnol. Oceanogr. 57, 945–958 (2012).
8. T. Katz, G. Yahel, R. Yahel, V. Tunnicliffe, B. Herut, P. Snelgrove, J. Crusius, and B. Lazar, “Groundfish overfishing, diatom decline, and the marine silica cycle: Lessons from Saanich Inlet, Canada, and the Baltic Sea cod crash,” Glob. Biogeochem. Cycles 23, 4032 (2009).
9. K. Robert and S. Juniper, “Surface-sediment bioturbation quantified with cameras on the NEPTUNE Canada cabled observatory,” Mar. Ecol. Prog. Ser. 453, 137–149 (2012).
10. S. Villéger, S. Brosse, M. Mouchet, D. Mouillot, and M. J. Vanni, “Functional ecology of fish: current approaches and future challenges,” Aquatic Sci. 79, 783–801 (2017).
11. A. K. Rai and A. Kumar, “Continuous measurement of suspended sediment concentration: Technological advancement and future outlook,” Measurement 76, 209–227 (2015).
12. S. Pinet, J.-M. Martinez, S. Ouillon, B. Lartiges, and R. E. Villar, “Variability of apparent and inherent optical properties of sediment-laden waters in large river basins – lessons from in situ measurements and bio-optical modeling,” Opt. Express 25, A283–A310 (2017).
13. M. Gilboa, T. Katz, U. Shavit, S. Grosbard, A. Torfstein, and G. Yahel, “Novel approach to measure the rate of sediment resuspension at the ocean and to estimate the contribution of fish activity to this process,” in Proceedings of Particles in Europe Conference (SEQUOIA, 2018).
14. S. Shahi and E. Kuru, “An experimental investigation of settling velocity of natural sands in water using Particle Image Shadowgraph,” Powder Technol. 281, 184–192 (2015).
15. C. Thompson, F. Couceiro, G. Fones, R. Helsby, C. Amos, K. Black, E. Parker, N. Greenwood, P. Statham, and B. Kelly-Gerreyn, “In situ flume measurements of resuspension in the North Sea,” Estuarine, Coast. Shelf Sci. 94, 77–88 (2011).
16. D. C. Fugate and C. T. Friedrichs, “Determining concentration and fall velocity of estuarine particle populations using ADV, OBS and LISST,” Cont. Shelf Res. 22, 1867–1886 (2002).
17. M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide (Springer, 2018).
18. E. J. Davies, W. A. M. Nimmo-Smith, Y. C. Agrawal, and A. J. Souza, “Scattering signatures of suspended particles: an integrated system for combining digital holography and laser diffraction,” Opt. Express 19, 25488–25499 (2011).
19. G. E. Elsinga, F. Scarano, B. Wieneke, and B. W. van Oudheusden, “Tomographic Particle Image Velocimetry,” Exp. Fluids 41, 933–947 (2006).
21. Y. Gim, D. H. Shin, D. Y. Moh, and H. S. Ko, “Development of limited-view and three-dimensional reconstruction method for analysis of electrohydrodynamic jetting behavior,” Opt. Express 25, 9244–9251 (2017).
22. A. Levis, Y. Y. Schechner, and R. Talmon, “Statistical tomography of microscopic life,” in Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE/CVF, 2018), pp. 6411–6420.
24. M. Alterman, Y. Y. Schechner, M. Vo, and S. G. Narasimhan, “Passive tomography of turbulence strength,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 47–60.
25. V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2016), pp. 1–12.
26. T. Treibitz and Y. Y. Schechner, “Turbid scene enhancement using multi-directional illumination fusion,” IEEE Trans. Image Process. 21, 4662–4667 (2012).
27. T. Treibitz, Y. Schechner, C. Kunz, and H. Singh, “Flat refractive geometry,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 51–65 (2012).
28. Y. Y. Schechner and N. Karpel, “Attenuating natural flicker patterns,” in Proceedings of MTS/IEEE OCEANS/TECHNO-OCEAN, vol. 3 (IEEE, 2004), pp. 1262–1268.
29. M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE/CVF, 2016), pp. 3764–3773.
30. A. Vainiger, Y. Y. Schechner, T. Treibitz, A. Avni, and D. S. Timor, “Underwater wide-field tomography of sediment resuspension,” in Proceedings of Particles in Europe Conference (SEQUOIA, 2018).
31. M. Pharr, W. Jakob, and G. Humphreys, Physically Based Rendering: From Theory to Implementation (Morgan Kaufmann, 2016).
32. W. Jakob, “Mitsuba renderer,” (2010). http://www.mitsuba-renderer.org.
33. A. Levis, Y. Y. Schechner, A. Aides, and A. B. Davis, “Airborne three-dimensional cloud tomography,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3379–3387.
34. H. R. Gordon, O. B. Brown, and M. M. Jacobs, “Computed relationships between the inherent and apparent optical properties of a flat homogeneous ocean,” Appl. Opt. 14, 417 (1975).
35. C. D. Mobley, Light and Water: Radiative Transfer in Natural Waters (Academic, 1994).
36. IDS GmbH, “Data sheet of UI-3260CP-C-HQ camera,” available at https://en.ids-imaging.com/IDS/datasheet_pdf.php?sku=AB00696.
37. Amazon, “Amazon Elastic Compute Cloud (Amazon EC2),” available at https://aws.amazon.com/ec2/.
38. J. Amanatides and A. Woo, “A fast voxel traversal algorithm for ray tracing,” in Proceedings of Eurographics (1987).
39. P. C. Hansen and M. Saxild-Hansen, “AIR tools—a MATLAB package of algebraic iterative reconstruction methods,” J. Comput. Appl. Math. 236, 2167–2178 (2012).
41. 3D Warehouse, Trimble, “Source of rendered camera model,” available at https://3dwarehouse.sketchup.com/model/9a99ddf71fc253515561a71aee95364e/BlueViewMB1350onTripod.
42. A. Almogi-Labin, R. Calvo, H. Elyashiv, R. Amit, Y. Harlavan, and H. Herut, “Sediment characterization of the Israeli Mediterranean shelf,” Geol. Surv. Isr. Rep. GSI/27/2012, Isr. Oceanogr. Limnol. Res. Rep. H68 (2012).
43. G. Bradski, “The OpenCV Library,” Dr. Dobb’s J. Softw. Tools (2000).
44. Agisoft LLC, AgiSoft PhotoScan Professional, Version 1.2.6 (2016). Retrieved from http://www.agisoft.com/downloads/installer/.
45. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).
46. W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, “Image-based visual hulls,” in Proceedings of ACM International Conference on Computer Graphics and Interactive Techniques (ACM, 2000), pp. 369–374.
47. A. Levis, Y. Y. Schechner, and A. B. Davis, “Multiple-scattering microphysics tomography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 1 (IEEE, 2017).
48. A. Geva, Y. Y. Schechner, Y. Chernyak, and R. Gupta, “X-ray computed tomography through scatter,” in Proceedings of the European Conference on Computer Vision (ECCV) (Springer, 2018), pp. 37–54.
49. S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering properties of participating media by dilution,” ACM Trans. Graph. 25, 1003–1012 (2006).