Optica Publishing Group

Optical wide-field tomography of sediment resuspension

Open Access

Abstract

We present a wide-field imaging approach to optically sense underwater sediment resuspension events. It uses wide-field multi-directional views and diffuse backlight. Our approach algorithmically quantifies the amount of resuspended material and its spatiotemporal distribution. The suspended particles affect the radiation that reaches the cameras, and hence the captured images. By measuring the radiance during and prior to resuspension, we extract the optical depth along the line of sight, per pixel. Using computed tomography (CT) principles, the optical depths yield an estimate of the extinction coefficient of the suspension, per voxel. The suspended density is then derived from the reconstructed extinction coefficient.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

We propose an approach for underwater optical computed tomography (CT) for the study of marine sedimentation. Sedimentation affects physical, chemical, and biological processes in the sea [1–3]. Sediment resuspension is the transport of previously settled particles from the seafloor back into the overlying water. Such an event occurs when near-seafloor currents exceed a threshold velocity. The currents are induced by physical forces, such as waves, winds, or tides; human activities, such as fishing, dredging, and trawling; and biological activity. Biological resuspension occurs when fish and other animals search for food and shelter at the seafloor.

There is a gap of knowledge regarding the rate and extent of biological sediment resuspension in the ocean [4]. Moreover, the relevance of biological sediment resuspension to biochemical underwater processes is not fully known. Recent publications [5–8] suggest that, due to its frequent occurrence, biological activity may be a more significant contributor to sediment resuspension than physical forces and human activity. For these reasons, it is important to devise methods for measuring these events. However, this is challenging [4, 9, 10]. Understanding resuspension requires a wide set of methods [4, 11–13]. Existing in-situ approaches for quantifying resuspension of sediment or fluids are very localized and limited to the cm scale [14–18]. We seek methods that least disrupt resuspension events; hence, we seek measurements from a distance. In aquatic environments, a sensing range of several meters is practical. In addition, the evolving sediment clouds have a meter scale as well. Therefore, we seek multi-meter-scale far-field measurements of these events using cameras. The presented imaging approach (a) observes from a distance the water medium above the seafloor, (b) senses sediment resuspension events, and (c) algorithmically quantifies the resuspension.

The spatial distribution of the particles is three-dimensional (3D). Hence, we develop a 3D tomographic imaging system. To achieve this, the evolving sediment plume is imaged against a diffuse backlight. Imaging is done simultaneously from multiple directions, as illustrated in Fig. 1. The resuspended particles affect the light that reaches the cameras. Image analysis uses a CT principle, motivated by medical CT. On a small scale, optical CT was used in laboratory studies of the dynamics of fluids and particles [19–21], and in a recent in-situ study of marine microscopic and mesoscopic organisms [22]. Recently, CT has expanded to large-scale 3D atmospheric sensing of aerosols and clouds [23–25]. This work proposes a meter-scale CT to optically quantify underwater resuspension. We develop optical and algorithmic techniques that function despite the challenges posed by the underwater environment [26–29]. Initial partial results of our work were presented in [30].


Fig. 1 The concept of an underwater optical tomography system. The volume includes water and a resuspended sediment cloud. There are $n$ voxels. The line of sight corresponding to pixel $p$ is $\mathrm{LOS}_p$.


2. Theory

2.1. Image formation model

When a scene is only under ambient illumination, the image measures radiance $i_p^{\mathrm{Ambient}}$ per pixel $p$. An active illumination screen has radiance $i_p^{(0)}$. Pixel $p$ corresponds to a line of sight denoted $\mathrm{LOS}_p$; see Fig. 1. The extinction coefficient of bulk water at the site is assumed to be spatially homogeneous. It is denoted $\beta^{\mathrm{Water}}$, in units of [m$^{-1}$]. For sediment-free water, according to the Beer–Lambert law, the transmitted radiance reaching $p$ is

$$ i_p^{\mathrm{Water}} = i_p^{(0)} \exp\left[ -\int_{X \in \mathrm{LOS}_p} \beta^{\mathrm{Water}}\, dX \right] + i_p^{\mathrm{Ambient}} \quad \left[ \frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{sr}} \right]. \tag{1} $$

We note that off-axis scattering also affects the images. In preliminary lab experiments, these off-axis scattering components were significantly weaker than the direct transmitted term modeled in Eq. (1). The attenuation of the transmitted radiance is induced by absorption and scattering of light. Therefore, the water extinction coefficient satisfies

$$ \beta^{\mathrm{Water}} = \beta_A^{\mathrm{Water}} + \beta_S^{\mathrm{Water}} \quad [\mathrm{m}^{-1}], \tag{2} $$
where $\beta_S^{\mathrm{Water}}$ and $\beta_A^{\mathrm{Water}}$ are the scattering and absorption coefficients of the water, respectively. The relative intensity of light scattered in a single scattering event is represented by the single scattering albedo
$$ \varpi^{\mathrm{Water}} \triangleq \frac{\beta_S^{\mathrm{Water}}}{\beta_A^{\mathrm{Water}} + \beta_S^{\mathrm{Water}}}. \tag{3} $$

Let $\beta^{\mathrm{Sed}}(X)$ be the volumetric extinction coefficient of suspended sediment particles. Then the radiance measured through the suspension is

$$ i_p = i_p^{(0)} \exp\left[ -\int_{X \in \mathrm{LOS}_p} \left( \beta^{\mathrm{Water}} + \beta^{\mathrm{Sed}}(X) \right) dX \right] + i_p^{\mathrm{Ambient}} \quad \left[ \frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{sr}} \right]. \tag{4} $$

The unitless sediment optical depth at pixel p is

$$ \tau_p \triangleq \int_{X \in \mathrm{LOS}_p} \beta^{\mathrm{Sed}}(X)\, dX. \tag{5} $$

2.2. Tomographic reconstruction - inverse model

From measurements of $i_p^{\mathrm{Ambient}}$, $i_p^{\mathrm{Water}}$, and $i_p$, and Eqs. (1), (4) and (5), the estimated optical depth per pixel $p$ is

$$ \tau_p = -\ln\left( \frac{i_p - i_p^{\mathrm{Ambient}}}{i_p^{\mathrm{Water}} - i_p^{\mathrm{Ambient}}} \right). \tag{6} $$
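As a concrete sketch, the per-pixel optical-depth extraction of Eq. (6) can be written in a few lines of NumPy. This snippet is illustrative only; the array names and the small guard `eps` (against division by zero and logs of non-positive values) are our own, not from the paper:

```python
import numpy as np

def optical_depth(i, i_water, i_ambient, eps=1e-6):
    """Per-pixel sediment optical depth, Eq. (6).

    i         : image during resuspension
    i_water   : image through sediment-free water (screen active)
    i_ambient : image under ambient light only (screen off)
    All arrays share the same shape; values are linear radiance.
    """
    num = np.clip(i - i_ambient, eps, None)
    den = np.clip(i_water - i_ambient, eps, None)
    return -np.log(num / den)

# Toy example: radiance halves along the line of sight -> tau = ln 2.
tau = optical_depth(np.array([0.5]), np.array([1.0]), np.array([0.0]))
```

The same computation is applied per color channel in practice.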

Let $a_{p,v}$ be the length [m] of the ray segment of $\mathrm{LOS}_p$ in voxel $v$, as noted in Fig. 1. Let $\beta_v^{\mathrm{Sed}}$ be the sediment extinction coefficient of voxel $v$. The vector $\boldsymbol{\beta}^{\mathrm{Sed}} \in \mathbb{R}^{n \times 1}$ represents the extinction coefficients of all voxels $v \in 1..n$, in column-stack form. A finite-sum approximation of Eq. (5) is

$$ \tau_p \approx \sum_{v} a_{p,v}\, \beta_v^{\mathrm{Sed}} \triangleq \boldsymbol{a}_p \boldsymbol{\beta}^{\mathrm{Sed}}. \tag{7} $$

Tomographic setups have multidirectional LOSs through the scene (Fig. 1). Let $N_{\mathrm{views}}$ cameras observe the scene, each having a resolution of $N_{\mathrm{width}} \times N_{\mathrm{height}}$ pixels. The total number of pixels observing the scene is $m = N_{\mathrm{views}} N_{\mathrm{width}} N_{\mathrm{height}}$. Then, $\boldsymbol{\tau} \in \mathbb{R}^{m \times 1}$ represents the sampled optical depths at all pixels $p \in 1..m$. Define $\mathbf{A} \in \mathbb{R}_+^{m \times n}$ as a projection matrix whose elements are $a_{p,v}$. Then,

$$ \boldsymbol{\tau} = \mathbf{A} \boldsymbol{\beta}^{\mathrm{Sed}}. \tag{8} $$

Let $\alpha \geq 0$ be a regularization parameter, and let $\mathcal{L}$ be a 3D Laplacian operator, which defines the smoothness term of $\boldsymbol{\beta}^{\mathrm{Sed}}$. Then the volumetric extinction coefficient can be estimated by

$$ \hat{\boldsymbol{\beta}}^{\mathrm{Sed}} = \arg\min_{\boldsymbol{\beta}^{\mathrm{Sed}}} \left( \left\| \mathbf{A} \boldsymbol{\beta}^{\mathrm{Sed}} - \boldsymbol{\tau} \right\|_2^2 + \alpha \left\| \mathcal{L} \boldsymbol{\beta}^{\mathrm{Sed}} \right\|_2^2 \right) \quad \mathrm{s.t.}\ \boldsymbol{\beta}^{\mathrm{Sed}} \geq 0. \tag{9} $$
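The paper solves this constrained least-squares problem with SART (Section 3.2). As an illustrative alternative under the same objective, a minimal projected-gradient solver can be sketched as follows; the sparse projection matrix `A` and Laplacian `L` are assumed given, and all names are our own:

```python
import numpy as np
from scipy import sparse

def reconstruct(A, tau, L, alpha=0.45, n_iter=500):
    """Projected gradient descent for the regularized problem:
    min ||A b - tau||^2 + alpha ||L b||^2  s.t.  b >= 0."""
    AtA = (A.T @ A + alpha * (L.T @ L)).tocsr()
    At_tau = A.T @ tau
    # Step size from a crude power-iteration estimate of the largest eigenvalue.
    v = np.random.default_rng(0).standard_normal(A.shape[1])
    for _ in range(50):
        v = AtA @ v
        v /= np.linalg.norm(v)
    step = 1.0 / (v @ (AtA @ v))
    b = np.zeros(A.shape[1])
    for _ in range(n_iter):
        b -= step * (AtA @ b - At_tau)   # gradient step on the quadratic
        np.clip(b, 0.0, None, out=b)     # project onto the non-negative orthant
    return b

# Toy check: with A = I and no smoothing, the solution equals tau.
A = sparse.identity(3, format="csr")
L = sparse.csr_matrix((3, 3))
tau_vec = np.array([1.0, 2.0, 3.0])
beta_rec = reconstruct(A, tau_vec, L, alpha=0.0, n_iter=100)
```

For realistic voxel counts, a dedicated algebraic method such as SART converges faster; this sketch only makes the optimization structure explicit.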

The extinction of light depends on the density and optical properties of the suspended particles. Sediment particles in the medium have an extinction cross section $\sigma$ in units of [m$^2$]. Per voxel $v$, the number and mass densities of sediment particles are $\rho_v^{\#}$ in units of [m$^{-3}$] and $\rho_v^{\mathrm{Mass}}$ in units of [gr m$^{-3}$], respectively. Each voxel has volume $\vartheta$ in units of [m$^3$]. The mass of suspended particles in $v$ is then

$$ \mu_v^{\mathrm{Sed}} = \rho_v^{\mathrm{Mass}}\, \vartheta \quad [\mathrm{gr}]. \tag{10} $$

The extinction coefficient is

$$ \beta_v^{\mathrm{Sed}} = \sigma \rho_v^{\#} \quad [\mathrm{m}^{-1}]. \tag{11} $$

The particle mass density $\rho_v^{\mathrm{Mass}}$ is linearly proportional to the particle number density $\rho_v^{\#}$. Thus, from Eq. (11), there is a linear relation between $\beta_v^{\mathrm{Sed}}$ and $\rho_v^{\mathrm{Mass}}$:

$$ \beta_v^{\mathrm{Sed}} = b\, \rho_v^{\mathrm{Mass}} \quad [\mathrm{m}^{-1}]. \tag{12} $$

The coefficient $b$, in units of [m$^2$ gr$^{-1}$], can be calibrated (see Appendix A). From Eqs. (10) and (12), the estimated mass of suspended particles at voxel $v$ is

$$ \hat{\mu}_v^{\mathrm{Sed}} = \frac{\vartheta}{b}\, \hat{\beta}_v^{\mathrm{Sed}} \quad [\mathrm{gr}], \tag{13} $$
and the total sediment cloud mass is
$$ \hat{\mu}_{\mathrm{total}}^{\mathrm{Sed}} = \sum_{v} \hat{\mu}_v^{\mathrm{Sed}} \quad [\mathrm{gr}]. \tag{14} $$
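The mass relations above reduce to two lines of code. A minimal sketch with illustrative names; the toy numbers use the 8 [cm$^3$] voxel volume of Section 4.2 and the green-channel $b$ calibrated in Appendix A:

```python
import numpy as np

def cloud_mass(beta_sed, voxel_volume, b):
    """Per-voxel and total suspended mass from reconstructed extinction.

    beta_sed     : reconstructed extinction coefficients [1/m]
    voxel_volume : voxel volume, theta [m^3]
    b            : calibration coefficient of Eq. (12) [m^2/gr]
    """
    mu_v = (voxel_volume / b) * beta_sed  # [gr] per voxel
    return mu_v, mu_v.sum()               # total cloud mass [gr]

# Toy numbers: 2 [cm] voxels (8e-6 [m^3]) and b = 9.6e-3 [m^2/gr].
mu_v, mu_total = cloud_mass(np.array([1.0, 2.0]), 8e-6, 9.6e-3)
```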

3. Simulations

We tested this concept using both lab experiments and simulations. The simulation environment contained an underwater 3D scene, a submerged diffuse illuminating screen, machine vision cameras and a 3D sediment cloud. Using a radiation transfer solver [31,32], we synthesized the observed underwater images. Then, we performed 3D tomographic reconstruction. The simulated ground truth helped design the imaging configuration by considering how camera specifications (type, amount and poses) affect the reconstruction quality.

3.1. Renderings

During the simulations we set $i_p^{\mathrm{Ambient}} = 0$. The volumetric domain has $n = 128 \times 50 \times 128$ voxels. Similarly to [23, 25, 33], the radiative transfer simulations relied on volumetric optical parameters: the extinction coefficient $\beta(X)$, the single scattering albedo $\varpi(X)$, and the anisotropy parameter $g_{\mathrm{HG}}$ of a Henyey-Greenstein scattering phase function. We specifically used typical clear ocean water optical properties [34, 35]. In these waters, corresponding to the RGB channels, $\beta^{\mathrm{Water}} \triangleq (0.583, 0.16, 0.15)$ [m$^{-1}$], $\varpi^{\mathrm{Water}}(X) \triangleq (0.228, 0.625, 0.667)$, and $g_{\mathrm{HG}}^{\mathrm{Water}} \triangleq 0.9$. The sediment extinction coefficient $\beta^{\mathrm{Sed}}(X)$ is spatially heterogeneous and spectrally uniform. As a proxy for a sediment cloud, we used an open source smoke phantom [32]. We aimed to simulate a dense sediment cloud for which, on average, $\langle \beta(X) \rangle_X = 3.3$ [m$^{-1}$], corresponding to a 30 [cm] visibility range. Thus we scaled the phantom's range of extinction coefficients to $\beta(X) \in [0, 12.2]$ [m$^{-1}$]. In the simulations, we set $\varpi^{\mathrm{Sed}}(X) = 0$.

The imaging sensor follows a perspective camera model, with a set field of view, image resolution, and Bayer pattern. These parameters are set by the specifications of an off-the-shelf machine vision camera, e.g., the IDS UI3260xCP-C. In particular, these specifications enabled us to render realistic noise in the simulated images. Let $v_{\mathrm{e}}^{\mathrm{P}}$ be the photon signal, in photo-electron counts, generated by Monte-Carlo simulations of light propagation. The maximum photon signal generates $N_{\mathrm{well}}$ photo-electrons in a saturated pixel. Let $\gamma_{\mathrm{e}}$ be the number of photo-electrons per camera gray level. To induce noise, we took the following steps:

  1. An effect similar to photon noise is induced by introducing zero-mean Gaussian noise whose variance equals $v_{\mathrm{e}}^{\mathrm{P}}$, in photo-electron counts.
  2. To emulate readout noise, zero-mean Gaussian noise with a fixed standard deviation $\sigma_{\mathrm{read}}$ is added. Thus, the pixel intensity of a noisy image is $i_{\mathrm{e}} \sim \mathcal{N}(v_{\mathrm{e}}^{\mathrm{P}},\, v_{\mathrm{e}}^{\mathrm{P}} + \sigma_{\mathrm{read}}^2)$, in photo-electron counts.
  3. We introduced quantization noise by converting photo-electron counts to gray levels using $i_p = \lfloor i_{\mathrm{e}} / \gamma_{\mathrm{e}} \rfloor$.

For example, for the IDS UI3260xCP-C camera, the applied specifications [36], in photo-electron counts, are: $N_{\mathrm{well}} = 32870$, $\sigma_{\mathrm{read}} = 6.2$, $\gamma_{\mathrm{e}} \approx 128.4$. The renderings were performed on an m4.16xlarge machine of the Amazon Elastic Compute Cloud [37].
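The three noise steps can be emulated as follows. This is a simplified sketch using the camera constants quoted above; the actual rendering pipeline also handles the Bayer pattern and exposure:

```python
import numpy as np

# Constants quoted above for the IDS UI3260xCP-C, in photo-electron counts.
N_WELL = 32870     # full-well capacity
SIGMA_READ = 6.2   # readout noise standard deviation
GAMMA_E = 128.4    # photo-electrons per gray level

def add_sensor_noise(v_e, seed=0):
    """Apply the three noise steps to a photo-electron image v_e."""
    rng = np.random.default_rng(seed)
    # 1. Photon-noise-like term: zero-mean Gaussian with variance v_e.
    noisy = v_e + rng.normal(0.0, np.sqrt(np.maximum(v_e, 0.0)))
    # 2. Readout noise: zero-mean Gaussian with fixed standard deviation.
    noisy = noisy + rng.normal(0.0, SIGMA_READ, size=np.shape(v_e))
    # 3. Quantization: clip at saturation, convert to integer gray levels.
    return np.floor(np.clip(noisy, 0.0, N_WELL) / GAMMA_E)

# Example: a uniform 10000 e- signal through the noise chain.
gray = add_sensor_noise(np.full((4, 4), 10000.0))
```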

3.2. Simulated tomographic reconstructions

Based on the camera poses and the sediment phantom position, we calculated a sparse projection matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ using ray tracing [38]. We used the AIRtools implementation [39] of the Simultaneous Algebraic Reconstruction Technique (SART) [40]. Reconstruction quality compares the estimate $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}$ to the original phantom $\boldsymbol{\beta}^{\mathrm{Sed}}$, in terms of unitless global [23] and local [24] measures

$$ \delta = \frac{ \| \hat{\boldsymbol{\beta}}^{\mathrm{Sed}} \|_1 - \| \boldsymbol{\beta}^{\mathrm{Sed}} \|_1 }{ \| \boldsymbol{\beta}^{\mathrm{Sed}} \|_1 }, \tag{15} $$
$$ \epsilon_2 = \frac{ \| \hat{\boldsymbol{\beta}}^{\mathrm{Sed}} - \boldsymbol{\beta}^{\mathrm{Sed}} \|_2 }{ \sqrt{n}\, \max( \boldsymbol{\beta}^{\mathrm{Sed}} ) }. \tag{16} $$
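The two error measures above are straightforward to compute. In this illustrative NumPy sketch, the local measure is implemented as our reading of it: the RMS voxel error normalized by the peak extinction; `beta_hat` and `beta_true` are flattened voxel grids:

```python
import numpy as np

def global_error(beta_hat, beta):
    """delta: relative error of the total (L1) extinction."""
    return (np.abs(beta_hat).sum() - np.abs(beta).sum()) / np.abs(beta).sum()

def local_error(beta_hat, beta):
    """epsilon_2: RMS voxel error, normalized by max(beta)."""
    return np.linalg.norm(beta_hat - beta) / (np.sqrt(beta.size) * beta.max())

# Toy check: a uniform 10% overestimate gives delta = 0.1 and eps2 > 0.
beta_true = np.array([1.0, 2.0, 3.0])
delta = global_error(1.1 * beta_true, beta_true)
eps2 = local_error(beta_true, beta_true)
```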

Here we describe representative results of the simulation process for the scenario illustrated in Fig. 2(a). The cameras are uniformly spaced on a 125° horizontal arc at a height of 0.5 [m], facing the cloud from a 3 [m] distance. The phantom is of size $1 \times 0.39 \times 1$ [m$^3$], with a voxel resolution of 0.78 [cm], and the camera type is IDS UI3260xCP-C. Each column in Fig. 2(b) shows rendered images and optical depths for three different views. The reconstructed volumetric extinction coefficient of the cloud, retrieved from eight cameras, is presented in Fig. 2(c). The reconstruction error $\epsilon_2$ as a function of the number of cameras is presented in Fig. 2(d). We obtained similar results for other camera types and positions. The local error $\epsilon_2$ dropped as the number of cameras increased. The global error $\delta$ saturated at $\delta = 0.08$.


Fig. 2 Simulations. (a) Scenario illustration: the cameras are distributed uniformly on a 125° arc at a height of 0.5 [m], facing the cloud from a 3 [m] distance. Note: for visualization we used an open source 3D camera model [41]. (b) Representative images of several side views (water images $I^{\mathrm{Water}}$, cloud images $I$, optical depth images $\tau$ in the green channel). (c) The reconstructed $\hat{\beta}^{\mathrm{Sed}}$ of the cloud. (d) Reconstruction errors vs. the number of IDS UI3260xCP-C cameras.


4. Experiments

4.1. System and method

We performed experiments in the research seawater pool of dimensions 6 × 3 × 3 [m3], at The Leon H. Charney School of Marine Sciences, University of Haifa, Israel. Inspired by the communicating vessels principle, we built an injection system, illustrated in Fig. 3(a), connected at its top to a 10 [L] source tank. The source tank contained MP SILICA particles suspended in water. The particle size range is 12–26 [µm]. This range suits particles of silt, clay, and fine sand, which exist along the Israeli Mediterranean shelf [42], at sites deeper than 30 [m]. The source tank was partially drained, creating a resuspended cloud emanating from the middle of the lighting screen.

The optical system contained eight machine vision cameras having a linear radiometric response. We used IDS UI3260xCP-C cameras with Tamron M112FM12 12 [mm] lenses, sealed inside designated housings having flat-ports (windows), as shown in Fig. 3(b). According to [27], when a perspective camera resides in an air chamber having a flat-port and is embedded in a water medium, refraction causes the imaging system to have a non-single viewpoint. We note that dome-ports, which we did not use here, can mitigate refraction distortions if the dome center aligns with the lens' center of projection. Nonetheless, it is possible to approximate flat-port systems as having a single viewpoint [27] by setting a tight lens-port distance. In such conditions, refractions induce two-dimensional image distortions, which can be accommodated digitally using camera calibration. Therefore, each camera was placed inside the housing while keeping the port nearly tangent to the lens. The cameras are mounted on a frame above a lighting screen, Fig. 4(a). Each camera was directed at the volume of interest and set to a working distance of ∼2.7 [m] from the middle of the screen. The illumination screen is composed of sealed LEDs mounted between two diffuse white PVC boards, emitting a total of 24000 lumens (Fig. 4).


Fig. 3 (a) System design. (b) The camera housing is made of polycarbonate resin, sealed using a flat–port, and contains: an ODROID XU-4 computer, an IDS UI3260xCP-C camera with Tamron M112FM12 12 [mm] lens, Li-ion batteries and a nano USB WiFi adapter.


We used a calibration board, markers on the screen as in Fig. 4(c), OpenCV [43], and Agisoft software [44] to calibrate the system geometry. This led to the sparse projection matrix $\mathbf{A}$ used in Eqs. (8) and (9). Before each resuspension event, we imaged the lighting screen when active and when inactive, to acquire measurements of $i_p^{\mathrm{Water}}$ and $i_p^{\mathrm{Ambient}}$, respectively. Throughout each event, we imaged the evolving cloud at 10 frames per second, to acquire measurements of $i_p$.


Fig. 4 (a) Side view of the system outside of the pool. The nozzle emerges from the middle of the screen, and the cameras’ rig is centered above the screen at a height of 2.5 [m]. (b) Top view of the system submerged in a seawater pool. (c) Submerged screen and active calibration board.


4.2. Tomography reconstructions

Using Eq. (6), we retrieved the optical depths $\{\tau_p\}_{p=1}^{m}$ of the suspended sediment cloud through time. Representative images are shown in Fig. 5(a). We performed reconstruction similarly to Section 3.2. The following steps, as shown in Fig. 5, improved the quality and runtime: (a) Pruning pixels by segmenting [45] and cropping the normalized optical depth images. (b) Reconstructing an initial solution $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}(0)$, using the un-pruned pixels and $\alpha = 0$, then deriving the visual hull [46] of $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}(0)$. (c) Reconstructing $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}$ within the visual hull, using $\alpha = 0.45$. The 3D results in Fig. 5 are for the green channel at a 2 [cm] voxel resolution, thus having voxel volume $\vartheta = 8$ [cm$^3$].


Fig. 5 Experiment: (a) Representative images from two cameras. Each camera yields a clear water image $I^{\mathrm{Water}}$; an image with resuspension $I$; the optical depth $\tau$ in the green channel; and a pruned optical depth image $\tau_{\mathrm{masked}}$. (b) Initial reconstruction $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}(0)$ of the cloud in the green channel. (c) Final reconstruction $\hat{\boldsymbol{\beta}}^{\mathrm{Sed}}$ of the cloud in the green channel.


Using an independent lab experiment, we calibrated the coefficient $b$, which relates $\beta_v^{\mathrm{Sed}}$ to $\rho_v^{\mathrm{Mass}}$ in Eq. (12). This experiment is described in Appendix A. Then, using Eqs. (12)-(14), we calculated the sediment cloud mass density $\hat{\rho}^{\mathrm{Mass}}$ and mass $\hat{\mu}^{\mathrm{Sed}}$. This yielded an estimate of the evolving sediment cloud mass. We compared sediment mass estimates between two different experiments, each having a different density in the source tank: 22.5 [mgr/cm$^3$] and 30 [mgr/cm$^3$]. The estimated mass of the clouds through 5.5 [sec] from the cloud's initiation is plotted in Fig. 6(a). Each curve averages two experiment repetitions. Values at corresponding times are scatter-plotted in Fig. 6(b). The linear fit is consistent with the source densities ratio 30:22.5 = 1.33.

Ultimately, as in any active system, the system size limits the measurement domain. We noticed that 5.5 [sec] after the cloud's initiation, the cloud expanded beyond the screen area. Beyond the screen borders, naive use of Eq. (6) then erroneously leads to negative values of $\tau_p$. These pixels were pruned in our algorithm. This phenomenon is emphasized by representing $\tau_p$ in a false-color palette: Fig. 7 shows $\tau_p$ in the optical green channel, 46 [sec] after the cloud's initiation.


Fig. 6 (a) The estimated mass of a resuspension event, after each resuspension initiation. Each curve averages two experiment repetitions (shown as ∗ and ◦). (b) Average reconstructed mass for the 30 [mgr/cm$^3$] source suspension density ($\hat{\mu}_{\mathrm{total},1}^{\mathrm{Sed}}$) vs. average reconstructed mass for the 22.5 [mgr/cm$^3$] source suspension density ($\hat{\mu}_{\mathrm{total},2}^{\mathrm{Sed}}$).



Fig. 7 (a) An RGB image of the cloud, 46 [sec] after the cloud’s initiation. (b) The estimated optical depth τ in the green channel. The values of τ are presented in a false-color palette manner. Negative values beyond screen borders are due to scattered light contributing to measured radiance.


5. Discussion

Our approach adapts optical CT principles to a multi-meter-scale underwater domain. Our work goes beyond proposing the concept: it includes a theoretical formulation, computer simulations, engineering of an optical tomographic system, an algorithm, and eventual empirical validation. We envision future developments enabling field work in deep natural waters, striving to minimize disturbance to nature. Future advancement may obviate active lighting in tomographic setups, for example by relying on the scattering of natural light [47]. It may be beneficial to incorporate measurements from turbidity sensors in conjunction with our approach. Such systems open the door to quantitative in-situ research of marine sedimentation and other underwater phenomena.

The algorithm we used assumes the resuspended cloud is diluted enough to suit the single-scattering approximation. In a dense and wide cloud, this approximation may bias the results. In the calibration described in Fig. 8, when reaching higher sediment densities, the relation between optical density and particle density becomes non-linear. Therefore, for an optical thickness satisfying $\varpi^{\mathrm{Sed}} \beta l > 1$, we expect biased results. We believe that this bias can be largely mitigated using full 3D radiative transfer scattering tomography as in [23, 33, 48]. While this requires complex reconstruction algorithms [33, 48], the imaging system would still be similar to ours.


Fig. 8 Calibration results of βSed vs. ρSed for RGB channels. The non-linear domain is due to multiple-scattering [49]. The images demonstrate the intensity attenuation of the transmitted light beam, for increasing particle density.


Appendix A Sediment density calibration

Sediment density vs. extinction calibration was done in a small water tank, in a dark room, see Fig. 9. A glassware beaker is fixed above a stirring device inside the tank. The beaker contains a suspension of particles in 1 [L] water. A magnet stirring stick is used to maintain a uniform suspension. We used the imaging sensor described in Section 4.1. The lighting array includes: White LED (1 [W], 6500 [K], OPA733WD, Optek Technology), resistor of 47Ω, power supply of 3.335 [V] (Horizon Electronics DHR), DVM (34401A Digital Multimeter), optical mirror, and shutters.


Fig. 9 (a) Light path in a water tank from the entry aperture, through a glass beaker with particle suspension, to the camera side. (b) Top-side view. (c) Front view. (d) Water image IWater .


The camera sampled the intensity of the light passing through the beaker. We first took a clear water image $I^{\mathrm{Water}}$. Following this, we gradually added particles to the beaker in roughly constant doses, up to a final weight of 600 [mgr]. Here too, we used MP SILICA of size range 12–26 [µm]. For each session, we averaged the intensity inside a square measuring area of 10×10 pixels at the center of the beam and of the imaging sensor, over one second. Denote by $i_{\mathrm{rec}}^{\mathrm{Water}}$ and $i_{\mathrm{rec}}$ the measurements of clear water and of the suspension, respectively, similarly to Eqs. (1) and (4). As the density of the suspension increases, the image intensity drops. Thus, at higher suspension concentrations we used longer exposures, then normalized the measurements accordingly.

From the measurements of $i_{\mathrm{rec}}^{\mathrm{Water}}$ and $i_{\mathrm{rec}}$, and Eqs. (1) and (4), the retrieved extinction coefficient is

$$ \beta^{\mathrm{Sed}} = \frac{1}{l} \ln\left( \frac{ i_{\mathrm{rec}}^{\mathrm{Water}} }{ i_{\mathrm{rec}} } \right), \tag{17} $$
where $l = 0.092$ [m] is the inner diameter of the beaker. A linear relation is extracted between the weight of particles in a volume of $10^{-3}$ [m$^3$] of water and the measured extinction coefficient $\beta^{\mathrm{Sed}}$ of the suspension, Fig. 8. Following [49], a linear fit should rely only on low concentrations, for which multiple scattering is negligible. We included only measurements within the range of linear response. From the linear fit shown in Fig. 8, the estimated relation corresponding to the RGB channels, used in Eq. (12), is $b \triangleq (9.9, 9.6, 9.4) \times 10^{-3}$ [m$^2$ gr$^{-1}$].
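The slope extraction can be sketched as follows. This is an illustrative NumPy snippet with synthetic measurement arrays; in practice, only samples in the linear low-concentration regime should be passed in:

```python
import numpy as np

def calibrate_b(i_water, i_samples, masses, path_len=0.092, volume=1e-3):
    """Fit the coefficient b of Eq. (12) from beaker measurements.

    i_water   : clear-water intensity
    i_samples : intensities for increasing particle loads
    masses    : particle mass in the beaker at each load [gr]
    path_len  : inner beaker diameter l [m]
    volume    : suspension volume [m^3]
    """
    beta = np.log(i_water / np.asarray(i_samples)) / path_len  # [1/m]
    rho = np.asarray(masses) / volume                          # [gr/m^3]
    # Least-squares slope through the origin: beta ~ b * rho.
    return float(rho @ beta / (rho @ rho))

# Synthetic sanity check with a known slope b = 0.01 [m^2/gr].
masses = np.array([0.1, 0.2])             # [gr] in 1 [L] of water
beta_true = 0.01 * masses / 1e-3          # [1/m]
i_samples = 1.0 * np.exp(-beta_true * 0.092)
b_fit = calibrate_b(1.0, i_samples, masses)
```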

Funding

The Israeli Ministry of Science, Technology, and Space (MOST) (3-12478).

Acknowledgments

We thank M. Fisher, I. Czerninski, Y. Goldfracht, L. Dehter, B. Herzberg, A. Levy, J. Fischer, S. Cohen, M. Groper, S. Farber for assisting in system construction and experiments. We thank V. Holodovsky, A. Levis, A. Aides, T. Katz, U. Shavit, G. Yahel, S. Grossbard for fruitful discussions, and I. Talmon and J. Erez for technical support. YYS is a Landau Fellow, supported by the Taub Foundation. His work is conducted in the Ollendorff Minerva Center. Minerva is funded through the BMBF. TT is supported by The Leona M. and Harry B. Helmsley Charitable Trust and The Maurice Hatter Foundation.

References

1. J. R. Valeur, A. Jensen, and M. Pejrup,“Turbidity, particle fluxes and mineralisation of carbon and nitrogen in a shallow coastal area,”Mar. Freshw. Res. 46,409–418 (1995). [CrossRef]  

2. A. Tengberg, E. Almroth, and P. Hall,“Resuspension and its effects on organic carbon recycling and nutrient exchange in coastal sediments: in situ measurements using new experimental technology,”J. Exp. Mar. Biol. Ecol. 285,119–142 (2003). [CrossRef]  

3. S. C. Wainright,“Stimulation of heterotrophic microplankton production by resuspended marine sediments,”Science 238,1710–1712 (1987). [CrossRef]   [PubMed]  

4. G. Yahel, M. Gilboa, S. Grossbard, A. Vainiger, T. Treibitz, Y. Schechner, U. Shavit, and T. Katz,“Biological activity: an overlooked, mechanism for sediment resuspension, transport, and modification in the ocean,” inProceedings of Particles in Europe Conference,(SEQUOIA,2018).

5. R. Yahel, G. Yahel, and A. Genin,“Daily cycles of suspended sand at coral reefs: a biological control,”Limnol. Oceanogr. 47,1071–1083 (2002). [CrossRef]  

6. G. Yahel, R. Yahel, T. Katz, B. Lazar, B. Herut, and V. Tunnicliffe,“Fish activity: a major mechanism for sediment resuspension and organic matter remineralization in coastal marine sediments,”Mar. Ecol. Prog. Ser. 372,195–209 (2008). [CrossRef]  

7. T. Katz, G. Yahel, M. Reidenbach, V. Tunnicliffe, B. Herut, J. Crusius, F. Whitney, P. V. Snelgrove, and B. Lazar,“Resuspension by fish facilitates the transport and redistribution of coastal sediments,”Limnol. Oceanogr. 57,945–958 (2012). [CrossRef]  

8. T. Katz, G. Yahel, R. Yahel, V. Tunnicliffe, B. Herut, P. Snelgrove, J. Crusius, and B. Lazar,“Groundfish overfishing, diatom decline, and the marine silica cycle: Lessons from Saanich Inlet, Canada, and the Baltic Sea cod crash,” Glob. Biogeochem. Cycles 23, 4032 (2009). [CrossRef]  

9. K. Robert and S. Juniper,“Surface-sediment bioturbation quantified with cameras on the NEPTUNE Canada cabled observatory,”Mar. Ecol. Prog. Ser. 453,137–149 (2012). [CrossRef]  

10. S. Villéger, S. Brosse, M. Mouchet, D. Mouillot, and M. J. Vanni,“Functional ecology of fish: current approaches and future challenges,”Aquatic Sci. 79,783–801 (2017). [CrossRef]  

11. A. K. Rai and A. Kumar,“Continuous measurement of suspended sediment concentration: Technological advancement and future outlook,”Measurement 76,209–227 (2015). [CrossRef]  

12. S. Pinet, J.-M. Martinez, S. Ouillon, B. Lartiges, and R. E. Villar,“Variability of apparent and inherent optical properties of sediment-laden waters in large river basins–lessons from in situ measurements and bio-optical modeling,”Opt. Express 25,A283–A310 (2017). [CrossRef]  

13. M. Gilboa, T. Katz, U. Shaviti, S. Grosbard, A. Torfstien, and G. Yahel,“Novel approach to measure the rate of sediment resuspension at the ocean and to estimate the contribution of fish activity to this process,” inProceedings of Particles in Europe Conference,(SEQUOIA,2018).

14. S. Shahi and E. Kuru,“An experimental investigation of settling velocity of natural sands in water using Particle Image Shadowgraph,”Powder Technol. 281,184–192 (2015). [CrossRef]  

15. C. Thompson, F. Couceiro, G. Fones, R. Helsby, C. Amos, K. Black, E. Parker, N. Greenwood, P. Statham, and B. Kelly-Gerreyn,“In situ flume measurements of resuspension in the North Sea,” Estuarine, Coast. Shelf Sci. 94,77–88 (2011). [CrossRef]  

16. D. C. Fugate and C. T. Friedrichs,“Determining concentration and fall velocity of estuarine particle populations using ADV, OBS and LISST,”Cont. Shelf Res. 22,1867–1886 (2002). [CrossRef]  

17. M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans,Particle Image Velocimetry: A Practical Guide(Springer,2018). [CrossRef]  

18. E. J. Davies, W. A. M. Nimmo-Smith, Y. C. Agrawal, and A. J. Souza,“Scattering signatures of suspended particles: an integrated system for combining digital holography and laser diffraction,”Opt. Express 19,25488–25499 (2011). [CrossRef]  

19. G. E. Elsinga, F. Scarano, B. Wieneke, and B. W. van Oudheusden,“Tomographic Particle Image Velocimetry,”Exp. Fluids 41,933–947 (2006). [CrossRef]  

20. X. H. Nguyen, S.-H. Lee, and H. S. Ko,“Analysis of electrohydrodynamic jetting behaviors using three-dimensional shadowgraphic tomography,”Appl. Opt. 52,4494–4504 (2013). [CrossRef]   [PubMed]  

21. Y. Gim, D. H. Shin, D. Y. Moh, and H. S. Ko,“Development of limited-view and three-dimensional reconstruction method for analysis of electrohydrodynamic jetting behavior,”Opt. Express 25,9244–9251 (2017). [CrossRef]   [PubMed]  

22. A. Levis, Y. Y. Schechner, and R. Talmon,“Statistical tomography of microscopic life,” inProceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition,(IEEE/CVF,2018), pp.6411–6420. [CrossRef]  

23. A. Aides, Y. Y. Schechner, V. Holodovsky, M. J. Garay, and A. B. Davis,“Multi sky-view 3D aerosol distribution recovery,”Opt. Express 21,25820–25833 (2013). [CrossRef]   [PubMed]  

24. M. Alterman, Y. Y. Schechner, M. Vo, and S. G. Narasimhan,“Passive tomography of turbulence strength,” inProceedings of European Conference on Computer Vision,(Springer,2014), pp.47–60.

25. V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides,“In-situ multi-view multi-scattering stochastic tomography,” inProceedings of IEEE International Conference on Computational Photography,(IEEE,2016), pp.1–12.

26. T. Treibitz and Y. Y. Schechner,“Turbid scene enhancement using multi-directional illumination fusion,”IEEE Transactions on Image Process. 21,4662–4667 (2012). [CrossRef]  

27. T. Treibitz, Y. Schechner, C. Kunz, and H. Singh,“Flat refractive geometry,” IEEE Transactions on Pattern Analysis Mach. Intell. 34,51–65 (2012). [CrossRef]  

28. Y.Y. Schechner and N. Karpel,“Attenuating natural flicker patterns,” inProceedings of MTS/IEEE OCEANS/TECHNOOCEAN, vol.3(IEEE,2004), pp.1262–1268.

29. M. Sheinin and Y. Y. Schechner,“The next best underwater view,” inProceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition,(IEEE/CVF,2016), pp.3764–3773.

30. A. Vainiger, Y. Y. Schechner, T. Treibitz, A. Avni, and D. S. Timor,“Underwater wide–field tomography of sediment resuspension,” inProceedings of Particles in Europe Conference, (SEQUOIA,2018).

31. M. Pharr, W. Jakob, and G. Humphreys,Physically Based Rendering: From Theory to Implementation(Morgan Kaufmann,2016).

32. W. Jakob,“Mitsuba renderer,”(2010).http://www.mitsuba-renderer.org.

33. A. Levis, Y. Y. Schechner, A. Aides, and A. B. Davis,“Airborne three-dimensional cloud tomography,” inProceedings of IEEE International Conference on Computer Vision,(IEEE,2015), pp.3379–3387.

34. H. R. Gordon, O. B. Brown, and M. M. Jacobs, "Computed relationships between the inherent and apparent optical properties of a flat homogeneous ocean," Appl. Opt. 14, 417 (1975).

35. C. D. Mobley, Light and Water: Radiative Transfer in Natural Waters (Academic, 1994).

36. IDS GmbH, "Data sheet of UI-3260CP-C-HQ camera," available at https://en.ids-imaging.com/IDS/datasheet_pdf.php?sku=AB00696

37. Amazon, "Amazon Elastic Compute Cloud (Amazon EC2)," available at https://aws.amazon.com/ec2/.

38. J. Amanatides and A. Woo, "A fast voxel traversal algorithm for ray tracing," in Proceedings of Eurographics (1987).

39. P. C. Hansen and M. Saxild-Hansen, "AIR tools—a MATLAB package of algebraic iterative reconstruction methods," J. Comput. Appl. Math. 236, 2167–2178 (2012).

40. A. H. Andersen and A. C. Kak, "Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm," Ultrason. Imaging 6, 81–94 (1984).

41. 3D Warehouse, Trimble, "Source of rendered camera model," available at https://3dwarehouse.sketchup.com/model/9a99ddf71fc253515561a71aee95364e/BlueViewMB1350onTripod

42. A. Almogi-Labin, R. Calvo, H. Elyashiv, R. Amit, Y. Harlavan, and H. Herut, "Sediment characterization of the Israeli Mediterranean shelf," Geol. Surv. Isr. Rep. GSI/27/2012, Isr. Oceanogr. Limnol. Res. Rep. H68 (2012).

43. G. Bradski, "The OpenCV Library," Dr. Dobb's J. Softw. Tools (2000).

44. Agisoft LLC, AgiSoft PhotoScan Professional, Version 1.2.6 (2016), retrieved from http://www.agisoft.com/downloads/installer/

45. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).

46. W. Matusik, C. Buehler, R. Raskar, S. J. Gortler, and L. McMillan, "Image-based visual hulls," in Proceedings of ACM International Conference on Computer Graphics and Interactive Techniques (ACM, 2000), pp. 369–374.

47. A. Levis, Y. Y. Schechner, and A. B. Davis, "Multiple-scattering microphysics tomography," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 1 (IEEE, 2017).

48. A. Geva, Y. Y. Schechner, Y. Chernyak, and R. Gupta, "X-ray computed tomography through scatter," in Proceedings of the European Conference on Computer Vision (ECCV) (Springer, 2018), pp. 37–54.

49. S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, "Acquiring scattering properties of participating media by dilution," in Proceedings of ACM Transactions on Graphics, vol. 25 (ACM, 2006), pp. 1003–1012.

Figures (9)

Fig. 1 The concept of an underwater optical tomography system. The volume includes water and a resuspended sediment cloud, and is divided into n voxels. The line of sight corresponding to pixel p is LOS_p.

Fig. 2 Simulations. (a) Scenario illustration: the cameras are distributed uniformly on a 125° arc at a height of 0.5 [m], facing the cloud from a 3 [m] distance. Note: for visualization we used an open-source 3D camera model [41]. (b) Representative images of several side views (water images I^Water, cloud images I, optical-depth images τ in the green channel). (c) The reconstructed β̂^Sed of the cloud. (d) Reconstruction errors vs. the number of IDS UI3260xCP-C cameras.

Fig. 3 (a) System design. (b) The camera housing is made of polycarbonate resin, sealed with a flat port, and contains an ODROID XU-4 computer, an IDS UI3260xCP-C camera with a Tamron M112FM12 12 [mm] lens, Li-ion batteries, and a nano USB WiFi adapter.

Fig. 4 (a) Side view of the system outside the pool. The nozzle emerges from the middle of the screen, and the camera rig is centered above the screen at a height of 2.5 [m]. (b) Top view of the system submerged in a seawater pool. (c) Submerged screen and active calibration board.

Fig. 5 Experiment: (a) Representative images from two cameras. Each camera yields a clear-water image I^Water; an image during resuspension I; the optical depth τ in the green channel; and a pruned optical-depth image τ^masked. (b) Initial reconstruction β̂^Sed(0) of the cloud in the green channel. (c) Final reconstruction β̂^Sed of the cloud in the green channel.

Fig. 6 (a) The estimated mass of a resuspension event, after each resuspension initiation. Each curve averages two experiment repetitions (shown as ∗ and ◦). (b) Average reconstructed mass for a 30 [mgr/cm³] source suspension density (μ̂^Sed_total,1) vs. average reconstructed mass for a 22.5 [mgr/cm³] source suspension density (μ̂^Sed_total,2).

Fig. 7 (a) An RGB image of the cloud, 46 [sec] after the cloud's initiation. (b) The estimated optical depth τ in the green channel, shown in false color. Negative values beyond the screen borders are due to scattered light contributing to the measured radiance.

Fig. 8 Calibration results of β^Sed vs. ρ^Sed for the RGB channels. The nonlinear domain is due to multiple scattering [49]. The images demonstrate the attenuation of the transmitted light beam as particle density increases.

Fig. 9 (a) Light path in a water tank from the entry aperture, through a glass beaker holding a particle suspension, to the camera side. (b) Top-side view. (c) Front view. (d) Water image I^Water.

Equations (17)


(1) \( i_p^{\rm Water} = i_p(0)\, \exp\!\left[ -\int_{X \in {\rm LOS}_p} \beta^{\rm Water}\, dX \right] + i_p^{\rm Ambient} \quad \left[ \tfrac{\rm W}{{\rm m}^2\,{\rm sr}} \right] \)

(2) \( \beta^{\rm Water} \equiv \beta_{\rm A}^{\rm Water} + \beta_{\rm S}^{\rm Water} \quad [{\rm m}^{-1}] \)

(3) \( \varpi^{\rm Water} = \dfrac{\beta_{\rm S}^{\rm Water}}{\beta_{\rm A}^{\rm Water} + \beta_{\rm S}^{\rm Water}} \)

(4) \( i_p = i_p(0)\, \exp\!\left[ -\int_{X \in {\rm LOS}_p} \left( \beta^{\rm Water} + \beta^{\rm Sed}(X) \right) dX \right] + i_p^{\rm Ambient} \quad \left[ \tfrac{\rm W}{{\rm m}^2\,{\rm sr}} \right] \)

(5) \( \tau_p \equiv \int_{X \in {\rm LOS}_p} \beta^{\rm Sed}(X)\, dX \)

(6) \( \tau_p = -\ln\!\left( \dfrac{i_p - i_p^{\rm Ambient}}{i_p^{\rm Water} - i_p^{\rm Ambient}} \right) \)
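As a minimal sketch (not the authors' implementation), the per-pixel optical-depth extraction of Eq. (6) can be written with NumPy; the `eps` clipping guard against non-positive log arguments is an assumption added here:

```python
import numpy as np

def optical_depth(i_cloud, i_water, i_ambient, eps=1e-6):
    """Per-pixel optical depth of the suspended sediment.

    i_cloud:   radiance during resuspension, i_p
    i_water:   clear-water reference radiance, i_p^Water
    i_ambient: ambient (path) radiance estimate, i_p^Ambient
    All arrays share the same shape; eps guards the log argument.
    """
    num = np.clip(i_cloud - i_ambient, eps, None)
    den = np.clip(i_water - i_ambient, eps, None)
    return -np.log(num / den)
```

Pixels where the cloud attenuates light (i_cloud < i_water) yield positive τ; equal radiances yield τ = 0.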
(7) \( \tau_p \approx \sum_{v} a_{p,v}\, \beta_v^{\rm Sed} \equiv \boldsymbol{\alpha}_p^{\top} \boldsymbol{\beta}^{\rm Sed} \)

(8) \( \boldsymbol{\tau} = {\bf A}\, \boldsymbol{\beta}^{\rm Sed} \)

(9) \( \hat{\boldsymbol{\beta}}^{\rm Sed} = \arg\min_{\boldsymbol{\beta}^{\rm Sed}} \left( \| {\bf A}\, \boldsymbol{\beta}^{\rm Sed} - \boldsymbol{\tau} \|_2^2 + \alpha \| \boldsymbol{\beta}^{\rm Sed} \|_2^2 \right) \;\; {\rm s.t.} \;\; \boldsymbol{\beta}^{\rm Sed} \ge 0 \)
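The paper solves this non-negative, Tikhonov-regularized least-squares problem with algebraic iterative methods (SART via AIR tools [39, 40]). As an illustrative alternative, not the authors' solver, the same objective can be handled by SciPy's bounded least squares after stacking a √α·I block under A:

```python
import numpy as np
from scipy.optimize import lsq_linear

def reconstruct_beta(A, tau, alpha=1e-2):
    """Solve min ||A b - tau||^2 + alpha ||b||^2  s.t.  b >= 0,
    by augmenting the system with a Tikhonov block."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    tau_aug = np.concatenate([tau, np.zeros(n)])
    res = lsq_linear(A_aug, tau_aug, bounds=(0.0, np.inf))
    return res.x
```

Here A holds the per-pixel chord lengths a_{p,v} through each voxel (computable by a fast voxel-traversal scheme [38]), and tau stacks the optical depths of all pixels.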
(10) \( \mu_v^{\rm Sed} = \rho_v^{\rm Mass}\, \vartheta \quad [{\rm gr}] \)

(11) \( \beta_v^{\rm Sed} = \sigma\, \rho_v^{\#} \quad [{\rm m}^{-1}] \)

(12) \( \beta_v^{\rm Sed} \approx b\, \rho_v^{\rm Mass} \quad [{\rm m}^{-1}] \)

(13) \( \hat{\mu}_v^{\rm Sed} = \dfrac{\vartheta}{b}\, \hat{\beta}_v^{\rm Sed} \quad [{\rm gr}] \)

(14) \( \hat{\mu}_{\rm total}^{\rm Sed} = \sum_{v} \hat{\mu}_v^{\rm Sed} \quad [{\rm gr}] \)
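A small sketch of the mass conversion above, assuming the reconstructed extinction, the voxel volume ϑ, and the calibrated slope b are all expressed in mutually consistent units:

```python
import numpy as np

def sediment_mass(beta_hat, voxel_volume, b):
    """Convert reconstructed extinction to suspended mass.

    beta_hat:     reconstructed extinction per voxel [1/m]
    voxel_volume: voxel volume (vartheta), in units consistent with b
    b:            calibrated slope of beta vs. mass density
    Returns per-voxel masses and their total.
    """
    mu_v = (voxel_volume / b) * beta_hat   # per-voxel mass, Eq.-style mu_v
    return mu_v, mu_v.sum()                # total suspended mass
```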
(15) \( \delta = \dfrac{\| \hat{\boldsymbol{\beta}}^{\rm Sed} \|_1 - \| \boldsymbol{\beta}^{\rm Sed} \|_1}{\| \boldsymbol{\beta}^{\rm Sed} \|_1} \)

(16) \( \epsilon_2 = \dfrac{1}{n} \cdot \dfrac{\| \hat{\boldsymbol{\beta}}^{\rm Sed} - \boldsymbol{\beta}^{\rm Sed} \|_2^2}{\max( \boldsymbol{\beta}^{\rm Sed} )} \)

(17) \( \beta^{\rm Sed} = \dfrac{1}{l} \ln\!\left( \dfrac{i_{\rm rec}^{\rm Water}}{i_{\rm rec}} \right) \)
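The beaker calibration (Fig. 8) measures β^Sed from transmission over a known path length l, then fits the slope b of β^Sed vs. ρ^Mass. A minimal sketch under assumed inputs (radiance pairs per sample, a caller-selected linear single-scattering domain, and a zero-intercept fit):

```python
import numpy as np

def calibrate_b(i_rec_water, i_rec, rho_mass, path_length):
    """Estimate the slope b in beta ≈ b * rho_mass from transmission data.

    i_rec_water: received radiance through clear water, per sample
    i_rec:       received radiance through the suspension, per sample
    rho_mass:    known mass densities of the calibration suspensions
    path_length: beam path length l through the suspension [m]
    """
    beta = np.log(np.asarray(i_rec_water) / np.asarray(i_rec)) / path_length
    rho = np.asarray(rho_mass, dtype=float)
    b = float(rho @ beta / (rho @ rho))  # zero-intercept least-squares slope
    return b, beta
```

Only samples in the linear domain of Fig. 8 should be passed in; at higher densities multiple scattering [49] breaks the linear model of Eq. (12).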