Abstract

This paper is concerned with the mitigation of backscatter effects in a single gated image. A range-intensity-profile prior dehazing method is proposed to estimate scene depth and finely remove water backscatter at different depths for underwater range-gated imaging. It is based on the prior that the target intensity is distributed according to range intensity profiles in gated images. The depth transmission and the depth-noise map are then calculated from the scene depth. A high-quality image is restored by subtracting the depth-noise map and dividing by the depth transmission. Simulation and experimental results show that the proposed method works well even if a portion of the estimated depth is smaller than its real value, and the peak signal-to-noise ratio of the dehazed images can be up to doubled.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Underwater images are typically characterized by poor visibility and low contrast because of water scattering. Active range-gated imaging (RGI) can suppress water scattering to extend the detection range by a factor of 2 to 3 over conventional underwater cameras [1,2]. It has been widely used in underwater target detection, navigation and marine scientific research [3–6]. However, in turbid water or long-range imaging, gated images still suffer from backscatter effects. The backscatter noise increases with the scene depth of gated viewing, which leads to spatially variable contrast loss and degradation. To improve image quality for underwater RGI applications, it is necessary to further suppress the backscatter.

A large number of methods have been proposed to solve the problem of image degradation. One of the most common is the dark channel prior (DCP) method. He et al. proposed a single image defogging method for haze images based on the prior that the dark pixels provide a rough approximation of the thickness of haze in hazy images [7]. Inspired by this, Yeh et al. proposed a pixel-based dark/bright channel prior method to process hazy images [8]. Borkar and Mukherjee used adaptive nearest neighbor regularization to change the patch size automatically and preserve the texture information [9]. DCP-based methods have also been used in underwater image dehazing. For instance, Drews Jr et al. developed an underwater DCP to estimate the transmission and reduce the haze effect [10]. Chiang and Chen handled light scattering, artificial lighting and color change distortions suffered by underwater images based on the DCP [11].

Some other methods focus on improving image quality by image processing, such as improving contrast or other possible indicators of the presence of haze. Oakley and Bu minimized a global cost function to reduce the haze [12]. Ghani and Isa integrated global and local contrast correction to improve the visibility of underwater images [13]. Ancuti et al. solved the defogging problem by multiscale depth fusion [14], where the depth map is found by minimizing a nonconvex potential in a random field. Efforts have also been made to improve image quality based on the human visual system, such as Retinex [15–17].

However, there has been little previous research on single image dehazing for underwater RGI. In our previous work [18], a 3D deblurring-gated range-intensity correlation imaging method was proposed to reduce the backscatter in 3D RGI. One additional reference image must be captured to calculate a depth-noise map with spatially variable backscatter. By subtracting the depth-noise map from two target gate images, new gate images with less noise can be obtained, and then 3D images with high range resolution are reconstructed. In that method, at least three images are needed to reduce the backscatter in the 3D depth map. To further improve the performance of 2D RGI based on a single image, in this paper we propose a range-intensity-profile prior (RIPP) dehazing method for underwater RGI.

2. Range-intensity-profile prior dehazing method

In underwater imaging, the backscatter noise varies spatially with scene depth. The main idea of the RIPP dehazing method is that the intensity of targets is distributed according to the range intensity profile (RIP) in gated images, so the target depth can be calculated from the RIP and the target intensity. The backscatter of the entire volume of interest (VOI) is estimated from dark channel values, and then the spatially variable backscatter and the depth transmission can be calculated. Finally, a high-quality image is restored by subtracting the backscatter and dividing by the depth transmission. In the paragraphs that follow, we elaborate on the proposed RIPP dehazing method.

Figure 1(a) shows the principle of RGI. For a typical RGI system, the gate of the intensified charge-coupled device (ICCD) opens when the target echo signal from the sampling volume of interest (VOI) reaches the ICCD, so that RGI can suppress the backscatter between the VOI and the imaging system. However, the backscatter within the VOI still contributes to the image. Figure 1(b) shows a triangular range intensity profile of the VOI formed by the convolution of the laser pulse and the gate pulse. The target signals of the fishes ${T_1}$, ${T_2}$ and ${T_3}$ in the VOI are distributed according to the RIP. Here, ${R_{gated}} = \tau v/2$ represents the gated range between the imaging system and the VOI, ${R_{begin}} = ({\tau - {t_L}} )v/2$ and ${R_{end}} = ({\tau + {t_g}} )v/2$ are the beginning and ending ranges of the VOI respectively, $\tau $ is the time delay between the laser pulse and the gate pulse, v is the light speed in water, ${t_L}$ is the laser pulse width, and ${t_g}$ is the gate pulse width. The captured gated image can be expressed as the sum of the target signal and the backscatter

$$I = ST + {I_b}$$
where ${I_b}$ is the backscatter in the gated image and ST is the target signal in the gated image [19], S represents the target signal without transmission attenuation, and T represents the depth transmission, indicating the portion of light that is received by the ICCD and neither scattered nor attenuated at range R, which can be expressed as
$$T = \frac{{\textrm{exp}({ - 2cR} )}}{{{R^2}}}$$
where c represents the water attenuation coefficient. If the depth transmission T and the spatially variable backscatter ${I_b}$, namely the depth-noise map [18], are obtained, the dehazing image S can be recovered by rearranging Eq. (1). Here, we propose a RIPP dehazing method for underwater RGI. The overall process of the proposed method is shown in Fig. 1(c).
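For concreteness, the gating geometry of Fig. 1 and the depth transmission of Eq. (2) can be sketched as below. This is our illustrative sketch: the function names are ours, and the default in-water light speed (roughly the vacuum speed divided by the refractive index 1.33) is an assumed value.

```python
import numpy as np

def gate_ranges(tau, t_L, t_g, v=2.25e8):
    """Gating geometry of Fig. 1: delay tau, laser pulse width t_L and
    gate pulse width t_g in seconds; v is the light speed in water (m/s)."""
    R_gated = tau * v / 2          # center range of the volume of interest
    R_begin = (tau - t_L) * v / 2  # beginning range of the VOI
    R_end = (tau + t_g) * v / 2    # ending range of the VOI
    return R_begin, R_gated, R_end

def depth_transmission(R, c):
    """Eq. (2): two-way attenuation exp(-2cR) with 1/R^2 geometric spreading."""
    return np.exp(-2 * c * R) / R**2
```

With 4 ns pulses and a 45 ns delay, for example, the gated range evaluates to about 5.1 m under the assumed in-water light speed.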


Fig. 1. Underwater RIPP dehazing method: (a) Principle of RGI; (b) Influence of backscatter noise on gated image intensity; (c) Flow chart of the proposed method.


2.1 Smooth target intensity map estimation

A smooth target intensity map is first calculated to provide smooth gray-level information of targets and to avoid the influence of target details in the depth calculation.

A median filter, which is fast, robust and edge-preserving, is used here for smoothing. The local average of I is computed as ${I_a} = \textrm{media}{\textrm{n}_w}(I )$, where $\textrm{median}()$ represents median filtering and w is the median filter size. To avoid destroying contrasted texture areas, the local standard deviation of I should be subtracted [20]. As the DCP can reflect the haze density in images [7], the dark channel of I is also subtracted to exclude haze noise. The smooth target intensity map is computed as

$${I_s} = \textrm{media}{\textrm{n}_w}({{I_a} - {I_d} - \textrm{media}{\textrm{n}_w}({{I_{dark}}} )} )$$
where the local standard deviation of I is calculated as ${I_d} = \textrm{media}{\textrm{n}_w}({I - {I_a}} )$, and the dark channel of I is obtained by ${I_{dark}} = \mathop {\min }_{({x,y} )\in \mathrm{\Theta }} ({\textrm{min}(I )} )$, where Θ is a local patch centered at pixel (x, y).
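A minimal sketch of Eq. (3) using SciPy median filters follows. The default window sizes, the function name, and the single-channel local-minimum form of the dark channel are our assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import median_filter, minimum_filter

def smooth_target_intensity(I, w=30, patch=15):
    """Sketch of Eq. (3) for a single-channel gated image I."""
    I = I.astype(float)
    I_a = median_filter(I, size=w)          # local average I_a = median_w(I)
    I_d = median_filter(I - I_a, size=w)    # local deviation term of Eq. (3)
    I_dark = minimum_filter(I, size=patch)  # dark channel via local minimum
    # I_s = median_w(I_a - I_d - median_w(I_dark))
    return median_filter(I_a - I_d - median_filter(I_dark, size=w), size=w)
```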

2.2 Range-intensity-profile prior

Supposing that the reflectivity variations of targets in the VOI can be ignored, a RIPP algorithm is developed to estimate the scene depth based on the smooth target intensity map and the prior that the intensity of targets in the VOI is distributed according to the RIP, as shown in Fig. 1(b). The RIP of a gated image is expressed as

$${\beta _{RIP}} \propto \frac{1}{{{R^2}}}\mathop \smallint \nolimits_0^\infty {P_L}(t )g(t )\textrm{d}t$$
where PL(t) and g(t) represent the laser pulse and the gate pulse, respectively.

Note that, for the RIP, each intensity value corresponds to two different depth values, except the highest intensity at range Rgated. One of these two depth values lies in the head signal section [Rbegin, Rgated], while the other lies in the tail signal section [Rgated, Rend]. From a single gated image, we cannot determine which section the targets are in. Here, we assume that the targets are either all in the head signal section or all in the tail signal section. The real backscatter for targets in the tail signal section is larger than that for targets in the head signal section. If we assume all the targets are in the tail signal section, the estimated backscatter may be larger than its real value for targets that are actually in the head signal section, so that their target signal would be reduced and destroyed. This is not acceptable, as photons are precious in underwater imaging, especially at long range. Besides, the backscatter decreases rapidly with increasing depth [19], and the total backscatter in the head signal section is much larger than that in the tail signal section. As a result, we assume all the targets are in the head signal section. For targets that are indeed in the head signal section, the backscatter is correctly calculated and can be removed, while for targets that are in the tail signal section, the estimated depth is smaller than the real depth, and only part of the backscatter is eliminated.

In most captured gated images, the target at Rgated has the highest intensity. For gated images without saturation effects, we pick the brightest 10% (a fraction of 0.1) of pixels in the smooth target intensity map Is. The target intensity located at ${R_{gated}}$ is obtained by averaging the intensity of these pixels. The chosen percentage of brightest pixels is related to the proportion of target area at Rgated in the gated images. If the proportion of target area is larger than 0.1, averaging the brightest 10% of pixels represents the target intensity at Rgated well. When the targets are too small, the percentage of brightest pixels should be reduced below 0.1 according to the target size.

As the target intensity at range Rbegin is close to zero, the lowest intensity in Is is chosen as the intensity located at Rbegin. Once the target intensities located at Rgated and Rbegin are known, the target depth R is estimated from the RIP, as for the fishes in Fig. 1(b), where the smooth target intensity map is used as the target intensity. The estimated depth is subject to the constraint ${R_{begin}} \le R \le {R_{gated}}$, and any calculated depth larger than Rgated is clipped to Rgated.
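Under the head-section assumption above, the depth inversion can be sketched as below. For clarity we assume a purely linear head-section ramp of the triangular RIP (the 1/R² and attenuation modulation of Eq. (4) are neglected); the function name and the bright-pixel fraction default are ours.

```python
import numpy as np

def estimate_depth(I_s, R_begin, R_gated, bright_frac=0.1):
    """Map the smooth target intensity I_s to depth on the head section
    [R_begin, R_gated] of a triangular RIP; brighter pixels map deeper,
    and depths beyond R_gated are clamped to R_gated."""
    flat = np.sort(I_s.ravel())
    n = max(1, int(bright_frac * flat.size))
    I_max = flat[-n:].mean()   # average of brightest pixels -> intensity at R_gated
    I_min = flat[0]            # lowest intensity -> intensity at R_begin
    frac = np.clip((I_s - I_min) / (I_max - I_min + 1e-12), 0.0, 1.0)
    return R_begin + frac * (R_gated - R_begin)
```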

2.3 Depth-noise map estimation

Although the dark channel can approximate the thickness of haze, it may be higher than the real backscatter in target areas for RGI. To avoid the influence of these targets, the dark channel values that are smaller than their mean value are selected. The mean of these selected values is taken as the backscatter of the entire VOI. The depth-noise map at range R is expressed as

$$I_b^{\prime} = mean[{I_{dark}} < mean({{I_{dark}}} )]\frac{{\mathop \smallint \nolimits_{{R_{begin}}}^R \frac{{\textrm{exp}({ - 2cr} )}}{{{r^2}}}\textrm{d}r}}{{\mathop \smallint \nolimits_{{R_{begin}}}^{{R_{end}}} \frac{{\textrm{exp}({ - 2cr} )}}{{{r^2}}}\textrm{d}r}}$$
where $mean$() represents calculating the mean value of the data, and ${I_{dark}} < mean({{I_{dark}}} )$ selects the dark channel values that are smaller than their mean value.
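Eq. (5) can be evaluated numerically as sketched below; the helper name and the use of `scipy.integrate.quad` for the two integrals are our choices for illustration.

```python
import numpy as np
from scipy.integrate import quad

def depth_noise_map(I_dark, R_map, R_begin, R_end, c):
    """Eq. (5): scale the VOI backscatter estimate by the fraction of
    transmission-weighted path accumulated up to each pixel's depth R."""
    T = lambda r: np.exp(-2 * c * r) / r**2
    total, _ = quad(T, R_begin, R_end)
    # backscatter of the entire VOI from below-mean dark-channel values
    b_voi = I_dark[I_dark < I_dark.mean()].mean()
    partial = np.vectorize(lambda R: quad(T, R_begin, R)[0])(R_map)
    return b_voi * partial / total
```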

2.4 Recovering the dehazing image

The dehazing gated image is obtained by rearranging Eq. (1) as

$$S = \frac{{I - I_b^{\prime}}}{{\textrm{max}\left( {\frac{T}{{\max (T )}}, {T_{low}}} \right) }}$$
where $\textrm{max}()$ represents calculating the maximum value of the data. The depth transmission T is normalized by dividing by its maximum. If T is close to zero, the target signal in the gated image will also be close to zero. Therefore, a lower bound Tlow is required to keep a small amount of haze and preserve the depth information of the target. A typical value of Tlow is 0.1 [7].
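The recovery step of Eq. (6) is a single elementwise operation; a minimal sketch (function name ours) is:

```python
import numpy as np

def recover_dehazed(I, I_b, T, T_low=0.1):
    """Eq. (6): subtract the depth-noise map, then divide by the
    normalized depth transmission, bounded below by T_low."""
    T_norm = np.maximum(T / T.max(), T_low)
    return (I - I_b) / T_norm
```

The lower bound prevents the division from blowing up noise at far ranges where the normalized transmission falls below T_low.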

3. Numerical simulation

To verify the feasibility and superiority of the proposed method, we applied the RIPP dehazing method to Monte Carlo simulated gated images. Real underwater scene images and their depth maps are hard to obtain, so we simulate the underwater gated images based on the NYU Depth Dataset V2 [21], which contains both realistic scene images and their depth maps. One hundred million photons are simulated to generate each gated image with a resolution of 369×492 pixels. The illuminator has a center wavelength of 532 nm. Both the laser pulse width and the gate pulse width are 4 ns. The time delay is set according to the scene depth. In order to verify the effectiveness of the proposed method quantitatively, the images are assessed in terms of peak signal-to-noise ratio (PSNR) and mean structural similarity index (MSSIM) [22]. A noiseless reference gated image is generated when the water attenuation coefficient is 0.05 /m.

To explore the applicability of the proposed method, images at different optical depths are generated by changing the water attenuation coefficient in the Monte Carlo simulation. Figures 2 and 3 show the simulation results. The depth of the table-and-chair image in Fig. 2(a) is about 6.74 m, and the time delay is 45 ns. The depth of the bike image in Fig. 3(a) is about 2.79 m, and the time delay is 19 ns. For both images, the top row shows the original Monte Carlo simulated gated images. The backscatter gradually increases and obscures the background and signal with increasing optical depth. The bottom row shows the dehazing results of the proposed method. The filter window size is 30×30. The brightest 10% of pixels, which are used to calculate the target intensity at Rgated, are bounded by red lines in the first original image. Figures 2(b) and 2(c) show the PSNR and MSSIM changes of Fig. 2(a), respectively, and Figs. 3(b) and 3(c) show those of Fig. 3(a). The left columns show the PSNR or MSSIM of the Monte Carlo simulated images at different optical depths. The right columns show the PSNR-increase-ratio or MSSIM-increase-ratio of the dehazed images at different optical depths. The PSNR-increase-ratio ${\eta _{PSNR}}$ and the MSSIM-increase-ratio ${\eta _{MSSIM}}$ are defined respectively by

$${\eta _{PSNR}} = \frac{{\textrm{PSNR}{_{dehaze}} - \textrm{PSNR}{_{original}}}}{{\textrm{PSNR}{_{original}}}}$$
$${\eta _{MSSIM}} = \frac{{\textrm{MSSIM}{_{dehaze}} - \textrm{MSSIM}{_{original}}}}{{\textrm{MSSIM}{_{original}}}}$$
where $\textrm{PSN}{\textrm{R}_{original}}$ and $\textrm{PSNR}{_{dehaze}}$ represent the PSNR of the original image and the dehazed image respectively, and $\textrm{MSSIM}{_{original}}$ and $\textrm{MSSIM}{_{dehaze}}$ represent the MSSIM of the original image and the dehazed image respectively. We use a polynomial fit to capture the trend of the PSNR/MSSIM-increase-ratio, which shows how much the image quality improves. The trend in Fig. 2(c) is not intuitively obvious, but it is still similar to the other fitting lines. One can see from this trend that when the water attenuation coefficient in the Monte Carlo simulation is small, the gated image is clear and has little backscatter, so that the RIPP dehazing method achieves relatively little improvement in PSNR and MSSIM. When the water attenuation coefficient is so large that backscatter dominates the original image, the RIPP dehazing method cannot distinguish the backscatter from the target signal, and it again achieves relatively little performance improvement. When the target is in a range from about 2 to 6 attenuation lengths (ALs), the RIPP dehazing method not only gives visually clean dehazed images, but also improves the image quality indexes quantitatively. For instance, for the table-and-chair image, PSNR increases by more than 12% and MSSIM by more than 30% when the target is in a range from about 2 to 6 ALs [bounded by the green boxes in Figs. 2(b) and 2(c)]. For the bike image, PSNR increases by more than about 10% and MSSIM by more than 25% when the imaging range is from about 2 to 6 ALs [bounded by the green boxes in Figs. 3(b) and 3(c)].
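The metrics of Eqs. (7)–(8) follow the standard definitions. A pure-NumPy sketch of the PSNR-increase-ratio is shown below; the 8-bit peak value of 255 and the function names are assumptions for illustration.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB of img against a reference ref."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def psnr_increase_ratio(ref, original, dehazed):
    """Eq. (7): relative PSNR gain of the dehazed over the original image."""
    p_orig, p_dehaze = psnr(ref, original), psnr(ref, dehazed)
    return (p_dehaze - p_orig) / p_orig
```

The MSSIM-increase-ratio of Eq. (8) has the same form, with the PSNR values replaced by MSSIM values.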


Fig. 2. Simulation result of table and chair images at different optical depths: (a) Original images and their dehazing images; (b) PSNR of original images and PSNR-increase-ratio of their dehazing images; (c) MSSIM of original images and MSSIM-increase-ratio of their dehazing images.



Fig. 3. Simulation result of bike images at different optical depths: (a) Original images and their dehazing images; (b) PSNR of original images and PSNR-increase-ratio of their dehazing images; (c) MSSIM of original images and MSSIM-increase-ratio of their dehazing images.


A key parameter in our algorithm is the size of the filter window. The filtering effect is weakened and speckle artifacts may occur with a small filter window, while a large filter window can blur the image and cause loss of detail. For the dehazing images in Figs. 2 and 3, the window size is 30×30. The original images of Fig. 2(a) at 4.04 ALs and Fig. 3(a) at 4.14 ALs are processed with other window sizes for comparison. In Fig. 4(a), the window size is 5×5. Most of the backscatter is reduced, but there are speckle artifacts on the targets. In Figs. 4(b) and 4(c), the window sizes are 60×60 and 90×90, respectively. These results appear more natural than Fig. 4(a), but contain more backscatter than the dehazing images in the middle column of Figs. 2(a) and 3(a). The results show that the dehazing images become more blurred with a larger filter window. In practical applications, one can choose an appropriate window size for each image. In the remainder of this paper, we use a window size of 30×30.


Fig. 4. Simulation results of images with different filter windows: (a) 5×5; (b) 60×60; (c) 90×90.


4. Experimental results and discussion

The RIPP dehazing method is evaluated on real underwater gated images covering different underwater scenes.

Figure 5 presents the original and dehazed images of underwater natural creatures, including water weeds and jellyfish. The laser illuminator has a center wavelength of 532 nm. The typical operating frequency of the laser is 30 kHz, with an average power of about 0.5 W. The gated camera has a gated GEN II intensifier, which is coupled to a CCD with 1360×1024 pixels and 8-bit depth. The laser pulse width and the gate pulse width are both set to 4 ns. The original gated images are shown in the left column of Fig. 5. Backscatter haze lowers the image quality and masks creature structures. The single scale Retinex (SSR) [15], histogram equalization (HE) [23], DCP [7], and RIPP dehazing methods are used to enhance the original images. For the RIPP dehazing method, the water attenuation coefficient is 0.6 /m and the filter window size is 30×30. The SSR and HE methods enhance the target signal while amplifying the backscatter. DCP reduces the backscatter, but the target signal is also destroyed. The RIPP dehazing method greatly suppresses the backscatter and restores the detail features of the targets.


Fig. 5. Experimental results of underwater natural creatures with time delay from top to bottom: $\tau $ = 23 ns; $\tau $ = 19 ns; $\tau $ = 20 ns; $\tau $ = 26 ns.


In Table 1, the PSNR of the original images and the PSNR-increase-ratio of the processed images show that the SSR method reduces the PSNR of the original image, and the DCP method has a relatively low PSNR-increase-ratio. This is consistent with their poor visual dehazing quality. Although the PSNR-increase-ratio of the HE method may sometimes be higher than that of the RIPP dehazing method, HE amplifies the backscatter and confuses targets with background. The RIPP dehazing method outperforms these methods, with a marked visual and quantitative improvement.


Table 1. PSNR-increase-ratio comparison of dehazing natural creature images

Figure 6 shows the experimental results for artificial targets, including a fishing net and a standard pyramid [18]. Different from the system of Fig. 5, the laser of this system has an average power of about 1.5 W. The gated GEN II intensifier is coupled to a CCD with 1024×1024 pixels. The laser pulse width and the gate pulse width are 4 ns. The left column in Fig. 6 shows the original gated images, in which backscatter obscures the scene. The same methods as in Fig. 5 are used here to enhance the images. For the RIPP dehazing method, the water attenuation coefficient is 0.5 /m and the filter window size is 30×30. Table 2 shows the PSNR of the original images and the PSNR-increase-ratio of the processed images. The SSR method increases the backscatter and blurs the images, with a low PSNR-increase-ratio. The DCP method destroys some target signal and amplifies the backscatter. The HE method also amplifies the backscatter, even though it sometimes has a relatively higher PSNR-increase-ratio than the SSR and DCP methods. The RIPP dehazing method reduces the backscatter and enhances the clarity of the targets.


Fig. 6. Experimental results of artificial targets with time delay from top to bottom: $\tau $ = 23 ns, $\tau $ = 23 ns, $\tau $ = 140 ns.



Table 2. PSNR-increase-ratio comparison of dehazing artificial targets images

5. Conclusion and discussion

In this paper, a RIPP dehazing method for underwater RGI is proposed. Only the RGI parameters of time delay, laser pulse width and gate pulse width are needed for dehazing. We develop a RIPP algorithm to estimate the scene depth from a single gated image. Then, the depth transmission and the depth-noise map are calculated, and the dehazed images are obtained by rearranging Eq. (1). Simulation and experimental results demonstrate that the proposed method further improves the performance of underwater RGI and outperforms the SSR, HE and DCP methods.

Funding

Youth Innovation Promotion Association of the Chinese Academy of Sciences (2017155); National Natural Science Foundation of China (U1736101); Strategic Priority Program of the Chinese Academy of Sciences (XDC03060103); National Natural Science Foundation of China (61875189).

Acknowledgment

The authors acknowledge the financial funding of this work.

Disclosures

The authors declare no conflicts of interest.

References

1. A. Weidemann, G. R. Fournier, L. Forand, and P. Mathieu, “In harbor underwater threat detection/identification using active imaging,” Proc. SPIE 5780(1), 59–70 (2005). [CrossRef]  

2. P. Church, W. Hou, G. Fournier, F. Dalgleish, D. Butler, S. Pari, M. Jamieson, and D. Pike, “Overview of a hybrid underwater camera system,” Proc. SPIE 9111, 1–7 (2014). [CrossRef]  

3. J. Busck, “Underwater 3-d optical imaging with a gated viewing laser radar,” Opt. Eng. 44(11), 116001 (2005). [CrossRef]  

4. M. Laurenzis, F. Christnacher, and D. Monnin, “Long-range three-dimensional active imaging with superresolution depth mapping,” Opt. Lett. 32(21), 3146–3148 (2007). [CrossRef]  

5. X. Wang, Y. Li, and Y. Zhou, “Triangular-range-intensity profile spatial-correlation method for 3d super-resolution range-gated imaging,” Appl. Opt. 52(30), 7302–7406 (2013). [CrossRef]  

6. P. Mariani, I. Quincoces, K. H. Haugholt, Y. Chardard, A. W. Visser, C. Yates, G. Piccinno, G. Reali, P. Risholm, and G. T. Thielemann, “Range-Gated Imaging System for Underwater Monitoring in Ocean Environment,” Sustainability 11(1), 162 (2019). [CrossRef]  

7. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

8. C. H. Yeh, L. W. Kang, M. S. Lee, and C. Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” Opt. Express 21(22), 27127–27141 (2013). [CrossRef]  

9. K. Borkar and S. Mukherjee, “Single image dehazing by approximating and eliminating the additional airlight component,” Neurocomputing 400, 294–308 (2020). [CrossRef]  

10. P. Drews Jr, E. do Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission Estimation in Underwater Single Images,” in IEEE International Conference on Computer Vision Workshops (2013), pp. 825–830.

11. J. Y. Chiang and Y. C. Chen, “Underwater Image Enhancement by Wavelength Compensation and Dehazing,” IEEE Trans. on Image Process. 21(4), 1756–1769 (2012). [CrossRef]  

12. J. P. Oakley and H. Bu, “Correction of Simple Contrast Loss in Color Images,” IEEE Trans. on Image Process. 16(2), 511–522 (2007). [CrossRef]  

13. A. S. A. Ghani and N. A. M. Isa, “Enhancement of low quality underwater image through integrated global and local contrast correction,” Appl. Soft Comput. 37, 332–344 (2015). [CrossRef]  

14. C. O. Ancuti and C. Ancuti, “Single-Scale Fusion,” IEEE Trans. on Image Process. 22(8), 3271–3282 (2013). [CrossRef]  

15. Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016). [CrossRef]  

16. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016). [CrossRef]  

17. A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8212–8221.

18. M. Wang, X. Wang, L. Sun, Y. Yang, and Y. Zhou, “Underwater 3D deblurring-gated range-intensity correlation imaging,” Opt. Lett. 45(6), 1455–1458 (2020). [CrossRef]  

19. X. Wang, Y. Zhou, and Y. Liu, “Impact of echo broadening effect on active range-gated imaging,” Chin. Opt. Lett. 10(10), 32–34 (2012). [CrossRef]  

20. J. P. Tarel and H. Nicolas, “Fast visibility restoration from a single color or gray level image,” in IEEE International Conference on Computer Vision (ICCV) (2010), pp. 4780–4788.

21. N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor Segmentation and Support Inference from RGBD Images,” in Proceedings of the 12th European conference on Computer Vision - Volume Part V. Springer, Berlin, Heidelberg, (2012).

22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

23. J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. on Image Process. 9(5), 889–896 (2000). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. A. Weidemann, G. R. Fournier, L. Forand, and P. Mathieu, “In harbor underwater threat detection/identification using active imaging,” Proc. SPIE 5780(1), 59–70 (2005).
    [Crossref]
  2. P. Church, W. Hou, G. Fournier, F. Dalgleish, D. Butler, S. Pari, M. Jamieson, and D. Pike, “Overview of a hybrid underwater camera system,” Proc. SPIE 9111, 1–7 (2014).
    [Crossref]
  3. J. Busck, “Underwater 3-d optical imaging with a gated viewing laser radar,” Opt. Eng. 44(11), 116001 (2005).
    [Crossref]
  4. M. Laurenzis, F. Christnacher, and D. Monnin, “Long-range three-dimensional active imaging with superresolution depth mapping,” Opt. Lett. 32(21), 3146–3148 (2007).
    [Crossref]
  5. X. Wang, Y. Li, and Y. Zhou, “Triangular-range-intensity profile spatial-correlation method for 3d super-resolution range-gated imaging,” Appl. Opt. 52(30), 7302–7406 (2013).
    [Crossref]
  6. P. Mariani, I. Quincoces, K. H. Haugholt, Y. Chardard, A. W. Visser, C. Yates, G. Piccinno, G. Reali, P. Risholm, and G. T. Thielemann, “Range-Gated Imaging System for Underwater Monitoring in Ocean Environment,” Sustainability 11(1), 162 (2019).
    [Crossref]
  7. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).
    [Crossref]
  8. C. H. Yeh, L. W. Kang, M. S. Lee, and C. Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” Opt. Express 21(22), 27127–27141 (2013).
    [Crossref]
  9. K. Borkar and S. Mukherjee, “Single image dehazing by approximating and eliminating the additional airlight component,” Neurocomputing 400, 294–308 (2020).
    [Crossref]
  10. P. Drews, E. do Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission Estimation in Underwater Single Images,” in IEEE International Conference on Computer Vision Workshops (2013), pp. 825–830.
  11. J. Y. Chiang and Y. C. Chen, “Underwater Image Enhancement by Wavelength Compensation and Dehazing,” IEEE Trans. on Image Process. 21(4), 1756–1769 (2012).
    [Crossref]
  12. J. P. Oakley and H. Bu, “Correction of Simple Contrast Loss in Color Images,” IEEE Trans. on Image Process. 16(2), 511–522 (2007).
    [Crossref]
  13. A. S. A. Ghani and N. A. M. Isa, “Enhancement of low quality underwater image through integrated global and local contrast correction,” Appl. Soft Comput. 37, 332–344 (2015).
    [Crossref]
  14. C. O. Ancuti and C. Ancuti, “Single-Scale Fusion,” IEEE Trans. on Image Process. 22(8), 3271–3282 (2013).
    [Crossref]
  15. Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016).
    [Crossref]
  16. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016).
    [Crossref]
  17. A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8212–8221.
  18. M. Wang, X. Wang, L. Sun, Y. Yang, and Y. Zhou, “Underwater 3D deblurring-gated range-intensity correlation imaging,” Opt. Lett. 45(6), 1455–1458 (2020).
    [Crossref]
  19. X. Wang, Y. Zhou, and Y. Liu, “Impact of echo broadening effect on active range-gated imaging,” Chin. Opt. Lett. 10(10), 32–34 (2012).
    [Crossref]
  20. J. P. Tarel and H. Nicolas, “Fast visibility restoration from a single color or gray level image,” in IEEE International Conference on Computer Vision (ICCV) (2010), pp. 4780–4788.
  21. N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor Segmentation and Support Inference from RGBD Images,” in Proceedings of the 12th European conference on Computer Vision - Volume Part V. Springer, Berlin, Heidelberg, (2012).
  22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
    [Crossref]
  23. J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. on Image Process. 9(5), 889–896 (2000).
    [Crossref]


Figures (6)

Fig. 1. Underwater RIPP dehazing method: (a) principle of RGI; (b) influence of backscatter noise on gated image intensity; (c) flow chart of the proposed method.
Fig. 2. Simulation results of table and chair images at different optical depths: (a) original images and their dehazed images; (b) PSNR of the original images and PSNR increase ratio of their dehazed images; (c) MSSIM of the original images and MSSIM increase ratio of their dehazed images.
Fig. 3. Simulation results of bike images at different optical depths: (a) original images and their dehazed images; (b) PSNR of the original images and PSNR increase ratio of their dehazed images; (c) MSSIM of the original images and MSSIM increase ratio of their dehazed images.
Fig. 4. Simulation results of images with different filter windows: (a) 5×5; (b) 60×60; (c) 90×90.
Fig. 5. Experimental results of underwater natural creatures with time delay, from top to bottom: τ = 23 ns; τ = 19 ns; τ = 20 ns; τ = 26 ns.
Fig. 6. Experimental results of artificial targets with time delay, from top to bottom: τ = 23 ns; τ = 23 ns; τ = 140 ns.

Tables (2)

Table 1. PSNR increase ratio comparison of dehazed natural creature images
Table 2. PSNR increase ratio comparison of dehazed artificial target images

Equations (8)

$$I = S\,T + I_b \tag{1}$$
$$T = \frac{\exp(-2cR)}{R^2} \tag{2}$$
$$I_s = \mathrm{median}_w\!\left(\frac{I_a - I_d}{\mathrm{median}_w(I_{dark})}\right) \tag{3}$$
$$\beta_{RIP} \propto \frac{1}{R^2}\int_0^{\infty} P_L(t)\,g(t)\,dt \tag{4}$$
$$I_b = \mathrm{mean}\!\left[I_{dark} < \mathrm{mean}(I_{dark})\right]\cdot \frac{\displaystyle\int_{R_{begin}}^{R} \frac{\exp(-2cr)}{r^2}\,dr}{\displaystyle\int_{R_{begin}}^{R_{end}} \frac{\exp(-2cr)}{r^2}\,dr} \tag{5}$$
$$S = \frac{I - I_b}{\max\!\left(\dfrac{T}{\max(T)},\, T_{low}\right)} \tag{6}$$
$$\eta_{PSNR} = \frac{\mathrm{PSNR}_{dehaze} - \mathrm{PSNR}_{original}}{\mathrm{PSNR}_{original}} \tag{7}$$
$$\eta_{MSSIM} = \frac{\mathrm{MSSIM}_{dehaze} - \mathrm{MSSIM}_{original}}{\mathrm{MSSIM}_{original}} \tag{8}$$
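As a rough illustration (not the authors' implementation), the restoration step described in the abstract — subtract the depth-noise map, then divide by the depth transmission with a lower clip — can be sketched in Python. It assumes a per-pixel depth map `R`, the attenuation coefficient `c`, and the backscatter term `I_b` have already been estimated; the function name and the default floor `T_low` are hypothetical choices for this sketch.

```python
import numpy as np

def dehaze_gated(I, R, c, I_b, T_low=0.1):
    """Sketch of the RIPP restoration: S = (I - I_b) / max(T / max(T), T_low)."""
    # Depth transmission, T = exp(-2 c R) / R^2 (guard against R -> 0)
    T = np.exp(-2.0 * c * R) / np.maximum(R, 1e-6) ** 2
    # Normalize and clip from below so distant pixels are not over-amplified
    T_norm = np.maximum(T / T.max(), T_low)
    # Subtract the depth-noise map, divide by the clipped transmission
    S = (I - I_b) / T_norm
    return np.clip(S, 0.0, None)
```

The lower clip `T_low` plays the same role as in haze removal: without it, the `1/T` amplification diverges at large optical depth and noise in the farthest gate slices would dominate the restored image.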
