Optica Publishing Group

Underwater image restoration based on adaptive parameter optimization of the physical model

Open Access

Abstract

Underwater images carry high information density and are widely used for marine information acquisition. Because the underwater environment is complex, captured images are often unsatisfactory, suffering from color distortion, low contrast, and blurred details. Physical model-based methods are commonly used to obtain clear underwater images; however, water absorbs light selectively, so parameter estimates based on a priori knowledge are often inaccurate and the resulting restoration is ineffective. This paper therefore proposes an underwater image restoration method based on adaptive parameter optimization of the physical model. First, an adaptive color constancy algorithm is designed to estimate the background light value of the underwater image, which effectively preserves the color and brightness of the image. Second, to address halos and edge blur in underwater images, a smoothness and uniformity transmittance estimation algorithm is proposed so that the estimated transmittance is smooth and uniform, eliminating halos and blur. Then, to further smooth the edge and texture details of the underwater image, a transmittance optimization algorithm for smoothing edge and texture details is proposed, making the obtained scene transmittance more natural. Finally, the underwater imaging model is combined with a histogram equalization algorithm to eliminate image blurring while retaining more image details. Qualitative and quantitative evaluation on the underwater image dataset (UIEBD) shows that the proposed method has clear advantages in color restoration, contrast, and overall effect, and it performs well in application tests. These results show that the proposed method can effectively restore degraded underwater images and provide a theoretical basis for the construction of underwater imaging models.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The oceans are rich in resources that sustain life on earth and make an important contribution to the development of human society [1,2]. With the deepening of marine resource exploration, marine environment monitoring, underwater life science, and marine defense, underwater optical imaging technology has received increasing attention [3,4]. As an important carrier of marine information, underwater images carry high information density and have been widely used in underwater target identification, underwater scientific research, and hydrographic data measurement [5,6]. Because the underwater environment is complex, water and suspended particles absorb and scatter light, causing color distortion, low contrast, and blurred details in underwater images; these problems seriously degrade underwater optical imaging performance and hinder accurate information acquisition and subsequent processing [7,8]. In the actual imaging process, the imaging principles of underwater and foggy images are very similar: both involve absorption by the medium and scattering by suspended particles, so physical model-based methods are often used to obtain clear underwater images [9]. However, water absorbs light selectively, and methods that estimate the parameters of physical models from a priori knowledge often fail to reflect the underlying theory, so the parameters are estimated inaccurately and the restoration is poor. It is therefore particularly important to estimate and optimize the model parameters of underwater images accurately; this is not only essential for acquiring high-quality underwater images but also provides a theoretical basis for the construction of underwater imaging models.

To recover the real scene information of underwater images, researchers have proposed a series of methods. These methods can be roughly divided into image restoration methods based on physical models, image enhancement methods based on non-physical models, and deep learning methods. (1) Physical model-based restoration methods recover the real scene of an underwater image by establishing an underwater optical imaging model. In Ref. [10], the dark channel prior (DCP) algorithm [11] is improved and applied to underwater image recovery: the resulting underwater dark channel prior (UDCP) algorithm focuses mainly on the blue and green channels, and the processed underwater images achieve satisfactory results. In Ref. [12], an underwater scene depth estimation method based on image blur and light absorption is proposed; the authors use the depth of field to estimate the model parameters, which prevents image color deviation. In Ref. [13], based on the law that different wavelengths attenuate at different rates, the authors propose an underwater image restoration algorithm with an automatic red channel prior, which can restore colors associated with short wavelengths. Ref. [14] proposes a method that estimates the transmittance by considering both the dark channel prior and a piecewise smoothing assumption, which effectively eliminates image blur. In Ref. [15], combining the dark channel prior with morphological reconstruction allows the transmittance to be estimated quickly, with a degree of real-time performance. However, the key to the success of physical model-based underwater image restoration is the accurate estimation of the model parameters, and parameters estimated from prior knowledge often fail to reflect the actual theory.
(2) Non-physical-model enhancement methods improve image quality by adjusting pixel intensity values in the spatial domain or filtering in the transform domain. Common image enhancement methods include histogram equalization [16] and Retinex-based enhancement [17], which can effectively improve the clarity and color saturation of an image. Ref. [18] proposed a fusion-based image enhancement method, which effectively corrects the color deviation and contrast of the image and achieves good enhancement results. Ref. [19] proposed a wavelength compensation and image dehazing (WCID) algorithm; by compensating the attenuation difference along the propagation path and considering the possible influence of artificial light sources, it obtains images with significantly enhanced visibility and excellent color fidelity. However, non-physical-model enhancement methods often ignore the relationship between the degree of underwater image degradation and the scene depth, and they have limitations when dealing with turbid underwater images. (3) With the rapid development of artificial intelligence, deep learning methods are widely used in underwater image restoration. Researchers often train convolutional neural networks (CNNs) on underwater image sets, and clear underwater images can be obtained without any prior knowledge. Ref. [20] combined a CNN with a hybrid wavelet and directional filter (HWD) to obtain clear underwater images and achieved certain results. Ref. [21] used a new multi-scale dense generative adversarial network (GAN) to enhance underwater images and achieved remarkable results. However, training a deep learning model requires a large image dataset, a long training time, and a high hardware configuration. Moreover, the physical mechanism such models implement is unclear, and the authenticity of the generated underwater images is rarely verified.

The above methods restore underwater images to a certain extent and have made some progress, but none of them is perfect. To effectively recover the real scene information of underwater images while accounting for the physical characteristics of underwater light propagation, it is necessary to make full use of the underwater optical imaging model; the key is therefore to accurately estimate and optimize the transmittance and background light value of underwater images. We thus propose an adaptive parameter optimization underwater image restoration method based on a physical model, which comprises four modules: background light value estimation, transmittance estimation, transmittance optimization, and image restoration. First, an adaptive color constancy algorithm is designed to estimate the background light value of the underwater image, which effectively preserves the color and brightness of the image. Second, to address the halo and edge blurring caused by the absorption of light by water, a smoothness and uniformity transmittance estimation algorithm is proposed so that the estimated transmittance is smooth and uniform, eliminating image halos and blurring. Then, to further smooth the edge and texture details of the underwater image, we propose a transmittance optimization algorithm with smooth edge and texture details, making the obtained scene transmittance more natural. Finally, we combine the underwater imaging model with a histogram equalization algorithm to eliminate image blurring and retain more image details. The main contributions of this paper are summarized as follows:

  • (1) An underwater image restoration method based on adaptive parameter optimization of a physical model is proposed. It can effectively estimate the physical model parameters and restore degraded underwater images, providing a theoretical basis for the construction of underwater imaging models.
  • (2) An adaptive color constancy background light estimation algorithm is designed to effectively preserve the color and brightness of underwater images.
  • (3) A smoothness and uniformity transmittance estimation algorithm is proposed to make the estimated transmittance smoother and more uniform, which effectively eliminates halos and blur in the image.
  • (4) A transmittance optimization algorithm with smooth edge and texture details is proposed to further smooth the edge and texture details of underwater images, making the scene transmittance more natural.

The structure of this paper is as follows: In Section 2, the optical imaging model and dark channel prior theory are briefly introduced. In Section 3, the main ideas and theoretical basis of our proposed method are described in detail. In Section 4, the research results are analyzed and discussed from both qualitative and quantitative aspects, and the application test is carried out to illustrate the effectiveness of the proposed method. In Section 5, the work of this paper is summarized.

2. Related works

In this section, the basic principles of underwater optical imaging model and dark channel prior are briefly introduced, which is the basis of the proposed method. On this basis, the motivation of the proposed method is introduced.

2.1 Underwater optical imaging model

According to the Jaffe-McGlamery underwater optical imaging model [22], an underwater image is mainly composed of three parts: a direct transmission component, a forward scattering component, and a backward scattering component. The direct transmission component is the light reflected by the object that reaches the camera directly; the forward scattering component is produced by light that deviates randomly before reaching the camera lens. The backscattering component is ambient light scattered toward the camera by suspended particles; it casts a fog-like veil over the image and is the main cause of reduced contrast and color shift. Compared with the backscattering component, the forward scattering component has less influence on image quality, so its influence on the image is not considered.

In order to reduce the complexity of the underwater optical imaging model, the simplified model can be expressed as:

$${E_c}(x) = {Q_c}(x){t_c}(x) + {A_c}(1 - {t_c}(x)),c \in \{ r,g,b\} ,$$
where the first term on the right side of the equation is the direct component and the second term is the background light scattering component. $E(x)$ is the observed underwater blurred image, $Q(x)$ is the clear ideal image, A is the global background light, and $t(x)$ is the transmittance.

Assuming that the medium is uniform and the light propagates exponentially under water, the transmittance can be expressed as:

$${t_c}(x) = {e^{ - {\chi _c}d(x)}},c \in \{{r,g,b} \},$$
where ${\chi _c}$ is the attenuation coefficient of color channel c and $d(x)$ is the distance from the camera to the scene point.

Under the condition that the background light value and underwater transmittance are known, according to the underwater image imaging model, the original image $E(x)$ can be restored to a clear image $Q(x)$, and its expression is:

$${Q_c}(x) = \frac{1}{{{t_c}(x)}}({E_c}(x) - {A_c}) + {A_c},c \in \{{r,g,b} \}.$$
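The per-channel inversion above can be sketched in NumPy as follows. This is a minimal sketch only: the $[0,1]$ image range, the array shapes, and the lower bound `t0` on the transmittance (used to avoid amplifying noise where $t_c(x)$ is tiny) are illustration assumptions, not part of the model itself.

```python
import numpy as np

def restore(E, A, t, t0=0.1):
    """Invert the simplified imaging model Q(x) = (E(x) - A) / t(x) + A.

    E  : H x W x 3 observed image, values in [0, 1]
    A  : length-3 background light, one value per color channel
    t  : H x W (shared) or H x W x 3 (per-channel) transmittance
    t0 : lower bound on t to keep the division stable
    """
    t = np.maximum(np.asarray(t, dtype=float), t0)
    if t.ndim == 2:                 # broadcast a single map to all channels
        t = t[..., None]
    Q = (E - np.asarray(A)) / t + np.asarray(A)
    return np.clip(Q, 0.0, 1.0)    # keep the result a valid image
```

Applying the forward model $E_c = Q_c t_c + A_c(1 - t_c)$ and then `restore` recovers the original radiance exactly when $t$ is known.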

2.2 Dark channel prior

The dark channel prior was proposed by He et al. [11]; it states that, in each local region of an image, at least one of the R, G, and B color channels contains pixels with very low intensity. The dark channel can be defined as:

$${E^{dark}}(x) = \mathop {\min }\limits_{c \in \{ r,g,b\} } (\mathop {\min }\limits_{y \in \Omega (x)} ({E_c}(y))) \to 0,$$
where ${E_c}$ is a color channel of E, $\Omega (x)$ represents the local region of the image centered at pixel x, and c indexes the color channels r, g, and b.

Finally, the formula of transmittance is obtained as follows:

$${\tilde{t}_1}(x) = 1 - \theta \mathop {\min }\limits_c (\mathop {\min }\limits_{y \in \Omega (x)} (\frac{{{E_c}(y)}}{{{A_c}}})),$$
where $\theta $ is the retention coefficient, usually 0.95. According to the principle of dark channel prior, the key to underwater image restoration lies in the solution of underwater image transmittance and background light value. By optimizing the transmittance and background light value of the underwater image, the dark channel prior algorithm can better restore the underwater blurred image.
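The dark channel and the coarse transmittance ${\tilde{t}_1}(x)$ above can be sketched as follows. This is a sketch under assumptions: the 15-pixel patch size for $\Omega(x)$ and the use of `scipy.ndimage.minimum_filter` for the local minimum are implementation choices, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(E, patch=15):
    # E^dark(x): minimum over color channels, then over the local patch Ω(x)
    return minimum_filter(E.min(axis=2), size=patch)

def coarse_transmission(E, A, patch=15, theta=0.95):
    # t̃1(x) = 1 - θ · min_c min_{y∈Ω(x)} E_c(y) / A_c, with θ = 0.95
    normalized = E / np.asarray(A, dtype=float)[None, None, :]
    return 1.0 - theta * dark_channel(normalized, patch)
```

For a haze-free region the dark channel tends to 0, so ${\tilde{t}_1}(x)$ tends to 1; a bright, veiled region drives it toward $1-\theta$.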

2.3 Motivation

The accurate estimation of the background light value and transmittance of an underwater image directly determines the quality of image restoration. Among existing estimation methods, a representative one is the underwater dark channel prior (UDCP) algorithm [10]. It improves the dark channel prior (DCP) algorithm by applying it only to the green and blue channels when estimating the background light value and transmittance, which avoids the difficulty of modeling the red channel. However, this method is also based on prior knowledge and is hard to ground in the actual theory, so the processed underwater image exhibits a color cast, which limits its applicability. Other improved DCP algorithms include Ref. [23], where the authors combine DCP with the guided filter to estimate the transmittance, which avoids image artifacts. Ref. [24] used a mean filter to estimate the image transmittance and a region projection method to obtain the background light value; this method balances restoration quality and processing speed well. Ref. [25] introduced a fusion-based transmittance estimation algorithm and fused a weighting scheme with the Gaussian dark channel method to compute the atmospheric light value, which improved the accuracy of the light source position estimation. However, these methods are not well suited to underwater images. Ref. [14] uses the dark channel prior and a piecewise smoothing hypothesis to estimate the transmittance, obtaining a smoother transmittance and effectively eliminating image blur; however, the resulting transmittance does not retain enough image detail, and brighter and darker elements remain in its grayscale map.
Ref. [15] introduces morphological reconstruction, which estimates the transmittance quickly with a degree of real-time performance, but the resulting transmittance is not smooth enough, and the method has limitations in handling the halo effect. The above studies eliminate image blur to a certain extent, but many deficiencies remain, and some of the background light and transmittance estimation algorithms are not necessarily suitable for underwater images. Therefore, to effectively recover the real scene information of underwater images while considering the physical characteristics of underwater light propagation, we propose an underwater image restoration method based on adaptive parameter optimization of a physical model. First, we design an adaptive color constancy algorithm to estimate the background light value of underwater images, which effectively preserves their color and brightness. Second, to solve the halo and edge blur caused by the absorption of light by water, a smoothness and uniformity transmittance estimation algorithm is proposed to make the estimated transmittance smooth and uniform. Then, to further smooth the edge and texture details of underwater images, we propose a transmittance optimization algorithm for edge and texture detail smoothing, which makes the obtained scene transmittance more natural. By accurately estimating a background light value and transmittance suited to underwater images, our method avoids the common problem that prior-knowledge-based methods fail to reflect the actual theory. The details of the proposed method are explained in the following sections.

3. Model and methods

To effectively estimate and optimize the transmittance and background light value of underwater images, this paper proposes an underwater image restoration method based on adaptive parameter optimization of a physical model. Our method consists of four modules: background light estimation, transmittance estimation, transmittance optimization, and image restoration. In the background light estimation module, an adaptive color constancy background light estimation algorithm is proposed to accurately estimate the background light value and effectively preserve the color and brightness of underwater images. In the transmittance estimation module, a smoothness and uniformity transmittance estimation algorithm is proposed to solve the halo and edge blurring caused by the absorption of light by water, so that the estimated transmittance is smooth and uniform. In the transmittance optimization module, to further smooth the edge and texture details of underwater images, a transmittance optimization algorithm for edge and texture detail smoothing is proposed, which makes the obtained scene transmittance more natural. In the image restoration module, we combine the underwater imaging model with a histogram equalization algorithm to eliminate image blurring and retain more image details. Figure 1 is the flow chart of the proposed method. Each part is introduced in detail below.


Fig. 1. The flow chart of the method proposed in this paper.


3.1 Estimation of background light value with adaptive color constancy

In the actual imaging process, the imaging principles of underwater and foggy images are very similar: both involve absorption by the medium and scattering by suspended particles, so the gray value of the dark channel can be used to characterize the fog concentration. However, there are obvious differences between them. For underwater images, besides the scattering of light in water, water also absorbs light selectively, which leads to inaccurate estimation of the background light value. Therefore, we propose an adaptive color constancy algorithm to estimate the background light value accurately and effectively preserve the color and brightness of underwater images.

Firstly, the underwater image is transformed from the spatial domain to the frequency domain by the Fourier transform; the frequency domain image is then filtered using a Gaussian function as the transfer function of homomorphic filtering [26] to enhance the high-frequency components and suppress the low-frequency components; finally, the filtered image is transformed back to the spatial domain by the inverse Fourier transform. After this processing, the details and contrast of the dark areas of the underwater image are enhanced, which improves the accuracy of the background light estimation. According to the dark channel prior theory of Ref. [11], the dark channel map can be obtained as shown in Fig. 2(b). The bright areas of this dark channel map are spread widely, so the background light value obtained from it is not necessarily close to the real value. Therefore, we first use White Patch Retinex [27] and a morphological opening operation to process the dark channel image and compute the background light value, correcting the color and brightness distortion caused by light attenuation to maintain the color constancy of the underwater image. White Patch Retinex improves the color of underwater images; that is, the original color of an object is retained when the external light source changes. The calculation formula is as follows:

$${E_c}(x,y) = GS(x,y) \cdot R{S_c}(x,y) \cdot L{S_c},$$
where ${E_c}(x,y)$ is the final image, $GS(x,y)$ is the geometric size factor of image imaging, $R{S_c}(x,y)$ is the reflection coefficient of the object to light, and $L{S_c}$ is the intensity of ambient light. When $GS(x,y) = R{S_c}(x,y) = 1$, ${E_c}(x,y) = L{S_c}$ can be obtained. Therefore, it can be found that the corresponding brightest pixel point in an image is the ambient light, which also represents the global brightness intensity. Usually, the ambient light ${A_1}$ is constant, and its calculation formula is as follows:
$${A_1} = L{S_c} = \max \{{L{S_c}(x,y)} \}.$$


Fig. 2. Estimation of background light value of underwater image. (a) Original underwater image. (b) The dark channel diagram obtained by the method in Ref. [11]. (c) A dark channel map estimated by using an adaptive color constancy algorithm. (d) is the position of the background light value in the underwater image; (e) is the background light value.


Then the dark channel map is processed with a morphological opening operation, the brightest 0.1% of pixels in the resulting dark channel map J are selected, and the average intensity of these pixels is taken as the second background light estimate. The calculation formulas are as follows:

$${J^{dark}}(x) = \textrm{open}({E^{dark}}(x)),$$
$${A_2} = \max \sum\limits_{c = 1}^3 {{g^c}[\mathop {\arg \max }\limits_{x \in (0.1 \times h \times w)} {J^{dark}}(x)]} ,$$
where h and w are the height and width of the dark channel map, respectively. Finally, we take the average of the two estimates as the background light value of the underwater image, i.e., $A = {{({A_1} + {A_2})} / 2}$. The dark channel map improved by the adaptive color constancy algorithm is shown in Fig. 2(c); the overall intensity and main edges of the image are preserved. Figure 2(d) shows the position of the background light value in the underwater image; Fig. 2(e) shows the background light value.
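The two estimates ${A_1}$ and ${A_2}$ and their average can be sketched as follows. This is a sketch under assumptions: the patch size, the opening structuring element, and selecting the top 0.1% via `np.argpartition` are implementation choices, and the homomorphic filtering / White Patch preprocessing described above is assumed to have been applied to `E` already.

```python
import numpy as np
from scipy.ndimage import grey_opening, minimum_filter

def background_light(E, patch=15, top=0.001):
    # dark channel of the (already preprocessed) H x W x 3 image in [0, 1]
    dark = minimum_filter(E.min(axis=2), size=patch)
    # morphological opening suppresses small isolated bright spots (J^dark)
    dark_open = grey_opening(dark, size=(patch, patch))
    # A1: White-Patch style estimate -- the brightest value per channel
    A1 = E.reshape(-1, 3).max(axis=0)
    # A2: mean color at the brightest 0.1% pixels of the opened dark channel
    k = max(1, int(top * dark_open.size))
    idx = np.argpartition(dark_open.ravel(), -k)[-k:]
    ys, xs = np.unravel_index(idx, dark_open.shape)
    A2 = E[ys, xs].mean(axis=0)
    # final background light: the average A = (A1 + A2) / 2
    return (A1 + A2) / 2.0
```

Averaging the two estimates tempers the White Patch value (which a single specular pixel can dominate) with a region-based estimate.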

3.2 Transmittance estimation of smoothness and uniformity

Because water absorbs light, light of different wavelengths attenuates to different degrees during underwater propagation. When the conventional dark channel prior algorithm is used to restore an underwater image, halos appear at the junction of near and far regions and the edge information of the image is blurred, so the transmittance obtained by the dark channel prior is no longer suitable for the underwater environment. To make the estimated transmittance smooth and uniform and eliminate halos and image blur, we propose a smoothness and uniformity transmittance estimation algorithm.

The coarse transmittance is obtained by Eq. (5) in Section 2.2, and the segmentation of coarse transmittance can be regarded as a multi-label classification problem. The min-cut / max-flow algorithm is applied to minimize the energy function [28]. The energy function can be expressed as:

$$\Gamma (L) = \sum\limits_{p \in P} {{D_p}({L_p})} + \sum\limits_{(p,q) \in N} {{V_{p,q}}({L_p},{L_q})} ,$$
where $L = \{ {L_p}|p \in {\rm P}\} $ is the label set over the pixel set P, and p and q are pixels of the image. The first term on the right side of the equation is the data cost, which represents the sum of the penalty costs ${D_p}$ of all pixels. The second term is the smoothing cost, which represents the sum of the smoothing costs of adjacent pixel pairs. The data cost term of the energy function is calculated as follows:
$${D_p}({L_p}) = \min ({|{{L_p} - {g_p}} |^2},const),$$
where ${g_p}$ is the intensity value ($p \in {\rm P}$) of the pixel p of the input image g, and ${L_p}$ is the default label value of the pixel p. $const$ is a limiting constant, here set to 20. The smoothing cost term of the energy function can be obtained by the following equation:
$$V({g_p},{g_q}) = K \cdot \min (|{{g_q} - {g_p}} |,5),$$
where ${g_p}$ and ${g_q}$ are the intensity values ($p,q \in {\rm P}$) of adjacent pixels p and q of the input image g, and K is the smoothing coefficient; the larger K is, the smoother the segmentation, and here it is set to the relatively large value 100. When the energy function is minimized, the optimal image segmentation is obtained, and smoothing the rough transmittance map yields the transmittance ${t_g}(x)$. At the same time, based on the dark channel prior theory, we use the coarse transmittance ${\tilde{t}_1}(x)$ as the input image for gray-level morphological reconstruction, which not only removes noise from the underwater image but also retains its details. The darker elements in the coarse transmittance ${\tilde{t}_1}(x)$ are eliminated by closing-by-reconstruction, which denoises the dark elements and preprocesses the coarse transmittance. We apply closing-by-reconstruction with a structuring element of size 1 to the grayscale image ${\tilde{t}_1}(x)$ to obtain the estimated transmittance ${\tilde{t}_2}(x)$:
$${\tilde{t}_2}(x) = C_R^{(1)}({\tilde{t}_1}(x)).$$

To further denoise the transmittance, ${\tilde{t}_2}(x)$ is used as the input image for opening-by-reconstruction, which removes the brighter elements in ${\tilde{t}_2}(x)$. The estimated transmittance ${\tilde{t}_3}(x)$ is obtained by opening-by-reconstruction of the grayscale image ${\tilde{t}_2}(x)$ with a structuring element of size 1:

$${\tilde{t}_3}(x) = O_R^{(1)}({\tilde{t}_2}(x)).$$

The opening and closing reconstruction operations on the coarse transmittance ${\tilde{t}_1}(x)$ eliminate the bright and dark elements in the grayscale image and complete the denoising. However, they also pull the gray histogram of the transmittance toward the middle pixel values. To restore the distribution range of the transmittance ${\tilde{t}_3}(x)$ to that of the coarse transmittance ${\tilde{t}_1}(x)$, the following formula is used, yielding the transmittance ${t_m}(x)$ after the uniformity operation:

$${t_m}(x) = ({\tilde{t}_3}(x) - \min ({\tilde{t}_3}(x)))\frac{{\max ({{\tilde{t}}_1}(x)) - \min ({{\tilde{t}}_1}(x))}}{{\max ({{\tilde{t}}_3}(x)) - \min ({{\tilde{t}}_3}(x))}} + \min ({\tilde{t}_1}(x)).$$

The transmittances after the smoothness and uniformity operations are ${t_g}(x)$ and ${t_m}(x)$, respectively. Finally, the final transmittance ${t_d}(x)$ is obtained by linearly blending the two transmittance maps:

$${t_d}(x) = \gamma {t_g}(x) + (1 - \gamma ){t_m}(x),$$
where $\gamma $ is the adjustment factor, set to 0.45 in this paper. The transmittance obtained by Ref. [10] is shown in Fig. 3(a), and the transmittance estimated by the smoothness and uniformity algorithm is shown in Fig. 3(b). Compared with Fig. 3(a), the transmittance estimated by the proposed algorithm is more refined.
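The reconstruction steps for ${\tilde{t}_2}$ and ${\tilde{t}_3}$, the range restoration for ${t_m}(x)$, and the linear blend for ${t_d}(x)$ can be sketched as follows. This is a sketch under assumptions: `skimage.morphology.reconstruction` with small grayscale dilation/erosion markers stands in for the paper's size-1 reconstruction operators, and the graph-cut smoothed map ${t_g}(x)$ is taken as a given input rather than recomputed here.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion
from skimage.morphology import reconstruction

def refine_transmission(t1, t_g, size=3, gamma=0.45):
    # closing-by-reconstruction: removes the darker (noise) elements of t̃1
    t2 = reconstruction(grey_dilation(t1, size=size), t1, method='erosion')
    # opening-by-reconstruction: removes the brighter elements of t̃2
    t3 = reconstruction(grey_erosion(t2, size=size), t2, method='dilation')
    # stretch t̃3 back to the dynamic range of the coarse map t̃1 -> t_m
    rng3 = max(t3.max() - t3.min(), 1e-8)
    t_m = (t3 - t3.min()) * (t1.max() - t1.min()) / rng3 + t1.min()
    # t_d = γ · t_g + (1 - γ) · t_m, with γ = 0.45 as in the paper
    return gamma * t_g + (1 - gamma) * t_m
```

Note the marker/mask ordering: `reconstruction` with `method='erosion'` requires the seed (the dilated image) to lie above the mask, and `method='dilation'` requires the eroded seed to lie below it.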


Fig. 3. Estimation and optimization of underwater image transmittance. (a) The transmittance diagram obtained by the method in Ref. [10]. (b) Transmittance map estimated by smoothness and uniformity algorithm. (c) Transmittance map optimized by edge and texture detail smoothing algorithm. (d) The image recovered by this method. (e) is the RGB histogram distribution of (d).


3.3 Transmission optimization for smoothing edge and texture details

To obtain a more natural scene transmittance and further smooth the edge and texture details of underwater images, we propose a transmittance optimization algorithm with smooth edge and texture details. We use context regularization [29] to refine the transmittance ${t_d}(x)$. The refined transmittance ${t^\ast }(x)$ can be obtained by minimizing the objective function $G(x,\beta )$:

$$\min (G(x,\beta )) = \min (\frac{\lambda }{2}||{{t^\ast }(x) - {t_d}(x)} ||_2^2 + \sum\limits_{j \in \omega } {{{||{W(x,z) \circ \mu } ||}_1} + \frac{\beta }{2}(\sum\limits_{j \in \omega } {||{\mu - {M_j} \ast {t_d}(x)} ||_2^2} )} ),$$
where $\beta $ is the penalty factor; $\lambda $ is the regularization parameter; ${t^\ast }(x)$ is the optimized transmittance; ${\circ} $ is the element multiplication symbol; $\omega $ is the pixel index set; $\mu $ is the auxiliary variable and ${M_j}$ is the differential filtering operator of pixel j. Combining the alternating minimization algorithm and the two-dimensional fast Fourier transform, and assuming a circular boundary condition, ${t^\ast }(x)$ is finally obtained as
$${t^\ast }(x) = {F^{ - 1}}\left( {\frac{{\frac{\lambda }{\beta }F({t_d}(x)) + \sum\limits_{j \in \omega } {\overline {F({M_j})} \circ F(\mu )} }}{{\frac{\lambda }{\beta } + \sum\limits_{j \in \omega } {\overline {F({M_j})} \circ F({M_j})} }}} \right),$$
where ${F^{ - 1}}$ is the inverse Fourier transform, F is the Fourier transform, and $\overline {F({M_j})} $ is the complex conjugate of $F({M_j})$. The optimized transmittance eliminates the obvious halo distortion. To further preserve the edge details of the image, we introduce a domain transform filter [30] to optimize the transmittance. The domain transform filter equivalently converts a high-dimensional signal into a low-dimensional one while keeping the distance between any two points unchanged, so that filtering in the low-dimensional space is equivalent to filtering in the high-dimensional space. The transformation formula of the domain transform filter is as follows:
$${t_{df}}[n] = (1 - {a^d}){t^ \ast }[n] + {a^d}{t_{df}}[n - 1],$$
where $d = {t^ \ast }({x_n}) - {t^ \ast }({x_{n - 1}})$ represents the distance between two adjacent sampling points ${x_n}$ and ${x_{n - 1}}$ in the transform domain; $a = \textrm{exp} ( - {{\sqrt 2 } / \upsilon }) \in [0,1]$ is the feedback coefficient and $\upsilon $ is the spatial standard deviation. The filter is stable for $a \in [0,1]$. As the distance d increases, ${a^d}$ approaches 0 and the influence of the neighboring sample decreases, so the optimized transmittance retains more edge information. Because each output sample depends only on the current input and the previous output, the impulse response is not symmetric. We therefore filter first from left to right (or top to bottom) and then from right to left (or bottom to top), which yields a stable, symmetric response. Finally, the optimized transmittance ${t_{df}}(x)$ is obtained, and the edge and texture information of the image is better preserved. Figure 3(c) shows the transmittance map obtained by this optimization; it is smoother and retains more edge details.
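As an illustration, the two refinement steps above, the closed-form FFT solve of Eq. (18) and the two-pass recursive filter of Eq. (19), can be sketched in NumPy as follows. This is a minimal sketch, not the full algorithm: the weighted shrinkage step that produces the auxiliary variables $\mu$ is omitted (the $\mu_j$ are passed in directly), the difference kernels and the parameter values for $\lambda$, $\beta$ and $\upsilon$ are illustrative, and $|d|$ is used as the transform-domain distance; the function names are ours.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small kernel, circularly shift its center to (0, 0), and
    take the 2-D FFT (the circular boundary condition assumed by Eq. (18))."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def refine_transmittance(t_d, mus, kernels, lam=2.0, beta=1.0):
    """One closed-form FFT solve of Eq. (18) for the refined t*(x)."""
    num = (lam / beta) * np.fft.fft2(t_d)
    den = np.full(t_d.shape, lam / beta, dtype=complex)
    for mu, k in zip(mus, kernels):
        F_M = psf2otf(k, t_d.shape)
        num += np.conj(F_M) * np.fft.fft2(mu)
        den += np.conj(F_M) * F_M
    return np.real(np.fft.ifft2(num / den))

def domain_transform_filter_1d(t, sigma=20.0):
    """Two-pass recursive filter of Eq. (19) along one scan line; the
    right-to-left pass restores a symmetric impulse response."""
    a = np.exp(-np.sqrt(2.0) / sigma)      # feedback coefficient in [0, 1]
    out = t.astype(float).copy()
    d = np.abs(np.diff(out))               # distances between adjacent samples
    for n in range(1, len(out)):           # left-to-right pass
        w = a ** d[n - 1]
        out[n] = (1 - w) * out[n] + w * out[n - 1]
    for n in range(len(out) - 2, -1, -1):  # right-to-left pass
        w = a ** d[n]
        out[n] = (1 - w) * out[n] + w * out[n + 1]
    return out
```

In 2-D the recursive filter is applied alternately along rows and columns; the sketch shows a single horizontal scan line.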

3.4 Image restoration

The background light value A and the transmittance ${t_{df}}$ of the underwater image are obtained by the methods described above. Then, according to the underwater image imaging model, the restored scene is:

$$Q(x) = \frac{{g(x) - A}}{{\max ({t_{df}}(x),{t_0})}} + A,$$
where ${t_0}$ sets a lower bound on the transmittance ${t_{df}}$ and is set to 0.2. The overall color of the image $Q(x)$ obtained from the underwater optical imaging model is dark, so histogram equalization is used to improve the contrast in the RGB color space of the underwater image. We use the gray-scale transformation function T to equalize the histogram of the underwater image $Q(x)$, which can be expressed as:
$${s_k} = T({r_k}) = \sum\nolimits_{j = 0}^{k} {P({r_j})} = \sum\nolimits_{j = 0}^{k} {\frac{{{n_j}}}{N}} ,$$
where histogram equalization maps the pixels with original gray level ${r_k}$ to the corresponding pixels with gray level ${s_k}$ through the cumulative distribution function; ${n_k}$ is the number of pixels at the $k$-th gray level, N is the total number of pixels in the image, ${r_k}$ is the $k$-th gray level, L is the number of gray levels, and $P({r_k})$ is the relative frequency of gray level ${r_k}$. In the MATLAB simulation, this step is performed with the built-in histogram equalization function, i.e., ${Q^{\prime}}(x) = \textrm{histeq}(Q(x))$. This step brightens the overall dark image and reveals more of its details. Figure 3(d) is the image restored by the proposed method. Compared with the original image (Fig. 2(a)), the restored image has higher contrast and more realistic color; image blur is also eliminated and more detail is retained. Figure 3(e) shows the RGB histogram distribution of Fig. 3(d): after processing, the R, G and B histograms are more uniform and span a wider range, making the image look more natural and clear.
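A compact sketch of this restoration step, assuming images normalized to [0, 1] and using a NumPy re-implementation of the equalization mapping in place of MATLAB's histeq (the function names are ours):

```python
import numpy as np

def restore(g, A, t_df, t0=0.2):
    """Invert the imaging model, Eq. (20); t0 bounds the transmittance below."""
    t = np.maximum(t_df, t0)
    return (g - A) / t + A

def hist_equalize(channel, L=256):
    """Histogram equalization per Eq. (21): map gray level r_k through the
    cumulative distribution of relative frequencies n_k / N."""
    q = np.clip(channel, 0.0, 1.0)
    levels = np.round(q * (L - 1)).astype(int)
    hist = np.bincount(levels.ravel(), minlength=L)   # n_k
    cdf = np.cumsum(hist) / levels.size               # s_k = sum_{j<=k} n_j / N
    return cdf[levels]
```

In practice, restore is applied per color channel with the channel-wise background light, and hist_equalize is then applied to each of the R, G and B channels.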

4. Experimental results and discussion

In order to verify the effectiveness of the proposed method, we conducted qualitative comparisons, quantitative comparisons and application tests. First, we compare the proposed algorithm with existing classical underwater image restoration techniques and analyze the advantages and disadvantages of each method from qualitative and quantitative perspectives. Finally, we use salient object detection (SOD) and scale-invariant feature transform (SIFT) matching for application testing, which shows that the proposed method has a degree of extensibility. All images in this paper are from the UIEBD [31] underwater image dataset, and representative images from different environments are selected for testing. To ensure a fair comparison between algorithms, the experiments are carried out in MATLAB R2018b on a Windows 10 PC with an Intel Core i7-9700 CPU at 3.00 GHz.

4.1 Qualitative comparison

In order to verify the accuracy of the physical model parameters estimated by the proposed method, we compare it with existing classical underwater image restoration techniques and analyze the results in terms of color restoration, contrast and texture details. Figure 4(a) shows four original underwater images and their RGB three-channel histogram distributions, and Fig. 4(b) shows the restored images and their histograms after processing by the proposed method. Owing to light absorption, reflection and scattering, the histograms of the original underwater images are concentrated in the middle range, while the histograms of the images restored by the proposed method are wider and more uniform. The restoration results clearly show that the method not only enhances image detail but also enriches color information, making the images look more natural and clear.


Fig. 4. Experimental results and RGB histogram distribution of the proposed method in different underwater scenes. (a) The original image and its RGB histogram distribution. (b) The image recovered by the method proposed in this paper and its RGB histogram distribution.


The proposed algorithm is compared with five classical underwater image restoration algorithms, namely UDCP [10], IBLA [12], HWD [20], WCID [19] and ARC [13]; the results are shown in Fig. 5. The images obtained by the UDCP algorithm show a serious color shift: Image 1 has low contrast, Image 2 has a red cast, and the overall color of all images is darkened and unbalanced, which indicates that the algorithm mis-estimates the background light value of underwater images. Compared with UDCP, the IBLA algorithm is a clear improvement and restores color better; however, Image 1 contains under-saturated areas and the contrast improvement in some regions is poor, indicating that the transmittance computed by this algorithm is inaccurate. The images restored by the HWD algorithm look gray overall: Image 3 shows obvious local darkness, and Images 5 and 6 show an overall gray-white color cast. The WCID algorithm cannot fully restore the scene and its restored images are dark; Images 1, 3 and 4 do not reflect image details well, and Images 2 and 5 have a serious color cast. The images restored by the ARC algorithm have good visual quality but poor contrast. In contrast, the images restored by the proposed algorithm have moderate overall brightness and contrast, with no over-bright or over-dark regions. In different blue-green underwater environments, the details of the processed images are clearer, and the contrast, saturation and brightness are closer to the natural scene, which shows that the background light value estimated by the proposed method is more accurate. In addition, the proposed method effectively restores the visibility and detail of the image, greatly improves the definition of the foreground, and makes the texture details in close-range areas clearer, which shows that the estimated and optimized transmittance better matches the underwater environment. Compared with the other five methods, the proposed method effectively improves the quality of underwater degraded images, enhances the visual effect and adapts to the underwater environment.


Fig. 5. Qualitative comparison of different underwater scene images. From top to bottom are the original image, the processing result of UDCP, the processing result of IBLA, the processing result of HWD, the processing result of WCID, the processing result of ARC and the processing result of the method proposed in this paper. (a)∼(f) are Image 1∼Image 6 respectively.


Figure 6 shows the comparison of detail enhancement ability of underwater natural scene images. It can be seen from the local enlarged region that the underwater image obtained by our method has better contrast and clearer texture, which is subjectively superior to other algorithms.


Fig. 6. Comparison of detail enhancement ability of underwater natural scene images. From top to bottom are the original image, the processing result of UDCP, the processing result of IBLA, the processing result of HWD, the processing result of WCID, the processing result of ARC and the processing result of the method proposed in this paper. The red box is a partial enlargement of the left picture; (a)∼(c) are Image 7∼Image 9 respectively.


4.2 Quantitative comparison

Through qualitative comparison, we can see that the proposed method achieves a good restoration effect in different underwater scenes. Next, we objectively evaluate the restoration quality of underwater images in terms of color restoration, contrast and overall effect. Information entropy (IE) [32] measures the information content of an image: the larger the value, the richer the information and the higher the fidelity of the image. The average gradient (AG) [33] characterizes image clarity: the higher the value, the clearer the image. The underwater image quality measure (UIQM) [34] is used to evaluate the effect of underwater color restoration: the larger the value, the richer the color of the restored image. The underwater color image quality evaluation metric (UCIQE) [35] comprehensively evaluates the chroma, saturation and contrast of the restored image: the higher the value, the better the overall effect. Table 1 lists the IE, AG, UIQM and UCIQE values for Fig. 5; bold values are the best among the compared algorithms.


Table 1. The evaluation results of IE, AG, UIQM, and UCIQE in Fig. 5.

Information entropy (IE) describes the average information content of an image. Assuming that ${p_i}$ is the proportion of pixels with gray value i, the gray-level entropy is defined as:

$$\textrm{IE} ={-} \sum\limits_{i = 0}^{255} {{p_i}{{\log }_2}{p_i}} ,$$
where ${p_i}$ is the probability of gray level i in the image, obtained from the gray-level histogram.
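For instance, Eq. (22) on an 8-bit gray image reads directly as (a brief sketch; the function name is ours):

```python
import numpy as np

def information_entropy(gray):
    """Eq. (22): Shannon entropy of the 256-bin gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / gray.size                # relative frequency p_i
    p = p[p > 0]                        # empty bins contribute 0 (0·log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```

An image using all 256 levels equally attains the maximum of 8 bits; a constant image has entropy 0.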

The average gradient (AG) describes the detailed information of the image and is used to evaluate the clarity of the image. The average gradient can be described as:

$$\textrm{AG} = \frac{1}{{(M - 1)(N - 1)}}\sum\limits_{i = 1}^{M - 1} {} \sum\limits_{j = 1}^{N - 1} {\sqrt {\frac{{{{(I(i,j) - I(i + 1,j))}^2} + {{(I(i,j) - I(i,j + 1))}^2}}}{2}} } ,$$
where M and N represent the width and height of the image, and $I(i,j)$ represents the pixel value at point $(i,j)$ in the image.
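Eq. (23) also translates directly (a sketch assuming a 2-D grayscale array; the function name is ours):

```python
import numpy as np

def average_gradient(I):
    """Eq. (23): mean RMS of the horizontal and vertical finite differences."""
    I = np.asarray(I, dtype=float)
    dx = I[:-1, :-1] - I[1:, :-1]       # I(i, j) - I(i+1, j)
    dy = I[:-1, :-1] - I[:-1, 1:]       # I(i, j) - I(i, j+1)
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```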

UIQM is an underwater image quality measure, which can be defined as:

$$\textrm{UIQM} = {c_1} \times \textrm{UICM} + {c_2} \times \textrm{UISM} + {c_3} \times \textrm{UIConM,}$$
where $\textrm{UICM}$ measures image colorfulness, $\textrm{UISM}$ measures image sharpness, and $\textrm{UIConM}$ measures image contrast. The values of ${c_1}$, ${c_2}$ and ${c_3}$ are typically 0.0282, 0.2953 and 3.5753, respectively.

UCIQE is an underwater color image quality evaluation metric based on the chroma, contrast and saturation statistics of the CIELab space. It is mainly used to quantify the degradation caused by color deviation, blur, low contrast, and suspended particles in underwater images. It is calculated as:

$$\textrm{UCIQE} = {a_1} \times {\sigma _c} + {a_2} \times co{n_l} + {a_3} \times {\mu _s},$$
where ${\sigma _c}$ is the standard deviation of chroma, which correlates well with human perception, $co{n_l}$ is the luminance contrast, and ${\mu _s}$ is the average saturation. The constants ${a_1}$, ${a_2}$ and ${a_3}$ weight the linear combination of the three components and are typically set to 0.4680, 0.2745 and 0.2576, respectively.
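Given the component measurements, Eqs. (24) and (25) are simple weighted sums. In the sketch below the component metrics (UICM, UISM, UIConM, and the CIELab statistics) are assumed to be computed separately and passed in; the function names are ours:

```python
def uiqm(uicm, uism, uiconm, c=(0.0282, 0.2953, 3.5753)):
    """Eq. (24): weighted sum of the colorfulness, sharpness, contrast terms."""
    return c[0] * uicm + c[1] * uism + c[2] * uiconm

def uciqe(sigma_c, con_l, mu_s, a=(0.4680, 0.2745, 0.2576)):
    """Eq. (25): weighted sum of chroma std, luminance contrast, mean saturation."""
    return a[0] * sigma_c + a[1] * con_l + a[2] * mu_s
```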

Table 1 shows the quantitative comparison for Fig. 5. As can be seen, the proposed algorithm outperforms the other five classical algorithms in information entropy (IE), average gradient (AG) and the comprehensive metric (UCIQE), which shows that the restored images are rich in information and detail, have good fidelity, are closer to natural-scene images, and exhibit good chroma, saturation, clarity and visual quality. The proposed algorithm scores lower on the color metric (UIQM) than UDCP, IBLA and HWD. However, a richer color does not necessarily mean a higher-quality image. Combined with the qualitative evaluation, the high UIQM values of UDCP and HWD may be caused by incorrect background light estimation, which leads to color deviation, while the high value of IBLA may be caused by color under- or over-saturation. Taking the qualitative and quantitative comparisons together, the proposed method is clearly superior to the other five methods: it effectively restores the true colors of the image and produces a visually clearer image with higher contrast and more balanced color.

To avoid the influence of individual samples on the measurement of the algorithm, we randomly selected 80 images from the UIEBD underwater image dataset for testing; the metrics include IE, AG, UIQM, UCIQE and average running time. Owing to space limitations, only the aggregate results are shown in Table 2.


Table 2. Quantitative evaluation results of different algorithms.

As can be seen from Table 2, the proposed method outperforms the other five classical algorithms in information entropy (IE), average gradient (AG) and the comprehensive metric (UCIQE), while its color metric (UIQM) is slightly lower than those of the UDCP and HWD algorithms. According to Eq. (24), the main reason is that the proposed algorithm balances the color, sharpness and contrast of the image rather than boosting any single component, whereas inflating one component alone can drive the UIQM value up; this also indicates that the proposed method estimates the background light value and transmittance of underwater images accurately. The execution time of the proposed method is longer than that of the WCID and ARC algorithms, but the processing speed is moderate overall. Therefore, our method can also be used for video processing in scenarios where real-time requirements are not strict.

4.3 Application test

To further demonstrate the effectiveness of the proposed method, we carried out application tests. Salient object detection [36,37] and scale-invariant feature transform (SIFT) [38,39] matching are applied to the underwater images before and after restoration by this method. The more salient the detection result and the more matching points, the clearer the texture features of the image. The test results are shown in Figs. 7 and 8. The restored image in Fig. 7 is more salient than the original image; the original images in Fig. 8 yield fewer feature matches, while the restored images yield more. Therefore, the underwater images restored by the proposed method contain richer texture details, and image clarity is significantly improved. This shows that the proposed method can effectively estimate the parameters of the physical model and restore underwater degraded images, providing a theoretical basis for the construction of underwater imaging models.


Fig. 7. Saliency detection results. (a) The original underwater image and its saliency map. (b) The image restored by the proposed method and its saliency map.



Fig. 8. Test results of feature matching. (a)–(f) show six groups of images; in each group, the left side shows the feature-matching result of the original underwater image and the right side shows that of the image restored by the proposed method.


5. Conclusion

In this paper, an underwater image restoration method based on adaptive parameter optimization of the physical model is proposed, which can effectively estimate the physical model parameters and restore underwater degraded images. The main contributions are as follows. An adaptive color constancy background light estimation algorithm is designed that effectively preserves the color and brightness of underwater images. A smoothness and uniformity transmittance estimation algorithm is proposed that makes the estimated transmittance smooth and uniform and effectively eliminates halo and blur in the image. A transmittance optimization algorithm that smooths edge and texture details is proposed, making the scene transmittance more natural. In the qualitative and quantitative evaluations and the application tests, the proposed algorithm achieves remarkable results, showing that it can accurately estimate the background light value and transmittance of underwater images and improve the visual effect; it has good applicability and provides a theoretical basis for the construction of underwater imaging models. Existing algorithms mainly target shallow-water images, while deep-water images contain more diverse noise. In future work, we will therefore focus on noise elimination for deep-water images to obtain high-quality results that facilitate further underwater research. We will also continue to optimize the proposed method and reduce its processing time, so that it can be applied in scenarios with high real-time requirements.

Funding

National Key Research and Development Program of China (2022YFC2805904); The Construction Project for Innovative Provinces in Hunan (2020SK2025); The Special Project for the Construction of Innovative Provinces in Hunan (2020GK1021).

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their valuable comments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Rickels, J. Dovern, and M. Quaas, “Beyond fisheries: Common-pool resource problems in oceanic resources and services,” Global Environmental Change 40, 37–49 (2016).

2. Y. Zhang and A. Kendall, “Consequential analysis of algal biofuels: Benefits to ocean resources,” J. Cleaner Prod. 231, 35–42 (2019).

3. K. Ke, C. Zhang, Q. Tang, Y. He, and B. Yao, “Single underwater image restoration based on descattering and color correction,” Optik 259, 169009 (2022).

4. J. Zhou, Z. Liu, W. Zhang, D. Zhang, and W. Zhang, “Underwater image restoration based on secondary guided transmission map,” Multimed. Tools Appl. 80(5), 7771–7788 (2021).

5. C. Li, J. Guo, and C. Guo, “Emerging from water: Underwater image color correction based on weakly supervised color transfer,” IEEE Signal Process. Lett. 25(3), 323–327 (2018).

6. Z. Zhuang, Z. Fan, H. Jin, K. Gong, and J. Peng, “Local linear model and restoration method of underwater images,” Opt. Express 30(17), 30949–30968 (2022).

7. Q. Jiao, M. Liu, P. Li, L. Dong, M. Hui, L. Kong, and Y. Zhao, “Underwater image restoration via non-convex non-smooth variation and thermal exchange optimization,” J. Mar. Sci. Eng. 9(6), 570 (2021).

8. S. Li, F. Liu, and J. Wei, “Underwater image restoration based on exponentiated mean local variance and extrinsic prior,” Multimed. Tools Appl. 81(4), 4935–4960 (2022).

9. M. Zhang and J. Peng, “Underwater image restoration based on a new underwater image formation model,” IEEE Access 6, 58634–58644 (2018).

10. P. Drews, E. Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission estimation in underwater single images,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, 825–830 (2013).

11. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).

12. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. Image Process. 26(4), 1579–1594 (2017).

13. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic red-channel underwater image restoration,” J. Vis. Commun. Image Represent. 26, 132–145 (2015).

14. M. Zhu and B. He, “Dehazing via graph cut,” Opt. Eng. 56(11), 1 (2017).

15. S. Salazar-Colores, E. Cabal-Yepez, J. M. Ramos-Arreguin, G. Botella, L. M. Ledesma-Carrillo, and S. Ledesma, “A fast image dehazing algorithm using morphological reconstruction,” IEEE Trans. Image Process. 28(5), 2357–2366 (2019).

16. A. M. Reza, “Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement,” J. VLSI Signal Process. Syst. Signal Image Video Technol. 38(1), 35–44 (2004).

17. S. Zhang, T. Wang, J. Dong, and H. Yu, “Underwater image enhancement via extended multi-scale Retinex,” Neurocomputing 245, 1–9 (2017).

18. C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, “Enhancing underwater images and videos by fusion,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 81–88 (2012).

19. M. S. Jayasree, G. Thavaseelan, and P. G. Scholar, “Underwater color image enhancement using wavelength compensation and dehazing,” IEEE Trans. Image Process. 21(4), 1756–1769 (2012).

20. P. Pan, F. Yuan, and E. Cheng, “Underwater image de-scattering and enhancing using DehazeNet and HWD,” J. Mar. Sci. Technol. 26(4), 6 (2018).

21. Y. Guo, H. Li, and P. Zhuang, “Underwater image enhancement using a multiscale dense generative adversarial network,” IEEE J. Ocean. Eng. 45(3), 862–870 (2020).

22. B. L. McGlamery, “A computer model for underwater camera systems,” Proc. SPIE 208, 221–231 (1980).

23. J. Pang, O. C. Au, and Z. Guo, “Improved single image dehazing using guided filter,” in Proc. APSIPA ASC, 1–4 (2011).

24. W. Wang, F. Chang, T. Ji, and X. Wu, “A fast single-image dehazing method based on a physical model and gray projection,” IEEE Access 6, 5641–5653 (2018).

25. J. M. Guo, J. Y. Syue, V. R. Radzicki, and H. Lee, “An efficient fusion-based defogging,” IEEE Trans. Image Process. 26(9), 4217–4228 (2017).

26. Y. Hu, C. Xu, Z. Li, F. Lei, B. Feng, L. Chu, and D. Wang, “Detail enhancement multi-exposure image fusion based on homomorphic filtering,” Electronics 11(8), 1211 (2022).

27. Y. L. Chung, H. Y. Chung, and Y. S. Chen, “A study of single image haze removal using a novel white-patch Retinex-based improved dark channel prior algorithm,” Intell. Autom. Soft Comput. 26(2), 367–383 (2020).

28. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001).

29. S. C. Raikwar and S. Tapaswi, “Lower bound on transmission using non-linear bounding function in single image dehazing,” IEEE Trans. Image Process. 29, 4832–4847 (2020).

30. X. Liu, H. Zhang, Y. M. Cheung, X. You, and Y. Y. Tang, “Efficient single image dehazing and denoising: An efficient multi-scale correlated wavelet approach,” Comput. Vis. Image Underst. 162, 23–33 (2017).

31. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Trans. Image Process. 29, 4376–4389 (2020).

32. J. Zhou, D. Zhang, P. Zou, W. Zhang, and W. Zhang, “Retinex-based Laplacian pyramid method for image defogging,” IEEE Access 7, 122459–122472 (2019).

33. L. Zhang, L. Zhang, X. Mou, and D. Zhang, “FSIM: A feature similarity index for image quality assessment,” IEEE Trans. Image Process. 20(8), 2378–2386 (2011).

34. M. J. Islam, Y. Xia, and J. Sattar, “Fast underwater image enhancement for improved visual perception,” IEEE Robot. Autom. Lett. 5(2), 3227–3234 (2020).

35. M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Trans. Image Process. 24(12), 6062–6071 (2015).

36. X. Wu, X. Ma, J. Zhang, A. Wang, and Z. Jin, “Salient object detection via deformed smoothness constraint,” in 2018 25th IEEE International Conference on Image Processing (ICIP), 2815–2819 (2018).

37. H. Peng, B. Li, H. Ling, W. Hu, W. Xiong, and S. J. Maybank, “Salient object detection via structured matrix decomposition,” IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 818–832 (2017).

38. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004).

39. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Comput. Vis. Image Underst. 110(3), 346–359 (2008).




Figures (8)

Fig. 1. The flow chart of the method proposed in this paper.

Fig. 2. Estimation of background light value of underwater image. (a) Original underwater image. (b) The dark channel map obtained by the method in Ref. [11]. (c) The dark channel map estimated by the adaptive color constancy algorithm. (d) The position of the background light value in the underwater image. (e) The background light value.

Fig. 3. Estimation and optimization of underwater image transmittance. (a) The transmittance map obtained by the method in Ref. [10]. (b) Transmittance map estimated by the smoothness and uniformity algorithm. (c) Transmittance map optimized by the edge and texture detail smoothing algorithm. (d) The image restored by the proposed method. (e) The RGB histogram distribution of (d).

