Haze removal with channel-wise scattering coefficient awareness based on grey pixels

Open Access

Abstract

Before being captured by observers, the information carried by light may be attenuated by the transmission medium. According to the atmospheric scattering model, this attenuation is wavelength-dependent and increases with distance. However, most existing haze removal methods ignore this wavelength dependency and therefore cannot handle well the color distortions caused by it. To solve this problem, we propose a scattering coefficient awareness method based on the image formation model. The proposed method first makes an initial transmission estimation by the dark channel prior and then calculates the scattering coefficient ratios based on the initial transmission map and the grey pixels in the image. After that, fine transmission maps in RGB channels are calculated from these ratios and compensated for in sky areas. A global correction is also applied to eliminate the color bias induced by the light source before the final output. Qualitatively and quantitatively compared on synthetic and real images against state-of-the-art methods, the proposed method provides better results for the scenes with either white fog or colorized haze.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

For not only human beings but also artificial intelligence, vision is an essential way to comprehend the world. Unfortunately, in outdoor scenes, visual information may be lost before it is captured by the eye or camera. This is because the light reflected from objects may be attenuated by particles in the medium, which causes information loss and makes the captured images hazy. In the atmosphere, this attenuation, which is driven primarily by scattering, is usually described by the Koschmieder atmospheric scattering model [1,2]:

$$\begin{aligned}I(x)&=J(x)t(x)+A(1-t(x)),\\ t(x)&={{e}^{-\beta(\lambda) d(x)}}, \end{aligned}$$
where $d(x)$ is the distance between the objects and the observer, $I(x)$ is the observed image, $J(x)$ is the scene radiance, $A$ is the global atmospheric light, $t(x)$ is the medium transmittance representing the portion of light received by the observer, $\beta (\lambda )$ is the light-scattering capacity of an atmospheric unit volume, and $\lambda$ is the wavelength. In Eq. (1), $J(x)t(x)$ is called the direct attenuation while $A(1-t(x))$ is called the air light.

The goal of haze removal is to restore the scene radiance $J(x)$ from the observed image $I(x)$ [2]. It is a complicated problem with far more unknowns than constraints. For example, in an N-pixel RGB image $J_{c}(x)$ ($c \in {\{r,g,b\}}$), there are 3N constraints ($I_{c}(x)$) and 6N+3 unknowns ($J_{c}(x)$, $t_{c}(x)$, and $A_{c}$). To reduce the number of unknowns, most haze removal methods presume that scattering in the atmosphere is spectrally uniform, and therefore $t_{r}(x)=t_{g}(x)=t_{b}(x)=t(x)$ [2,3]. However, this presumption holds only under certain conditions. The atmospheric condition is characterized by the particles dispersed in the air, whose size determines whether the scattering depends on the wavelength [4]. This relationship can be described as follows:

$$\beta(\lambda) \propto \tfrac{1}{{\lambda}^{\gamma}},$$
where $0\leq \gamma \leq 4$ depends on the size of the atmospheric particles. Water droplets, which are much larger than the light wavelength, scatter light of different wavelengths with the same coefficient ($\gamma \approx 0$) and cause white fog, whereas smaller aerosols, such as smoke and dust, show considerable wavelength selectivity ($\gamma > 0$) and usually result in colorful haze [6]. The curve of relative scattering coefficients versus the light wavelength and particle size is shown in Fig. 1(a) (adopted from [5]). Atmospheric particles with a radius of more than 4000 nm scatter light equally across the visible spectrum. Particles of 1000 nm radius scatter green light (535 nm) less than red (700 nm) and blue light (460 nm). Red light is attenuated more quickly than blue light by particles with radii from 600 nm to 2000 nm, but more slowly by particles with radii of less than 400 nm.
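As a simple illustration of Eq. (2) (using a hypothetical exponent, not a value measured in this work), taking $\gamma = 1$ for a moderately small aerosol gives

$$\frac{\beta(460\,\textrm{nm})}{\beta(700\,\textrm{nm})}=\left(\frac{700\,\textrm{nm}}{460\,\textrm{nm}}\right)^{1}\approx 1.52,$$

so, by Eq. (1), the blue channel loses its direct signal and accumulates air light faster than the red channel as the distance grows, producing exactly the kind of distance-dependent color shift discussed below; for $\gamma = 0$ the ratio is 1 and the haze stays white.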

Fig. 1. Colorful haze caused by the wavelength-dependent scattering. (a) Relative scattering coefficients versus the light wavelength and particle sizes (adopted from [5]). (b) A color distorted hazy image and the corrected image produced by the proposed method.

In most environments, the atmosphere contains particles of various sizes. Some of them are much larger than the light wavelength, but some are not. Affected by the latter, the light attenuation in the atmosphere usually depends not only on the distance the light travels but also on its wavelength. As shown in Fig. 1(b), this wavelength-dependent attenuation causes a distance-dependent color distortion, which cannot be handled by most existing haze removal methods because they ignore this wavelength dependency [3]. Owing to its spatially heterogeneous nature, this color distortion cannot even be corrected by global illumination estimation-based color correction methods, such as Grey World or Shades of Grey.

In an attempt to address this problem, we propose a scattering coefficient awareness haze removal method. Our proposed method, which is based on the image formation model, is physically plausible and handles both white fog and colorized haze well. The key novelty of the proposed method is estimating the ratios of the scattering coefficients among the RGB channels via grey pixels in the scene. To the best of our knowledge, this is the first haze removal method that calculates the scattering coefficients explicitly. Knowledge of the scattering coefficients helps the proposed method obtain more accurate transmission maps $t_{c}(x)$ in the different color channels, and thus better results in various atmospheric scattering conditions.

The contributions of this paper are summarized as follows:

  • It proposes a novel homogeneous-haze removal method with scattering coefficient awareness. Based on the image formation model, this method is physically plausible in removing the color bias induced by light scattering. Compared with other state-of-the-art methods, its performance is competitive not only in white fog scenes but also in colorized haze scenes.
  • It introduces a compensation strategy to deal with the transmittance underestimation of the dark channel prior in the sky areas. This strategy not only helps the proposed method, but also helps other dark channel prior based methods to avoid the unnatural results caused by the inaccurate transmission map in areas without shadow.
  • It provides a theoretical and experimental analysis of why global color correction is not very suitable for eliminating the color distortion caused by light scattering in haze scenes.

2. Related work

Restoring scene radiance $J(x)$ through the image formation model is a very intuitive approach for haze removal. As discussed in the last section, it is an under-constrained problem for a single image. Haze removal methods based on the image formation model attempt to solve this under-constrained problem by adding constraints derived from prior assumptions. These prior assumptions usually come from natural scene statistics. For example, the atmospheric medium should be spatially smooth [7]. Small image blocks should have uniform surfaces [8,9] and similar scene depths [10,11]. The difference in depth between these blocks and their neighborhood should be gradual and can be approximated by their similarity [12]. In haze-free scenes, the contrast should be higher than in hazy scenes [1], and the colors should follow certain lines in the color space [13]. In hazy scenes, the contrast degradation should be highly correlated with the distance [14], the transmittance should be a function of the saturation [15], and color vectors can be statistically approximated as ellipsoids [16]. In the last decade, the dark channel prior, which assumes that in haze-free scenes at least one RGB channel is close to zero in non-sky zones and that pixels in this channel can therefore be used to accurately estimate the transmission map [2], has had a significant impact. Several haze removal methods [17–27] are based on the dark channel prior. Some of them replace the soft matting in He’s original work [2] with more efficient or accurate filters [17–20], some add new constraints to improve robustness in specific scenarios [21], and some introduce color correction methods to handle the color distortion [22–27]. The effectiveness of methods in this image restoration approach mainly depends on the reliability of their assumptions: the better the assumptions fit the scenes, the better these methods perform. However, it is difficult to find a universally valid prior due to the diversity of real-world scenes.

In another approach, haze removal is viewed as an image enhancement problem rather than an image restoration problem. Some methods use a fusion strategy to enhance a single hazy image. In these methods, derived images are acquired through white balancing and contrast enhancement [28–32] or artificial multiple exposures [33], and are then fused through weight maps [28,29] or energy functions [33,34]. Other methods instead draw inspiration from the remarkable capacity of biological visual systems to obtain stable perception in diverse environments. Many haze removal methods [35–37] are based on the famous Retinex theory, which is inspired by human color perception and has been theoretically proved to be a solution to the atmospheric scattering model [38]. Also inspired by human perception, a variational framework has been extended from color correction to haze removal by estimating the mean of haze-free images [39] and adjusting the saturation [40]. Even exploiting only the retinal signal processing mechanisms, which form the earliest stage of visual perception, is enough to significantly enhance hazy images [41]. These types of methods usually provide pleasing visual results. Nonetheless, because they lack a physical basis, there is no guarantee that these results represent the scene radiance truthfully. To relieve this problem, a physics-based optimization constrains the results of image enhancement based methods with the image formation model [42].

In recent years, learning-based haze removal methods have made great progress. For example, learning phases are introduced to obtain the theoretically optimal coefficient [43], combine different haze-relevant features [44], predict the transmission map $t(x)$ [45,46], or even directly generate the scene radiance $J(x)$ from the observed image $I(x)$ [47,48]. Because capturing pairs of the same scene with and without haze is challenging, training sets are commonly produced from synthetic images instead of real-world images. Although synthesis makes data collection much easier, the difficulty of acquiring realistic training data is still the main obstacle for this type of approach.

Although wavelength-dependent attenuation has not been widely considered in image dehazing, it has already been noticed in underwater image restoration [49–51]. Akkaynak and Treibitz observed that in the underwater environment the attenuation coefficient associated with backscatter differs from the coefficient associated with the direct signal [51], and their revised underwater image formation model is an important advance in underwater image restoration. By introducing many dependencies that were previously ignored, their model is quite accurate, but it involves many more parameters. Fortunately, atmospheric attenuation is dominated by scattering, so the same attenuation coefficient can be used for the direct attenuation term and the air light term in the atmospheric image formation model. In this sense, underwater image restoration is a similar but more difficult task than image dehazing.

3. Proposed method

Natural scenes always contain some intrinsic grey pixels that have uniform reflectance across the visible spectrum and therefore look grey under a neutral illumination [52]. The proposed method first estimates an initial transmission map based on the dark channel prior, and then uses these grey pixels to calculate the ratios of the real scattering coefficients in the RGB channels to the scattering coefficient corresponding to the initial transmission map. After that, transmission maps in the RGB channels are obtained from the scattering coefficient ratios and the initial transmission map. In sky areas, where the dark channel values are not close to zero, the dark channel prior tends to underestimate the initial transmission map $t(x)$ and leads to an unnatural result [2]. To avoid that, we introduce a compensation value for such areas. Finally, a light source estimation method is applied to cancel the color bias induced by the light source and obtain the scene reflectance. The flowchart of the proposed method is shown in Fig. 2.

Fig. 2. Flowchart of the proposed haze removal method based on the dark channel prior and grey pixel prior.

3.1 Dark channel prior

According to [2], the dark channel of an RGB image $Image_{c}(x)$ ($c \in {\{r,g,b\}}$) is defined as:

$$Image_{dark}(x) = \mathop{\min}_{y \in {\varOmega(x)}}\big(\mathop{\min}_{c}Image_c(y)\big),$$
where $\varOmega (x)$ is a local patch centered on x.

Based on the dark channel prior, which assumes that the dark channel pixels should approach zero in haze-free scenes, and ignoring the wavelength dependency of the scattering coefficient, the transmission map of a hazy image can be estimated by its dark channel pixels [2]:

$$\begin{aligned}\mathop{\min}_{y \in {\varOmega(x)}}\!\big(\!\mathop{\min}_{c}\!\frac{I_c(y)}{A_c}\big)\! &=\!\mathop{\min}_{y \in {\varOmega(x)}}\!\big(\!\mathop{\min}_{c}\frac{J_c(y)}{A_c}\big)\widetilde{t}(x)\!+\!1\!-\!\widetilde{t}(x),\\ \mathop{\min}_{y \in {\varOmega(x)}}\!&\big(\!\mathop{\min}_{c}\frac{J_c(y)}{A_c}\big) \rightarrow 0,\\ \widetilde{t}(x) = \big(1-\mathop{\min}_{y \in {\varOmega(x)}}\!\big(\!\mathop{\min}_{c}\frac{I_c(y)}{A_c}\big)\big)&/\big(1-\mathop{\min}_{y \in {\varOmega(x)}}\!\big(\!\mathop{\min}_{c}\frac{J_c(y)}{A_c}\big)\big) \approx1-\mathop{\min}_{y \in {\varOmega(x)}}\!\big(\!\mathop{\min}_{c}\frac{I_c(y)}{A_c}\big), \end{aligned}$$
where $A_c$ is the atmospheric light, which approximately equals $I_c$ in the most haze-opaque region [2]. In order to keep the proposed method as simple as possible, although recent methods provide more accurate estimations of the atmospheric light [10,13,23], here we still calculate it through the brightest $10\%$ dark channel pixels following He’s original work [2].
$$A_c \quad \approx \mathop{\overline{I_c(x)}}\limits_{x \ \in \ {\mathop{\max}\limits_{10\%}( I_{dark}(x))}}.$$
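As a concrete illustration (not the authors' released implementation), Eqs. (3)–(5) can be sketched in a few lines of NumPy/SciPy; the image is assumed to be an (H, W, 3) float array in [0, 1], and the 15-pixel patch size and all function names are our own illustrative choices.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Eq. (3): per-pixel minimum over RGB followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark):
    """Eq. (5): mean colour of the brightest 10% dark-channel pixels."""
    mask = dark >= np.quantile(dark, 0.9)
    return img[mask].mean(axis=0)          # shape (3,)

def initial_transmission(img, A, patch=15):
    """Eq. (4): coarse transmission estimated from the normalized dark channel."""
    return 1.0 - dark_channel(img / A, patch)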

3.2 Scattering coefficient ratio estimation by grey pixels

The original dark channel prior works well for white fog. However, because it assumes that the transmission values are equal in the RGB channels ($t(x)=t_r(x)=t_g(x)=t_b(x)$), it cannot handle well the distance-dependent color distortion caused by wavelength-dependent attenuation ($t_r(x)\neq t_g(x)\neq t_b(x)$) [23,25]. Taking into account the scattering coefficient difference among the RGB channels, Eq. (1) is revised to

$$\begin{aligned}I_c(x)&=J_c(x)t_c(x)+A_c(1-t_c(x)),\\ t_c(x)&={{e}^{-\beta_c d(x)}}, \end{aligned}$$
where $c \in {\{r,g,b\}}$.

From Eq. (6), using the medium transmittance $t_c(x)$ as a bridge yields the relationship between the hazy image and the scattering coefficients:

$$\frac{A_c-I_c(x)}{A_c-J_c(x)}=t_c(x)={{e}^{-\beta_c d(x)}}.$$

Images are normalized by atmospheric light as follows:

$${I'}_c(x)=\frac{I_c(x)}{A_c}, \qquad\qquad {J'}_c(x)=\frac{J_c(x)}{A_c}.$$

By dividing the estimated transmission map $\widetilde {t}(x)$ by the medium transmittance $t_{c}(x)$ in RGB channels, there is:

$$\frac{1-{I'_{dark}}(x)}{1-{J'_{dark}}(x)}\cdot\frac{1-{J'}_c(x)}{1-{I'}_c(x)}={{e}^{(\frac{\beta_c}{\widetilde{\beta}}-1)\widetilde{\beta} d(x)}},$$
where $I'_{dark}(x)$ and $J'_{dark}(x)$ are the dark channels corresponding to $I'(x)$ and $J'(x)$, respectively, and $\beta _c$ and $\widetilde {\beta }$ are the scattering coefficients corresponding to $t_c(x)$ and $\widetilde {t}(x)$, respectively. Note that the intensity of $J'_{dark}(x)$ is low but not exactly equal to zero.

Taking the logarithm, Eq. (9) is rewritten as

$$\begin{aligned}\frac{\beta_c}{\widetilde{\beta}}&=1-ln\left(\frac{1-{I'_{dark}}(x)}{1-{J'_{dark}}(x)}\cdot\frac{1-{J'}_c(x)}{1-{I'}_c(x)}\right)/\left({-\widetilde{\beta} d(x)}\right)\\ &=1-ln(\frac{1-{I'_{dark}}(x)}{1-{J'_{dark}}(x)}\cdot\frac{1-{J'}_c(x)}{1-{I'}_c(x)})/ln(\widetilde{t}(x)), \end{aligned}$$
where $ln$ is the natural logarithm function.

If there are some grey pixels that satisfy the following relationship:

$$\begin{aligned}\overline{J' (x)}&=(J'_r (x)+J'_g (x)+J'_b (x))/3,\\ \overline{J' (x)}&=J'_r (x)=J'_g (x)=J'_b (x), \end{aligned}$$
for these pixels, the unknown terms in the numerator and denominator of Eq. (10), i.e., $J'_c(x)$ and $J'_{dark}(x)$, are equal to each other and can be eliminated.

Yang’s statistical analysis has shown that most haze-free scenes contain such grey pixels (grey pixels were found in 95% of 14238 natural haze-free images [52]). According to Eq. (11), for grey pixels, which have equal RGB values ($J'_r (x)=J'_g (x)=J'_b (x)$), $J'_c(x)$ is equal to $J'_{dark}(x)$ and Eq. (10) simplifies to:

$$\frac{\beta_c}{\widetilde{\beta}}=1-ln(\frac{{1-I'_{dark}}(x)}{1-{I'}_c(x)})/ln(\widetilde{t}(x)) \qquad x\in \{\textrm{grey pixels}\}.$$

To eliminate the effect of noise, $\frac {\beta _c}{\widetilde {\beta }}$ is calculated and averaged over all the grey pixels.

According to Eq. (6), the transmission maps in RGB channels are calculated as

$$t_{c}(x) = (\widetilde{t}(x))^\frac{\beta_c}{\widetilde{\beta}}.$$

Here we normalize it to the range $[0,1]$.
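Assuming a boolean mask of grey pixels (obtained as described in Section 3.3), Eqs. (12) and (13) reduce to a few array operations. The sketch below is illustrative only; the small eps guards and the clipping used for normalization are our assumptions rather than details specified by the paper.

import numpy as np

def beta_ratios(I_norm, t_init, grey_mask, eps=1e-6):
    """Eq. (12): average beta_c / beta_tilde over the detected grey pixels.
    I_norm : (H, W, 3) image normalized by the atmospheric light.
    t_init : (H, W) initial transmission from the dark channel prior.
    grey_mask : (H, W) boolean map of grey pixels (Section 3.3)."""
    I_dark = I_norm.min(axis=2)
    num = np.log(np.clip(1.0 - I_dark[grey_mask, None], eps, None)
                 / np.clip(1.0 - I_norm[grey_mask], eps, None))
    den = np.log(np.clip(t_init[grey_mask, None], eps, 1.0 - eps))
    return (1.0 - num / den).mean(axis=0)      # shape (3,)

def channel_transmissions(t_init, ratios):
    """Eq. (13): per-channel maps, kept in [0, 1] as stated in the text."""
    return np.clip(t_init[..., None] ** ratios, 0.0, 1.0)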

Now the problem is how to find these grey pixels, which satisfy Eq. (11), from the normalized captured image $I'_c(x)$. This problem will be solved in the next subsection.

3.3 Grey pixel detection by the greyness index

According to the atmospheric scattering model (Eq. (6)), when the medium transmittance $t_c(x)$ is sufficiently high, the captured image $I'_c(x)$ is dominated by direct attenuation rather than air light. The proposed method assumes that for the pixels with the largest $30\% \enspace \widetilde {t}(x)$, the air light can be ignored and the captured image approximates the direct attenuation:

$$I'_c(x) \approx J'_c(x)t_c(x),\quad x\in \{\mathop{\max}_{30\%}( \widetilde{t}(x))\}, c \in {\{r,g,b\}}.$$

From Eq. (14), the difference between $I'_c(x)$ and $\overline {I' (x)}$ in the logarithmic domain is:

$$\begin{aligned}ln(I'_c(x))-ln(\overline{I' (x)})&\approx ln\left(J'_c(x)t_c(x)\right)-ln\left(\overline{J' (x)}\ \overline{t(x)}\right)\\ &=ln(J'_c(x))-ln(\overline{J'(x)})+ln(t_c(x))-ln(\overline{t(x)}).\\ \end{aligned}$$

Bringing Eq. (11) into Eq. (15), for grey pixels, the term $ln(J'_c(x))-ln(\overline {J'(x)})$ is eliminated:

$$ln(I'_c(x))-ln(\overline{I' (x)})=ln(t_c(x))-ln(\overline{t(x)})\qquad x\in \{\textrm{grey pixels}\}.$$

The proposed method assumes that the scattering coefficients are spatially homogeneous and not extremely high. Based on this assumption, the medium transmittance is nearly constant within a very small local region. Therefore, within this local region, the result of subtracting the average transmission from the transmission maps of the RGB channels should also remain constant. In other words, the local contrast of this subtraction result should be close to zero:

$$LoG\{ln(t_c(x))-ln(\overline{t(x)})\} \approx 0,$$
where $LoG$ is a Laplacian of the Gaussian filter with a size of $5\!\times \!5$ pixels.

Combining Eq. (16) and Eq. (17), for grey pixels, there is:

$$LoG\{ln(I'_c(x))-ln(\overline{I' (x)})\}=LoG\{ln(t_c(x))-ln(\overline{t(x)})\} \approx 0\qquad x\in \{\textrm{grey pixels}\}.$$

Equation (18) implies that the lower $LoG\{ln(I'_c(x))-ln(\overline {I' (x)})\}$ is, the more likely the pixel is to be a grey pixel. So grey pixels can be found by the greyness index (GI) [52,53]:

$$\begin{aligned}GI(x)=&||LoG\{ln(I'_r(x))-ln(\overline{I' (x)})\},\\ &LoG\{ln(I'_b(x))-ln(\overline{I' (x)})\}||\\ x&\in \{\mathop{\max}_{30\%}( \widetilde{t}(x))\}, \end{aligned}$$
where $|| \cdot ||$ is an L2 norm operator. A smaller greyness index value means a higher possibility to be a grey pixel.

According to [53], pixels with the smallest $0.1\%$ GI in the scene are chosen as grey pixels.
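A possible NumPy/SciPy realization of Eqs. (14)–(19) is sketched below. The 5×5 LoG of the paper is approximated here by SciPy's gaussian_laplace with an assumed sigma, the 30% transmission and 0.1% GI quantiles follow the text, and the masking strategy and names are our own illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_laplace

def greyness_index(I_norm, t_init, eps=1e-6):
    """Eq. (19): L2 norm of LoG-filtered log differences (r and b channels),
    evaluated only where the initial transmission is in its top 30%."""
    logI = np.log(np.clip(I_norm, eps, None))
    log_mean = np.log(np.clip(I_norm.mean(axis=2), eps, None))
    gi_r = gaussian_laplace(logI[..., 0] - log_mean, sigma=1.0)
    gi_b = gaussian_laplace(logI[..., 2] - log_mean, sigma=1.0)
    gi = np.hypot(gi_r, gi_b)
    gi[t_init < np.quantile(t_init, 0.7)] = np.inf   # keep top-30% transmission only
    return gi

def grey_pixel_mask(gi, fraction=0.001):
    """Pixels with the smallest 0.1% greyness index are taken as grey pixels;
    here the quantile is taken over the candidate (finite-GI) pixels."""
    thresh = np.quantile(gi[np.isfinite(gi)], fraction)
    return gi <= thresh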

3.4 Scene recovery

From Eq. (1), in classical haze image restoration methods, the scene radiance is:

$$J_c(x)= \frac{I_c(x)-A_c(1-t(x))}{t(x)}.$$

To ensure non-negative scene radiance, $I_c(x)$ should be no less than $A_c(1-t(x))$. However, in sky areas, where there is no shadow and $J_{dark}(x)$ is not small enough, the dark channel prior tends to underestimate the transmission map $t(x)$ [2]. Such an underestimated transmission map $t(x)$ may make $I_c(x)$ less than $A_c(1-t(x))$ and lead to an over-dehazed result. In practice, dark channel prior based methods therefore usually set a lower bound (such as $t_0$ in [2]) to restrict the transmission map. However, this constant value $t_0$ is scene-independent and cannot adapt to every scene. Instead of such a constant, the proposed method uses an adaptive value $\varepsilon$ to compensate for the sky region. This value $\varepsilon$ is scene-adapted and calculated from $A_c$, $I_c(x)$, and $\widetilde {t}(x)$ so that the recovered scene radiance $J_c(x)$ remains non-negative:

$$\begin{aligned}&I_c(x)\geq A_c(1-t(x)),\\ &t(x)=\widetilde{t}(x)+\varepsilon \geq \frac{A_c-I_c(x)}{A_c},\\ &\varepsilon = \max \left(\frac{A_c-\min (I_c(x)+A_c\widetilde{t}(x))}{A_c}\right). \end{aligned}$$

Due to this flaw of the dark channel prior, the estimated transmission map $\widetilde {t}(x)$ is not accurate enough in the sky area. Worse, the potential errors may be exponentially amplified by Eq. (13) and lead to huge errors in $t_{c}(x)$. So in this area, we use the estimated transmission map $\widetilde {t}(x)$ compensated by the additional value $\varepsilon$ to replace the result calculated by Eq. (13):

$$t_{c}(x) = \begin{cases} t_{c}(x), & x\notin \{Sky\}, \\ \widetilde{t}(x)+\varepsilon, & x\in \{Sky\}. \end{cases}$$

K-means clustering [54] is used to decide which pixels belong to the sky area. Because the distance $d(x)$ between the sky and the observer is large, the estimated transmission map $\widetilde {t}(x)$ in this area should be close to zero. So after clustering $\widetilde {t}(x)$ into two groups with initial centroid positions 0 and 1, the pixels in the sky area correspond to the cluster with the lower centroid. A sky map with the same dimensions as $\widetilde {t}(x)$ is then obtained by setting a copy of $\widetilde {t}(x)$ to 1 in the sky area and 0 elsewhere. For a smoother transition, this sky map is refined by a guided filter [17] using the brightness of the input hazy image $(I_r(x)+I_g(x)+I_b(x))/3$ as guidance.
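The sky handling described above (Eqs. (21) and (22)) might be sketched as follows, with scikit-learn standing in for the k-means step and the guided-filter refinement of the sky map omitted for brevity; the feature shaping and variable names are our own assumptions.

import numpy as np
from sklearn.cluster import KMeans

def sky_compensation(I, A, t_init):
    """Eq. (21): scene-adaptive epsilon keeping J_c non-negative everywhere."""
    A = np.asarray(A, float)
    need = (A - (I + A * t_init[..., None]).min(axis=(0, 1))) / A
    return max(need.max(), 0.0)

def sky_mask(t_init):
    """Cluster t~ into two groups (centroids started at 0 and 1);
    the low-transmission cluster is treated as sky."""
    km = KMeans(n_clusters=2, init=np.array([[0.0], [1.0]]), n_init=1)
    labels = km.fit_predict(t_init.reshape(-1, 1)).reshape(t_init.shape)
    return labels == np.argmin(km.cluster_centers_.ravel())

def final_transmission(t_c, t_init, sky, eps_sky):
    """Eq. (22): replace the exponentiated map by t~ + epsilon in the sky."""
    out = t_c.copy()
    out[sky] = np.clip(t_init[sky, None] + eps_sky, 0.0, 1.0)
    return out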

Then the scene radiance $J_c(x)$ is recovered by the proposed method as

$$J_c(x) = \frac{I_c(x)-A_c(1-t_c(x))}{t_c(x)}.$$

From Eq. (23), the haze-free scene radiance $J_c(x)$ is recovered from the captured image $I_c(x)$ and the color bias induced by scattering is eliminated. However, scattering is not the only cause of color bias. Under a non-neutral illumination, the haze-free scene radiance $J_c(x)$ still contains the color cast induced by the illumination [53]. This possible color bias should be corrected by eliminating the illumination:

$$R_{c}(x) =\frac{J_c(x)}{L_c/\sqrt{L^2_r+L^2_g+L^2_b}} \qquad c \in {\{r,g,b\}},$$
where $R_{c}(x)$ is the scene reflectance and $L_c$ is the illumination, which is estimated by the average RGB value of all the grey pixels in $J_c(x)$ [53]:
$$L_{c} =\mathop{\overline{J_c(x)}}_{x\in \{\textrm{grey pixels}\}} \qquad c \in {\{r,g,b\}}.$$
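Putting Eqs. (23)–(25) together, a minimal recovery sketch could look like the following; the small floor on the transmission and the final clipping are our own safeguards, not steps prescribed by the paper.

import numpy as np

def recover_scene(I, A, t_c, grey_mask, t_floor=1e-3):
    """Eq. (23): invert the scattering model per channel, then
    Eqs. (24)-(25): divide out the illumination estimated on grey pixels."""
    t = np.maximum(t_c, t_floor)
    J = (I - A * (1.0 - t)) / t
    L = J[grey_mask].mean(axis=0)                 # Eq. (25)
    L = L / np.linalg.norm(L)                     # chromaticity of the light source
    return np.clip(J / L, 0.0, 1.0)               # Eq. (24), clipped for display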

4. Experiments

In this section, we first demonstrate the contributions of the main components of the proposed method and then compare the results of the proposed method with those of state-of-the-art methods on synthetic and real images. The results of the compared methods were obtained with their released Matlab codes [20,21,23,25–27,29,32,33,38,41] or Python codes [15,46,47]. Most of them are intended to handle not only white foggy images but also colorful hazy images. Evaluated scores for the compared methods are reported in the tables as average and standard deviation.

4.1 Ablation analysis

Ablation studies are conducted in this subsection to analyze the effect of the main components of the proposed method.

4.1.1 Transmission map estimation with scattering coefficient awareness versus global color correction

In a haze-free scene, the color of captured objects is determined by the reflectance of objects, the spectral distribution of the illumination, and the sensor sensitivity. If the spectral response of the sensor is narrow, the image formation [53] is

$$J_c(x) =L_cR_{c}(x),$$
where $J_c(x)$ is the haze-free scene radiance, $R_{c}(x)$ is the scene reflectance, and $L_c$ is the sensor-captured illumination, which is a projection of the light source spectral distribution on the sensor sensitivity. The aim of color correction, or color constancy, is to estimate and eliminate the effect of the illumination chromaticity on the captured images [55,56].

Bringing Eq. (26) into Eq. (6), Eq. (27) is derived:

$$I_c(x)=L_cR_{c}(x)t_c(x)+A_c(1-t_c(x)).$$

From Eq. (27), in haze scenes there are two types of possible color distortion: the light-source-induced color bias and the light-scattering-induced color bias. The light-source-induced color bias, which is caused by the colorful illumination, is spatially uniform and can be corrected by color constancy methods based on Eq. (26). However, the light-scattering-induced color bias, which results from the wavelength dependence of the medium transmittance, also varies with distance (like the transmittance) and is not uniform in space. Therefore, traditional color constancy methods such as Grey World or Shades of Grey, which assume that the color cast is constant over the whole scene, are not an ideal choice for handling this kind of spatially heterogeneous color bias. The proposed method eliminates the light-scattering-induced color bias through transmittance estimation with scattering coefficient awareness, and then applies a global color correction to remove the light-source-induced color bias. The effects of these two blocks are exhibited in Fig. 3.

Fig. 3. Results with/without scattering coefficient awareness and global color correction.

Under dust storm conditions, the number of particles with radii of 500–3000 nm increases by an order of magnitude compared with a clean day [57]. Red light, which has a longer wavelength, is attenuated more strongly by such particles than blue light, i.e., it suffers a greater scattering coefficient $\beta (\lambda )$. According to Eq. (1), at the same distance $d(x)$, a greater scattering coefficient $\beta (\lambda )$ results in a lower transmittance $t(x)$ and finally leads to a higher value of the observed image $I(x)$. As shown in the upper left image of Fig. 3, the input image captured under a dust storm is generally brownish-red because of the smaller transmittance caused by the greater scattering coefficient in the red channel.

Without scattering coefficient awareness, ignoring the difference among the transmission maps of the RGB channels results in an overestimation of the transmission map of the red channel and leads to an uneven red color cast in the image after haze removal. This distortion is especially obvious at a distance. In the first row of Fig. 3, the color veil over the distant buildings is much heavier than that over the nearby pavilion. The residual color bias after a global color correction indicates that such an operation helps little with this non-uniform color distortion.

With scattering coefficient awareness, the proposed method handles this distance-dependent color distortion well. The proposed method calculates the greyness index in the area where the transmittance is high. The heat map of the greyness index is shown in the middle left image of Fig. 3. A darker blue in the heat map indicates a smaller greyness index, which means the corresponding pixel in the scene is more likely to be a grey pixel. Grey pixels detected by the greyness index are highlighted in white, as shown in the bottom left image of Fig. 3. These pixels are mainly concentrated in the white balustrade of the pavilion and the iron gray roof of the scene in Fig. 3. The ratios of the scattering coefficients ($\frac {\beta _c}{\widetilde {\beta }}=[5.47\ 2.89\ 1.00]$) are calculated from these pixels. With this knowledge, the proposed method obtains transmission maps that reflect the wavelength-dependent attenuation. Because the scattering coefficient in the blue channel is smaller than that in the red channel, the blue-channel transmission map is greater than the red-channel one. These differences among the transmission maps of the RGB channels increase with the distance from an area in the scene to the camera. For the pavilion area close to the camera, the values of the RGB transmission maps are approximately equal because of the small distance. In contrast, for the distant building areas, the larger distance amplifies the divergence among the scattering coefficients and leads to bigger differences among the transmission maps of the RGB channels, which makes these areas look dark blue. Based on transmission maps containing the information of this distance-dependent color distortion, the proposed method obtains a haze-free result that contains only the light-source-induced color bias, which is eliminated by the subsequent global color correction.

4.1.2 Sky compensation

A main flaw of the dark channel prior is underestimating the transmission map in areas without shadow, such as sky areas [19]. As shown in the second image of Fig. 4, this underestimation is exponentially amplified by Eq. (13) and leads to an unnatural, noisy result in areas without shadow. So in these areas, a compensation value $\varepsilon$, which is estimated from the scene to keep the recovered scene radiance non-negative, is used to replace the result calculated by Eq. (13). As shown in the fourth image of Fig. 4, although this compensation may result in insufficient haze removal, this is not a big problem because there are usually no visible objects in these regions. Furthermore, keeping some residual haze in distant areas helps with distance perception [2].

Fig. 4. Results with/without sky compensation.

4.2 Proof of concept on synthetic images

In this subsection, the proposed method is evaluated through quantitative comparisons conducted on synthetic white foggy images of the FRIDA2 dataset [21] and on colorized hazy images generated from them, to prove its core concept. The FRIDA2 dataset provides paired haze-free images and depth maps of 66 diverse road scenes. We generated the colorized hazy images using the atmospheric scattering model. Because image restoration methods usually assume that the atmospheric light and the scattering coefficient are distance-independent, the comparisons in this subsection are based on the uniform fog type in FRIDA2. Following [21], we computed the mean absolute difference (MAD) between the non-sky pixels of the image after haze removal and those of the paired haze-free image as a full-reference assessment; lower MAD values correspond to better results. This metric is global and not very sensitive to errors around edges [21]. CIEDE [58], a color difference metric for which smaller values indicate better results, is used to evaluate color restoration. In addition, we also used the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM [59]) as quantitative metrics, for which greater values mean better results.
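For reference, colorized hazy images of the kind used in the following subsections can be synthesized from a haze-free image and its depth map by applying Eq. (6) channel-wise. In the sketch below, the base coefficient beta0 and the atmospheric light are illustrative values of our own choosing, while the per-channel weights correspond to those stated later in the text.

import numpy as np

def synthesize_haze(J, depth, weights, beta0=0.01, A=(1.0, 1.0, 1.0)):
    """Apply Eq. (6) with channel-wise scattering coefficients.
    J : (H, W, 3) haze-free image in [0, 1];  depth : (H, W) in meters.
    weights : e.g. (2.0, 1.6, 1.2) for brownish or (1.2, 1.6, 2.0) for bluish haze."""
    A = np.asarray(A)
    beta_c = beta0 * np.asarray(weights)                    # (3,)
    t_c = np.exp(-beta_c[None, None, :] * depth[..., None]) # per-channel transmission
    return J * t_c + A * (1.0 - t_c)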

4.2.1 White foggy images of FRIDA2

This comparison was conducted using the white foggy images of the FRIDA2 dataset [21]. Table 1 shows the comparison between the proposed method and state-of-the-art methods in terms of MAD, CIEDE, PSNR, and SSIM. To illustrate the degree of synthetic fog, the results without haze removal are also provided in the table, noted as DN. As shown in Table 1, the proposed method ranks 2nd. Its performance is only slightly worse than that of NBPCPA [21], which was originally designed for the traffic scenes in this dataset. This shows that the proposed method adapts to colorized haze without losing its ability to remove white fog.

Table 1. Results on synthetic white foggy images.

4.2.2 Brownish hazy images based on FRIDA2 images

This comparison was conducted on brownish hazy images generated from FRIDA2 images [21]. These images, which look as if they were captured under a dust storm, were synthesized by element-wise multiplying the scattering coefficients of the RGB channels in the white foggy images of FRIDA2 by $[2.0~1.6~1.2]$. As shown in Table 2, there is a significant performance gap between our method and the compared methods. The proposed method obtains much lower MAD and CIEDE values, and much higher PSNR and SSIM values. It ranks 1st in terms of MAD, CIEDE, PSNR, and SSIM, showing robustness across different metrics.

Table 2. Results on synthetic brownish hazy images.

4.2.3 Bluish hazy images based on FRIDA2 images

This comparison was conducted on bluish hazy images generated from FRIDA2 images [21]. These bluish hazy images were synthesized by element-wise multiplying the scattering coefficients of the RGB channels in the white foggy images of FRIDA2 by $[1.2~1.6~2.0]$. As shown in Table 3, consistent with the results on the brownish images, the proposed method still has a significant advantage and ranks 1st, showing robustness to the different scattering of various atmospheric conditions.

Table 3. Results on synthetic bluish hazy images.

4.2.4 Evaluation of color restoration

The proposed method relies on knowledge of the scattering coefficients to handle the potential color bias caused by differences among the transmission maps of the RGB channels. The more accurate the estimation of the scattering coefficients is, the better the proposed method corrects this distance-dependent color bias. Here we evaluated the accuracy of the scattering coefficient estimation by the angular error metric [56], defined as

$$AE = cos^{{-}1}(\frac{(\frac{\beta_c}{\widetilde{\beta}})_{gt}\cdot(\frac{\beta_c}{\widetilde{\beta}})_{est}}{||(\frac{\beta_c}{\widetilde{\beta}})_{gt}||\cdot||(\frac{\beta_c}{\widetilde{\beta}})_{est}||}),$$
where $|| \cdot ||$ is an L2 norm operator, and $(\frac {\beta _c}{\widetilde {\beta }})_{est}$ and $(\frac {\beta _c}{\widetilde {\beta }})_{gt}$ are the vectors of the estimated scattering coefficient ratios and the ground truth scattering coefficient ratios, respectively. A smaller angular error means a smaller difference between the estimated and the ground truth vectors, leading to a better color restored result.
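In code, Eq. (28) is essentially a one-liner over the two three-element ratio vectors; the clipping below is only a numerical safeguard we add, and degrees are assumed as the reported unit.

import numpy as np

def angular_error(ratios_est, ratios_gt):
    """Eq. (28): angle (in degrees) between estimated and ground-truth
    scattering coefficient ratio vectors; smaller is better."""
    a = np.asarray(ratios_est, float)
    b = np.asarray(ratios_gt, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))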

Figure 5 exhibits the accuracy of our scattering coefficient estimation on the synthetic uniform images described in the previous subsections. Angular errors between the ground truth and our estimation are denoted by blue lines. Angular errors without scattering coefficient ratio estimation serve as a reference and are denoted by red lines. In white foggy images, the mean value of our angular error is 0.31, and the median value is 0.24. Although our estimation may introduce some errors when the fog is colorless, these deviations are very small and do not induce visible color bias. Without scattering coefficient ratio estimation, the angular errors in brownish and bluish hazy images are higher than 11. Our estimation sharply decreases the angular errors, eliminating the light-scattering-induced color bias in these images well. In brownish hazy images, the mean value of our angular error is 2.72, and the median value is 2.82. In bluish hazy images, the mean value of our angular error is 2.91, and the median value is 2.67. These sharply reduced angular errors explain the clear advantage of the proposed method shown in Tables 2 and 3. In both cases of white fog and colorized haze, the proposed method estimates the scattering coefficients accurately and provides color-corrected results.

Fig. 5. Evaluation of color restoration. A smaller angular error means a better result.

4.3 Subjective comparison on real images

A subjective comparison is shown in Fig. 6 to visually demonstrate the haze removal ability of the proposed method. Twenty subjects were invited to compare 5 types of images: the original hazy images and the dehazing results of the proposed method, GDCP [23], which incorporates an adaptive color correction, SBTE [15], which uses white balance to remove the color veil, and NGT [32], which corrects color by adaptive histogram equalisation. The subjects were aged from 22 to 28 years with normal visual acuity and ordinary color vision. At a distance of approximately 70 cm, the subjects viewed paired images (for each subject, every combination of two images randomly selected from the 5 image types appeared once in every scene) displayed on a Dell LCD display with default settings and compared them visually. The subjects were allowed to take as much time as needed to select the better of the two displayed images. The selected image scored 1 while the other image scored −1. If a subject judged the two images to be of equal quality, both images scored 0. Average values and standard deviations of the subjective scores are shown in Fig. 6. In this pair-wise psychophysical experiment, our method obtains the highest scores for every scene.

The first scene exhibited in Fig. 6 is a garden under white fog without any color cast. Except for NGT [32], which suffers from color fading, nearly all the methods work well in this condition. Compared with the other methods, the proposed method produces more natural color; for example, the trees in our result are a vivid, layered green while they look slightly bluish in the results of GDCP [23].

Fig. 6. Comparison of haze removal results on five different types of hazy images selected from the Internet. Subjective scores are shown above the images; a larger value means a better result.

Fig. 7. An example of the LIVE-hazy database.

The second scene shown in Fig. 6 is a camp under a sandstorm with a brownish yellow color tone. The result of GDCP [23] looks good overall, but its saturation is a little high, especially in the areas of the ground and tents. In contrast, due to low saturation, NGT [32] produces an image in which everything is grey. Meanwhile, the global color correction of SBTE [15] gives its result a bluish violet tone. In comparison, the proposed method removes the haze more accurately and provides a result with better contrast and saturation.

The third scene displayed in Fig. 6 is an urban landscape under a light green veil due to air pollution. The result of SBTE [15] presents a more even and stronger green cast than that in the original hazy image. The result of GDCP [23] has a purple distortion, which is most obvious in the sky areas. The result of NGT [32] has the same saturation problem. In comparison, the proposed method retains more color details, such as the green trees and the red housetops.

The fourth scene presented in Fig. 6 is a cityscape with a blue color tone. The result of NGT [32] loses nearly all color information. In the result of SBTE [15], the blue color distortion of the input image is turned into brownish red. Compared with the proposed method, the result of GDCP [23] retains more blue residual haze.

The fifth scene in Fig. 6 is a shrub land with an orange color bias. In the result of NGT [32], this color bias is eliminated at the cost of saturation, while the result of SBTE [15] presents nearly the same orange color bias as the input image. In the result of GDCP [23], the nearby grass looks good, but the shrubs slightly further away turn a little bluish. Compared with the other methods, the proposed method produces a clear result with more natural color.

As a whole, we can see from Fig. 6 that the proposed method achieves competitive or superior results in terms of clarity, color shift, and detail recovery in processing images with white fog and colorful haze.

4.4 Objective comparisons on real images

4.4.1 LIVE-hazy dataset

The LIVE-hazy dataset [29], containing 500 real images with varying densities of fog, provides diverse scenes such as mountains, trees, houses, roads, and towns. One of them is shown in Fig. 7. Most of the well-known images used in previous works are included in this dataset [29]. The dataset also provides a prediction of fog density, the Fog Aware Density Evaluator (FADE), for which a lower value means less residual haze; this serves as a no-reference assessment for haze removal. The prediction is based on statistical features of natural hazy and haze-free images and is highly consistent with human judgments [29]. Besides that, we also used BRISQUE [60] and NIQE [61], for which lower values mean better results, and CDQI [62], for which a greater value means a better result, as quality metrics. The scores evaluated on the original hazy images (noted as DN) are also provided in the table but do not participate in the ranking.

As shown in Table 4, in terms of FADE [29], the proposed method ranks 1st; DEFADE [29], an enhancement-based method trained on this dataset with FADE, ranks 2nd; and DMSR [38], a Retinex-based method, ranks 3rd. In terms of BRISQUE [60], a no-reference image spatial quality evaluator, the proposed method ranks 1st, MOF [20] ranks 2nd, and OCM [26] ranks 3rd. In terms of CDQI [62], a contrast evaluator based on natural scene statistics, NGT [32] ranks 1st, HRDCP [25] ranks 2nd, and the proposed method ranks 3rd. In terms of NIQE [61], a no-reference evaluator similar to BRISQUE [60] but without features from distorted images, MOF [20] ranks 1st, DNET [46] ranks 2nd, and the proposed method ranks 3rd. It is worth noting that the NIQE score of DN is better than those of most compared methods except MOF [20]. Among all compared methods, the proposed method is the only one that ranks in the top 3 for all metrics. Given the large diversity of scenes in this dataset, these results indicate the strong adaptation of the proposed method to the complex real world. Whether the fog density is light or heavy, the proposed method handles white fog and colorized haze well.

Table 4. Results on the LIVE-hazy database.

4.4.2 O-HAZE dataset

The O-HAZE dataset [63] contains paired high resolution images of 45 outdoor scenes with and without haze generated by a haze machine. As with the comparisons on synthetic images, we used MAD, PSNR, CIEDE [58], and SSIM [59] as full-reference image quality assessment metrics. To accelerate computation, images in this dataset were scaled to a quarter of their original size. Note that although reducing the image size might decrease the amount of information in the scene and therefore change the final results, the comparison is still fair because all the compared methods were evaluated on the same images with the same sizes.

As shown in Table 5, the proposed method ranks 1st in terms of MAD, PSNR, and SSIM, and 2nd in terms of CIEDE, showing robustness in different metrics. AMEF [33] ranks 2nd in terms of MAD and PSNR, and 3rd in terms of SSIM and CIEDE. DMSR [38] ranks 3rd in terms of MAD, PSNR, and SSIM, and 4th in terms of CIEDE.

Table 5. Results on the O-HAZE database.

4.5 Computation time

Table 6 shows the computation time of the proposed method on images of different sizes. The computation time of two recent methods based on the dark channel prior is also listed in the table as a reference. The tests were conducted on a personal computer (Intel i7-6700K 4.0 GHz CPU with 16 GB RAM, Windows 7 Pro 64) in the MATLAB R2019b environment. The computation time of the proposed method increases from 0.5 s on a $360\times 480$ pixel image to 8.4 s on a $1440\times 1920$ pixel image, almost linearly with the number of pixels.

Table 6. Computation time on images with different sizes.

5. Discussion and conclusion

In recent years, color bias in hazy images has attracted more and more attention. Some haze removal methods make use of traditional color correction methods, such as Grey World or Shades of Grey, to globally adjust the weights of the RGB channels. These color corrections eliminate the light-source-induced color bias well [23,25]. However, the light-scattering-induced color bias, which varies with distance, requires a pixel-wise correction in local areas rather than a global adjustment of the whole image. To address this problem, the proposed method uses scattering coefficient ratios estimated from grey pixels to calculate transmission maps for the RGB channels separately. In small local regions where the medium transmittance is constant and high, the image formation models of light-scattering hazy images and of light-source-induced color bias images are similar enough to find grey pixels by the greyness index. Besides that, the proposed method compensates for the transmittance underestimation of the dark channel prior in areas without shadow to avoid unnatural results. Before the final output, an additional global color correction is applied to eliminate the potential light-source-induced color bias. The proposed method, which is based on the image formation model and therefore physically plausible, provides superior results on synthetic and real images with white fog and colorized haze compared with state-of-the-art methods.

The proposed method is, to the best of our knowledge, the first haze removal method that takes wavelength-dependent attenuation into account. Although Berman et al. already noticed that attenuation varies with wavelength in their underwater image restoration method [49], handling such wavelength-dependent attenuation in terrestrial scenes is much harder than in underwater scenes. The main difference is that in underwater images the attenuation ratios of the color channels can be chosen from an existing library of water types using the grey-world assumption. In terrestrial images, however, there is no such library providing extra information, so the attenuation ratios have to be calculated directly from the captured image itself. Moreover, the grey-world assumption, which is used by Berman et al. to select the right attenuation ratios, is simple but not robust enough [49]; large areas of the same color, such as grassland, undermine this assumption. Another work similar to the proposed method is GDCP, which generalizes the dark channel prior and uses an adaptive color correction to remove color casts while restoring contrast [23]. However, in GDCP the color correction is performed by adjusting the ambient light. This global operator is suitable for the light-source-induced color bias but cannot handle the light-scattering-induced color bias as satisfactorily as the proposed method. Tsai et al. also preserved sky areas to solve the over-enhancement drawback of the dark channel prior [19]. Different from the proposed method, their method detects the sky areas with a guided filter. The essence of this detection is to look for a smooth region. However, a smooth region is not necessarily a sky region, which can be a problem: flat surfaces may also be recognized as sky areas and handled incorrectly.

The proposed method estimates the scattering coefficient ratios through grey pixels detected by the greyness index, based on the assumption that the medium transmittance is close to constant, at least in a small local area. When the scattering coefficients are spatially heterogeneous, the medium transmittance changes quickly even in local regions, which results in inaccurate detection of grey pixels and decreased performance of the proposed method. Extending the proposed method from homogeneous to heterogeneous haze will be our next work. In addition, extending the proposed method to the underwater environment, where there is not only scattering but also absorption [49–51], is another direction of our future work.

Funding

Key Area R&D Program of Guangdong Province (2018B030338001); National Natural Science Foundation of China (61806041, 62076055).

Acknowledgments

We thank LetPub for its linguistic assistance during the preparation of this manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Refs. [21,29,63].

References

1. R. T. Tan, “Visibility in bad weather from a single image,” in CVPR, (IEEE, 2008), pp. 1–8.

2. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

3. L. Yu, S. You, M. S. Brown, and R. T. Tan, “Haze visibility enhancement: A survey and quantitative benchmarking,” Comput. Vis. Image Underst. 165, 1–16 (2017). [CrossRef]  

4. D. Berman, “Seeing farther,” Ph.D. thesis, Tel Aviv University (2018).

5. N. S. Kopeika, S. Solomon, and Y. Gencay, “Wavelength variation of visible and near-infrared resolution through the atmosphere: dependence on aerosol and meteorological conditions,” J. Opt. Soc. Am. 71(7), 892–901 (1981). [CrossRef]  

6. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Machine Intell. 25(6), 713–724 (2003). [CrossRef]  

7. J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in ICCV, (IEEE, 2009), pp. 2201–2208.

8. R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph. 34(1), 1–14 (2014). [CrossRef]  

9. M. Sulami, I. Glatzer, R. Fattal, and M. Werman, “Automatic recovery of the atmospheric light in hazy images,” in ICCP, (IEEE, 2014), pp. 1–11.

10. J.-H. Kim, W.-D. Jang, J.-Y. Sim, and C.-S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” J. Vis. Commun. Image Represent. 24(3), 410–425 (2013). [CrossRef]  

11. L. He, J. Zhao, N. Zheng, and D. Bi, “Haze removal using the difference- structure-preservation prior,” IEEE Trans. on Image Process. 26(3), 1063–1075 (2017). [CrossRef]  

12. S. Mandal and A. N. Rajagopalan, “Local proximity for enhanced visibility in haze,” IEEE Trans. on Image Process. 29, 2478–2491 (2020). [CrossRef]  

13. D. Berman, T. Treibitz, and S. Avidan, “Single image dehazing using haze-lines,” IEEE Trans. Pattern Anal. Machine Intell. 42(3), 720–734 (2020). [CrossRef]  

14. C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A fast semi-inverse approach to detect and remove the haze from a single image,” in ACCV, (Springer, 2010), pp. 501–514.

15. S. E. Kim, T. H. Park, and I. K. Eom, “Fast single image dehazing using saturation based transmission map estimation,” IEEE Trans. on Image Process. 29, 1985–1998 (2020). [CrossRef]  

16. T. M. Bui and W. Kim, “Single image dehazing using color ellipsoid prior,” IEEE Trans. on Image Process. 27(2), 999–1009 (2018). [CrossRef]  

17. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Machine Intell. 35(6), 1397–1409 (2013). [CrossRef]  

18. C.-H. Yeh, L.-W. Kang, M.-S. Lee, and C.-Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” Opt. Express 21(22), 27127–27141 (2013). [CrossRef]  

19. C.-C. Tsai, C.-Y. Lin, and J.-I. Guo, “Dark channel prior based video dehazing algorithm with sky preservation and its embedded system realization for adas applications,” Opt. Express 27(9), 11877–11901 (2019). [CrossRef]  

20. D. Zhao, L. Xu, Y. Yan, J. Chen, and L.-Y. Duan, “Multi-scale optimal fusion model for single image dehazing,” Signal Processing: Image Communication 74, 253–265 (2019). [CrossRef]  

21. J.-P. Tarel, N. Hautiere, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,” IEEE Intelligent Transportation Systems Magazine 4(2), 6–20 (2012). [CrossRef]  

22. C. Chen, M. N. Do, and J. Wang, “Robust image and video dehazing with visual artifact suppression via gradient residual minimization,” in ECCV, (Springer, 2016), pp. 576–591.

23. Y. Peng, K. Cao, and P. C. Cosman, “Generalization of the dark channel prior for single image restoration,” IEEE Trans. on Image Process. 27(6), 2856–2868 (2018). [CrossRef]  

24. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and M. Sbetr, “Color channel transfer for image dehazing,” IEEE Signal Process. Lett. 26(9), 1413–1417 (2019). [CrossRef]  

25. Z. Shi, Y. Feng, M. Zhao, E. Zhang, and L. He, “Let you see in sand dust weather: A method based on halo-reduced dark channel prior dehazing for sand-dust image enhancement,” IEEE Access 7, 116722–116733 (2019). [CrossRef]  

26. Y. Yang, C. Zhang, L. Liu, G. Chen, and H. Yue, “Visibility restoration of single image captured in dust and haze weather conditions,” Multidim Syst Sign Process 31(2), 619–633 (2020). [CrossRef]  

27. Y. Cheng, Z. Jia, H. Lai, J. Yang, and N. K. Kasabov, “A fast sand-dust image enhancement algorithm by blue channel compensation and guided image filtering,” IEEE Access 8, 196690–196699 (2020). [CrossRef]  

28. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Trans. on Image Process. 22(8), 3271–3282 (2013). [CrossRef]  

29. L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of perceptual fog density and perceptual image defogging,” IEEE Trans. on Image Process. 24(11), 3888–3901 (2015). [CrossRef]  

30. S. Wang, W. Cho, J. Jang, M. A. Abidi, and J. Paik, “Contrast-dependent saturation adjustment for outdoor image enhancement,” J. Opt. Soc. Am. A 34(1), 7–17 (2017). [CrossRef]  

31. J. Vazquez-Corral, A. Galdran, P. Cyriac, and M. Bertalmío, “A fast image dehazing method that does not introduce color artifacts,” J. Real-Time Image Proc. 17(3), 607–622 (2020). [CrossRef]  

32. Z. Shi, Y. Feng, M. Zhao, E. Zhang, and L. He, “Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand–dust image enhancement,” IET Image Processing 14(4), 747–756 (2020). [CrossRef]  

33. A. Galdran, “Image dehazing by artificial multiple-exposure image fusion,” Signal Processing 149, 135–147 (2018). [CrossRef]  

34. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmio, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016). [CrossRef]  

35. J. Zhou and F. Zhou, “Single image dehazing motivated by retinex theory,” in IMSNA, (IEEE, 2013), pp. 243–247.

36. D. Nair, P. A. Kumar, and P. Sankaran, “An effective surround filter for image dehazing,” in ICACCI, (ACM, 2014), p. 20.

37. W. Wei, C. He, and X. G. Xia, “A constrained total variation model for single image dehazing,” Pattern Recognition 80, 196–209 (2018). [CrossRef]  

38. A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in CVPR, (IEEE, 2018), pp. 8212–8221.

39. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “A variational framework for single image dehazing,” in ECCV, (Springer, 2014), pp. 259–270.

40. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Enhanced variational image dehazing,” SIAM J. Imaging Sci. 8(3), 1519–1546 (2015). [CrossRef]  

41. X.-S. Zhang, S.-B. Gao, C.-Y. Li, and Y.-J. Li, “A retina inspired model for enhancing visibility of hazy images,” Front. Comput. Neurosci. 9, 151 (2015). [CrossRef]  

42. J. Vazquez-Corral, G. D. Finlayson, and M. Bertalmío, “Physical-based optimization for non-physical image dehazing methods,” Opt. Express 28(7), 9327–9339 (2020). [CrossRef]  

43. Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Trans. on Image Process. 24(11), 3522–3533 (2015). [CrossRef]  

44. K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in CVPR, (IEEE, 2014), pp. 2995–3000.

45. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in ECCV, (Springer, 2016), pp. 154–169.

46. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An end-to-end system for single image haze removal,” IEEE Trans. on Image Process. 25(11), 5187–5198 (2016). [CrossRef]  

47. B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “AOD-Net: All-in-one dehazing network,” in ICCV, (IEEE, 2017), pp. 4770–4778.

48. J. L. Yin, Y. C. Huang, B. H. Chen, and S. Z. Ye, “Color transferred convolutional neural networks for image dehazing,” IEEE Trans. Circuits Syst. Video Technol. 30(11), 3957–3967 (2020). [CrossRef]  

49. D. Berman, D. Levy, S. Avidan, and T. Treibitz, “Underwater single image color restoration using haze-lines and a new quantitative dataset,” IEEE Trans. Pattern Anal. Machine Intell. (2020).

50. Y. Liu, S. Rong, X. Cao, T. Li, and B. He, “Underwater single image dehazing using the color space dimensionality reduction prior,” IEEE Access 8, 91116–91128 (2020). [CrossRef]  

51. D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in CVPR, (IEEE, 2018), pp. 6723–6732.

52. K.-F. Yang, S.-B. Gao, and Y.-J. Li, “Efficient illuminant estimation for color constancy using grey pixels,” in CVPR, (IEEE, 2015), pp. 2254–2263.

53. Y. Qian, J.-K. Kamarainen, J. Nikkanen, and J. Matas, “On finding gray pixels,” in CVPR, (IEEE, 2019), pp. 8062–8070.

54. D. Arthur and S. Vassilvitskii, “k-means++: The advantages of careful seeding,” in SODA, (ACM, 2007), pp. 1027–1035.

55. S.-B. Gao, K.-F. Yang, C.-Y. Li, and Y.-J. Li, “Color constancy using double-opponency,” IEEE Trans. Pattern Anal. Machine Intell. 37(10), 1973–1985 (2015). [CrossRef]  

56. X.-S. Zhang, S.-B. Gao, R.-X. Li, X.-Y. Du, C.-Y. Li, and Y.-J. Li., “A retinal mechanism inspired color constancy model,” IEEE Trans. on Image Process. 25(3), 1219–1232 (2016). [CrossRef]  

57. K. Ardon-Dryer and Z. Levin, “Ground-based measurements of immersion freezing in the eastern Mediterranean,” Atmos. Chem. Phys. 14(10), 5217–5231 (2014). [CrossRef]  

58. G. Sharma, W. Wu, and E. N. Dalal, “The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations,” Color Res. Appl. 30(1), 21–30 (2005). [CrossRef]  

59. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

60. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Trans. on Image Process. 21(12), 4695–4708 (2012). [CrossRef]  

61. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). [CrossRef]  

62. Y. Fang, K. Ma, Z. Wang, W. Lin, Z. Fang, and G. Zhai, “No-reference quality assessment of contrast-distorted images based on natural scene statistics,” IEEE Signal Process. Lett. 22(7), 1 (2014). [CrossRef]  

63. C. O. Ancuti, C. Ancuti, R. Timofte, and C. De Vleeschouwer, “O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images,” in CVPRW, (IEEE, 2018), pp. 754–762.

Data availability

Data underlying the results presented in this paper are available in Refs. [21,29,63].




Figures (7)

Fig. 1. Colorful haze caused by the wavelength-dependent scattering. (a) Relative scattering coefficients versus the light wavelength and particle sizes (adopted from [5]). (b) A color-distorted hazy image and the corrected image produced by the proposed method.
Fig. 2. Flowchart of the proposed haze removal method based on the dark channel prior and grey pixel prior.
Fig. 3. Results with/without scattering coefficient awareness and global color correction.
Fig. 4. Results with/without sky compensation.
Fig. 5. Evaluation of color restoration. A smaller angular error means a better result.
Fig. 6. Comparison of haze removal results on five different types of hazy images selected from the Internet. Subjective scores are shown above the images, a larger value meaning a better result.
Fig. 7. An example of the LIVE-hazy database.

Tables (6)

Table 1. Results on synthetic white foggy images.
Table 2. Results on synthetic brownish hazy images.
Table 3. Results on synthetic bluish hazy images.
Table 4. Results on the LIVE-hazy database.
Table 5. Results on the O-HAZE database.
Table 6. Computation time on images with different sizes.

Equations (28)


$$I(x)=J(x)t(x)+A(1-t(x)),\qquad t(x)={e}^{-\beta(\lambda)d(x)},$$
$$\beta(\lambda)\propto\tfrac{1}{{\lambda}^{\gamma}},$$
$$Image^{dark}(x)=\min_{y\in\Omega(x)}\left(\min_{c}Image_{c}(y)\right),$$
$$\begin{aligned}\min_{y\in\Omega(x)}\left(\min_{c}\frac{I_{c}(y)}{A_{c}}\right)&=\min_{y\in\Omega(x)}\left(\min_{c}\frac{J_{c}(y)}{A_{c}}\right)\tilde{t}(x)+1-\tilde{t}(x),\\ \min_{y\in\Omega(x)}\left(\min_{c}\frac{J_{c}(y)}{A_{c}}\right)&\approx 0,\\ \tilde{t}(x)&=\left(1-\min_{y\in\Omega(x)}\left(\min_{c}\frac{I_{c}(y)}{A_{c}}\right)\right)\Big/\left(1-\min_{y\in\Omega(x)}\left(\min_{c}\frac{J_{c}(y)}{A_{c}}\right)\right)\approx 1-\min_{y\in\Omega(x)}\left(\min_{c}\frac{I_{c}(y)}{A_{c}}\right),\end{aligned}$$
$$A_{c}\approx\overline{I_{c}(x)},\qquad x\in\max_{10\%}\left(I^{dark}(x)\right).$$
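As a worked illustration of this dark-channel step, the following is a minimal NumPy/SciPy sketch; the function names, the 15-pixel patch size, and the input convention (an RGB float image in [0, 1]) are illustrative choices rather than details taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Pixel-wise minimum over RGB followed by a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dark, top_frac=0.10):
    """A_c: mean colour of the brightest top_frac of dark-channel pixels."""
    n = max(1, int(top_frac * dark.size))
    idx = np.argsort(dark.ravel())[-n:]            # locations of the haziest patches
    return img.reshape(-1, 3)[idx].mean(axis=0)    # one value per RGB channel

def initial_transmission(img, A, patch=15):
    """t~(x) = 1 - dark channel of the A-normalised image.
    (DCP implementations often scale the dark channel by ~0.95 to keep a trace of haze.)"""
    return 1.0 - dark_channel(img / A, patch)
```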
$$I_{c}(x)=J_{c}(x)t_{c}(x)+A_{c}(1-t_{c}(x)),\qquad t_{c}(x)={e}^{-\beta_{c}d(x)},$$
$$\frac{A_{c}-I_{c}(x)}{A_{c}-J_{c}(x)}=t_{c}(x)={e}^{-\beta_{c}d(x)}.$$
$$I'_{c}(x)=\frac{I_{c}(x)}{A_{c}},\qquad J'_{c}(x)=\frac{J_{c}(x)}{A_{c}}.$$
$$\frac{1-I'^{dark}(x)}{1-J'^{dark}(x)}\cdot\frac{1-J'_{c}(x)}{1-I'_{c}(x)}={e}^{\left(\frac{\beta_{c}}{\tilde{\beta}}-1\right)\tilde{\beta}d(x)},$$
$$\frac{\beta_{c}}{\tilde{\beta}}=1-\ln\left(\frac{1-I'^{dark}(x)}{1-J'^{dark}(x)}\cdot\frac{1-J'_{c}(x)}{1-I'_{c}(x)}\right)\Big/\left(-\tilde{\beta}d(x)\right)=1-\ln\left(\frac{1-I'^{dark}(x)}{1-J'^{dark}(x)}\cdot\frac{1-J'_{c}(x)}{1-I'_{c}(x)}\right)\Big/\ln\left(\tilde{t}(x)\right),$$
$$\overline{J(x)}=\left(J_{r}(x)+J_{g}(x)+J_{b}(x)\right)/3,\qquad \overline{J(x)}=J_{r}(x)=J_{g}(x)=J_{b}(x),$$
$$\frac{\beta_{c}}{\tilde{\beta}}=1-\ln\left(\frac{1-I'^{dark}(x)}{1-I'_{c}(x)}\right)\Big/\ln\left(\tilde{t}(x)\right),\qquad x\in\{\text{grey pixels}\}.$$
$$t_{c}(x)=\left(\tilde{t}(x)\right)^{\frac{\beta_{c}}{\tilde{\beta}}}.$$
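The sketch below, under our own simplifications, shows how the channel-wise transmission might be computed from these ratios: the dark channel of the atmospheric-light-normalised image is taken pixel-wise (no patch), and the per-channel ratio is aggregated over the detected grey pixels with a median. The paper does not prescribe these particular choices.

```python
import numpy as np

def scattering_ratios(I_norm, t_init, grey_mask, eps=1e-6):
    """beta_c / beta~ per channel: 1 - ln((1 - I'^dark) / (1 - I'_c)) / ln(t~), over grey pixels.
    I_norm is the atmospheric-light-normalised image I_c / A_c."""
    dark = I_norm.min(axis=2)                          # pixel-wise dark channel of I'
    ln_t = np.log(np.clip(t_init, eps, 1.0 - eps))     # avoid division by zero at t~ = 1
    ratios = []
    for c in range(3):
        r = 1.0 - np.log((1.0 - dark + eps) / (1.0 - I_norm[..., c] + eps)) / ln_t
        ratios.append(np.median(r[grey_mask]))         # robust aggregate over the grey pixels
    return np.array(ratios)

def channelwise_transmission(t_init, ratios, eps=1e-6):
    """Fine maps t_c(x) = t~(x) ** (beta_c / beta~) for each RGB channel."""
    t = np.clip(t_init, eps, 1.0)
    return np.stack([t ** r for r in ratios], axis=2)
```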
$$I_{c}(x)\approx J_{c}(x)t_{c}(x),\qquad x\in\left\{\max_{30\%}\left(\tilde{t}(x)\right)\right\},\ c\in\{r,g,b\}.$$
$$\ln\left(I_{c}(x)\right)-\ln\left(\overline{I(x)}\right)\approx\ln\left(J_{c}(x)t_{c}(x)\right)-\ln\left(\overline{J(x)}\,\overline{t(x)}\right)=\ln\left(J_{c}(x)\right)-\ln\left(\overline{J(x)}\right)+\ln\left(t_{c}(x)\right)-\ln\left(\overline{t(x)}\right).$$
$$\ln\left(I_{c}(x)\right)-\ln\left(\overline{I(x)}\right)=\ln\left(t_{c}(x)\right)-\ln\left(\overline{t(x)}\right),\qquad x\in\{\text{grey pixels}\}.$$
$$LoG\left\{\ln\left(t_{c}(x)\right)-\ln\left(\overline{t(x)}\right)\right\}\approx 0,$$
$$LoG\left\{\ln\left(I_{c}(x)\right)-\ln\left(\overline{I(x)}\right)\right\}=LoG\left\{\ln\left(t_{c}(x)\right)-\ln\left(\overline{t(x)}\right)\right\}\approx 0,\qquad x\in\{\text{grey pixels}\}.$$
$$GI(x)=\left\|\,LoG\left\{\ln\left(I_{r}(x)\right)-\ln\left(\overline{I(x)}\right)\right\},\ LoG\left\{\ln\left(I_{b}(x)\right)-\ln\left(\overline{I(x)}\right)\right\}\right\|,\qquad x\in\left\{\max_{30\%}\left(\tilde{t}(x)\right)\right\},$$
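One possible implementation of this grey-pixel detector is sketched below: the Laplacian-of-Gaussian of the log-chromaticity of the red and blue channels is combined into a grey index, evaluated only on the 30% of pixels with the highest initial transmission. The LoG scale (sigma) and the fraction of candidates kept as grey pixels are our own guesses, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def grey_index(img, t_init, sigma=1.0, keep_frac=0.30, eps=1e-6):
    """GI(x): norm of LoG{ln I_r - ln I_mean} and LoG{ln I_b - ln I_mean},
    restricted to the keep_frac highest-transmission pixels."""
    mean = img.mean(axis=2) + eps
    log_r = np.log(img[..., 0] + eps) - np.log(mean)
    log_b = np.log(img[..., 2] + eps) - np.log(mean)
    gi = np.hypot(gaussian_laplace(log_r, sigma), gaussian_laplace(log_b, sigma))
    candidates = t_init >= np.quantile(t_init, 1.0 - keep_frac)  # max-30% transmission region
    return np.where(candidates, gi, np.inf), candidates

def grey_pixel_mask(gi, candidates, grey_frac=0.01):
    """Keep the grey_frac of candidate pixels with the smallest grey index as grey pixels."""
    thr = np.quantile(gi[candidates], grey_frac)
    return (gi <= thr) & candidates
```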
$$J_{c}(x)=\frac{I_{c}(x)-A_{c}\left(1-t(x)\right)}{t(x)}.$$
$$I_{c}(x)\geq A_{c}\left(1-t(x)\right),\qquad t(x)=\tilde{t}(x)+\varepsilon\geq\frac{A_{c}-I_{c}(x)}{A_{c}},\qquad \varepsilon=\max\left(\frac{A_{c}-\min\left(I_{c}(x)+A_{c}\tilde{t}(x)\right)}{A_{c}}\right).$$
$$t'_{c}(x)=\begin{cases}t_{c}(x), & x\notin\{Sky\},\\ \tilde{t}(x)+\varepsilon, & x\in\{Sky\},\end{cases}$$
$$J_{c}(x)=\frac{I_{c}(x)-A_{c}\left(1-t'_{c}(x)\right)}{t'_{c}(x)}.$$
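A rough sketch of the sky-compensated recovery under these equations follows; how the sky mask itself is obtained and the lower clamp t_floor on the transmission are assumptions on our part.

```python
import numpy as np

def recover_with_sky_compensation(img, A, t_c, t_init, sky_mask, t_floor=0.1):
    """J_c = (I_c - A_c (1 - t)) / t, with t_c outside the sky and t~ + eps inside it."""
    # eps lifts t~ just enough that I_c >= A_c (1 - t) holds, so sky pixels are not over-darkened
    eps = max(float(np.max((A - img - t_init[..., None] * A) / A)), 0.0)
    t_sky = np.clip(t_init + eps, t_floor, 1.0)
    t_final = np.where(sky_mask[..., None], t_sky[..., None], t_c)
    return (img - A * (1.0 - t_final)) / np.maximum(t_final, t_floor)
```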
$$R_{c}(x)=\frac{J_{c}(x)}{L_{c}\Big/\sqrt{L_{r}^{2}+L_{g}^{2}+L_{b}^{2}}},\qquad c\in\{r,g,b\},$$
$$L_{c}=\overline{J_{c}(x)},\qquad x\in\{\text{grey pixels}\},\ c\in\{r,g,b\}.$$
$$J_{c}(x)=L_{c}R_{c}(x),$$
$$I_{c}(x)=L_{c}R_{c}(x)t_{c}(x)+A_{c}\left(1-t_{c}(x)\right).$$
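A minimal sketch of the global correction: the illuminant colour is taken as the mean of the dehazed grey pixels and its direction is divided out, von-Kries style. The final rescaling to [0, 1] is our own choice for display and is not specified by the equations above.

```python
import numpy as np

def global_color_correction(J, grey_mask, eps=1e-6):
    """Divide out the grey-pixel illuminant direction L_c / ||L|| from the dehazed image."""
    L = J[grey_mask].mean(axis=0) + eps            # L_c estimated from the grey pixels
    R = J / (L / np.linalg.norm(L))                # reflectance-like image, colour bias removed
    return np.clip(R / max(R.max(), eps), 0.0, 1.0)
```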
$$AE=\cos^{-1}\left(\frac{\left(\frac{\beta_{c}}{\tilde{\beta}}\right)_{gt}\cdot\left(\frac{\beta_{c}}{\tilde{\beta}}\right)_{est}}{\left\|\left(\frac{\beta_{c}}{\tilde{\beta}}\right)_{gt}\right\|\,\left\|\left(\frac{\beta_{c}}{\tilde{\beta}}\right)_{est}\right\|}\right),$$
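For reference, the angular error between the ground-truth and estimated ratio vectors can be evaluated as below; reporting the angle in degrees is our assumption, and the ratio vectors in the usage line are made-up numbers.

```python
import numpy as np

def angular_error(ratio_gt, ratio_est):
    """AE = arccos of the normalised dot product of the two beta_c / beta~ vectors."""
    cos = np.dot(ratio_gt, ratio_est) / (np.linalg.norm(ratio_gt) * np.linalg.norm(ratio_est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# angular_error(np.array([1.2, 1.0, 0.8]), np.array([1.15, 1.0, 0.85]))  # roughly 2.4 degrees
```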