Abstract

Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of the scattering and attenuation of light caused by suspended particles; image dehazing has therefore attracted considerable research attention. Current polarization-based dehazing algorithms rely strongly on the presence of a "sky area", so the selection of model parameters is susceptible to interference from high-brightness objects and strong light sources; in addition, the restored images are noisy. To solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a valuation area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are uncorrelated, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance, obtained by setting the right-hand side of the equation to zero. The hazy image is then dehazed. Finally, a filtering denoising algorithm, which combines the polarization difference information with block-matching and 3D (BM3D) filtering, is designed to smoothly denoise the image. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models.
Moreover, except in scenes containing specular objects, the dehazed images are superior to those obtained by the methods of Tarel, Fattal, Ren, and Berman according to the no-reference quality assessment (NRQA), blind/referenceless image spatial quality evaluator (BRISQUE), blind anisotropic quality index (AQI), and e criteria.

© 2017 Optical Society of America

1. Introduction

The dehazing of hazy images taken under foggy weather conditions has attracted considerable attention from researchers, particularly for applications in consumer photography, object detection, and surveillance. The present polarization-based dehazing model is simple and easy to realize, and it can restore clear-day visibility of scenes in an excellent manner; however, the approach still suffers from a few problems: 1) an image segmentation method is used to select the pixel region with the highest luminance value as an estimate of the airlight radiance at infinite distance A∞. However, when there is no sky area in the image, or when high-brightness objects and strong light sources exist in the image, this estimate of A∞ is inaccurate, which subsequently degrades the dehazed image. 2) The calculation of the degree of polarization of the airlight (DPA) relies heavily on the sky area or similar objects in the scene, and an inaccurate selection of the sky area, or its low quality, yields errors in the degree of polarization. 3) The image restored by the polarization-based dehazing algorithm is noisy, which greatly degrades its quality.

In view of the above problems, this paper proposes a polarization-based dehazing algorithm that does not rely on the sky area. Through our observations of hazy images and analysis of the polarization-based dehazing model, we find that the polarization difference image ΔI contains almost none of the direct transmission light component D of the scene; moreover, the farther the scene is from the camera, the weaker its direct transmission light, the more airlight is scattered into the light path, and the higher the intensity of the polarization difference image. In this context, we choose the highest-brightness area of the polarization difference image to estimate A∞, without segmenting the sky area, and the algorithm's performance is not affected by high-brightness objects and strong light sources. In addition, to solve the problem that the DPA p relies heavily on the sky area or similar objects in the scene, we use the global features of the image and assume that the airlight A and object radiance Lobject are uncorrelated over the entire image. We then obtain the DPA by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance, with its right-hand side set to zero. Finally, according to the physical polarization-based dehazing model, we design a filtering denoising algorithm combining the polarization difference and the block-matching and 3D (BM3D) filter, tailored to the hazy image's noise model. The proposed algorithm removes the strong dependence of current polarization-based dehazing algorithms on the sky area; it requires neither complex modeling nor stringent a priori conditions, and consequently, its application scope is greatly enlarged.
Furthermore, we propose an algorithm that uses depth information to suppress the noise generated by the image restoration process, yielding dehazed images that are smoother and clearer than those obtained with traditional polarization-based dehazing methods.

The remainder of this article is organized as follows. Section 2 introduces related work in the field of image dehazing. Section 3 provides a detailed description of our proposed non-sky polarization-based dehazing algorithm, and Section 4 reports on the experiments carried out to test our algorithm. In Section 5, we discuss the limitations of our method from a theoretical perspective. Finally, we discuss our approach and summarize our study in Section 6.

2. Related works

2.1. Polarization-based dehazing

According to whether a projected signal (such as a radar wave or laser) is needed, dehazing techniques can be classified into two categories: active and passive. The active approach uses the extra information returned by the projected signal to dehaze the image [1–5], which requires expensive equipment that is not easily available to the public. Because it uses no projection signals and avoids the consequent high costs, the passive class of techniques can be implemented simply by optimizing and reconstructing the relevant optical systems and algorithms, and therefore, its range of application is wide. Passive dehazing can be further classified into single-image dehazing [6–10], multiple-image dehazing [11–13], polarization-based dehazing [14], and so on. Single-image dehazing relies on a variety of prior assumptions to eliminate complex parameters in the model; examples include He's dark-channel hypothesis [6], Fattal's local irrelevance hypothesis [7], Tan's Markov model hypothesis [8], Tarel's atmospheric dissipation function hypothesis [9], and Berman's non-local prior [10]. However, these methods are limited by their prior conditions. Multi-image dehazing algorithms include the approach of Tang [11], Ren's machine-learning dehazing [12], and Ancuti's image-fusion dehazing [13]; however, such methods are not developed on the basis of physical models. Meanwhile, polarization information has been used for target recognition [15–18], scene segmentation [19–21], and dehazing, based on the scattering of light by atmospheric particles. In this context, Schechner and Nayar (2003) [14] proposed a widely used polarization-based dehazing model, which can be expressed as

$$I = L_{object}\,t + A = L_{object}\,e^{-\beta d} + A_\infty\left(1 - e^{-\beta d}\right) \tag{1}$$
where I represents the image irradiance, Lobject the object radiance, t the transmittance of incoherent light, A the airlight, A∞ the airlight radiance at an infinite distance, β the coefficient of extinction due to scattering and absorption, and d the distance from the camera to the scene. Based on the physical model, the maximum- and minimum-intensity images (Imax and Imin, respectively) are acquired by rotating a linear polarizer. The polarization difference image is computed as ΔI = Imax − Imin. After model parameters p (i.e., the DPA) and A∞ are solved using the polarization information of the sky area ($p = \frac{I_{max}^{sky} - I_{min}^{sky}}{I_{max}^{sky} + I_{min}^{sky}}$; the model does not consider circularly polarized light), airlight A and transmittance t are recovered. Finally, all these parameters are used in a polarization-based dehazing model to restore the dehazed image.
$$L_{object} = \frac{I_{max} + I_{min} - \Delta I/p}{1 - \frac{\Delta I}{p A_\infty}} = \frac{I - A}{1 - \frac{A}{A_\infty}} \tag{2}$$

It can be observed from Eq. (2) that only Imax, Imin, A∞, and p are required to obtain a haze-free image. The polarization-based dehazing model restores the image from two polarization images acquired with the linear polarizer along different polarization directions. This model is popular because of its compactness, rich detail, and high visibility. However, the model requires an analysis of the sky area in the hazy image, and A∞ and p must be computed from the polarization information of the sky area. Moreover, the suitability of the chosen sky area and the calculation accuracy of A∞ and p are critical to the polarization-based dehazing model; therefore, research has been intensive on the selection of the sky area, the calculation of the DPA, and algorithms for denoising the restored image. The following sections review these three aspects.
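
For illustration, the inversion in Eq. (2) can be sketched in a few lines of NumPy (the function and variable names here are our own, not part of the published model; the clipping of t is a practical guard against division by zero, not part of Eq. (2)):

```python
import numpy as np

def dehaze(i_max, i_min, a_inf, p):
    """Recover L_object from two polarized captures via Eq. (2).

    i_max, i_min : maximum-/minimum-intensity images
    a_inf        : airlight radiance at infinite distance (per channel)
    p            : degree of polarization of the airlight (DPA)
    """
    i_total = i_max + i_min          # total image irradiance I
    delta_i = i_max - i_min          # polarization difference image
    airlight = delta_i / p           # airlight estimate A = dI / p
    t = np.clip(1.0 - airlight / a_inf, 1e-3, 1.0)  # transmittance, guarded
    return (i_total - airlight) / t  # L_object = (I - A) / t
```

For a pixel synthesized from Eq. (1) with Lobject = 0.5, t = 0.8, A∞ = 1, and p = 0.3, the function recovers Lobject = 0.5, confirming the inversion.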

2.2. Estimate of A∞

Schechner et al. (2003) [14] established the polarization-based dehazing model and proposed selecting the sky area with the highest luminance value in the image to estimate A∞. However, it is difficult to choose a suitable valuation area, and the estimate of A∞ is affected by the presence of high-brightness objects in the scene. Therefore, Tan et al. (2008) [8] and Tarel et al. (2009) [9] proposed subjecting the hazy image to white-balance pretreatment with the chroma preset to 1, thereby avoiding the influence of a deviant sky-area selection on the restored image. However, He et al. (2009) [6] did not find this method very accurate, noting that there was no theoretical basis for presetting the chroma value to 1 after preprocessing; they therefore proposed a dehazing algorithm based on the dark-channel hypothesis, which eliminates the interference of high-brightness objects and strong light sources when estimating A∞. Zhu et al. (2015) [22] observed that estimating airlight in the far-scene areas of the image reduces the effect of direct transmission light D and yields values closer to the actual airlight; based on the color attenuation prior, they chose a far-scene area as the sky area according to the obtained depth map. In summary, presetting the chroma has no theoretical basis and is inaccurate; selecting the sky area from light-intensity information is affected by high-brightness regions of the scene; and selecting the sky area from prior knowledge is limited by the prior conditions. To meet the requirements of dehazing without a sky area and to overcome high-brightness object interference, we propose a method of estimating A∞ based on the polarization difference image, without the need for prior knowledge or preset conditions.
This method can eliminate the interference of high-brightness objects and strong light sources, and it can determine a suitable valuation area, with or without a sky area in the image, to estimate A∞.

2.3. Calculation of DPA p

In their approach, Schechner et al. (2003) [14] acquired the maximum- and minimum-intensity images by rotating a linear polarizer and estimated the DPA using the polarization difference information in the sky area. The method is strongly dependent on the sky area, and it cannot compute the DPA if the choice of the sky area is unsuitable or if there is no sky area in the image. Namer et al. (2006) [23] proposed that, once the maximum- and minimum-intensity images are obtained, the DPA can be computed even for a non-sky image if the positions of similar objects, or the distances between them, are known. While this method solves the problem of sky-area dependence, it requires the presence of similar objects in the scene. Liang et al. (2015) [24] proposed using four different polarization-angle images to acquire the maximum- and minimum-intensity images of the scene and to solve for the Stokes vector of the airlight. They computed the DPA from the pixels corresponding to the polarization angle of maximum probability and selected the maximum value as DPA p. The method of Liang et al. is not restricted by region and does not require similar objects in the scene; however, calculating the DPA from the polarization-angle probability incurs a large error. In view of the above problems, we use the global features of the image to assume that airlight A and object radiance Lobject are uncorrelated over the whole image, and we calculate the DPA by setting the correlation coefficient equation to zero and solving for the optimal solution. Our proposed method does not depend on any local area in the image, and the DPA obtained is more reliable.

2.4. Filter denoising

The hazy-image restoration process can introduce and even amplify noise. Most researchers [25, 26] use common median and bilateral filtering to remove this noise. To tie denoising to the dehazing model, Kratz et al. [27], Fattal et al. [7], and Tan et al. [8] abandoned blind filter denoising and proposed noise models derived from the restoration model, which can be used to filter the noise that the restoration introduces. Building on such a noise model, He et al. [6] proposed a guided filter to replace the original soft-matting method and further filter the restored image's noise; this approach retains more image detail. Denoising methods such as median filtering often cause loss of detail and are not grounded in the dehazing model; a noise model achieves a certain denoising effect, but the resulting image still needs smoothing, and the guided filter still does not keep the image details fully intact. Against this backdrop, we propose a method of separating noise based on the polarization difference, according to the characteristics of the physical polarization-based dehazing model. The combination of the polarization difference and the BM3D filter significantly improves the image denoising efficacy.

3. Polarization-based dehazing without sky area

In comparison with traditional polarization-based dehazing, we present a new approach to calculating the DPA p, estimating the airlight radiance at infinite distance A∞, and filtering the restored image. The flow chart of our dehazing process is shown in Fig. 1. First, we acquire three images of the same scene at different polarization angles using the linear polarizer, and we obtain the maximum- and minimum-intensity images according to the model of the linear polarizer. Second, we compute the image irradiance I and the polarization difference image ΔI (presented in detail in section 3.1). Subsequently, according to the polarization difference information, we accurately estimate A∞ (section 3.2.1). At the same time, we solve for the DPA p based on the assumption that airlight A and object radiance Lobject are uncorrelated in the model (section 3.2.2). Next, introducing a constant factor ε (1 ≤ ε ≤ 1/p) to adjust the DPA to p0 = εp, we substitute the unadjusted DPA p and the adjusted DPA p0 into the model to obtain a high-noise, high-contrast restored image and a low-noise, low-contrast restored image, respectively. The difference between the two images is processed with a weight-factor map to obtain the noise image. Finally, the difference between the high-noise image and the noise image is processed by BM3D filtering to obtain a high-quality, low-noise image (section 3.3).

Fig. 1 Flowchart of proposed dehazing process.

3.1. Estimation of polarization difference image

The solar incident light, which interacts with atmospheric particles and objects in the scene to undergo partial polarization, can be characterized via the Stokes vector $[I\;Q\;U\;V]^T$ [28]; further, the circularly polarized component is sufficiently small to be neglected (V = 0). After a given polarization direction is chosen as the 0° reference, a linear polarizer model with polarization angle θ is established as follows:

$$\begin{bmatrix} I_{out} \\ Q_{out} \\ U_{out} \\ V_{out} \end{bmatrix} = M \begin{bmatrix} I_{in} \\ Q_{in} \\ U_{in} \\ V_{in} \end{bmatrix} = \frac{1}{2}\begin{bmatrix} I_{in} + Q_{in}\cos 2\theta + U_{in}\sin 2\theta \\ I_{in}\cos 2\theta + Q_{in}\cos^2 2\theta + U_{in}\cos 2\theta \sin 2\theta \\ I_{in}\sin 2\theta + Q_{in}\cos 2\theta \sin 2\theta + U_{in}\sin^2 2\theta \\ 0 \end{bmatrix} \tag{3}$$

From Eq. (3), the light intensity that passes through the linear polarizer with polarization angle of θ can be expressed as

$$I_{out} = \frac{1}{2}\left(I_{in} + Q_{in}\cos 2\theta + U_{in}\sin 2\theta\right) \tag{4}$$

In our study, after substituting the expressions for the three images acquired at different polarization angles into Eq. (4), we obtain three independent equations that can be solved for the polarization state of the incoming light $[I_{in}\;Q_{in}\;U_{in}\;V_{in}]^T$. In practice, noise is inevitably generated and amplified during image acquisition, and therefore, images at three arbitrary polarization angles cannot be used to obtain a high-quality Stokes vector of the incident light. In this regard, Tyo et al. [29, 30] proposed and proved that, after a particular polarization direction is set as the 0° reference, the incident light's Stokes vector with the optimal signal-to-noise ratio (SNR) is obtained when θ takes the values 0°, 60°, and 120°. Once this Stokes vector is known, Eq. (4) becomes a function of θ, and differentiation shows that the maximum and minimum intensities occur at θ = ξ and θ = ξ + π/2, respectively. These settings yield the maximum- and minimum-intensity images, and consequently the polarization difference image ΔI = Imax − Imin. The 0°, 60°, and 120° polarization images, the maximum- and minimum-intensity images, and the polarization difference image are shown in Fig. 2.
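
The 0°/60°/120° reconstruction described above can be sketched as follows (a minimal NumPy illustration under the ideal-polarizer assumption of Eq. (3); the function names are ours):

```python
import numpy as np

def stokes_from_three(i0, i60, i120):
    """Linear Stokes parameters from polarizer captures at 0, 60, and 120 deg.

    Inverts Eq. (4) for the three optimal-SNR angles; assumes an ideal
    polarizer (the 1/2 factor of Eq. (3)) and negligible circular
    polarization (V = 0).
    """
    s_i = (2.0 / 3.0) * (i0 + i60 + i120)
    s_q = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s_u = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    return s_i, s_q, s_u

def min_max_images(i0, i60, i120):
    """Maximum-/minimum-intensity images and polarization difference dI."""
    s_i, s_q, s_u = stokes_from_three(i0, i60, i120)
    lp = np.sqrt(s_q**2 + s_u**2)   # linearly polarized intensity
    i_max = 0.5 * (s_i + lp)        # at theta = xi
    i_min = 0.5 * (s_i - lp)        # at theta = xi + pi/2
    return i_max, i_min, i_max - i_min
```

Note that Imax + Imin recovers the total irradiance I, and ΔI equals the linearly polarized intensity $\sqrt{Q^2 + U^2}$.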

Fig. 2 Estimation of polarization difference image. (a) 0° image, (b) 60° image, (c) 120° image, (d) minimum-intensity image, (e) maximum-intensity image, and (f) polarization difference image.

3.2. Estimation of A∞ and p without sky area

The airlight radiance at infinite distance A∞ and the DPA p are the two global constants of a homogeneous atmosphere: A∞ represents airlight without attenuation and, to a certain extent, the overall intensity of the scene, while p is the degree of polarization of the airlight.

3.2.1. Estimation of A∞

To eliminate the interference of high-brightness white objects and strong light sources in the selection of the valuation area, and to select an appropriate region in a hazy image without a sky area for estimating the relevant parameters, we consider Eqs. (5), (6), and (7) [14] below:

$$I = A + D \tag{5}$$
$$A = A_\infty\left(1 - e^{-\beta d}\right) \tag{6}$$
$$D = L_{object}\,e^{-\beta d} \tag{7}$$

From these equations, it can be inferred that the greater the distance d, the weaker the direct transmission light D of the scene, the more airlight A is scattered into the light path, and the closer the value of I is to A∞. As a result, an area with a relatively small D and a large A can be used to estimate A∞. According to the definition of degree of polarization, we can write

$$p = \frac{A_{max} - A_{min}}{A_{max} + A_{min}} = \frac{A_{max} - A_{min}}{A} \tag{8}$$

Light emanating from specular objects is significantly polarized, which affects Eq. (8). However, in most scenarios, specular objects occupy a relatively small proportion of the image and are located far from the camera; therefore, the polarization of light emanating from these objects is negligible. Neglecting this contribution, we obtain Imax and Imin from Eq. (5):

$$I_{max} = \frac{D}{2} + A_{max} \tag{9}$$
$$I_{min} = \frac{D}{2} + A_{min} \tag{10}$$

Thus, equation (8) can be transformed to

$$p = \frac{A_{max} - A_{min}}{A} = \frac{I_{max} - I_{min}}{A} \tag{11}$$

Thus, we have

$$\Delta I = I_{max} - I_{min} = A\,p \tag{12}$$

Because the DPA is a global constant, airlight A is positively correlated with the polarization difference ΔI [Eq. (12)], and higher-brightness areas in the polarization difference image contain more airlight A and negligible direct transmission light D. Based on the above analysis, we can eliminate the interference of high-brightness objects and strong light sources according to the polarization difference image and estimate A∞ without using the sky area. The valuation method for A∞ in this study comprises three steps:

  1. The hazy image I and polarization difference image ΔI are separated into RGB color channels, and subsequent processing is performed on the three corresponding channels.
  2. The area corresponding to the 0.1% brightest pixels in the α (α ∈ {R, G, B}) channel of the polarization difference image is selected as candidate area ψ1.
  3. Area ψ2, which corresponds to area ψ1 in the hazy image I, is located, and the 0.1% brightest pixels in ψ2 are selected; the mean of these pixel values is taken as the estimate of A∞.
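
The three-step valuation procedure can be sketched as follows (a NumPy illustration with function names of our own choosing; arrays are assumed to be H × W × 3, and the 0.1% fraction follows the text):

```python
import numpy as np

def estimate_a_inf(i_total, delta_i, frac=0.001):
    """Estimate A_inf per color channel from the polarization difference.

    Steps 1-3 of the text: take the `frac` (0.1%) brightest pixels of
    delta_i as candidate area psi1, locate the same pixels in the hazy
    image I (area psi2), and average the `frac` brightest pixels of psi2.
    """
    h, w, c = i_total.shape
    n1 = max(1, int(frac * h * w))
    a_inf = np.zeros(c)
    for ch in range(c):
        di = delta_i[..., ch].ravel()
        it = i_total[..., ch].ravel()
        psi1 = np.argsort(di)[-n1:]           # brightest pixels of delta_i
        psi2 = it[psi1]                       # corresponding pixels of I
        n2 = max(1, int(frac * psi2.size))
        a_inf[ch] = np.sort(psi2)[-n2:].mean()
    return a_inf
```

Selecting through ΔI first, rather than through I directly, is what excludes high-brightness objects: a bright building raises I but not ΔI, so it never enters ψ1.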

Figure 3(a) shows an image with a sky area: the sky area with low transparency and high brightness is indicated by the blue circle, and the area with the high-brightness building is indicated by the red circle. According to our analysis, the blue-circled area is more suitable than the red-circled area for estimating A∞, and choosing the red-circled area may result in considerable error; we therefore take advantage of the polarization difference image to filter out the correct valuation area (the blue-circled area). In Fig. 3(b), the blue-circled area is successfully chosen as the valuation area without any interference from the high-brightness objects and strong light sources of the red-circled area, and therefore, A∞ can be estimated accurately.

Fig. 3 High-brightness object interfering with estimation of A∞. (a) Hazy image with sky area (R channel). (b) Difference image (R channel) (Red-circled area indicates high-brightness building area and blue-circled area indicates suitable valuation area. The red-dotted region is used to indicate that the interference area has been removed in (b)).

To further verify the effectiveness of the algorithm on "non-sky" images, the sky areas of the two images in Fig. 3 are truncated to yield the corresponding images shown in Fig. 4. Here, the direct choice of a remote area with low transparency and high brightness is affected by the high-brightness buildings, wrongly selecting the red-circled areas as valuation areas, as shown in Fig. 4(a). Our method based on the polarization difference image removes the interference of the high-brightness buildings and selects the blue-circled area, the correct valuation area; the results are shown in Fig. 4(b). In addition, it can be observed from Fig. 4 that, in the absence of a sky area, another suitable area with little direct object radiance and abundant airlight (i.e., the brightest area of the polarization difference image) is selected as the most appropriate valuation area for estimating A∞.

Fig. 4 Estimation of A∞ in non-sky image. (a) Hazy images without sky area (R channel). (b) Corresponding difference images (R channel) (Red-circled area indicates high-brightness building area, while the blue-circled one indicates the appropriate valuation area. The red-dotted region is used to indicate that the interference area has been removed in (b)).

3.2.2. Estimation of DPA p

In order to overcome the limitation of the calculation of the DPA with the use of the sky area, we propose a method to calculate the DPA p based on the global features. From Eq. (1), we note that airlight A is determined by scene depth d and coefficient of extinction β; in addition, Lobject is determined by the characteristics of the object itself, which are usually related to the target material. Therefore, it is assumed that airlight A and object radiance Lobject are two unrelated variables in the whole image, which can be expressed as:

$$\mathrm{Cov}\left(A,\,L_{object}\right) = 0 \tag{13}$$
Upon substituting Eqs. (2) and (12) into Eq. (13), we have
$$\mathrm{Cov}\!\left(\frac{\Delta I}{p},\; \frac{pI - \Delta I}{pA_\infty - \Delta I}\right) = 0 \tag{14}$$

Ideally, in the absence of noise and measurement error, Eq. (14) holds exactly. In practice, however, the acquired data are not ideal, and the noise they contain affects the solution of the model; therefore, to meet the needs of practical application, Eq. (14) is recast as an optimization problem. Thus, we have

$$\arg\min_{p}\left|\mathrm{Cov}\!\left(\frac{\Delta I}{p},\; \frac{pI - \Delta I}{pA_\infty - \Delta I}\right)\right| \tag{15}$$

Equation (15) is the optimization form of Eq. (14), which is a nonlinear equation. Our study uses the GlobalSearch solver of MATLAB's Global Optimization Toolbox to solve it numerically. For a preliminary comparison, we remove the sky area of the image used in Schechner's study to obtain an image without a sky area, and we then calculate the DPA of both images [Fig. 5].
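
As a simple stand-in for MATLAB's GlobalSearch, the optimization of Eq. (15) can be sketched with a dense grid search over p (illustrative only; the objective evaluates Cov(A, Lobject) with A = ΔI/p from Eq. (12) and Lobject from Eq. (2)):

```python
import numpy as np

def dpa_objective(p, i_total, delta_i, a_inf):
    """|Cov(A, L_object)| as a function of the candidate DPA p (Eq. 15)."""
    airlight = delta_i / p                            # A = dI / p
    t = np.clip(1.0 - airlight / a_inf, 1e-3, 1.0)    # guarded transmittance
    l_object = (i_total - airlight) / t               # Eq. (2)
    a, l = airlight.ravel(), l_object.ravel()
    return abs(np.mean((a - a.mean()) * (l - l.mean())))

def estimate_dpa(i_total, delta_i, a_inf, n_grid=400):
    """Grid search over p; a coarse substitute for MATLAB's GlobalSearch."""
    p_lo = float(delta_i.max() / a_inf)               # need A <= A_inf
    candidates = np.linspace(max(p_lo, 1e-3), 1.0, n_grid)
    scores = [dpa_objective(p, i_total, delta_i, a_inf) for p in candidates]
    return float(candidates[int(np.argmin(scores))])
```

On a synthetic scene built from Eq. (1) with independently drawn airlight and object radiance, the grid search recovers the true p closely; a wrong p couples the recovered Lobject to A, raising the objective on both sides of the minimum.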

Fig. 5 Schechner’s initial polarization image and processed image without sky area. (a) Image with sky area. (b) Image without sky area.

After our DPA calculation, the DPA of the image with the sky area is
$$[p_r \;\; p_g \;\; p_b] \approx [0.29 \;\; 0.27 \;\; 0.24] \tag{16}$$
while the DPA of the image without the sky area is
$$[p_r \;\; p_g \;\; p_b] \approx [0.34 \;\; 0.32 \;\; 0.27] \tag{17}$$

In a hazy or foggy atmosphere, longer-wavelength light is less affected by scattering and thus maintains a relatively higher DPA; the relation pr > pg > pb conforms with the conclusions in the literature [32, 33]. Moreover, the DPA calculated in this study is essentially the same as that calculated by Schechner (pr ≈ 0.28, pg ≈ 0.25, pb ≈ 0.22). The restored images are shown in Fig. 6.

Fig. 6 Dehazed images with and without sky area. (a) Dehazed image with sky area. (b) Dehazed image without sky area.

As can be observed from Fig. 6, although there is some noise in the dehazed images, the details and colors appear to be clearly visible. In particular, we note from the image without the sky that our method affords effective dehazing.

3.3. Filtering of noise

As per our analysis above, the noise in the dehazed image mainly stems from noise in transmittance t and from noise amplification; these two aspects are addressed in turn below.

3.3.1. Modification of transmittance t

According to Eq. (1), transmittance t is related only to the coefficient of extinction β and the scene depth d. In a homogeneous atmosphere, β is constant, and therefore, t depends only on d. Accordingly, the ideal transmittance t is smooth over a local area. In practice, when affected by noise and other factors, transmittance t contains many noise points, as shown in Fig. 7(a). To obtain a high-quality dehazed image, it is necessary to filter and denoise transmittance t; to obtain excellent filtering performance, however, the filtering algorithm must be chosen according to the features of the image.

Fig. 7 Denoising transmittance t. (a) Transmittance image with noise. (b) Transmittance image denoised by guided filter. (c) Transmittance image denoised by BM3D filter.

In this regard, both the guided filtering and BM3D algorithms [31] can suitably retain details of the dehazed image. The image in Fig. 7(a) contains an unprocessed transmittance t, and it can be observed that there are high levels of noise in the near- and far-scene areas. Thus, the range of the corresponding gray histogram is large, with the image appearing mostly blurry.

Transmittance t is next processed by means of a guided filter [Fig. 7(b)]. While a considerable amount of noise is filtered, several details are also “smoothed” out. From the corresponding histogram, we note that the peak rapidly narrows, and the corresponding pixel region exhibits very high similarity between pixels, which is detrimental to the estimation of the attenuation of direct transmission light D. In contrast, the BM3D algorithm can not only smooth the noise (the histogram is moderately narrow), but also retain the image details; thus, the BM3D filter performs better [Fig. 7(c)].

3.3.2. Removal of amplified noise

The process of denoising transmittance t reduces the overall noise of the dehazed image. However, a considerable amount of noise remains, as can be seen in Fig. 8(a). Upon separating the image [Fig. 8(a)] into its RGB channels [Fig. 8(b)], we clearly observe the noise points of each independent channel, and we note that the noise in the far scene is so dense that the image quality and visibility are greatly decreased. From Eq. (2), when d → ∞, the denominator 1 − A/A∞ approaches 0, which amplifies the low-level noise in the numerator term I − A. In general, there are two approaches to the problem of noise amplification: one is to introduce a factor ε to suppress the noise, and the other is to apply filtering algorithms directly. In Fig. 8(c), Schechner introduces a factor ε to adjust the DPA, replacing p in the model with εp, which increases the denominator value and thereby reduces the noise amplification [14]. However, the contrast of the restored image is reduced, the details are correspondingly unclear, and the image as a whole appears to exhibit a certain degree of haze once again.

Fig. 8 Effect of adding factor ε along with block-matching and 3D (BM3D) filtering. (a) Dehazed image with noise. (b) RGB-channel image with noise. (c) RGB-channel image processed using factor ε. (d) Color image processed using factor ε. (e) RGB-channel image processed by BM3D. (f) Color image processed by BM3D.

In addition, the degree of noise amplification increases gradually with scene distance, and the BM3D algorithm is not suited to images with spatially varying noise levels. Upon filtering the respective RGB-channel images [Fig. 8(b)] with a high-level BM3D filter, we obtain the denoised images in Fig. 8(e). However, BM3D not only filters the high-level noise points in the distance but also excessively smooths the near-scene detail, and therefore, the near-scene area of the color image [Fig. 8(f)] is over-smoothed. Meanwhile, a low-level BM3D filter does not effectively remove the noise.

Based on the above analysis, we present a filtering denoising method based on relative depth information; our denoising process is shown in Fig. 9. First, we normalize the difference between the maximum-intensity image [Fig. 9(a)] and the minimum-intensity image [Fig. 9(b)] to obtain the three-channel weight map t0 [Fig. 9(f)]. A preliminary noise image N [Fig. 9(e)] is then acquired as the difference between the noisy dehazed image Lobject [Fig. 9(c)] and the dehazed image Lεobject processed with factor ε [Fig. 9(d)], which can be expressed as

$$N = L_{\mathrm{object}} - L_{\mathrm{object}}^{\varepsilon}$$


Fig. 9 Filtering of amplified noise. (a) Maximum-intensity image. (b) Minimum-intensity image. (c) Dehazed image with noise. (d) Dehazed image processed by factor ε. (e) Preliminary noise image. (f) Three-channel weight map. (g) New noise image processed using weight map. (h) Final denoised image. (i) Local noise effect in Fig. 9(h). (j) Schechner’s result. (k) Local noise effect in Fig. 9(j).


The preliminary noise image N contains a significant amount of noise together with some near-scene image detail: the far-scene area of N contains most of the noise, while the near-scene area contains most of the scene detail. In order to emphasize the far-scene pixels of N and de-emphasize its near-scene pixels, we introduce the weight map $t_0$ [Fig. 9(f)], which is related to the scene depth, and multiply $t_0$ with the noise image N to obtain a new noise image $t_0 N$ [Fig. 9(g)]. Next, we subtract the new noise image $t_0 N$ [Fig. 9(g)] from the dehazed image with noise $L_{\mathrm{object}}$ [Fig. 9(c)] to obtain the denoised image. This procedure effectively reduces the distance-dependent noise while retaining the near-scene textural details. Finally, we use BM3D to filter out the remaining low-level noise in the difference-denoised image, yielding the final denoised image [Fig. 9(h)]. It can be observed from Fig. 9(h) that the final denoised image exhibits high contrast, low noise, and rich texture. Compared with Schechner's result [Fig. 9(j)], our result exhibits less noise in the local area [Fig. 9(i) versus Fig. 9(k)], and its color is more authentic.
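The weighted subtraction above can be sketched as follows. This is a Python/NumPy sketch with hypothetical names; the final low-level BM3D pass is omitted, since BM3D is not a standard-library routine:

```python
import numpy as np

def depth_weighted_denoise(l_obj, l_obj_eps, i_max, i_min):
    """Depth-weighted noise subtraction (sketch of the Fig. 9 pipeline).

    l_obj      : dehazed image with amplified noise (eps = 1)
    l_obj_eps  : dehazed image processed with factor eps (low noise)
    i_max/i_min: polarized intensity images used to build the weight map
    Returns the denoised image before the final low-level BM3D pass,
    which is omitted here.
    """
    # Preliminary noise estimate: N = L_object - L_object^eps
    n = l_obj - l_obj_eps
    # Normalized polarization difference as relative-depth weight map t0:
    # large in the far scene (dense airlight), small in the near scene.
    diff = i_max - i_min
    t0 = (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)
    # Subtract depth-weighted noise: the far scene is strongly corrected,
    # while near-scene texture is preserved.
    return l_obj - t0 * n
```

Where the weight is 1 the output falls back to the low-noise ε image, and where it is 0 the full-contrast dehazed image is kept unchanged.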

4. Experimental results

Because there is no standard hazy polarization image dataset with which to evaluate our method, we use the polarization images from Schechner's work together with our own polarization images captured outdoors as test data. In addition, to demonstrate that our algorithm does not rely on the sky area when solving for the model parameters, the test images include scenes with no sky area or with only a small sky area. We then choose the comparison algorithms based on the following conditions. First, the selected methods should be well known in the dehazing field, and their dehazing performance should be satisfactory. Second, the source code and the parameters set by the authors should be available, because reimplementing an algorithm without knowledge of the authors' parameters would not yield a fair comparison. The source code of the algorithms of Fattal [7], Tarel [9], Berman [10], and Ren [12] is provided by the authors. Therefore, we compare our algorithm with the methods in [7, 9, 10, 12] using MATLAB 2013b running on an Intel(R) Core(TM) 3.4-GHz CPU with 4 GB of RAM; our implementation is provided in Code 1 [32]. We used 18 different scenes in our image quality evaluation, of which five scenes are from Schechner's study and the remaining 13 are from our outdoor photographs; the results are shown in Fig. 10 and Fig. 11.


Fig. 10 Comparison of algorithm performance using Schechner’s dataset.


Fig. 11 Comparison of algorithm performance using our dataset.


From the dehazing results, we observe that the Tarel method can quickly estimate the atmospheric dissipation function by means of median filtering, but the estimate cannot accurately match the depth of the scene. The correction at the corners of the image is insufficient, the color of the whole image is "overcorrected", and the median filter may blur edges; thus, some haze remains at small edges where the depth of field changes abruptly. The Fattal method requires that the image contain sufficient color information and a high SNR; therefore, in areas with dense haze and little color variation, overcorrection persists, and the sky area is easily distorted. In addition, the efficacy of the Ren method is limited in images with dense haze. The Berman method distorts the sky area to a certain extent, and the overall scene appears slightly darker. In contrast, our method not only uses polarization information to separate the scene radiance from the airlight and prevent halo formation, but also suppresses noise generation and amplification with the polarization-difference filter, yielding dehazed images with better color and greater detail. Moreover, our method overcomes the requirement for a sky area: it suitably dehazes images with a small sky area and even with no sky area at all.

Next, in order to evaluate our algorithm objectively, we used no-reference image quality evaluation algorithms to assess the reconstructed image quality: the no-reference quality assessment (NRQA) [33], blind image quality index (BIQI) [34], blind/referenceless image spatial quality evaluator (BRISQUE) [35], natural image quality evaluator (NIQE) [36], blind anisotropic quality index (AQI) [37], and the e, r, and σ indices [9,38]. NRQA adapts a nonlinear statistical model for natural images, incorporating quantization distortion modeling, to measure the ringing and blur of the image. BIQI is a natural-image-feature statistical algorithm in the wavelet domain; it assumes that distortion affects the natural statistical properties of the wavelet coefficients in a regular and quantifiable way. BRISQUE and NIQE both assume that the normalized luminance coefficients of natural images closely follow a Gaussian-like distribution, whereas degraded images deviate from it; the quality of a test image is then expressed through this model difference, and the two algorithms differ in that NIQE no longer relies on subjective evaluation scores. AQI calculates the entropy of the image on a local basis, using a spatial/spatial-frequency distribution as an approximation of a probability density function, and measures the variance of the expected entropy of the image over a set of predefined directions. The e, r, and σ indices measure the degree of local contrast enhancement: e represents the ratio of newly visible edges, r denotes the normalized gradient mean on the visible edges, and σ denotes the percentage of saturated black or white pixels. The mean score of each dehazing algorithm over the 18 scenes of Fig. 10 and Fig. 11 is listed in Table 1.
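As an illustration, the e and σ indices can be approximated in a few lines of Python. This is a hedged sketch with names of our choosing; the published e index of Hautière et al. uses a more elaborate visible-edge segmentation than the simple gradient threshold assumed here:

```python
import numpy as np

def edge_ratio_e(before, after, grad_thresh=0.1):
    """Ratio of newly visible edges, e = (n_r - n_o) / n_o.

    Visible edges are approximated by finite-difference gradient
    magnitudes above grad_thresh (a simplification of the published
    visibility-level segmentation).
    """
    def n_edges(img):
        gy, gx = np.gradient(img.astype(float))
        return int(np.count_nonzero(np.hypot(gx, gy) > grad_thresh))
    n_o, n_r = n_edges(before), n_edges(after)
    return (n_r - n_o) / max(n_o, 1)  # guard against a flat "before" image

def saturation_sigma(after, lo=0.0, hi=1.0):
    """Percentage of pixels saturated to black or white after restoration."""
    sat = np.count_nonzero((after <= lo) | (after >= hi))
    return 100.0 * sat / after.size
```

A dehazing result that reveals edges without clipping pixels should score high on e and low on σ.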


Table 1. Quantitative comparison of state-of-the-art algorithms for various performance indices.

As can be observed from Table 1, our method outperforms the others as per the NRQA, BRISQUE, AQI, and e indices. Our algorithm also compares favorably with the other algorithms for the remaining indices. Moreover, our algorithm performs relatively well as per subjective judgment.

5. Discussion

As described in section 3, we assume that light emanating from scene objects is insignificantly polarized and thus makes a negligible contribution to the measured polarization. In most scenarios, objects with specular surfaces occupy a relatively small proportion of the image and are far from the camera; it is therefore reasonable to neglect the polarization of light from objects. However, in scenarios where specular objects occupy a large proportion of the image and are close to the camera, the performance of the proposed method is not satisfactory. The light emanating from the specular surfaces of scene objects, and its polarization, influence the measurement of the airlight and the direct transmission, so Eq. (12) should be written in full as

$$\Delta I = I_{\max} - I_{\min} = A p + D p_D,$$
where $p_D$ is the degree of polarization of an object.

If the polarization of objects with specular surfaces cannot be neglected, the estimate of the airlight ($\hat{A}$) that we obtain is

$$\hat{A} = \frac{I_{\max} - I_{\min}}{p} = A + D\,\frac{p_D}{p}.$$

The corresponding estimate of the direct transmission light ($\hat{D}$) is

$$\hat{D} = I - \hat{A} = D\left(1 - \frac{p_D}{p}\right).$$

It can be seen that when $p_D \gg 0$, $\hat{A} \neq A$ and $\hat{D} \neq D$; thus, the $\hat{A}$ and $\hat{D}$ obtained from Eqs. (18) and (19) are incorrect. Furthermore, because $p_D$ is neglected, the value of p calculated by Eq. (15) may also be invalid, and the proposed method may fail to dehaze. The dehazing results for scenes with large specular objects are shown in Fig. 12.
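A small numeric example illustrates the bias introduced by a non-negligible $p_D$. All values below are illustrative, not measurements from the experiments:

```python
# Numeric sketch of Eqs. (18)-(19): how a non-negligible object
# polarization p_D biases the airlight and direct-transmission estimates.
A, D = 0.6, 0.2        # true airlight and direct transmission (illustrative)
p, p_D = 0.3, 0.15     # DPA and object degree of polarization (illustrative)

delta_i = A * p + D * p_D   # measured polarization difference, Eq. (17)
A_hat = delta_i / p         # estimated airlight: A + D * p_D / p, Eq. (18)
D_hat = (A + D) - A_hat     # estimated direct light: D * (1 - p_D / p), Eq. (19)

bias = A_hat - A            # equals D * p_D / p
```

Here the airlight is overestimated by D·p_D/p = 0.1 and the direct transmission is correspondingly underestimated, which is exactly the darkening of specular objects observed in Fig. 12.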


Fig. 12 Algorithm performance in scenes with specular surfaces.


For the scenes in Fig. 12, we observe that the specular objects in the images dehazed by our method tend to be darker than in the original, because $p_D$ cannot be neglected in these scenarios: the term $1 - p_D/p$ in Eq. (19) is close to zero, and the object areas therefore appear dark in the image. To measure the quality of dehazing, we apply the image quality evaluation algorithms of section 4 to the dehazed images in Fig. 12; the mean score of each dehazing algorithm over these scenes is listed in Table 2.


Table 2. Quantitative comparison of state-of-the-art algorithms for various performance indices.

The simplicity of the model is the source of this limitation, and the way to solve the problem is to improve the model by introducing the variable $p_D$. Because this adds a new unknown, additional information will be required to keep the model solvable. We would like to make further use of Eq. (13) to associate $p_D$ with p. Theoretically, a pre-estimate of the DPA map can be calculated from the maximum and minimum intensities, and dehazing influences the degree of image degradation differently in different color spaces. Therefore, we plan to use this pre-estimate together with the color-space degradation to estimate $p_D$ and p, so that scenes with specular objects may finally be dehazed.

6. Conclusion

In this study, by using the polarization-difference image information to negate the interference of high-brightness white objects and strong light sources, we estimate the infinite atmospheric light value accurately, thereby obtaining a suitably dehazed image. Importantly, the proposed method overcomes the dependence of previously proposed methods on the sky area. In our approach, based on the hypothesis that the airlight and the object radiance are uncorrelated, we calculate the DPA of the image by setting their correlation coefficient equation to zero and solving for the optimal solution, which enlarges the algorithm's scope of application. In addition, noise generation and amplification are corrected by combining the BM3D filtering algorithm with the polarization-difference method, which alleviates halo formation, sky distortion, and noise pollution; consequently, our approach yields dehazed images with rich detail and natural color. Our algorithm outperforms those of Tarel, Fattal, Ren, and Berman in terms of the four indices NRQA, BRISQUE, AQI, and e. Because we neglect the polarization of light emanating from objects, the proposed method is unsuitable for scenarios in which specular objects occupy a large proportion of the image and are close to the camera. In the future, we will introduce the degree of polarization of object light into the dehazing model as a new unknown variable, which will require additional information to keep the model solvable: we would like to make further use of Eq. (13) to associate $p_D$ with p, and to combine the pre-estimated DPA map computed from the maximum and minimum intensities with the degradation in different color spaces to estimate $p_D$ and p, so that scenes with specular objects may finally be dehazed.
At present, our program uses the MATLAB Optimization Toolbox (GlobalSearch) to solve for the DPA, which takes about 30 minutes. To quicken the process, we plan to accelerate it with a multithreaded C implementation in the future.

Funding

National Natural Science Foundation of China (NSFC) (51675033).

References and links

1. N. Hautière, J.P. Tarel, and D. Aubert, “Towards fog-free in-vehicle vision systems through contrast restoration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

2. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008). [CrossRef]  

3. S. G. Narasimhan and S. K. Nayar, “Interactive (de) weathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

4. J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998). [CrossRef]  

5. K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18(10), 2460–2467 (2001). [CrossRef]  

6. K.M. He, J. Sun, and X.O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010). [PubMed]  

7. R. Fattal, “Single image dehazing,” ACM Trans.Graph. 27(3), 988–992 (2008).

8. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8.

9. J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.

10. D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2016).

11. K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2014).

12. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

13. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing 22, 3271–3282 (2013). [CrossRef]   [PubMed]  

14. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]   [PubMed]  

15. M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

16. H. Chen and L. B. Wolff, “Polarization phase-based method for material classification and object recognition in computer vision,” Proc. SPIE 2599, 54–63 (1996). [CrossRef]  

17. K. M. Yemelyanov, M. Lo, E. Pugh, and N. Engheta, “Display of polarization information by coherently moving dots,” Opt. Express 11, 1577–1584 (2003). [CrossRef]   [PubMed]  

18. J. Tyo, M. Rowe, E. Pugh, and N. Engheta, “Target detection in optically scattering media by polarization-difference imaging,” Appl. Opt. 35(11), 1855–1870 (1996). [CrossRef]   [PubMed]  

19. M. Ben-Ezra, “Segmentation with invisible keying signal,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 32–37.

20. S. S. Lin, K. M. Yemelyanov, E. N. Pugh, and N. Engheta, “Separation and contrast enhancement of overlapping cast shadow components using polarization,” Opt. Express 14(16), 7099–7108 (2006). [CrossRef]   [PubMed]  

21. L. B. Wolff, “Using polarization to separate reflection components,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1989), pp. 363–369. [CrossRef]  

22. Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing 24, 3522–3533 (2015). [CrossRef]   [PubMed]  

23. E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express 17(2), 472–493 (2009). [CrossRef]   [PubMed]  

24. J. Liang, L. Ren, H. Ju, W. Zhang, and E. Qu, “Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization,” Opt. Express 23, 26146–26157 (2015). [CrossRef]   [PubMed]  

25. S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016). [CrossRef]  

26. W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015). [CrossRef]  

27. L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1701–1708.

28. M. Bass, Devices, Measurements, and Properties, Vol. 2 of Handbook of Optics (McGraw-Hill, 1995), Chap. 22.

29. J. Tyo, “Optimum linear combination strategy for an n-channel polarization-sensitive imaging or vision system,” J. Opt. Soc. Am. A 15(2), 359–366 (1998). [CrossRef]  

30. J. S. Tyo, “Design of optimal polarimeters: maximization of signal-to-noise ratio and minimization of systematic error,” Appl. Opt. 41, 619–630 (2002). [CrossRef]   [PubMed]  

31. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007). [CrossRef]  

32. Q. Yufu and Z. Zhaofan, “Matlab code for non-sky dehazing,” (MediaFire, 2017). http://www.mediafire.com/file/1q447szg0u5zvja/Dehaze_Code.zip.

33. H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: Jpeg2000,” IEEE Transactions on Image Processing 14, 1918–1927 (2005). [CrossRef]   [PubMed]  

34. A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2010) pp. 513–516. [CrossRef]  

35. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Trans. on Image Process 21(12), 4695–4708 (2012). [CrossRef]  

36. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2013), pp. 209–212. [CrossRef]  

37. N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

38. N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology 27, 87–95 (2011). [CrossRef]  

[Crossref]

Cohen-Or, D.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Cormack, L.

H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: Jpeg2000,” IEEE Transactions on Image Processing 14, 1918–1927 (2005).
[Crossref] [PubMed]

Dabov, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007).
[Crossref]

Deussen, O.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Dumont, E.

N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology 27, 87–95 (2011).
[Crossref]

Egiazarian, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007).
[Crossref]

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Engheta, N.

Fattal, R.

R. Fattal, “Single image dehazing,” ACM Trans.Graph. 27(3), 988–992 (2008).

Foi, A.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007).
[Crossref]

Guo, B.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Hautiere, N.

J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.

Hautière, N.

N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology 27, 87–95 (2011).
[Crossref]

N. Hautière, J.P. Tarel, and D. Aubert, “Towards fog-free in-vehicle vision systems through contrast restoration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

He, K.M.

K.M. He, J. Sun, and X.O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010).
[PubMed]

Ieremeiev, O.

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Ikeuchi, K.

M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

Jia, W.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Jin, L.

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Ju, H.

Kashiwagi, H.

M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

Katkovnik, V.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007).
[Crossref]

Kopf, J.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Kratz, L.

L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1701–1708.

Kwok, N.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Liang, J.

Lin, C. F.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Lin, S. S.

Lischinski, D.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Liu, S.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Lo, M.

Lukin, V.

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Mai, J.

Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing 24, 3522–3533 (2015).
[Crossref] [PubMed]

Mittal, A.

A. Mittal, A. K. Moorthy, and A. C. Bovik, ‘No-reference image quality assessment in the spatial domain,” IEEE Trans. on Image Process 21(12), 4695–4708 (2012).
[Crossref]

A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2013), pp. 209–212.
[Crossref]

Moorthy, A. K.

A. Mittal, A. K. Moorthy, and A. C. Bovik, ‘No-reference image quality assessment in the spatial domain,” IEEE Trans. on Image Process 21(12), 4695–4708 (2012).
[Crossref]

A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2010) pp. 513–516.
[Crossref]

Namer, E.

Narasimhan, S. G.

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003).
[Crossref] [PubMed]

S. G. Narasimhan and S. K. Nayar, “Interactive (de) weathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

Nayar, S. K.

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003).
[Crossref] [PubMed]

S. G. Narasimhan and S. K. Nayar, “Interactive (de) weathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

Neubert, B.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Nishino, K.

L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1701–1708.

Oakley, J. P.

K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18(10), 2460–2467 (2001).
[Crossref]

J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998).
[Crossref]

Pan, J.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

Ponomarenko, N.

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Pugh, E.

Pugh, E. N.

Qu, E.

Rahman, M. A.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Ren, L.

Ren, W.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

Rowe, M.

Saito, M.

M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

Satherley, B. L.

J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998).
[Crossref]

Sato, Y.

M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

Schechner, Y. Y.

Shao, L.

Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing 24, 3522–3533 (2015).
[Crossref] [PubMed]

Sheikh, H. R.

H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: Jpeg2000,” IEEE Transactions on Image Processing 14, 1918–1927 (2005).
[Crossref] [PubMed]

Shwartz, S.

Soundararajan, R.

A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2013), pp. 209–212.
[Crossref]

Sun, C.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Sun, J.

K.M. He, J. Sun, and X.O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010).
[PubMed]

Sun, M.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Sun, W.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Tan, K. K.

Tan, R. T.

R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8.

Tang, K.

K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2014).

Tang, X.O.

K.M. He, J. Sun, and X.O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010).
[PubMed]

Tarel, J. P.

N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology 27, 87–95 (2011).
[Crossref]

J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.

Tarel, J.P.

N. Hautière, J.P. Tarel, and D. Aubert, “Towards fog-free in-vehicle vision systems through contrast restoration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

treibitz, T.

D. Berman, T. treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2016).

Tyo, J.

Tyo, J. S.

Uyttendaele, M.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

Vozel, B.

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Wang, H.

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

Wang, J.

K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2014).

Wolff, L. B.

H. Chen and L. B. Wolff, “Polarization phase-based method for material classification and object recognition in computer vision,” Proc. SPIE 2599, 54–63 (1996).
[Crossref]

L. B. Wolff, “Using polarization to separate reflection components,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1989), pp. 363–369.
[Crossref]

Wong, C. Y.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Wu, H.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Yang, J.

K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2014).

Yang, M. H.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

Yemelyanov, K. M.

Zhang, H.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

Zhang, W.

Zhu, Q.

Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing 24, 3522–3533 (2015).
[Crossref] [PubMed]

ACM Trans. Graph. (1)

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008).
[Crossref]

ACM Trans.Graph. (1)

R. Fattal, “Single image dehazing,” ACM Trans.Graph. 27(3), 988–992 (2008).

Appl. Opt. (3)

Computers & Electrical Engineering (1)

W. Sun, H. Wang, C. Sun, B. Guo, W. Jia, and M. Sun, “Fast single image haze removal via local atmospheric light veil estimation,” Computers & Electrical Engineering 46, 371–383 (2015).
[Crossref]

IEEE Trans. Image Process. (1)

J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998).
[Crossref]

IEEE Trans. on Image Process (1)

A. Mittal, A. K. Moorthy, and A. C. Bovik, ‘No-reference image quality assessment in the spatial domain,” IEEE Trans. on Image Process 21(12), 4695–4708 (2012).
[Crossref]

IEEE Trans. on Image Processing (1)

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Processing 16, 2080–2095 (2007).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

K.M. He, J. Sun, and X.O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010).
[PubMed]

IEEE Transactions on Image Processing (3)

C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing 22, 3271–3282 (2013).
[Crossref] [PubMed]

H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: Jpeg2000,” IEEE Transactions on Image Processing 14, 1918–1927 (2005).
[Crossref] [PubMed]

Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing 24, 3522–3533 (2015).
[Crossref] [PubMed]

Image Analysis & Stereology (1)

N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology 27, 87–95 (2011).
[Crossref]

J. Opt. Soc. Am. A (2)

Opt. Express (4)

Proc. SPIE (1)

H. Chen and L. B. Wolff, “Polarization phase-based method for material classification and object recognition in computer vision,” Proc. SPIE 2599, 54–63 (1996).
[Crossref]

Other (17)

M. Ben-Ezra, “Segmentation with invisible keying signal,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 32–37.

N. Hautière, J.P. Tarel, and D. Aubert, “Towards fog-free in-vehicle vision systems through contrast restoration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects by use of polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386.

S. G. Narasimhan and S. K. Nayar, “Interactive (de) weathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8.

J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.

D. Berman, T. treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2016).

K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2014).

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M. H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (Springer, 2016), pp. 154–169.

L. B. Wolff, “Using polarization to separate reflection components,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1989), pp. 363–369.
[Crossref]

L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 1701–1708.

M. Bass, Devices, Measurements, and Properties, Vol. 2 of Handbook of Optics (McGraw-Hill, 1995), Chap. 22.

S. Liu, M. A. Rahman, C. Y. Wong, C. F. Lin, H. Wu, and N. Kwok, “Image dehazing from the perspective of noise filtering,” Computers & Electrical Engineering, in press (2016).
[Crossref]

Q. Yufu and Z. Zhaofan, “Matlab code for non-sky dehazing,”(MediaFire, 2017). http://www.mediafire.com/file/1q447szg0u5zvja/Dehaze_Code.zip .

A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2010) pp. 513–516.
[Crossref]

A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2013), pp. 209–212.
[Crossref]

N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, and F. Battisti, “Color image database tid2013: Peculiarities and preliminary results,” In Proc. of European Workshop on Visual Information Processing, pp. 106–111, Paris, France (2013).

Supplementary Material (1)

Code 1: Our dehazing code, based on MATLAB.



Figures (12)

Fig. 1. Flowchart of the proposed dehazing process.
Fig. 2. Estimation of the polarization difference image. (a) 0° image, (b) 60° image, (c) 120° image, (d) minimum-intensity image, (e) maximum-intensity image, and (f) polarization difference image.
Fig. 3. High-brightness object interfering with the estimation of A∞. (a) Hazy image with sky area (R channel). (b) Difference image (R channel). The red-circled area indicates the high-brightness building area and the blue-circled area the suitable evaluation area; the red-dotted region indicates that the interference area has been removed in (b).
Fig. 4. Estimation of A∞ in non-sky images. (a) Hazy images without sky area (R channel). (b) Corresponding difference images (R channel). The red-circled area indicates the high-brightness building area and the blue-circled one the appropriate evaluation area; the red-dotted region indicates that the interference area has been removed in (b).
Fig. 5. Schechner’s initial polarization image and processed image without sky area. (a) Image with sky area. (b) Image without sky area.
Fig. 6. Dehazed images with and without sky area. (a) Dehazed image with sky area. (b) Dehazed image without sky area.
Fig. 7. Denoising the transmittance t. (a) Transmittance image with noise. (b) Transmittance image denoised by guided filter. (c) Transmittance image denoised by BM3D filter.
Fig. 8. Effect of adding factor ε along with block-matching and 3D (BM3D) filtering. (a) Dehazed image with noise. (b) RGB-channel image with noise. (c) RGB-channel image processed using factor ε. (d) Color image processed using factor ε. (e) RGB-channel image processed by BM3D. (f) Color image processed by BM3D.
Fig. 9. Filtering of amplified noise. (a) Maximum-intensity image. (b) Minimum-intensity image. (c) Dehazed image with noise. (d) Dehazed image processed using factor ε. (e) Preliminary noise image. (f) Three-channel weight map. (g) New noise image processed using the weight map. (h) Final denoised image. (i) Local noise effect in (h). (j) Schechner’s result. (k) Local noise effect in (j).
Fig. 10. Comparison of algorithm performance using Schechner’s dataset.
Fig. 11. Comparison of algorithm performance using our dataset.
Fig. 12. Algorithm performance in a scene with a specular surface.

Tables (2)

Table 1. Quantitative comparison of state-of-the-art algorithms for various performance indices.

Table 2. Quantitative comparison of state-of-the-art algorithms for various performance indices.

Equations (21)

Equations on this page are rendered with MathJax.

\[ I = L_{\mathrm{object}}\,t + A = L_{\mathrm{object}}\,e^{-\beta d} + A_{\infty}\left(1 - e^{-\beta d}\right) \]

\[ L_{\mathrm{object}} = \frac{I^{\perp} + I^{\parallel} - \Delta I/p}{1 - \dfrac{\Delta I}{p A_{\infty}}} = \frac{I - A}{1 - A/A_{\infty}} \]

\[ \begin{bmatrix} I_{\mathrm{out}} \\ Q_{\mathrm{out}} \\ U_{\mathrm{out}} \\ V_{\mathrm{out}} \end{bmatrix} = M \begin{bmatrix} I_{\mathrm{in}} \\ Q_{\mathrm{in}} \\ U_{\mathrm{in}} \\ V_{\mathrm{in}} \end{bmatrix} = \frac{1}{2} \begin{bmatrix} I_{\mathrm{in}} + Q_{\mathrm{in}}\cos 2\theta + U_{\mathrm{in}}\sin 2\theta \\ I_{\mathrm{in}}\cos 2\theta + Q_{\mathrm{in}}\cos^{2}2\theta + U_{\mathrm{in}}\cos 2\theta\,\sin 2\theta \\ I_{\mathrm{in}}\sin 2\theta + Q_{\mathrm{in}}\cos 2\theta\,\sin 2\theta + U_{\mathrm{in}}\sin^{2}2\theta \\ 0 \end{bmatrix} \]

\[ I_{\mathrm{out}} = \tfrac{1}{2}\left(I_{\mathrm{in}} + Q_{\mathrm{in}}\cos 2\theta + U_{\mathrm{in}}\sin 2\theta\right) \]

\[ I = A + D \]

\[ A = A_{\infty}\left(1 - e^{-\beta d}\right) \]

\[ D = L_{\mathrm{object}}\,e^{-\beta d} \]

\[ p = \frac{A^{\perp} - A^{\parallel}}{A^{\perp} + A^{\parallel}} = \frac{A^{\perp} - A^{\parallel}}{A} \]

\[ I^{\parallel} = \frac{D}{2} + A^{\parallel} \]

\[ I^{\perp} = \frac{D}{2} + A^{\perp} \]

\[ p = \frac{A^{\perp} - A^{\parallel}}{A} = \frac{I^{\perp} - I^{\parallel}}{A} \]

\[ \Delta I = I^{\perp} - I^{\parallel} = A\,p \]

\[ \mathrm{Cov}\left(A,\; L_{\mathrm{object}}\right) = 0 \]

\[ \mathrm{Cov}\!\left(\frac{\Delta I}{p},\; \frac{pI - \Delta I}{p A_{\infty} - \Delta I}\right) = 0 \]

\[ \arg\min_{p} \left| \mathrm{Cov}\!\left(\frac{\Delta I}{p},\; \frac{pI - \Delta I}{p A_{\infty} - \Delta I}\right) \right| \]

\[ \begin{bmatrix} p_{r} \\ p_{g} \\ p_{b} \end{bmatrix} \approx \begin{bmatrix} 0.29 \\ 0.27 \\ 0.24 \end{bmatrix} \]

\[ \begin{bmatrix} p_{r} \\ p_{g} \\ p_{b} \end{bmatrix} \approx \begin{bmatrix} 0.34 \\ 0.32 \\ 0.27 \end{bmatrix} \]

\[ N = L_{\mathrm{object}} - L_{\mathrm{object}}^{\varepsilon} \]

\[ \Delta I = I^{\perp} - I^{\parallel} = A\,p + D\,p_{D} \]

\[ \hat{A} = \frac{I^{\perp} - I^{\parallel}}{p} = A + \frac{D\,p_{D}}{p} \]

\[ \hat{D} = I - \hat{A} = D\left(1 - \frac{p_{D}}{p}\right) \]
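Taken together, the equations above imply an end-to-end recovery pipeline: Stokes components from three polarizer orientations, maximum- and minimum-intensity images, a DPA chosen so that airlight and object radiance are uncorrelated, and the final dehazing formula. The NumPy sketch below illustrates that pipeline; it is not the authors’ MATLAB implementation (Code 1), and the function names, the ideal-polarizer factor of 1/2, and the grid search used for the covariance minimization are our own assumptions.

```python
import numpy as np

def stokes_from_three(i0, i60, i120):
    """Recover Stokes I, Q, U from polarizer images at 0 deg, 60 deg, 120 deg,
    assuming the ideal-polarizer model I_out(theta) = (I + Q cos2t + U sin2t)/2."""
    s_i = 2.0 * (i0 + i60 + i120) / 3.0
    s_q = 2.0 * i0 - s_i
    s_u = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    return s_i, s_q, s_u

def min_max_images(s_i, s_q, s_u):
    """Minimum- and maximum-intensity images over all analyzer angles."""
    amp = np.sqrt(s_q**2 + s_u**2)          # equals Delta_I = I_perp - I_par
    return (s_i - amp) / 2.0, (s_i + amp) / 2.0

def dehaze(total, delta, p, a_inf, eps=1e-6):
    """L_object = (I - A) / (1 - A / A_inf) with airlight A = Delta_I / p."""
    airlight = delta / p
    t = np.clip(1.0 - airlight / a_inf, eps, 1.0)  # estimated transmittance
    return (total - airlight) / t

def estimate_dpa(total, delta, a_inf, grid=np.linspace(0.05, 0.95, 181)):
    """Pick the DPA p that makes airlight and recovered object radiance
    uncorrelated, by scanning candidate p values (grid search assumed)."""
    best_p, best_c = grid[0], np.inf
    for p in grid:
        airlight = (delta / p).ravel()
        radiance = dehaze(total, delta, p, a_inf).ravel()
        c = abs(np.corrcoef(airlight, radiance)[0, 1])
        if c < best_c:
            best_p, best_c = p, c
    return best_p
```

On real data the three inputs would be registered captures through a rotating linear polarizer, and the covariance criterion would be evaluated per color channel, which is why the paper reports separate values of p for the r, g, and b channels.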
