Abstract

Images and videos captured by optical devices are usually degraded by turbid media such as haze, smoke, fog, rain, and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single image-based dehazing framework to remove haze artifacts from images, where we propose two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors with the haze optical model, we propose to estimate the atmospheric light via haze density analysis. We can then estimate the transmission map, followed by refining it via the bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with the state-of-the-art approach based on the patch-based dark channel prior.

© 2013 Optical Society of America

1. Introduction

Different weather conditions such as haze, fog, smoke, rain, or snow cause complex visual effects in the spatial or temporal domains of images or videos [1–3]. Such artifacts may significantly degrade the performance of outdoor vision systems relying on image/video feature extraction [4] or visual attention modeling [5–7], such as event detection, object detection, tracking, and recognition, scene analysis and classification, and image indexing and retrieval. For example, visual attention models [5] compute a saliency map that topographically encodes saliency at each location in the visual input, simulating which elements of a visual scene are likely to attract the attention of human observers. Nevertheless, the performance of such a model for related applications may be degraded if haze directly interacts with the target of interest in an image. A comprehensive survey of detection approaches for outdoor environmental factors, aimed at enhancing the accuracy of video-based automatic incident detection systems, can be found in [8]. Moreover, haze artifacts have been shown to significantly influence the performance of image/video coding in [9], where the effects of dehazing on compression were investigated in terms of coding artifacts and motion estimation in both cases of applying dehazing before and after compression. It was concluded that both better dehazing performance and better coding efficiency can be achieved when dehazing is applied before compression [9]. More specifically, we have evaluated several vision-based object detection systems (e.g., [4,6,7]) with hazy and haze-removed images (obtained by the proposed method) as inputs to show that detection accuracy is better for haze-removed images, as presented in Sec. 4.1.

Removal of weather effects has recently received much attention [10], including the removal of haze [9–31], rain [32–35], and snow [34] from images/videos. In this paper, we focus on haze removal, i.e., dehazing, from a single image. As illustrated in Fig. 1, haze is an example of a turbid medium (e.g., particles and water droplets) in the atmosphere, which degrades outdoor images through atmospheric absorption and scattering. The irradiance received by the camera from a scene point is attenuated along the line of sight. Furthermore, the incoming light is blended with the airlight, i.e., ambient light reflected into the line of sight by atmospheric particles. As a result, the degraded images lose contrast and color fidelity. Because the amount of scattering depends on the distance of the scene points from the camera, the degradation is spatially variant [28].

 

Fig. 1 An example illustration of the haze optical model applied to a natural hazy scene.


Because haze depends on the unknown scene depth, dehazing is a challenging problem. Furthermore, if the only available input is a single hazy image, the problem is under-constrained and even more challenging. Hence, most traditional dehazing approaches [10–21] rely on multiple hazy images as input or on additional prior knowledge. Polarization-based methods [11–14] remove haze effects using two or more images taken with different degrees of polarization. In [15–17], more constraints obtained from multiple images of the same scene under different weather conditions were employed for haze/weather artifact removal. Moreover, depth-based methods [18–21] require some depth information from user inputs or known 3D models for dehazing or deweathering.

Nevertheless, taking multiple input images of the same scene is usually impractical in many real applications. Single image haze removal [22–30] has therefore recently received much attention. The success of these approaches usually lies in using stronger priors or assumptions. In [22], it is observed that a haze-free image should have higher contrast than its hazy version, and haze is removed from a single image by maximizing the local contrast of the restored image. The results are usually visually compelling but may not be physically valid. Fattal [23] proposed to estimate the albedo of the scene and the medium transmission under the assumption that the transmission and the surface shading are locally uncorrelated. This approach is physically sound and can usually produce impressive results, but it cannot restore heavily hazy images well and may fail where the assumption is invalid. Tarel and Hautière [24] proposed a fast visibility restoration algorithm for a single image based on the median filter to preserve both edges and corners; this method has been shown in [31] to be unsuitable for regions with depth discontinuities. In addition, in [25], an algorithm was presented for removing the effects of light scattering in a single underwater image. In [26], transmission is iteratively extracted under the assumption that large-scale chromaticity variations are due to transmission while small-scale luminance variations are due to scene albedo. A filtering-based dehazing method for a single image was proposed in [27], where the basic idea is to compute an accurate atmospheric veil that is not only smooth but also respects the depth information of the underlying image. In [31], single image dehazing was extended to video dehazing with the maintenance of spatial and temporal coherence.

Recently, an effective image prior, called the dark channel prior [28], was proposed to remove haze from a single image, where the key observation is that most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Based on this prior with the haze optical model, one can directly estimate the thickness of the haze and restore a high-quality haze-free image. The dark channel prior was subsequently also employed in [29,30] for single image dehazing.

In this paper, inspired by the patch-based dark channel prior [28], we propose two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two novel priors with the haze optical model, we propose to estimate the atmospheric light via haze density analysis. We can then estimate the transmission map, followed by refining it via the bilateral filter [36]. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with [28]. The main contribution of this paper is three-fold: (i) we propose the pixel-based dark channel prior for single image dehazing, which significantly reduces the computational complexity of the original patch-based approach [28] while maintaining the dark channel property of an image; (ii) we propose the pixel-based bright channel prior, which can be integrated with the dark channel prior for haze density analysis and atmospheric light estimation; and (iii) in our method, the estimated atmospheric light is pixel-based, which may be more accurate than the single constant per image employed in [28], resulting in more reliable estimation of the transmission map. Moreover, we also investigate several real applications of single image dehazing, as shown in Sec. 4.1, which have rarely been evaluated in the literature.

The rest of this paper is organized as follows. Sec. 2 briefly reviews the haze imaging model, the patch-based dark channel prior, and the dehazing method proposed in [28]. Sec. 3 presents the proposed single image-based dehazing framework. Sec. 4 demonstrates experimental results. Finally, Sec. 5 concludes this paper.

2. Related works

2.1 Haze optical model

In the computer vision community, the haze optical model widely used to describe the formation of a hazy image I(x), where x is the pixel index, is given by [1,28]:

I(x) = J(x)t(x) + A(1 - t(x)),    (1)

where I(x) is the observed intensity, J(x) is the scene radiance (the original haze-free image to be recovered), A is the global atmospheric light, and t(x) is the medium transmission indicating the portion of the light that is not scattered and reaches the camera. The major goal of single image dehazing is to recover the haze-free image J(x), together with A and t(x), from the received image I(x), which is an under-constrained problem. In this model, the term J(x)t(x) is called the direct attenuation and the term A(1 - t(x)) is called the airlight. The direct attenuation denotes the scene radiance and its decay in the medium, while the airlight results from previously scattered light and leads to a shift of the scene colors. The direct attenuation is a multiplicative distortion of the scene radiance, while the airlight is an additive one. This haze optical model has been employed in most works on single image dehazing [22–30].
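
To make Eq. (1) concrete, the following sketch synthesizes a hazy image from a haze-free image, a transmission map, and a global atmospheric light. All code examples in this article are illustrative Python/NumPy sketches (the paper's own implementation is in MATLAB, as noted in Sec. 4.3); function names and array conventions here are ours, not from the paper:

```python
import numpy as np

def apply_haze_model(J, t, A):
    """Synthesize a hazy image via Eq. (1): I(x) = J(x)t(x) + A(1 - t(x)).

    J : (H, W, 3) haze-free scene radiance, values in [0, 1]
    t : (H, W)    medium transmission, values in (0, 1]
    A : (3,)      global atmospheric light
    """
    t3 = t[..., np.newaxis]            # broadcast t(x) over the color channels
    return J * t3 + A * (1.0 - t3)     # direct attenuation + airlight
```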

2.2 Patch-based dark channel prior

The dark channel prior [28] is based on the observation that, in most non-sky patches of outdoor haze-free images, at least one color channel has some pixels whose intensity is very low and close to zero. That is, the minimum intensity in such a patch is close to zero. For an arbitrary image J(x), its dark channel J^{dark}(x) is given by [28]:

J^{dark}(x) = \min_{y \in \Omega(x)} \left[ \min_{c \in \{R,G,B\}} J^c(y) \right],    (2)

where \Omega(x) is a local patch centered at x, y denotes the index of a pixel in \Omega(x), c denotes one of the three color channels in the RGB (red, green, and blue) color space, and J^c denotes the color channel c of J.

Based on the concept of the dark channel, it has been shown that if J is an outdoor haze-free image, then, except for the sky region, the intensity of J's dark channel is low and tends to zero, i.e., J^{dark} → 0, which is called the dark channel prior. The low intensity in the dark channel mainly derives from three factors: (i) shadows; (ii) colorful objects or surfaces; and (iii) dark objects or surfaces. The patch-based dark channel prior has recently been applied to single image dehazing [28–30], as described in Sec. 2.3.
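
As an illustration, the patch-based dark channel of Eq. (2) can be computed as a per-pixel minimum over the color channels followed by a gray-scale minimum (erosion) filter over the patch. A minimal sketch in Python/SciPy, assuming images normalized to [0, 1]:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_patch(J, patch_size=15):
    """Patch-based dark channel of Eq. (2); 15x15 is the patch size of [28].

    J : (H, W, 3) image with values in [0, 1]
    """
    per_pixel_min = J.min(axis=2)                          # inner min over c
    return minimum_filter(per_pixel_min, size=patch_size)  # outer min over Omega(x)
```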

2.3 Single image dehazing based on patch-based dark channel prior

In [28], a single image-based dehazing framework based on the patch-based dark channel prior was proposed. Based on the haze imaging model shown in Eq. (1), the authors of [28] proposed to first estimate the atmospheric light A and the transmission map t(x), followed by refining t(x) via the soft matting process [37]. For estimating A, it is common to use the color of the most haze-opaque region in I as A [22] or as an initial guess for A [23]. Based on the property that the dark channel of a hazy image approximates the haze density, it was proposed in [28] to use the dark channel to detect the most haze-opaque region. The top 0.1% brightest pixels in the dark channel of I, which are usually the most haze-opaque, are first picked. Among these pixels, the pixel with the highest intensity in I is selected as the atmospheric light A.

After estimating A, based on the haze imaging model shown in Eq. (1) and the dark channel prior shown in Eq. (2), the transmission map t(x) can be derived as:

t(x) = 1 - w \min_{y \in \Omega(x)} \left[ \min_{c} \frac{I^c(y)}{A^c} \right],    (3)

where w, 0 < w ≤ 1, is a constant parameter used to keep a very small amount of haze for distant objects (w is set to 0.95 in [28]); the definitions of y, \Omega(x), and c are the same as those in Eq. (2); I^c denotes the color channel c of I; and A^c denotes the color channel c of A. Then, t(x) is refined via the soft matting process [37]. Finally, the haze-free image J can be recovered by:

J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A,    (4)

where t_0 denotes a lower bound on t(x) used to preserve a small amount of haze in very dense haze regions (t_0 is set to 0.1 in [28]).
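
Putting Eqs. (2)–(4) together, the overall pipeline of [28] can be sketched as follows. This is our simplified reading: the soft matting refinement is omitted for brevity, and the candidate-selection details are assumptions rather than the authors' exact implementation:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_he_sketch(I, patch_size=15, w=0.95, t0=0.1):
    """Simplified sketch of the pipeline of [28]; soft matting is omitted.

    I : (H, W, 3) hazy image with values in [0, 1]
    """
    # Dark channel of the hazy image (Eq. (2) applied to I).
    dark = minimum_filter(I.min(axis=2), size=patch_size)

    # Atmospheric light A: among the top 0.1% brightest dark-channel
    # pixels, take the pixel with the highest intensity in I.
    n = max(1, int(0.001 * dark.size))
    idx = np.argsort(dark.ravel())[-n:]
    candidates = I.reshape(-1, 3)[idx]
    A = candidates[candidates.sum(axis=1).argmax()]

    # Transmission map, Eq. (3), with the patch-wise minimum over I^c/A^c.
    t = 1.0 - w * minimum_filter((I / A).min(axis=2), size=patch_size)

    # Scene radiance recovery, Eq. (4), with the lower bound t0.
    t = np.maximum(t, t0)[..., np.newaxis]
    return np.clip((I - A) / t + A, 0.0, 1.0)
```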

3. Proposed single image-based dehazing framework

In this section, we present the proposed single image-based dehazing framework, including our pixel-based dark/bright channel priors, haze density analysis, atmospheric light estimation, and transmission map estimation/refinement methods, which improve upon the dehazing method via the patch-based dark channel prior proposed in [28]. The main drawbacks of [28] include: (i) the estimated atmospheric light of an image is a constant for all pixels, which is not accurate enough and may lower the accuracy of transmission map estimation; (ii) the computational complexity of calculating the patch-based dark channel of an image is expensive, especially for a large patch size (e.g., 15 × 15 is used in [28]); and (iii) the computational complexity of the transmission map refinement process via soft matting is expensive. The issue of computational complexity is discussed and evaluated in Sec. 4.3. The details of the proposed single image dehazing method are elaborated in the following subsections.

3.1 Pixel-based dark/bright channel prior

Based on our observation, most outdoor haze-free images contain many pixels whose intensity is very low in at least one color channel. We call this property the “pixel-based dark channel prior.” Even though the dark channel obtained by our pixel-based prior is less dark than that obtained by the patch-based approach [28], it is sufficient for the estimation of the atmospheric light and the transmission map, described in Secs. 3.2 and 3.3, respectively. More specifically, for an image J(x), its pixel-based dark channel J^{dark\_pixel}(x) is derived as:

J^{dark\_pixel}(x) = \min_{c \in \{R,G,B\}} J^c(x),    (5)

where c denotes one of the three color channels and J^c denotes the color channel c of J. As an example, Figs. 2(b) and 2(c) show the pixel-based dark channel (Fig. 2(b)) of the image in Fig. 2(a) and its histogram (Fig. 2(c)). It can be observed that most pixels in the dark channel have very small values.

 

Fig. 2 An example of the proposed pixel-based dark/bright channel prior: (a) the original haze-free image; (b) the pixel-based dark channel version of (a); (c) the histogram of (b); (d) the pixel-based bright channel version of (a); and (e) the histogram of (d).


On the other hand, we also observe that most outdoor haze-free images contain many pixels whose intensity is very high in at least one color channel, which we call the “pixel-based bright channel prior.” Hence, the pixel-based bright channel J^{bright\_pixel}(x) can be similarly derived as:

J^{bright\_pixel}(x) = \max_{c \in \{R,G,B\}} J^c(x).    (6)

Similarly, as illustrated in Figs. 2(d) and 2(e), even though the obtained pixel-based bright channel is not really very “bright,” it is sufficient for the estimation of the atmospheric light, described in Sec. 3.2.
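
Both pixel-based priors reduce to a single per-pixel reduction over the color channels, which is exactly what removes the costly patch-wise minimum filter of the patch-based approach. A minimal sketch:

```python
import numpy as np

def pixel_dark_channel(J):
    """Pixel-based dark channel of Eq. (5): per-pixel minimum over R, G, B."""
    return J.min(axis=2)

def pixel_bright_channel(J):
    """Pixel-based bright channel of Eq. (6): per-pixel maximum over R, G, B."""
    return J.max(axis=2)
```

A histogram such as those in Figs. 2(c) and 2(e) can then be obtained with, e.g., np.histogram(pixel_dark_channel(J), bins=256, range=(0, 1)).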

3.2 Haze density analysis and estimation of atmospheric light

Because the haze density differs across different regions of an image, the value of the atmospheric light should differ accordingly for each pixel. In general, in a hazy image, the haze density is higher in regions of deeper depth and lower in regions of shallower depth. To estimate the haze density, we first convert an input hazy image I(x) to its representation I_{HSV}(x) in the HSV (hue, saturation, and value) color space. The HSV color space has been widely employed in several computer vision and multimedia applications, with the main characteristic of better stability, benefiting from the separation of the brightness and chromaticity information of an image [38,39]. We then define the HSV distance for each pixel as:

d(x) = \| I_{HSV}(x) - B \|,    (7)

where B denotes the brightest color value in the HSV color space. In general, a smaller d(x) means that pixel x is brighter and covered by haze of higher density. Then, we convert d(x) to:

s(x) = 1 - \frac{d(x)}{\max_{y \in I_{HSV}} d(y)}.    (8)

That is, s(x) is directly proportional to the haze density at pixel x.
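
A possible realization of Eqs. (7) and (8) is sketched below. Since the paper does not spell out the exact distance metric, taking B as the fully bright, unsaturated HSV color and using the Euclidean distance over the saturation and value components is our assumption:

```python
import numpy as np
import cv2

def haze_density(I_bgr):
    """Per-pixel haze density s(x) of Eq. (8) from the HSV distance of Eq. (7).

    I_bgr : (H, W, 3) uint8 hazy image in OpenCV's BGR channel order.
    The distance metric below is an assumption: B is taken as the brightest,
    unsaturated color, i.e., (S, V) = (0, max).
    """
    hsv = cv2.cvtColor(I_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    s_ch, v_ch = hsv[..., 1] / 255.0, hsv[..., 2] / 255.0
    d = np.sqrt(s_ch ** 2 + (1.0 - v_ch) ** 2)   # distance to B, Eq. (7)
    return 1.0 - d / d.max()                     # s(x), Eq. (8)
```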

Then, we determine the pixel value A^c_{highest} of the color channel c with the highest haze density in I by selecting the pixel value corresponding to the top 0.1% brightest values in J^{dark\_pixel}, and the pixel value A^c_{lowest} of the color channel c with the lowest haze density in I by selecting the pixel value corresponding to the top 30% darkest values in J^{bright\_pixel}. Based on the above information about the haze density and the dark/bright channel priors for I(x), we can estimate the color channel c of the atmospheric light A(x) for I(x), i.e., A^c(x), as:

A^c(x) = \left| \frac{A^c_{highest} - A^c_{lowest}}{\max_{y \in I_{HSV}} \left[ \frac{1}{1 - d(y)} \right] - \min_{y \in I_{HSV}} \left[ \frac{1}{1 - d(y)} \right]} \right| \times s(x).    (9)

That is, we linearly determine the atmospheric light A(x) for each pixel x based on the haze density, where s(x) is directly proportional to the haze density at pixel x, as illustrated in Fig. 3. After estimating the atmospheric light A(x), we can then estimate the transmission map t(x), as described in Sec. 3.3.
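
The sketch below implements our simplified reading of Eq. (9) as a per-pixel linear blend between the two anchor colors, driven by s(x); the averaging over candidate pixels and the blend form (with the normalization constant of Eq. (9) folded into s(x)) are assumptions on our part:

```python
import numpy as np

def atmospheric_light(I, s, dark_pixel, bright_pixel):
    """Pixel-wise atmospheric light A(x), a simplified reading of Eq. (9).

    I            : (H, W, 3) hazy image with values in [0, 1]
    s            : (H, W) haze density map from Eq. (8)
    dark_pixel   : pixel-based dark channel of I, Eq. (5)
    bright_pixel : pixel-based bright channel of I, Eq. (6)
    """
    H, W, _ = I.shape
    flat = I.reshape(-1, 3)

    # A_highest: pixels at the top 0.1% brightest values of the dark channel.
    idx_hi = np.argsort(dark_pixel.ravel())[-max(1, int(0.001 * H * W)):]
    A_highest = flat[idx_hi].mean(axis=0)

    # A_lowest: pixels at the top 30% darkest values of the bright channel.
    idx_lo = np.argsort(bright_pixel.ravel())[: max(1, int(0.30 * H * W))]
    A_lowest = flat[idx_lo].mean(axis=0)

    # Linear blend between the two anchors driven by the haze density s(x).
    return A_lowest + (A_highest - A_lowest) * s[..., np.newaxis]
```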

 

Fig. 3 An example illustrating the proposed estimation method of atmospheric light, where the red dotted line marks the pixel value corresponding to the top 0.1% brightest values in J^{dark\_pixel}, and the blue dotted line marks the pixel value corresponding to the top 30% darkest values in J^{bright\_pixel}.


3.3 Estimation of transmission map and its refinement

Based on our atmospheric light estimation technique, the haze imaging model in Eq. (1) can be modified by replacing A with A(x). By normalizing the modified haze imaging equation by A^c(x) for each color channel c, followed by taking the pixel-based dark channel on both sides, we obtain:

\min_{c \in \{R,G,B\}} \left[ \frac{I^c(x)}{A^c(x)} \right] = t(x) \min_{c \in \{R,G,B\}} \left[ \frac{J^c(x)}{A^c(x)} \right] + 1 - t(x).    (10)

Based on the fact that the pixel-based dark channel of the haze-free image J is close to zero and A^c(x) is always positive, we can derive t(x) from Eq. (10). Moreover, similar to [28], we also keep a very small amount of haze for distant objects by introducing a parameter w(x) for each pixel x, 0 < w(x) ≤ 1. Different from using a single constant w (set to 0.95) for all pixels of an image in [28], we propose to use the adaptive weighting function w(x) = [s(x)]^{1/3}, adapted to the depth at x. That is, w(x) is larger in regions of deeper depth and smaller in regions of shallower depth. Then, we can derive t(x) as:

t(x) = 1 - w(x) \left[ \min_{c \in \{R,G,B\}} \frac{I^c(x)}{A^c(x)} \right],    (11)

as illustrated in Fig. 4(b). We then refine t(x) via the bilateral filter [36] to reduce halo artifacts, as illustrated in Fig. 4(c), which is more efficient than the high-complexity soft matting process [37] employed in [28]. The bilateral filter efficiently smooths an image while preserving edges by means of a nonlinear combination of nearby image values, and it has recently been extensively applied and investigated for image processing tasks such as image denoising. We can then recover the haze-free image based on Eq. (1), as described in Sec. 3.4.

 

Fig. 4 An example illustrating the proposed dehazing method: (a) the hazy image to be dehazed; (b) the transmission map of (a) obtained by the proposed method; (c) the refined transmission map by using the bilateral filter; and (d) the dehazed image of (a) obtained by the proposed method.


3.4 Recovery of haze-free image

After estimating t(x) and A(x), we can derive J(x) based on the haze imaging model. Similar to [28], we also introduce a lower bound t_0 to preserve a small amount of haze in very dense haze regions (t_0 is also set to 0.1, following [28]). Finally, the haze-free image J(x) can be derived as:

J(x) = \frac{I(x) - A(x)}{\max(t(x), t_0)} + A(x),    (12)

as illustrated in Fig. 4(d).
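
The recovery step of Eq. (12) is then a direct element-wise computation; a minimal sketch:

```python
import numpy as np

def recover_scene(I, A, t, t0=0.1):
    """Haze-free image of Eq. (12) with the lower bound t0 of [28].

    I, A : (H, W, 3) arrays with values in [0, 1]
    t    : (H, W) refined transmission map
    """
    t3 = np.maximum(t, t0)[..., np.newaxis]   # per-pixel lower-bounded t(x)
    return np.clip((I - A) / t3 + A, 0.0, 1.0)
```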

4. Experimental results

To demonstrate the practicability and performance of the proposed single image-based dehazing method, in Sec. 4.1 we evaluate the performance of three existing vision-based object detection systems [4,6,7] with and without applying the proposed dehazing method as a preprocessing step. Then, in Sec. 4.2, we compare our method with three state-of-the-art single image-based dehazing methods [22,23,28].

4.1 Experiments on vision-based systems with/without image dehazing as preprocessing

As shown in Fig. 5, we applied the HOG (histogram of oriented gradients)-based pedestrian detector released with [4] to the hazy image shown in Fig. 5(a) and to its haze-removed version (obtained by the proposed method presented in Sec. 3) shown in Fig. 5(b). The detection accuracy for the haze-removed version is better. In addition, as shown in Fig. 6, we applied the visual attention model-based blob detector released with [6] to the hazy image shown in Fig. 6(a) and to its haze-removed version shown in Fig. 6(b); again, the detection accuracy improves for the haze-removed version. Moreover, we also applied the graph-based approach released with [7] to calculate the visual saliency map for the hazy image shown in Fig. 7(a) and for its haze-removed version shown in Fig. 7(b). Similarly, the derived saliency map for the haze-removed version is more accurate.

 

Fig. 5 Applying the HOG-based pedestrian detector released from [4] to: (a) the original hazy image; and (b) the haze-removed version (obtained by the proposed method) of (a), where more pedestrians can be detected in (b).


 

Fig. 6 Applying the visual attention model-based blob detector released from [6] to: (a) the original hazy image; and (b) the haze-removed version (obtained by the proposed method) of (a), where more objects of interest can be detected in (b).


 

Fig. 7 Applying the graph-based approach released from [7] to calculate the visual saliency map for: (a) the original hazy image; and (b) the haze-removed version (obtained by the proposed method) of (a), where more accurate saliency map can be obtained in (b).


4.2 Comparisons with state-of-the-art single image dehazing methods

In Fig. 8, the proposed method outperforms both Fattal's and He's methods, especially in the recovery of the red wall region. Fattal's method [23] is mainly based on local statistics and requires sufficient color information and variance. When the color information is faint due to haze, the variance is not high enough for this method to estimate the transmission map. Moreover, the estimation of the atmospheric light in He's method [28] is somewhat rough, which may induce unreliable estimation of the transmission map, resulting in inaccurate color recovery. The proposed method applies more reliable estimations of the atmospheric light and the transmission map, resulting in better image recovery. Similar observations can also be made in Figs. 9 and 10.

 

Fig. 8 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Fattal’s; (c) He’s; and (d) proposed methods.


 

Fig. 9 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Fattal’s; (c) He’s; and (d) proposed methods.


 

Fig. 10 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Fattal’s; (c) He’s; and (d) proposed methods.


In addition, in Fig. 11, even though Fattal's method [23] removes the haze more completely than our method, the recovered image appears somewhat unnatural. For example, in Fig. 11(b), the sky region is too bright and the colors of some tree regions are too white. The proposed method employs a more adaptive weighting function in the estimation of the transmission map (Eq. (11)) to suitably keep a very small amount of haze, resulting in more natural image recovery. In Fig. 12, the proposed method achieves better color recovery than He's method [28] thanks to better estimation of the transmission map.

 

Fig. 11 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Fattal’s; and (c) proposed methods.


 

Fig. 12 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) He’s; and (c) proposed methods.


Moreover, Figs. 13 and 14 show comparison results between Tan's method [22] and the proposed method, where it can be seen that our method significantly outperforms Tan's method. The main reason is that Tan's method tends to maximize the contrast of a hazy image, which may overestimate the haze layer and over-saturate colors. In contrast, the proposed method recovers the image structure without sacrificing the fidelity of the color information.

 

Fig. 13 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Tan’s; and (c) proposed methods.


 

Fig. 14 Dehazing results: (a) hazy image; the haze-removed versions of (a) via (b) Tan’s; and (c) proposed methods.


On the other hand, to subjectively evaluate the proposed method, we also invited 10 subjects to participate in a subjective test. All subjects were undergraduate or graduate students, and none of them had knowledge of the evaluated algorithms, i.e., Tan's [22], Fattal's [23], He's [28], and the proposed algorithms. We selected 50 hazy images, applied the four evaluated methods, and obtained 50 dehazed images for each method. Following the principle of the double stimulus continuous quality scale (DSCQS) quality assessment [40] for video quality measurement recommended by ITU-R BT.500, each subject was asked to give a score to each dehazed image. The score, corresponding to the visual quality of an image, ranges from 5 to 1, denoting excellent (5), good (4), fair (3), poor (2), and bad (1). Table 1 lists the average score over the 50 dehazed images for each evaluated method. Based on Table 1, the proposed method outperforms the three methods [22,23,28] in the subjective test.


Table 1. Subjective evaluation results of Tan’s [22], Fattal’s [23], He’s [28], and the proposed methods

4.3 Discussion on computational complexity

The proposed dehazing method was implemented in MATLAB® (64-bit) on a personal computer equipped with an Intel® Core i7-2600 processor and 4 GB of memory. Table 2 lists the run time (in seconds per image) of the dehazing method based on the patch-based dark channel prior (He's method) [28] and of the proposed method. It can be observed from Table 2 that the computational complexity of the proposed method is significantly lower than that of He's method [28]. The main reasons are: (i) calculating the patch-based dark channel of an image is expensive, especially for the large patch size used in [28], whereas our method employs the proposed pixel-based dark channel prior, which significantly reduces the complexity; and (ii) the transmission map refinement process via soft matting [37] used in [28] is expensive, whereas our method employs the efficient bilateral filter [36]. As a result, high-quality haze-free images can be recovered with lower computational complexity by our method compared with [28].


Table 2. Run time (in seconds) comparison between He’s method [28] and the proposed method.

5. Conclusion

In this paper, we have proposed a novel single image-based dehazing framework with two novel image priors, called the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors with the haze optical model, we estimate the atmospheric light via haze density analysis. We then estimate the transmission map, followed by refining it via the bilateral filter. Based on our experimental results, high-quality haze-free images can be recovered by our dehazing method, which outperforms or is comparable with the three state-of-the-art methods [22,23,28] used for comparison. For future work, we will extend our method to video dehazing while maintaining spatial and temporal consistency, which can be embedded into driving safety assistance devices together with our previous rain removal techniques [32,33].

Acknowledgments

This work was supported in part by the National Science Council, Taiwan, under Grants NSC99-2628-E-110-008-MY3 and NSC100-2218-E-224-017-MY3. Our thanks to Wen-Hung Xu for executing the program on the test data in this work; his timely assistance is greatly appreciated.

References and links

1. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis. 48(3), 233–254 (2002). [CrossRef]  

2. K. Garg and S. K. Nayar, “Vision and rain,” Int. J. Comput. Vis. 75(1), 3–27 (2007). [CrossRef]  

3. R. C. Henry, S. Mahadev, S. Urquijo, and D. Chitwood, “Color perception through atmospheric haze,” J. Opt. Soc. Am. A 17(5), 831–835 (2000). [CrossRef]   [PubMed]  

4. S. Maji, A. C. Berg, and J. Malik, “Classification using intersection kernel support vector machines is efficient,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8. [CrossRef]  

5. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998). [CrossRef]  

6. M. Jahangiri and M. Petrou, “An attention model for extracting components that merit identification,” in Proceedings of IEEE International Conference on Image Processing (Cairo, Egypt, 2009), pp. 965–968. [CrossRef]  

7. J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Proceedings of Advances in Neural Information Processing Systems (2007), pp. 545–552.

8. M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transp. Syst. 9(2), 349–360 (2008). [CrossRef]  

9. K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE Trans. Image Process. 21(2), 662–673 (2012). [CrossRef]   [PubMed]  

10. S. G. Narasimhan and S. K. Nayar, “Removing weather effects from monochrome images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001), pp. 186–193.

11. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001), pp. 325–332.

12. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 1984–1991.

13. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef]   [PubMed]  

14. E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express 17(2), 472–493 (2009). [CrossRef]   [PubMed]  

15. S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proceedings of IEEE International Conference on Computer Vision (Kerkyra, Greece, 1999), pp. 820–827. [CrossRef]  

16. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Hilton Head Island, USA, 2000), pp. 598–605. [CrossRef]  

17. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]  

18. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27(5), 1–10 (2008). [CrossRef]  

19. S. G. Narasimhan and S. K. Nayar, “Interactive deweathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

20. F. Cozman and E. Krotkov, “Depth from scattering,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (San Juan, USA, 1997), pp. 801–806. [CrossRef]  

21. K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18(10), 2460–2467 (2001). [CrossRef]   [PubMed]  

22. R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8. [CrossRef]  

23. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 1–9 (2008). [CrossRef]  

24. J. P. Tarel and N. Hautière, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208. [CrossRef]  

25. N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image dehazing,” in Proceedings of MTS/IEEE OCEANS (Seattle, USA, 2010), pp. 1–8. [CrossRef]  

26. J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput. 26(6–8), 761–768 (2010). [CrossRef]  

27. C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” Vis. Comput. 28(6–8), 713–721 (2012). [CrossRef]  

28. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [PubMed]  

29. C. T. Chu and M. S. Lee, “A content-adaptive method for single image dehazing,” Lect. Notes Comput. Sci. 6298, 350–361 (2010). [CrossRef]  

30. J. Yu and Q. Liao, “Fast single image fog removal using edge-preserving smoothing,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, (Prague, Czech Republic, 2011), pp. 1245–1248. [CrossRef]  

31. J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput. 27(6–8), 749–757 (2011). [CrossRef]  

32. L. W. Kang, C. W. Lin, and Y. H. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Trans. Image Process. 21(4), 1742–1755 (2012). [CrossRef]   [PubMed]  

33. L. W. Kang, C. W. Lin, C. T. Lin, and Y. C. Lin, “Self-learning-based rain streak removal for image/video,” in Proceedings of IEEE International Symposium on Circuits and Systems (Seoul, Korea, 2012), pp. 1871–1874. [CrossRef]  

34. P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” Int. J. Comput. Vis. 86(2–3), 256–274 (2010). [CrossRef]  

35. J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. Comput. Vis. 93(3), 348–367 (2011). [CrossRef]  

36. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of IEEE International Conference on Computer Vision (Bombay, 1998), pp. 839–846. [CrossRef]  

37. A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 61–68. [CrossRef]  

38. B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Trans. Circ. Syst. Video Tech. 11(6), 703–715 (2001). [CrossRef]  

39. X. Wang and Z. Wang, “A novel method for image retrieval based on structure elements’ descriptor,” J. Vis. Commun. Image Represent. 24(1), 63–74 (2013). [CrossRef]  

40. M. H. Pinson and S. Wolf, “Comparing subjective video quality testing methodologies,” in Proceedings of SPIE 5150, Visual Communications and Image Processing (2003).

[CrossRef]

Nguyen, T. Q.

K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE Trans. Image Process.21(2), 662–673 (2012).
[CrossRef] [PubMed]

Niebur, E.

L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell.20(11), 1254–1259 (1998).
[CrossRef]

Oakley, J. P.

Ohm, J. R.

B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Trans. Circ. Syst. Video Tech.11(6), 703–715 (2001).
[CrossRef]

Pervez, M. S.

M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transp. Syst.9(2), 349–360 (2008).
[CrossRef]

Petrou, M.

M. Jahangiri and M. Petrou, “An attention model for extracting components that merit identification,” in Proceedings of IEEE International Conference on Image Processing (Cairo, Egypt, 2009), pp. 965–968.
[CrossRef]

Radmanesh, A.

M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transp. Syst.9(2), 349–360 (2008).
[CrossRef]

Schechner, Y. Y.

E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express17(2), 472–493 (2009).
[CrossRef] [PubMed]

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt.42(3), 511–525 (2003).
[CrossRef] [PubMed]

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001), pp. 325–332.

S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 1984 – 1991.

Shehata, M. S.

M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transp. Syst.9(2), 349–360 (2008).
[CrossRef]

Shwartz, S.

E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express17(2), 472–493 (2009).
[CrossRef] [PubMed]

S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 1984 – 1991.

Sun, J.

J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput.27(6–8), 749–757 (2011).
[CrossRef]

J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput.26(6–8), 761–768 (2010).
[CrossRef]

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel Prior,” IEEE Trans. Pattern Anal. Mach. Intell.33(12), 2341–2353 (2010).
[PubMed]

Tan, K. K.

Tang, X.

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel Prior,” IEEE Trans. Pattern Anal. Mach. Intell.33(12), 2341–2353 (2010).
[PubMed]

Tarel, J. P.

J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. Comput. Vis.93(3), 348–367 (2011).
[CrossRef]

J. P. Tarel and N. Hautière, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.
[CrossRef]

Tomasi, C.

C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of IEEE International Conference on Computer Vision (Bombay, 1998), pp. 839–846.
[CrossRef]

Urquijo, S.

Uyttendaele, M.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph.27(5), 1–10 (2008).
[CrossRef]

Vasudevan, V. V.

B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Trans. Circ. Syst. Video Tech.11(6), 703–715 (2001).
[CrossRef]

Võ, D. T.

K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE Trans. Image Process.21(2), 662–673 (2012).
[CrossRef] [PubMed]

Wang, X.

X. Wang and Z. Wang, “A novel method for image retrieval based on structure elements’ descriptor,” J. Vis. Commun. Image Represent.24(1), 63–74 (2013).
[CrossRef]

Wang, Z.

X. Wang and Z. Wang, “A novel method for image retrieval based on structure elements’ descriptor,” J. Vis. Commun. Image Represent.24(1), 63–74 (2013).
[CrossRef]

Weiss, Y.

A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 61–68.
[CrossRef]

Xiao, C.

C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” Vis. Comput.28(6–8), 713–721 (2012).
[CrossRef]

Yamada, A.

B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Trans. Circ. Syst. Video Tech.11(6), 703–715 (2001).
[CrossRef]

Yang, G.

J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput.27(6–8), 749–757 (2011).
[CrossRef]

J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput.26(6–8), 761–768 (2010).
[CrossRef]

Yu, J.

J. Yu and Q. Liao, “Fast single image fog removal using edge-preserving smoothing,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, (Prague, Czech Republic, 2011), pp. 1245–1248.
[CrossRef]

Zhang, J.

J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput.27(6–8), 749–757 (2011).
[CrossRef]

J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput.26(6–8), 761–768 (2010).
[CrossRef]

Zhang, Y.

J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput.27(6–8), 749–757 (2011).
[CrossRef]

J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput.26(6–8), 761–768 (2010).
[CrossRef]

ACM Trans. Graph. (2)

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph.27(5), 1–10 (2008).
[CrossRef]

R. Fattal, “Single image dehazing,” ACM Trans. Graph.27(3), 1–9 (2008).
[CrossRef]

Appl. Opt. (1)

IEEE Trans. Circ. Syst. Video Tech. (1)

B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Trans. Circ. Syst. Video Tech.11(6), 703–715 (2001).
[CrossRef]

IEEE Trans. Image Process. (2)

K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE Trans. Image Process.21(2), 662–673 (2012).
[CrossRef] [PubMed]

L. W. Kang, C. W. Lin, and Y. H. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Trans. Image Process.21(4), 1742–1755 (2012).
[CrossRef] [PubMed]

IEEE Trans. Intell. Transp. Syst. (1)

M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and A. Radmanesh, “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. Intell. Transp. Syst.9(2), 349–360 (2008).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell. (3)

S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell.25(6), 713–724 (2003).
[CrossRef]

L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell.20(11), 1254–1259 (1998).
[CrossRef]

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel Prior,” IEEE Trans. Pattern Anal. Mach. Intell.33(12), 2341–2353 (2010).
[PubMed]

Int. J. Comput. Vis. (4)

P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” Int. J. Comput. Vis.86(2–3), 256–274 (2010).
[CrossRef]

J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. Comput. Vis.93(3), 348–367 (2011).
[CrossRef]

S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis.48(3), 233–254 (2002).
[CrossRef]

K. Garg and S. K. Nayar, “Vision and rain,” Int. J. Comput. Vis.75(1), 3–27 (2007).
[CrossRef]

J. Opt. Soc. Am. A (2)

J. Vis. Commun. Image Represent. (1)

X. Wang and Z. Wang, “A novel method for image retrieval based on structure elements’ descriptor,” J. Vis. Commun. Image Represent.24(1), 63–74 (2013).
[CrossRef]

Lect. Notes Comput. Sci. (1)

C. T. Chu and M. S. Lee, “A content-adaptive method for single image dehazing,” Lect. Notes Comput. Sci.6298, 350–361 (2010).
[CrossRef]

Opt. Express (1)

Vis. Comput. (3)

J. Zhang, L. Li, G. Yang, Y. Zhang, and J. Sun, “Local albedo-insensitive single image dehazing,” Vis. Comput.26(6–8), 761–768 (2010).
[CrossRef]

C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” Vis. Comput.28(6–8), 713–721 (2012).
[CrossRef]

J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, and J. Sun, “Video dehazing with spatial and temporal coherence,” Vis. Comput.27(6–8), 749–757 (2011).
[CrossRef]

Other (18)

M. H. Pinson and S. Wolf, “Comparing subjective video quality testing methodologies,” in Proceedings of SPIE 5150, Visual Communications and Image Processing (2003).

J. P. Tarel and N. Hautière, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision (Kyoto, Japan, 2009), pp. 2201–2208.
[CrossRef]

N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image dehazing,” in Proceedings of MTS/IEEE OCEANS (Seattle, USA, 2010), pp. 1–8.
[CrossRef]

R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8.
[CrossRef]

S. G. Narasimhan and S. K. Nayar, “Interactive deweathering of an image using physical models,” Proceedings of IEEE Workshop Color and Photometric Methods in Computer Vision (2003).

F. Cozman and E. Krotkov, “Depth from scattering,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (San Juan, USA, 1997), pp. 801–806.
[CrossRef]

J. Yu and Q. Liao, “Fast single image fog removal using edge-preserving smoothing,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, (Prague, Czech Republic, 2011), pp. 1245–1248.
[CrossRef]

L. W. Kang, C. W. Lin, C. T. Lin, and Y. C. Lin, “Self-learning-based rain streak removal for image/video,” in Proceedings of IEEE International Symposium on Circuits and Systems (Seoul, Korea, 2012), pp. 1871–1874.
[CrossRef]

C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of IEEE International Conference on Computer Vision (Bombay, 1998), pp. 839–846.
[CrossRef]

A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 61–68.
[CrossRef]

S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proceedings of IEEE International Conference on Computer Vision (Kerkyra, Greece, 1999), pp. 820–827.
[CrossRef]

S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Hilton Head Island, USA, 2000), pp. 598–605.
[CrossRef]

M. Jahangiri and M. Petrou, “An attention model for extracting components that merit identification,” in Proceedings of IEEE International Conference on Image Processing (Cairo, Egypt, 2009), pp. 965–968.
[CrossRef]

J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Proceedings of Advances in Neural Information Processing Systems (2007), pp. 545–552.

S. Maji, A. C. Berg, and J. Malik, “Classification using intersection kernel support vector machines is efficient,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, Alaska, USA, 2008), pp. 1–8.
[CrossRef]

S. G. Narasimhan and S. K. Nayar, “Removing weather effects from monochrome images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001), pp. 186–193.

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001), pp. 325–332.

S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (New York, USA, 2006), pp. 1984 – 1991.



Figures (14)

Fig. 1. An example illustrating the haze optical model applied to a natural scene with haze.

Fig. 2. An example of the proposed pixel-based dark/bright channel priors: (a) the original haze-free image; (b) the pixel-based dark channel of (a); (c) the histogram of (b); (d) the pixel-based bright channel of (a); and (e) the histogram of (d).

Fig. 3. An example illustrating the proposed atmospheric light estimation, where the red dotted line marks the pixel value corresponding to the top 0.1% brightest values in J_dark_pixel, and the blue dotted line marks the pixel value corresponding to the top 30% darkest values in J_bright_pixel.

Fig. 4. An example illustrating the proposed dehazing method: (a) the hazy image to be dehazed; (b) the transmission map of (a) obtained by the proposed method; (c) the transmission map refined by the bilateral filter; and (d) the dehazed version of (a) obtained by the proposed method.

Fig. 5. Applying the HOG-based pedestrian detector released with [4] to: (a) the original hazy image; and (b) the haze-removed version of (a) obtained by the proposed method, where more pedestrians are detected in (b).

Fig. 6. Applying the visual attention model-based blob detector released with [6] to: (a) the original hazy image; and (b) the haze-removed version of (a) obtained by the proposed method, where more objects of interest are detected in (b).

Fig. 7. Applying the graph-based approach released with [7] to compute the visual saliency map of: (a) the original hazy image; and (b) the haze-removed version of (a) obtained by the proposed method, where a more accurate saliency map is obtained for (b).

Fig. 8. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Fattal's, (c) He's, and (d) the proposed methods.

Fig. 9. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Fattal's, (c) He's, and (d) the proposed methods.

Fig. 10. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Fattal's, (c) He's, and (d) the proposed methods.

Fig. 11. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Fattal's and (c) the proposed methods.

Fig. 12. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) He's and (c) the proposed methods.

Fig. 13. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Tan's and (c) the proposed methods.

Fig. 14. Dehazing results: (a) hazy image; haze-removed versions of (a) obtained by (b) Tan's and (c) the proposed methods.

Tables (2)

Table 1. Subjective evaluation results of Tan's [22], Fattal's [23], He's [28], and the proposed methods.

Table 2. Run time (in seconds) comparison between He's method [28] and the proposed method.

Equations (12)

(1)  $I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)$

(2)  $J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Bigl[ \min_{c \in \{R,G,B\}} J^{c}(y) \Bigr]$

(3)  $t(x) = 1 - w \min_{y \in \Omega(x)} \Bigl[ \min_{c} \frac{I^{c}(y)}{A^{c}} \Bigr]$

(4)  $J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_{0}\bigr)} + A$

(5)  $J^{\mathrm{dark\_pixel}}(x) = \min_{c \in \{R,G,B\}} J^{c}(x)$

(6)  $J^{\mathrm{bright\_pixel}}(x) = \max_{c \in \{R,G,B\}} J^{c}(x)$

(7)  $d(x) = I^{\mathrm{HSV}}(x) - B$

(8)  $s(x) = 1 - \frac{d(x)}{\max_{y \in I^{\mathrm{HSV}}} \bigl[ d(y) \bigr]}$

(9)  $A^{c}(x) = \left| \dfrac{A^{c}_{\mathrm{highest}} - A^{c}_{\mathrm{lowest}}}{\max_{y \in I^{\mathrm{HSV}}} \bigl[ \frac{1}{1 - d(y)} \bigr] - \min_{y \in I^{\mathrm{HSV}}} \bigl[ \frac{1}{1 - d(y)} \bigr]} \right| \times s(x)$

(10) $\min_{c \in \{R,G,B\}} \Bigl[ \frac{I^{c}(x)}{A^{c}(x)} \Bigr] = t(x) \min_{c \in \{R,G,B\}} \Bigl[ \frac{J^{c}(x)}{A^{c}(x)} \Bigr] + 1 - t(x)$

(11) $t(x) = 1 - w(x) \Bigl[ \min_{c \in \{R,G,B\}} \frac{I^{c}(x)}{A^{c}(x)} \Bigr]$

(12) $J(x) = \frac{I(x) - A(x)}{\max\bigl(t(x),\, t_{0}\bigr)} + A(x)$
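
To make the pixel-based priors and the recovery step concrete, here is a minimal NumPy sketch of Eqs. (5), (6), (11), and (12). It assumes an RGB image with values in [0, 1], treats w as a scalar constant rather than the spatially varying w(x) of Eq. (11), and uses illustrative function names that do not appear in the paper.

import numpy as np

def pixel_dark_channel(img):
    # Pixel-based dark channel prior, Eq. (5): per-pixel minimum over R, G, B.
    return img.min(axis=2)

def pixel_bright_channel(img):
    # Pixel-based bright channel prior, Eq. (6): per-pixel maximum over R, G, B.
    return img.max(axis=2)

def estimate_transmission(img, A, w=0.95):
    # Transmission estimate in the spirit of Eq. (11). Here w is a scalar
    # (the paper uses a spatially varying w(x)); A may be a 3-vector or a
    # full per-pixel atmospheric light map broadcastable against img.
    return 1.0 - w * (img / A).min(axis=2)

def recover_scene(img, A, t, t0=0.1):
    # Scene radiance recovery, Eq. (12); t is floored at t0 so that noise is
    # not amplified where the estimated transmission is close to zero.
    t = np.maximum(t, t0)[..., np.newaxis]
    return np.clip((img - A) / t + A, 0.0, 1.0)

Given a hazy input I, one would estimate the atmospheric light (Eq. (9)), compute t = estimate_transmission(I, A), refine t (e.g., with the bilateral filter, as sketched next), and then call recover_scene(I, A, t).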

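The transmission map is refined with the bilateral filter of Tomasi and Manduchi before recovery (Fig. 4(c)). A possible sketch using OpenCV's cv2.bilateralFilter is given below; the parameter values are illustrative assumptions, not settings reported in the paper.

import cv2
import numpy as np

def refine_transmission(t, d=9, sigma_color=0.1, sigma_space=15):
    # Edge-preserving refinement of a raw transmission map in [0, 1]:
    # the bilateral filter smooths the estimate while keeping transmission
    # discontinuities at depth edges. d is the neighborhood diameter;
    # sigma_color and sigma_space control the range and spatial Gaussian
    # weights, respectively.
    return cv2.bilateralFilter(t.astype(np.float32), d, sigma_color, sigma_space)

An edge-preserving filter is used here because naive Gaussian smoothing would blur the transmission across depth edges and produce halo artifacts in the recovered image.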