The radiance received by the sensor is influenced by atmospheric interaction, including the effects of absorption and scattering. Based on an analysis of the radiance along the transmission path, we propose an image degradation model and a recovery method for remote sensing and bad weather conditions in which the effect of multiple scattering cannot be ignored. Several real outdoor images are restored to verify the effectiveness of the proposed model and method. The results are significantly improved in contrast and sharpness.
© 2012 OSA
The quality of images captured by a sensor over a long distance can be dramatically degraded by the interaction with atmospheric particles, especially in poor weather conditions. For example, Table 1 presents some contrast data influenced by atmospheric transmission. In the calculation, two groups of objects on the ground are chosen. The highest and lowest reflectance of the objects in group 1 is 0.9 and 0.09, respectively, while it is 0.9 and 0.009 in group 2. The radiance data are all computed by the PcModwin (version 3.7) software [1] under the condition that the observation height is 18 km and the side-glance transmission distances are 50 km, 70 km and 100 km, respectively. The contrast is calculated by Eq. (1):

C = (Lmax − Lmin) / (Lmax + Lmin), (1)

where Lmax and Lmin denote the radiances received from the objects with the highest and lowest reflectance, respectively.
With the development of computer vision, more and more image recovery algorithms which are used to correct the influence of atmospheric transmission are proposed. Most of them are based on the widely used model expressed by Eq. (2) [2–11]:

g(i,j) = J(i,j) ∘ t(i,j) + A ∘ (1 − t(i,j)), (2)

where g(i,j) is the captured image, and J(i,j), t(i,j) and A denote the target object radiance, the medium transmittance at the pixel location (i,j) and the sky radiance, respectively. The operator ∘ is the component-wise multiplication.
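As a minimal numerical sketch (not any author's implementation), the widely used model of Eq. (2) can be simulated as follows; the image size, the radiance values and the constant sky radiance A are arbitrary assumptions chosen for illustration:

```python
import numpy as np

# Hypothetical sketch of the widely used haze model (Eq. (2)):
# the observed image is the component-wise product of the scene
# radiance J and the transmittance t, plus airlight A * (1 - t).
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(4, 4))   # target object radiance (assumed)
t = rng.uniform(0.3, 1.0, size=(4, 4))   # per-pixel medium transmittance (assumed)
A = 0.8                                  # sky radiance, assumed constant

g = J * t + A * (1.0 - t)                # degraded (hazy) observation
```

Each observed pixel is a convex combination of the scene radiance and the sky radiance, so the degraded image stays within the radiance range of its two sources.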
To solve the problem, additional information [2–6] is required, such as the depth of the scene [2,4] or image sequences [5,6], which is not practical in remote sensing applications. Therefore, recent research [7–10] focuses on algorithms that use a single image. Tan [7] optimizes a cost function in the framework of Markov random fields to improve image quality, under the assumptions that the contrast of the refined image is higher than that of the input hazy image and that the atmospheric light is constant in a local area. But the halo effect is serious at depth discontinuities and the output tends to be over-saturated. Fattal [9] estimates the transmittance assuming only that the medium transmittance and the surface shading are locally statistically uncorrelated. This approach produces good results except under heavy haze. He et al. [10] restore the input image according to the dark channel prior, a statistic of outdoor haze-free images. However, the prior is not accurate when the scene object is inherently similar to the atmospheric light.
The model shown in Eq. (2) is derived from the Bouguer-Lambert-Beer law, which assumes that the effect of atmospheric scattering can be neglected. However, when the target object is far away from the sensor, the influence of atmospheric scattering [12,13] becomes increasingly significant, so it cannot be ignored in remote sensing or extremely poor weather conditions.
In this study, we propose an image degradation model and a recovery method for remote sensing images that take atmospheric scattering into consideration. The performance of our method is compared with that of two other methods to verify its effectiveness.
2. The theory and approach
2.1 The proposed model
It is widely believed that the radiance received by the sensor is composed of two parts, as shown in Fig. 1(a). The solid line in Fig. 1(a) represents the radiance from the target object, and the dashed line denotes the sky radiance, both of which are attenuated through the atmospheric medium.
First of all, we analyze the radiance from the target object. Assuming the size of the CCD sensor is r × c pixels, the detected target can be divided into r × c blocks correspondingly, as illustrated in Fig. 1(b). So every image pixel in the sensor is related to an object pixel at the same position (i,j). However, because of the atmospheric interaction along the transmission path, the radiance leaving an object pixel, denoted by fo(i,j), does not contribute entirely to the corresponding image pixel.
In the visible spectrum, the main influence of atmospheric interaction is atmospheric absorption and scattering. They influence the transmission of the radiance simultaneously, but to simplify the problem they are treated as two independent processes, i.e., the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption.
The Bouguer-Lambert-Beer law states the relationship between the absorption and the properties of the medium through which the radiance is transmitted when scattering is ignored. Hence, the attenuation caused by the atmospheric absorption can be simulated by the Bouguer-Lambert-Beer law, which is expressed as:

ft(i,j) = fo(i,j) t(i,j), t(i,j) = exp(−T(i,j)), (3)

where t(i,j) is the medium transmittance at pixel (i,j) and T(i,j) is the optical thickness along the corresponding path. We call this attenuation the decrease effect.
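A minimal sketch of the decrease effect, assuming the Bouguer-Lambert-Beer relation t = exp(−T) between transmittance and optical thickness; the thickness values and the constant object radiance are arbitrary illustrative choices:

```python
import numpy as np

# Decrease effect (Eq. (3)): radiance is attenuated exponentially
# with the optical thickness T along the transmission path.
T = np.array([0.0, 0.5, 1.0, 2.0])   # hypothetical optical thicknesses
t = np.exp(-T)                        # medium transmittance per path
f_o = np.full_like(T, 0.9)            # object radiance (constant here)
f_attenuated = f_o * t                # radiance after absorption
```

As expected, a path with zero optical thickness transmits the radiance unchanged, and the received radiance decreases monotonically as T grows.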
Then the effect of the atmospheric scattering is taken into consideration, which includes single scattering and multiple scattering. Single scattering is usually treated as a random phenomenon, while multiple scattering is deterministic because its randomness is averaged out by the large number of scattering events. Based on this analysis, the influence of atmospheric scattering is mainly determined by the multiple scattering along the transmission path.
Narasimhan et al. [12] define the glow of a point source caused by multiple scattering as the atmospheric point spread function. Then, with the assumption that an extended light source of arbitrary shape and size is made up of several isotropic source elements, they develop a physics-based model for the multiple scattering of the extended light source as follows:

I(i,j) = [I0 ⊗ h](i,j), (4)

where I0 is the radiance of the extended source, h is the atmospheric point spread function, and ⊗ denotes the convolution operation.
Therefore, during the atmospheric transmission, each object pixel can be regarded as a source element. Since the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption, it can be simulated by the convolution operation:

qo(i,j) = [(fo t) ⊗ ho](i,j), (5)

where ho is the atmospheric point spread function at position (i,j). We call this the dispersion effect.
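The dispersion effect of Eq. (5) can be sketched numerically; here a normalized 3 × 3 box kernel stands in for the true atmospheric point spread function, and the naive convolution routine is for illustration only:

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding (illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Flip the kernel for true convolution (irrelevant here: it is symmetric).
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# Attenuated radiance (decrease effect already applied), then dispersed
# by a normalized kernel standing in for the APSF h_o.
f_t = np.zeros((5, 5)); f_t[2, 2] = 1.0   # a single bright object pixel
h_o = np.ones((3, 3)) / 9.0               # hypothetical dispersion kernel
q_o = convolve2d_same(f_t, h_o)           # dispersion effect (Eq. (5))
```

The total energy is preserved by the normalized kernel, but the single bright pixel is spread over its neighborhood, which is exactly the blur that the recovery procedure must undo.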
Due to the dispersion effect, the radiance received by the sensor at position (i,j) is influenced by the pixels inside a local region centered at (i,j) in the object plane. In particular, for the pixels at the boundaries of the sensor, their radiance is also affected by object pixels outside the field of view. As shown in Fig. 1(b), taking the red pixel on the top boundary of the image plane as an example, its radiance includes contributions from all the pixels inside the blue square of the object plane. Consequently, the captured image is derived from a larger object matrix through an aperture filter to deal with the boundaries, which is expressed by

qo(i,j) = W[(fo t) ⊗ ho](i,j), (6)

where fo here denotes the enlarged object matrix illustrated by the blue square in Fig. 1(b), whose size is determined by the size of the dispersion kernel ho, and W is the aperture filter that crops the result to the sensor's field of view. Omitting the subscript (i,j), Eq. (6) can be rewritten as

qo = W[(fo t) ⊗ ho]. (7)
For the sky radiance, it can be regarded as a uniform object with radiance fa, so the attenuation of fa is analyzed in the same way as that of the object radiance fo. The portion of the radiance from this uniform object that reaches the sensor equals the portion of the radiance from the target object that misses the sensor; hence the transmittance of fa is 1 − t. Thus the path radiance received by the sensor, denoted by qa, is calculated by the following formula:

qa = W[(fa (1 − t)) ⊗ ha], (8)

where ha is the dispersion kernel corresponding to the transmittance 1 − t.
Additionally, the noise of the CCD, denoted by NCCD, also affects the final captured image. Therefore, the total radiance g received by the sensor is the sum of the terms qo, qa and NCCD, as shown below:

g = qo + qa + NCCD = W[(fo t) ⊗ ho] + W[(fa (1 − t)) ⊗ ha] + NCCD. (9)
Equation (9) is our image degradation model. Comparing Eq. (9) with the widely used model in Eq. (2), we see that both contain two parts: the surface-leaving radiance and the path radiance. Meanwhile, the decrease effect of the radiance along the transmission path is taken into consideration in both models. However, we model the dispersion effect by a space-variant convolution process for each pixel and a rectangular window filtering process for the entire image, because the influence of atmospheric scattering cannot be ignored in remote sensing or extremely poor weather conditions. Moreover, Eq. (9) contains the noise of the CCD, which is unavoidable in imaging.
2.2 The image recovery
The goal of image recovery is to restore the target object radiance fo by removing the atmospheric influence from the captured image g in Eq. (9). Figure 2 demonstrates the whole procedure of the image recovery with the assumption that the parameters t, fa, ho and ha are already estimated (the estimation of these parameters will be discussed later in subsections 2.2.1 and 2.2.2).
To deal with the boundary pixels in the image deconvolution of step 4, the image matrix g is first enlarged to g', i.e., step 1 in Fig. 2. The pixel values of the enlarged area are set equal to the nearest array border value. Figure 3 gives an example of step 1, in which Fig. 3(a) is the original image and Fig. 3(b) is its extended result obtained by repeating the pixel values on the borders. The pixels in the enlarged area are left untreated in steps 2-5, which is reasonable because these pixels are not included in the sensitive area of the sensor.
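The border-replication enlargement of step 1 corresponds to edge padding; a minimal sketch, where the pad width (set in practice by the dispersion kernel radius) is an assumed value of 2:

```python
import numpy as np

# Step 1 sketch: enlarge the captured image g to g' by repeating the
# nearest border pixel values, so the deconvolution of step 4 has data
# outside the original field of view.
g = np.arange(9, dtype=float).reshape(3, 3)
pad = 2                                  # assumed kernel radius
g_prime = np.pad(g, pad, mode='edge')    # nearest-border replication
```

The enlarged matrix keeps the original image intact in its center while every added pixel copies the closest border value, matching the extension illustrated in Fig. 3(b).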
After step 1, Eq. (9) is rewritten as follows:

g' = (fo t) ⊗ ho + (fa (1 − t)) ⊗ ha + NCCD, (10)

i.e., the aperture filter W is no longer needed because g' covers the enlarged support.
Because the kernel ho is space-variant for g', the object radiance has to be restored pixel by pixel. Suppose that we want to recover the object radiance at position (i,j) inside the sensitive area of the sensor, which is represented by the red pixel in g' in Fig. 2. We extract a local region S centered at position (i,j) (step 2) due to the dispersion effect discussed in subsection 2.1. S is of the same size as the atmospheric point spread function at position (i,j).
Then we compute the term G with Eq. (11) (i.e., step 3):

G = S − (fa (1 − t')) ⊗ ha, (11)

which removes the path radiance from the extracted region. G is then deconvolved with the dispersion kernel ho:

F(u,v) = [H*(u,v) / (|H(u,v)|^2 + K)] G(u,v), (12)

where G(u,v) and H(u,v) denote the Fourier transforms of G and ho, H* is the complex conjugate of H, and K is the power-spectrum ratio of the noise to the undegraded image. The deconvolution is implemented by the Wiener filtering [14] in our experiments, i.e., step 4.
Comparing Eq. (12) with Eq. (10), one may wonder why the noise term NCCD disappears. This is because the deconvolution procedure does not require this term, which is a random variable, to be known explicitly: NCCD is taken into account automatically in the regularized deconvolution algorithm [15] presented in Eq. (12).
Consequently, the object radiance is obtained by

fo(i,j) = F(i,j) / max(t'(i,j), t0), (13)

where F is the deconvolution result of Eq. (12) and t0 is a lower bound on the transmittance that prevents noise amplification where t' is close to zero.
In step 5, we select the center pixel of the region S (represented by the blue pixel in Fig. 2) and divide it by max(t,t0). Because the pixels in the enlarged area are not treated, the result is actually fo(i,j). After all the pixels in g are traversed, we obtain the object radiance fo.
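The final division by max(t, t0) can be sketched with a few assumed values; note how the floor t0 caps the amplification where the estimated transmittance is very small:

```python
import numpy as np

# Step 5 sketch: divide each deconvolved value F by max(t, t0).
# Values below are arbitrary assumptions for illustration.
F = np.array([0.04, 0.20, 0.45])   # deconvolved radiance at three pixels
t = np.array([0.02, 0.50, 0.90])   # estimated transmittance at those pixels
t0 = 0.1                           # lower bound on the transmittance
f_o = F / np.maximum(t, t0)
```

For the first pixel the raw transmittance 0.02 would multiply the value by 50; the floor t0 = 0.1 limits the gain to 10, trading a little residual haze for noise robustness.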
2.2.1 Estimation of the transmittance and sky radiance
The medium transmittance t and the sky radiance fa in our image degradation model are estimated with the method proposed by He et al. [10], which removes the atmospheric influence from images based on the dark channel prior. The prior is a statistic of haze-free outdoor images: in most local non-sky regions, some pixels have very low intensities in at least one color channel, which can be described as:

fdark(i,j) = min_c ( min_{(x,y)∈Ω(i,j)} fo^c(x,y) ) → 0, (14)

where fo^c is a color channel of fo and Ω(i,j) is a local region centered at (i,j). The left side of Eq. (14) is called the dark channel of the image fo. In ref [10], the authors have proved that the dark channel prior is also adequate for images with sky regions.
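A minimal sketch of the dark channel of Eq. (14): for each pixel, take the minimum over the color channels and then over a local window Ω (here an assumed 3 × 3 window with edge padding; He et al. use larger windows in practice):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an H x W x 3 image in [0, 1] (illustrative sketch)."""
    min_rgb = img.min(axis=2)             # minimum over color channels
    p = patch // 2
    padded = np.pad(min_rgb, p, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # Minimum over the local window Omega centered at (i, j).
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

img = np.ones((4, 4, 3)) * 0.8            # bright, hazy-looking patch
img[1, 1, 2] = 0.0                         # one pixel with a dark blue channel
dc = dark_channel(img)
```

A single dark color channel anywhere in the window drives the dark channel of the whole neighborhood toward zero, which is the statistic the prior relies on.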
Figure 5(a) shows a haze-free outdoor image while Fig. 5(b) is a hazy one. Their corresponding dark channels are exhibited in Figs. 5(c) and 5(d). The size of the local region Ω should be set large enough to cover the small objects whose radiance is inherently similar to the sky radiance; otherwise, the prior is invalid [10]. The intensities of most pixels in Fig. 5(c) are low, while, due to the influence of the sky radiance, they are much higher in Fig. 5(d), which is consistent with the dark channel prior [10].
Although the sky radiance fa depends heavily on the optical thickness, it can be obtained from the original image g, which is the only given information. We extract the top 0.1% brightest pixels in the dark channel of g, among which the pixel with the highest intensity in g is selected as fa [10]. With fa, the transmittance is estimated by

t(i,j) = 1 − ω min_c ( min_{(x,y)∈Ω(i,j)} g^c(x,y) / fa^c ), (15)

where ω is a constant that keeps a small amount of haze for distant objects [10]. Because the transmittance obtained from Eq. (15) contains block artifacts, it is refined by solving the soft matting equation

(L + λU) t' = λ t, (16)

where L is the matting Laplacian matrix [16], U is an identity matrix with the same size as L, and λ is a small value to constrain t', which is the desired transmittance. The refined transmittance t' is shown in Fig. 6(c). Hence, the transmittance of the uniform object is calculated by 1 − t'. More details can be found in [10,16].
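The sky-radiance selection can be sketched on a grayscale toy image (where the dark channel reduces to the image itself); the image values and the bright pixel are assumptions for illustration:

```python
import numpy as np

# Sketch of the sky-radiance estimate: take the brightest 0.1% of pixels
# in the dark channel, then pick, among those positions, the pixel with
# the highest intensity in the original image g.
rng = np.random.default_rng(1)
g = rng.uniform(0.0, 0.7, size=(50, 50))  # synthetic scene radiance
g[10, 10] = 0.95                           # an assumed haze-opaque bright pixel
dark = g                                   # grayscale toy: dark channel == g

n = max(1, int(0.001 * dark.size))         # number of pixels in the top 0.1%
flat = np.argsort(dark, axis=None)[-n:]    # brightest dark-channel positions
best = flat[np.argmax(g.flat[flat])]       # highest intensity in g among them
f_a = g.flat[best]
```

Selecting through the dark channel rather than taking the global maximum of g makes the estimate robust to bright scene objects that are not actually haze-opaque.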
2.2.2 Estimation of the dispersion kernel
In [12], Narasimhan et al. discuss the atmospheric point spread function caused by multiple scattering and establish the relationship between the object radiance and the received radiance, as mentioned in subsection 2.1.
Based on the result of Narasimhan et al., Metari and Deschênes [13] introduce the generalized Gaussian distribution to approximate the atmospheric point spread function, i.e., Eq. (17), whose shape is controlled by the parameters k and q together with the optical thickness T. See ref [13] for more details about Eq. (17).
The optical thickness T is related to the optimized medium transmittance t' by

T = −ln t'. (18)

With T obtained from Eq. (18), Equation (17) can be utilized to approximate the dispersion kernel ho. Similarly, ha is calculated by replacing t' with 1 − t'.
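The relationship between the refined transmittance and the optical thickness is a simple logarithm; a short sketch with assumed transmittance values:

```python
import numpy as np

# Optical thickness recovered from the refined transmittance: T = -ln(t').
# A smaller t' (thicker atmosphere) gives a larger T and hence a wider
# dispersion kernel in Eq. (17).
t_prime = np.array([0.9, 0.5, 0.1])   # assumed refined transmittances
T = -np.log(t_prime)
```

The values confirm the monotone relation: clear paths (t' near 1) have nearly zero optical thickness, while heavily attenuated paths yield large T.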
3. The results and comparison
To exhibit the effectiveness of our model presented by Eq. (9), we take several real outdoor images and implement the image recovery with the method of He et al. [10], the method of Metari and Deschênes [13], and our approach. In the experiments, the value of t0 in Eq. (13) is set to 0.1, the size of the small local region Ω in Eq. (15) is 9 × 9 for all the tested images, and the constant ω is 0.7. In addition, λ equals 10^−5 in Eq. (16), and k and q in Eq. (17) are 0.5 and 0.7, respectively. The pixel values of the enlarged area in image g' are set equal to the nearest array border value. The deconvolution in Eq. (12) is executed by the Wiener filtering in Matlab; the power spectrum ratio of the noise and the undegraded image in this algorithm is set to 0.02, which can be adjusted to suppress the amplified noise caused by the term NCCD.
Figure 7(a) shows one original degraded image taken from an aircraft. Figure 7(b) presents the output based on the dark channel prior employing the model shown in Eq. (2) (i.e., the result by He et al.). Figure 7(c) exhibits the image obtained by deconvolving the model in Eq. (4) pixel by pixel with Wiener filtering (i.e., the result by Metari and Deschênes). And Fig. 7(d) is the result obtained by solving our model. Overall, all the methods produce results much better than the original image. More specifically, in Fig. 7(b), the contrast of the image is dramatically improved and the color information is also recovered, although the edges of the object are not well refined because the multiple scattering is neglected. The edges are sharp and clear in Fig. 7(c); however, the contrast is not significantly enhanced and the object is still difficult to distinguish. Our result achieves the best performance in both contrast and the sharpness of the object edges, as shown in Fig. 7(d).
Other experiments are exhibited in Fig. 8. Figures 8(a) and 8(e) are two original images both taken from the aircraft. Figure 8(i) is captured on a heavily hazy day at the top of a hill. Figures 8(b), 8(f) and 8(j) are the corresponding results by He et al., and Figs. 8(c), 8(g) and 8(k) are the results by Metari and Deschênes. Figures 8(d), 8(h) and 8(l) are the results of our approach. Obviously, the quality of the output images obtained from our model is significantly enhanced with high contrast, vivid color information, and sharp object edges. Moreover, the proposed model works well not only for the remote sensing images (i.e., Figs. 8(a) and 8(e)), but also for the images captured in bad weather conditions (i.e., Fig. 8(i)).
In order to compare the performances of the image recovery methods objectively, the Gray Mean Gradient (GMG) and Laplacian (LAP) image quality assessment methods [17] are used, and the results (larger values represent better image quality) are given in Table 2. Obviously, the results obtained from the proposed model achieve the largest assessment values for all the test images, which indicates that our approach outperforms the other two.
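Two no-reference sharpness measures in the spirit of the GMG and LAP assessment can be sketched as follows; these definitions are plausible reconstructions (mean finite-difference magnitude and mean absolute discrete-Laplacian response) and may differ in detail from the ones used in the paper:

```python
import numpy as np

def gray_mean_gradient(img):
    """Mean magnitude of horizontal and vertical finite differences."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return (np.abs(gx).mean() + np.abs(gy).mean()) / 2.0

def laplacian_score(img):
    """Mean absolute response of the 5-point discrete Laplacian."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return np.abs(lap).mean()

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0   # image with a sharp vertical edge
blurred = np.full((8, 8), 0.5)                  # featureless (fully blurred) image
```

Both measures increase with edge strength, so a restored image with sharper edges scores higher, which is how Table 2 ranks the three methods.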
We analyze the impact of the atmospheric transmission on the radiance detected by the sensor in remote sensing and bad weather conditions, and propose an image degradation model and a recovery method that take multiple scattering into consideration. The radiance from the target object is decreased along the transmission path according to the Bouguer-Lambert-Beer law, and dispersed due to the multiple scattering. Because the sky radiance entering the sensor can be regarded as a uniform object, its attenuation analysis is the same as that of the target object. To verify the effectiveness of our model, we employ existing algorithms to estimate the unknown parameters. Moreover, the performance of the proposed model is compared with that of the widely used model in which the multiple scattering effect is ignored. Experimental results show that the images obtained from our model are significantly improved in contrast, clarity, color saturation and object edges. In addition, the GMG and LAP image quality assessment methods are used, and the output images of our algorithm achieve the largest values, which indicates that the proposed model outperforms the widely used model.
We thank the anonymous reviewers for their valuable comments, which helped improve this paper. This work is supported by the Chinese National Natural Science Foundation (No. 60977010) and the Chinese National Programs for High Technology Research and Development (No. 2009CB724006).
References and links
1. A. Berk, G. P. Anderson, P. K. Acharya, J. H. Chetwynd, L. S. Bernstein, E. P. Shettle, M. W. Matthew, and S. M. Adler-Golden, “Modtran4 user’s manual,” Air Force Research Laboratory, 1999.
2. J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998). [CrossRef] [PubMed]
4. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
5. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]
6. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2001), 325–332.
7. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), 1–8.
8. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), 1984–1991.
9. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 72 (2008). [CrossRef]
10. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), 1956–1963.
11. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), 598–605.
12. S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2003), 665–672.
13. S. Metari and F. Deschênes, “A new convolution kernel for atmospheric point spread function applied to computer vision,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), 1–8.
14. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, (Publishing House of Electronics Industry, 2002), Chap. 5.
15. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (IOP, 1998), Chap. 5.
16. A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), 61–68.
17. W. Dong, Y. Chen, Z. Xu, H. Feng, and Q. Li, “Image stabilization with support vector machine,” J. Zhejiang Univ.-Sci. C Comput. & Electron. 12(6), 478–485 (2011). [CrossRef]