## Abstract

The radiance received by a sensor is influenced by atmospheric interaction, including the effects of absorption and scattering. Based on an analysis of the radiance along the transmission path, we propose an image degradation model and a recovery method for remote sensing and bad weather conditions in which the effect of multiple scattering cannot be ignored. Several real outdoor images are restored to verify the effectiveness of the proposed model and method. The restored results are significantly improved in contrast and sharpness.

© 2012 OSA

## 1. Introduction

The quality of images captured by a sensor over long distances can be dramatically degraded by the interaction with atmospheric particles, especially in poor weather conditions. For example, Table 1 presents some contrast data influenced by atmospheric transmission. In the calculation, two groups of objects on the ground are chosen. The highest and lowest reflectance of the objects in group 1 are 0.9 and 0.09, respectively, while they are 0.9 and 0.009 in group 2. The radiance data are all computed by the PcModwin (version 3.7) software [1] under the condition that the observation height is 18 km and the side-glance transmission distances are 50 km, 70 km and 100 km, respectively. The contrast is calculated by Eq. (1):

$$C=\frac{(E_a+E_0)-(E_b+E_0)}{(E_a+E_0)+(E_b+E_0)}=\frac{E_a-E_b}{E_a+E_b+2E_0}\tag{1}$$

where *E _{a}* + *E _{0}* denotes the radiance received by the sensor from the object with the highest reflectance, and *E _{b}* + *E _{0}* is that from the object with the lowest reflectance in the same group. *E _{a}* and *E _{b}* are the radiances leaving the objects with the highest and lowest reflectance, respectively, and *E _{0}* is the path radiance. As shown in the table, the contrast declines severely for distant objects due to atmospheric transmission. Therefore, image recovery is necessary to enhance the image quality.
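As a quick numerical illustration, the contrast of Eq. (1) can be evaluated directly. The sketch below assumes the Michelson form written above; the radiance values are made up for illustration and are not taken from Table 1.

```python
def contrast(E_a, E_b, E_0):
    # Michelson-style contrast of Eq. (1): the path radiance E_0 offsets
    # both the brightest (E_a) and darkest (E_b) object radiances.
    return (E_a - E_b) / (E_a + E_b + 2.0 * E_0)

# Illustrative values: a growing path radiance washes the contrast out.
c_no_path = contrast(0.9, 0.09, 0.0)     # no path radiance
c_strong_path = contrast(0.9, 0.09, 5.0) # strong path radiance
```

Even with the object radiances unchanged, the path radiance alone drives the contrast toward zero, which is the trend Table 1 reports.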

With the development of computer vision, more and more image recovery algorithms which are used to correct the influence of atmospheric transmission are proposed. Most of them are based on the widely used model expressed by Eq. (2) [2–11]:

$$I(i,j)=J(i,j)\circ t(i,j)+A\cdot[1-t(i,j)]\tag{2}$$

where the first term on the right side, *J*(*i*,*j*) $\circ $ *t*(*i*,*j*), is the directly attenuated radiance from the target object, which represents the surface-leaving radiance. The second term, *A*·[1 − *t*(*i*,*j*)], describes the path radiance, including the effects of molecular scattering, aerosol scattering, and the interaction between molecular and aerosol scattering. *I*(*i*,*j*) is the radiance received by the sensor, which is the only known term in Eq. (2). *J*(*i*,*j*), *t*(*i*,*j*) and *A* denote the target object radiance, the medium transmittance at the pixel location (*i*,*j*), and the sky radiance, respectively. The operator $\circ $ is the component-wise multiplication.
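For reference, the widely used model of Eq. (2) is straightforward to simulate. The sketch below uses NumPy arrays, with a toy 2 × 2 scene *J*, a spatially varying transmittance *t*, and a scalar sky radiance *A*; all values are illustrative.

```python
import numpy as np

def degrade(J, t, A):
    # Eq. (2): I = J∘t + A·(1 - t); '∘' is element-wise multiplication.
    return J * t + A * (1.0 - t)

J = np.array([[0.9, 0.1],
              [0.5, 0.3]])   # surface-leaving radiance
t = np.array([[0.8, 0.8],
              [0.2, 0.2]])   # medium transmittance per pixel
I = degrade(J, t, A=1.0)     # radiance received by the sensor
```

Where *t* is small (the bottom row), the received radiance is dominated by the sky term, which is exactly the contrast loss discussed above.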

To solve the problem, additional information [2–6] is required, such as the depth of the scene [2,4] or image sequences [5,6], which is not practical in remote sensing applications. Therefore, recent research [7–10] focuses on developing algorithms that use a single image. Tan [7] optimizes a cost function in the framework of Markov random fields to improve the image quality, with the assumptions that the contrast of the refined image is higher than that of the input hazy image and that the atmospheric light is constant in a local area. However, the halo effect is serious at depth discontinuities and the output tends to be over-saturated. Fattal [9] estimates the transmittance assuming only that the medium transmittance and the surface shading are locally statistically uncorrelated. This approach produces good results except under heavy haze. He *et al.* [10] restore the input image according to the dark channel prior, which is a statistic of outdoor haze-free images. However, the prior is not accurate when the scene object is inherently similar to the atmospheric light.

The model shown in Eq. (2) is derived from the Bouguer-Lambert-Beer law [11], which assumes that the effect of atmospheric scattering can be neglected. However, when the target object is far away from the sensor, the influence of atmospheric scattering [12,13] becomes increasingly significant, so it cannot be ignored in remote sensing or extremely poor weather conditions.

In this study, we propose an image degradation model and a recovery method for remote sensing images that take atmospheric scattering into consideration. The performance of our method is compared with that of two other methods to verify its effectiveness.

## 2. The theory and approach

### 2.1 The proposed model

It is widely believed that the radiance received by the sensor is composed of two parts, as shown in Fig. 1(a). The solid line in Fig. 1(a) represents the radiance from the target object, and the dashed line denotes the sky radiance, both of which are attenuated through the atmospheric medium.

First of all, we analyze the radiance from the target object. Assuming the CCD sensor is of *r* × *c* pixels, the detected target can be divided into *r* × *c* blocks correspondingly, as illustrated in Fig. 1(b). So every image pixel in the sensor is related to an object pixel at the same position (*i*,*j*). However, because of the atmospheric interaction along the transmission path, the radiance leaving an object pixel, denoted by *f _{o}*(*i*,*j*), does not contribute entirely to the corresponding image pixel.

In the visible spectrum, the main atmospheric effects are absorption and scattering, which influence the transmission of the radiance simultaneously. To simplify the problem, however, they are treated as two independent processes, i.e., the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption.

The Bouguer-Lambert-Beer law states the relationship between the absorption and the properties of the medium through which the radiance is transmitted when scattering is ignored. Hence, the attenuation caused by the atmospheric absorption can be simulated by the Bouguer-Lambert-Beer law, which is expressed as:

$$p_o(i,j)=f_o(i,j)\cdot t(i,j),\qquad t(i,j)=e^{-\mu L}\tag{3}$$

where (*i*,*j*) represents the pixel position, *p _{o}* is the attenuated radiance, *t* is the transmittance, *μ* is the absorption coefficient of the atmospheric medium, which is assumed to be uniform, and *L* is the distance from the object to the sensor. For clarity, we call the process shown in Eq. (3) the decrease effect.
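The decrease effect of Eq. (3) is a one-liner; the sketch below assumes a uniform absorption coefficient, as in the text, and the function names are ours.

```python
import numpy as np

def transmittance(mu, L):
    # Bouguer-Lambert-Beer law: t = exp(-mu * L).
    return np.exp(-mu * L)

def decrease(f_o, mu, L):
    # Decrease effect (Eq. (3)): p_o = f_o * t.
    return f_o * transmittance(mu, L)
```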

Then the effect of the atmospheric scattering is taken into consideration, which includes single scattering and multiple scattering. However, single scattering is usually treated as a random phenomenon, while multiple scattering is deterministic because its randomness is averaged out by the large number of scattering events. Based on this analysis, the influence of atmospheric scattering is mainly determined by the multiple scattering along the transmission path.

Narasimhan *et al.* [12] define the glow of a point source caused by the multiple scattering as the atmospheric point spread function. Then with the assumption that an extended light source of arbitrary shape and size is made up of several isotropic source elements, they develop a physics-based model for the multiple scattering of the extended light source as follows:

$$I=I_0\otimes APSF\tag{4}$$

where *I* is the radiance coming out of the medium, *I _{0}* is the radiance of the extended light source, *APSF* is the atmospheric point spread function, which is different for each pixel (i.e., space-variant), and $\otimes $ stands for the convolution operator.

Therefore, during the atmospheric transmission, each object pixel can be regarded as a source element. Since the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption, it can be simulated by the convolution operation:

$$q_o(i,j)=p_o(i,j)\otimes h_o(i,j)\tag{5}$$

where *q _{o}* is the radiance received by the sensor and *h _{o}* denotes the atmospheric point spread function (it will be discussed later in subsection 2.2.2). We call the process shown in Eq. (5) the dispersion effect.
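The dispersion effect of Eq. (5) can be sketched as a local weighted sum. For simplicity the kernel below is space-invariant (a 3 × 3 averaging placeholder, not the true APSF, which varies per pixel), and the border is handled by edge replication; both choices are ours.

```python
import numpy as np

def disperse(p_o, h):
    # Dispersion effect (Eq. (5)): convolve the attenuated radiance p_o
    # with a point spread function h (space-invariant and symmetric here,
    # so correlation and convolution coincide).
    k = h.shape[0]
    pad = k // 2
    p = np.pad(p_o, pad, mode="edge")   # replicate the border values
    out = np.zeros_like(p_o)
    for i in range(p_o.shape[0]):
        for j in range(p_o.shape[1]):
            out[i, j] = np.sum(p[i:i + k, j:j + k] * h)
    return out

h = np.full((3, 3), 1.0 / 9.0)          # placeholder averaging APSF
p_o = np.zeros((5, 5)); p_o[2, 2] = 9.0  # a single bright object pixel
q_o = disperse(p_o, h)
```

The single bright pixel is spread over its 3 × 3 neighbourhood, which is exactly the blurring that degrades edge sharpness.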

Due to the dispersion effect, the radiance received by the sensor at position (*i*,*j*) is influenced by the pixels inside a local region centered at (*i*,*j*) in the object plane. In particular, for the pixels at the boundaries of the sensor, their radiance is also affected by the object pixels outside the field of view. As shown in Fig. 1(b), taking the red pixel on the top boundary of the image plane as an example, its radiance includes the contributions from all the pixels inside the blue square of the object plane. Consequently, the captured image is derived from a larger object matrix through an aperture filter to deal with the boundaries, which is expressed by

$$q_o=(p_o'\otimes h_o)\cdot n,\qquad p_o'=f_o'\circ t\tag{6}$$

where *p _{o}'* and *f _{o}'* are the (*r* + *k* − 1) × (*c* + *k* − 1) enlarged matrices of *p _{o}* and *f _{o}*, respectively. Here, *t* is redefined as the transmittance associated with *f _{o}'*, and *n* is the rectangular window function with the same size *r* × *c* as the sensor. The width of the enlarged area at each side is (*k* − 1)/2, as shown in Fig. 1(b), which is determined by the size of the dispersion kernel *h _{o}*. Omitting the subscripts, Eq. (6) can be rewritten as

$$q=[(f\circ t)\otimes h]\cdot n\tag{7}$$

which represents the whole process of the decrease and dispersion effects.

For the sky radiance, it can be assumed to come from a uniform object with radiance *f _{a}*, so the attenuation of *f _{a}* is similar to that of the object radiance *f _{o}*. Because the portion of the radiance from this uniform object that reaches the sensor is equivalent to the portion of the radiance from the target object that is missed by the sensor, the transmittance of *f _{a}* is 1 − *t*. Thus the path radiance received by the sensor, denoted by *q _{a}*, is calculated by the following formula:

$$q_a=\{[f_a\circ(1-t)]\otimes h_a\}\cdot n\tag{8}$$

where *h _{a}* is the atmospheric point spread function of *f _{a}*.

Additionally, the CCD noise, denoted by *N _{CCD}*, also affects the final captured image. Therefore, the total radiance *g* received by the sensor is the sum of the terms *q _{o}*, *q _{a}* and *N _{CCD}*, as shown below:

$$g=q_o+q_a+N_{CCD}\tag{9}$$

Equation (9) is our image degradation model. Comparing Eq. (9) with the widely used model in Eq. (2), we see that both contain two parts: the surface-leaving radiance and the path radiance. Meanwhile, the decrease effect of the radiance along the transmission path is taken into consideration in both models. However, we additionally model the dispersion effect by a space-variant convolution for each pixel and a rectangular window filtering for the entire image, because the influence of atmospheric scattering cannot be ignored in remote sensing or extremely poor weather conditions. Moreover, Eq. (9) contains the CCD noise, which is unavoidable in imaging.

### 2.2 The image recovery

The goal of image recovery is to restore the target object radiance *f _{o}* by removing the atmospheric influence from the captured image *g* in Eq. (9). Figure 2 demonstrates the whole procedure of the image recovery with the assumption that the parameters *t*, *f _{a}*, *h _{o}* and *h _{a}* are already estimated (the estimation of these parameters is discussed in subsections 2.2.1 and 2.2.2).

To deal with the boundary pixels in the image deconvolution of step 4, the image matrix *g* is first enlarged to *g'*, i.e., step 1 in Fig. 2. The pixel values of the enlarged area are set to equal the nearest array border value. Figure 3 gives an example of step 1, in which Fig. 3(a) is the original image and Fig. 3(b) is its extended result obtained by repeating the pixel values on the borders. The pixels in the enlarged area are left untreated in steps 2–5, which is reasonable because these pixels are not included in the sensitive area of the sensor.
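Step 1's border replication corresponds to NumPy's `edge` padding mode; a minimal sketch follows, with the kernel size *k* chosen arbitrarily for illustration.

```python
import numpy as np

g = np.array([[1, 2],
              [3, 4]], dtype=float)

# Enlarge g to g' by (k - 1)/2 pixels per side; here k = 3, so one pixel.
k = 3
pad = (k - 1) // 2
g_prime = np.pad(g, pad, mode="edge")  # repeat the nearest border value
```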

After step 1, Eq. (9) is rewritten as follows:

$$g'=(f_o\circ t)\otimes h_o+[f_a\circ(1-t)]\otimes h_a+N_{CCD}\tag{10}$$

*f _{o}* is to be restored from Eq. (10).

Because the kernel *h _{o}* is space-variant over *g'*, the object radiance has to be restored pixel by pixel. Suppose that we want to recover the object radiance at position (*i*,*j*) inside the sensitive area of the sensor, which is represented by the red pixel in *g'* in Fig. 2. Due to the dispersion effect discussed in subsection 2.1, we extract a local region *S* centered at position (*i*,*j*) (step 2). *S* is of the same size as the atmospheric point spread function at position (*i*,*j*).

Then we compute the term *G* with Eq. (11) (i.e., step 3):

$$G=g'-[f_a\circ(1-t)]\otimes h_a\tag{11}$$

The term *f _{o}* $\circ $ *t* is then obtained by the deconvolution in Eq. (12):

$$f_o\circ t=D(G,h_o)\tag{12}$$

where *D*(·,·) stands for the regularized space-variant deconvolution operation of the two operands. We use Wiener filtering [14] in our experiments, i.e., step 4.

Comparing Eq. (12) with Eq. (10), it may seem confusing that the noise term *N _{CCD}* disappears. This is because the image deconvolution procedure does not need this term, which is a random variable, to be known. *N _{CCD}* is taken into account automatically in the regularized deconvolution algorithm [15] presented in Eq. (12).

Consequently, the object radiance is obtained by

$$f_o(i,j)=\frac{(f_o\circ t)(i,j)}{\max(t(i,j),\,t_0)}\tag{13}$$

where *t _{0}* (0 < *t _{0}* ≤ *t*) is a small constant that avoids a zero denominator.

In step 5, we select the center pixel of the region *S* (represented by the blue pixel in Fig. 2) and divide it by max(*t*, *t _{0}*). Because the pixels in the enlarged area are not treated, the result is actually *f _{o}*(*i*,*j*). After all the pixels in *g* are traversed, we obtain the object radiance *f _{o}*.
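Steps 3–5 can be sketched for a space-invariant kernel with a frequency-domain Wiener filter; this is a simplified stand-in for Matlab's regularized deconvolution, `nsr` is the noise-to-signal power ratio mentioned in section 3, and the function names are ours.

```python
import numpy as np

def wiener_deconv(G, h, nsr=0.02):
    # Wiener deconvolution of image G by kernel h (Eq. (12)).
    # h is assumed to have its origin at index (0, 0).
    H = np.fft.fft2(h, s=G.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in frequency
    return np.real(np.fft.ifft2(np.fft.fft2(G) * W))

def recover(G, h, t, t0=0.1):
    # Steps 4-5: deconvolve, then divide by max(t, t0) (Eq. (13)).
    ft = wiener_deconv(G, h)
    return ft / np.maximum(t, t0)
```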

_{o}Figure 4
is the flowchart for solving our degradation model. We employ existent methods in [10,13] whose results turn out to be robust to figure out the unknown parameters *t*, *f _{a}*,

*h*and

_{o}*h*.

_{a}### 2.2.1 Estimation of the transmittance and sky radiance

The medium transmittance *t* and the sky radiance *f _{a}* in our image degradation model are estimated with the method proposed by He *et al*. [10], which removes the atmospheric influence from images based on the dark channel prior. The prior is a statistic of haze-free outdoor images: in most local non-sky regions, some pixels have a very low intensity in at least one color channel, which can be described as:

$$\min_{c\in\{r,g,b\}}\Big(\min_{y\in\Omega(i,j)}f_o^{c}(y)\Big)\to 0\tag{14}$$

where *f _{o}^{c}* represents the r, g or b channel of the haze-free image *f _{o}*, and pixel *y* lies in the small local region *Ω* centered at pixel (*i*,*j*). The left side of Eq. (14) is called the dark channel of the image *f _{o}*. In ref [10], the authors have proved that the dark channel prior is also adequate for images with sky regions.

Figure 5(a) shows a haze-free outdoor image, while Fig. 5(b) is a hazy one. Their corresponding dark channels are exhibited in Figs. 5(c) and 5(d). The size of the local region *Ω* should be set large enough to cover the small objects whose radiance is inherently similar to the sky radiance; otherwise, the prior is invalid [10]. Obviously, the intensities of most pixels in Fig. 5(c) are low, while due to the influence of the sky radiance they are much higher in Fig. 5(d), which is consistent with the dark channel prior.
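A plain sketch of the dark channel of Eq. (14): a per-pixel minimum over the color channels followed by a local minimum filter over the patch *Ω*. The edge handling and default patch size below are our choices.

```python
import numpy as np

def dark_channel(img, patch=9):
    # Dark channel (Eq. (14)): min over color channels, then a local
    # minimum filter over a patch x patch neighbourhood.
    mins = img.min(axis=2)           # per-pixel channel minimum
    pad = patch // 2
    p = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out
```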

Consequently, according to the dark channel prior given in Eq. (14), the medium transmittance of the radiance from the target object is derived from Eq. (2) as follows:

$$t(i,j)=1-\omega\min_{c\in\{r,g,b\}}\Big(\min_{y\in\Omega(i,j)}\frac{g^{c}(y)}{f_a^{c}}\Big)\tag{15}$$

where *ω* (0 < *ω* ≤ 1) is a constant used to retain a small amount of haze for the distant objects [10].

Although the sky radiance *f _{a}* depends heavily on the optical thickness, it can be obtained from the original image *g*, which is the only given information. We extract the top 0.1% brightest pixels in the dark channel of *g*, among which the pixel with the highest intensity in *g* is selected as *f _{a}* [10].
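The sky radiance selection can be sketched as follows; `frac=0.001` corresponds to the top 0.1% above, and taking the channel sum as the intensity measure is our simplification.

```python
import numpy as np

def sky_radiance(g, dark, frac=0.001):
    # Among the top `frac` brightest dark-channel pixels, return the
    # pixel of g with the highest intensity (channel sum) [10].
    n = max(1, int(round(frac * dark.size)))
    idx = np.argsort(dark.ravel())[-n:]        # brightest dark-channel pixels
    pixels = g.reshape(-1, g.shape[2])
    intensity = pixels.sum(axis=1)
    best = idx[np.argmax(intensity[idx])]
    return pixels[best]
```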

_{a}In order to remove the block effect in *t* as shown in Fig. 6(b)
, the soft matting algorithm [10,16] is employed to refine the transmittance *t*:

*L*is the Matting Laplacian matrix [16],

*U*is an identity matrix with the same size as

*L*,

*λ*is a small value to constrain

*t’*which is the desired transmittance. The refined transmittance

*t’*is shown in Fig. 6(c). Hence, the transmittance of the uniform object is calculated by 1-

*t’*. More details can be found in [10,16].

#### 2.2.2 Estimation of the dispersion kernel

In [12], Narasimhan *et al.* discuss the atmospheric point spread function caused by multiple scattering and establish the relationship between the object radiance and the received radiance, as mentioned in subsection 2.1.

Based on the result of Narasimhan *et al.*, Metari and Deschênes [13] introduce the generalized Gaussian distribution to approximate the atmospheric point spread function, i.e.,

$$h(x)=\frac{kT}{2A(kT,\sigma)\,\Gamma(1/(kT))}\exp\!\left[-\left|\frac{x}{A(kT,\sigma)}\right|^{kT}\right]\tag{17}$$

where *σ* is related to the forward scattering parameter *q* (0 ≤ *q* ≤ 1), which can be determined from *g*; *T* is the optical thickness; *k* (*k* > 0) is a constant related to *T*; *Γ*(·) is the gamma function; and *A*(·) is the scale parameter, equal to

$$A(kT,\sigma)=\sigma\left[\frac{\Gamma(1/(kT))}{\Gamma(3/(kT))}\right]^{1/2}$$

The readers can refer to [13] for more details about Eq. (17).
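A one-dimensional sketch of the generalized Gaussian kernel with shape parameter *kT* and scale *A*; this is illustrative only — the actual APSF in [13] is two-dimensional, and the discretization and unit-sum normalization below are our choices.

```python
import math

def ggd_kernel(T, k=0.5, sigma=1.0, radius=3):
    # 1D generalized Gaussian with shape p = k*T and scale A (Eq. (17)),
    # sampled on integer offsets and normalized to unit sum.
    p = k * T
    A = sigma * math.sqrt(math.gamma(1.0 / p) / math.gamma(3.0 / p))
    vals = [math.exp(-abs(x / A) ** p) for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]
```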

The optical thickness *T* is related to the optimized medium transmittance *t'* by

$$T=-\ln t'\tag{18}$$

so the dispersion kernel *h _{o}* can be computed from Eq. (17). Similarly, *h _{a}* is calculated by replacing *t'* with 1 − *t'* in Eq. (17).

## 3. The results and comparison

To exhibit the effectiveness of our model presented by Eq. (9), we take several real outdoor images and implement the image recovery with the method of He *et al*. [10], the method of Metari and Deschênes [13], and our approach. In the experiments, the value of *t _{0}* in Eq. (13) is set to 0.1, the size of the small local region *Ω* in Eq. (15) is 9 × 9 for all the tested images, and the constant *ω* is 0.7. In addition, *λ* in Eq. (16) equals 10^{−5}, and *k* and *q* in Eq. (17) are 0.5 and 0.7, respectively. The pixel values of the enlarged area in image *g'* are set to equal the nearest array border value. The deconvolution in Eq. (12) is executed by the Wiener filtering in Matlab; the power spectrum ratio of the noise and the undegraded image in this algorithm is set to 0.02, which can be adjusted to suppress the amplified noise caused by the term *N _{CCD}*.
_{CCD}Figure 7(a)
shows one original degraded image taken from an aircraft. Figure 7(b) presents the output based on the dark channel prior employing the model shown in Eq. (2) (i.e., the result by He *et al*.). Figure 7(c) exhibits the final image by deconvolving the model in Eq. (4) pixel by pixel with Wiener filtering (i.e., the result by Metari and Deschênes). And Fig. 7(d) is the result obtained by solving our model. In total, the performances of all the methods are much better than the original one. More specifically, in Fig. 7(b), the contrast of the image is dramatically improved and the color information is also recovered, despite that the edges of the object are not well refined due to neglecting the multiple scattering. The edges are sharp and clear in Fig. 7(c), however, the final result is not significantly enhanced and the object is still difficult to distinguish. Our result achieves the best performance in both contrast and the sharpness of the object edges, as shown in Fig. 7(d).

Other experiments are exhibited in Fig. 8. Figures 8(a) and 8(e) are two original images both taken from the aircraft. Figure 8(i) was captured on a heavily hazy day from the top of a hill. Figures 8(b), 8(f) and 8(j) are the corresponding results by He *et al*., and Figs. 8(c), 8(g) and 8(k) are the results by Metari and Deschênes. Figures 8(d), 8(h) and 8(l) are the results of our approach. Obviously, the quality of the output images obtained from our model is significantly enhanced, with high contrast, vivid color information, and sharp object edges. Moreover, the proposed model works well not only for remote sensing images (i.e., Figs. 8(a) and 8(e)), but also for images captured in bad weather conditions (i.e., Fig. 8(i)).

In order to compare the performances of the image recovery methods objectively, the Gray Mean Gradient (GMG) and Laplacian (LAP) image quality assessment methods [17] are used, and the results (larger values represent better image quality) are given in Table 2. The results obtained from the proposed model achieve the largest assessment values for all the test images, which indicates that our approach outperforms the other two.
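The two no-reference sharpness measures can be sketched as below; the exact definitions in [17] may differ in scaling, so treat these as illustrative.

```python
import numpy as np

def gray_mean_gradient(img):
    # GMG: mean magnitude of horizontal/vertical gray-level differences.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return 0.5 * (np.abs(gx).mean() + np.abs(gy).mean())

def laplacian_score(img):
    # LAP: mean absolute response of the discrete 4-neighbour Laplacian.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return np.abs(lap).mean()
```

Both scores are zero on a flat image and grow with edge content, which is why a sharper restoration gets larger values in Table 2.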

## 4. Conclusion

We analyze the impact of atmospheric transmission on the radiance detected by the sensor in remote sensing and bad weather conditions, and propose an image degradation model and a recovery method that take multiple scattering into consideration. The radiance from the target object is decreased along the transmission path according to the Bouguer-Lambert-Beer law, and dispersed due to multiple scattering. Because the sky radiance entering the sensor can be regarded as coming from a uniform object, its attenuation is analyzed in the same way as that of the target object. In order to verify the effectiveness of our model, we employ existing algorithms to estimate the unknown parameters. Moreover, the performance of the proposed model is compared with that of the widely used model in which the multiple scattering effect is ignored. Experimental results show that the images obtained from our model are significantly improved in contrast, sharpness, color saturation and object edges. In addition, under the GMG and LAP image quality assessments, the outputs of our algorithm achieve the largest values, which indicates that the proposed model outperforms the widely used model.

## Acknowledgments

We thank the anonymous reviewers for their valuable comments, which helped improve this paper. This work is supported by the Chinese National Natural Science Foundation (No. 60977010) and the Chinese National Programs for High Technology Research and Development (No. 2009CB724006).

## References and links

**1. **A. Berk, G. P. Anderson, P. K. Acharya, J. H. Chetwynd, L. S. Bernstein, E. P. Shettle, M. W. Matthew, and S. M. Adler-Golden, “Modtran4 user’s manual,” Air Force Research Laboratory, 1999.

**2. **J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. **7**(2), 167–179 (1998). [CrossRef] [PubMed]

**3. **K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A **18**(10), 2460–2467 (2001). [CrossRef] [PubMed]

**4. **J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. **27**, 116 (2008).

**5. **S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. **25**(6), 713–724 (2003). [CrossRef]

**6. **Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2001), 325–332.

**7. **R. T. Tan, “Visibility in bad weather from a single image,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2008), 1–8.

**8. **S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2006), 1984–1991.

**9. **R. Fattal, “Single image dehazing,” ACM Trans. Graph. **27**(3), 72 (2008). [CrossRef]

**10. **K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2009), 1956–1963.

**11. **S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2000), 598–605.

**12. **S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2003), 665–672.

**13. **S. Metari and F. Deschênes, “A new convolution kernel for atmospheric point spread function applied to computer vision,” in *Proceedings of IEEE International Conference on Computer Vision* (IEEE, 2007), 1–8.

**14. **R. C. Gonzalez and R. E. Woods, *Digital Image Processing, Second Edition*, (Publishing House of Electronics Industry, 2002), Chap. 5.

**15. **M. Bertero and P. Boccacci, *Introduction to Inverse Problems in Imaging* (IOP, 1998), Chap. 5.

**16. **A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2006), 61–68.

**17. **W. Dong, Y. Chen, Z. Xu, H. Feng, and Q. Li, “Image stabilization with support vector machine,” J. Zhejiang Univ.-Sci. C Comput. & Electron. **12**(6), 478–485 (2011). [CrossRef]