
Single image recovery in scattering medium by propagating deconvolution


Abstract

This paper proposes an effective method, called propagating deconvolution, to recover a single image degraded in a scattering medium. Propagating deconvolution operates in the manner of the calculus: it is built on a multi-layered decomposition of the scattering volume and a modeled blurring function for a single thin scattering layer. The parameters of the deconvolution algorithm are estimated solely from an in situ measurement of the pure scattered background in the single image itself.

©2014 Optical Society of America

1. Introduction

Images captured in a scattering medium (such as fog, water, or smoke) are inevitably degraded by absorption and scattering effects, resulting in blurred details, color distortion, and poor image contrast. Image recovery in a scattering medium is highly desired for many applications, and a large number of contributions have been made by researchers. Narasimhan [1] investigated image recovery in bad weather based on a light transmission model in a scattering atmosphere, using multiple images of the same scene under different weather conditions. Schechner [2] removed haze using images taken under different polarization states. Recently, attention has focused on single image recovery, driven by the needs of real-time imaging tasks [3–5]. Tan [3] proposed to remove haze by maximizing the local contrast of the restored single image. Fattal [4] estimated the scene irradiance and the medium transmission under the assumption that the transmission and the surface shading are locally uncorrelated. He [5] proposed the dark channel prior, based on the statistics of outdoor haze-free images, for haze removal. Image haze removal under underwater conditions has also been investigated [2, 6, 7].

Currently, most research in this area has focused on the improvement of image contrast. Image recovery based on a linear degradation model is thought to admit more powerful solutions. Image recovery then attempts to restore the true image from a blurred image through a linear process of the following form:

$$g(x,y) = f(x,y) * h(x,y) + n(x,y) \tag{1}$$

where g(x, y) is the blurred image, f(x, y) is the true image, h(x, y) is the point spread function (PSF), n(x, y) is additive noise, and * denotes the convolution operation.

Under the assumption of the linear process (1), recovery of an image degraded by transmission through a scattering medium can be formulated as a deconvolution problem. In applications, the PSF, representing the blurring kernel of the forward scattering, depends on certain properties of the scattering medium, the so-called inherent optical property (IOP).
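As a minimal illustration (not part of the paper's own implementation), the following Python sketch synthesizes the forward model of (1) using NumPy and SciPy; the PSF and noise level are arbitrary placeholders supplied by the caller:

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(f, h, noise_sigma=0.01, rng=None):
    """Simulate g = f * h + n from Eq. (1): convolve the true image f
    with the PSF h and add white Gaussian noise n."""
    rng = np.random.default_rng() if rng is None else rng
    g = fftconvolve(f, h, mode="same")                 # f(x,y) * h(x,y)
    return g + rng.normal(0.0, noise_sigma, f.shape)   # + n(x,y)
```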

In our previous work [8], we represented image formation as the cumulative output of layered successive sub-systems, each configured in the form of a single-layered PSF. However, the depth parameter was estimated directly from a uniform region of the target under the supposition that the target is depth-fixed, an assumption violated by a large number of real observations. This paper proposes a new image deconvolution algorithm, called propagating deconvolution, for depth-dependent image recovery in a scattering medium. Image deconvolution can then be realized flexibly by a back-tracking scheme without measuring the depth, approaching the desired solution through an iterative routine in the range dimension. The propagating nature of this method prevents computational instability and assures convergence. The physical model of the sub-systems and the implementation of the algorithm are described in detail, and experimental results are given. Moreover, experiments on color images show the advantage of the algorithm in color recovery.

2. Image formation by layered decomposition model

When an object is illuminated, forward scattering occurs when the reflected light is scattered during transmission to the sensor. The amount by which the forward scattered image is blurred depends on the range as well as the IOP of the scattering medium. We can formulate the blurring function as the output of sub-systems through layer-based decomposition, which is illustrated in Fig. 1.

Fig. 1 Layered decomposition of the scattering medium between the target and the sensor.

The whole scattering medium is decomposed into m layers along the symmetrical optical axis, denoted as z. The model widely used to describe the formation of a hazy image is [2]

$$E = J\exp(-\beta z) + A\left(1-\exp(-\beta z)\right) \tag{2}$$

where E is the observed intensity, J is the scene radiance, A is the air light, β is the attenuation coefficient of the scattering medium, and z is the scene depth.
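For concreteness, the formation model (2) can be evaluated directly; the sketch below is a straightforward transcription, with J, A, β, and z supplied by the caller:

```python
import numpy as np

def hazy_image(J, A, beta, z):
    """Haze formation model of Eq. (2): attenuated scene radiance
    plus air light scaled by (1 - transmission)."""
    t = np.exp(-beta * z)        # medium transmission exp(-beta*z)
    return J * t + A * (1.0 - t)
```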

During the forward transmission, each layer contributes its own blurring function, denoted as a thin single-layered PSF $h_\Delta$. The whole blurring function is then the output of all the successive sub-systems, in the form $h = h_\Delta^{(1)} * h_\Delta^{(2)} * \cdots * h_\Delta^{(m)}$, with $h_\Delta^{(1)} = h_\Delta^{(2)} = \cdots = h_\Delta^{(m)} = h_\Delta$.

Assuming that the thickness of each layer is Δ (with Δ sufficiently small), the single-layered blurring function is modeled as a Gaussian PSF [8]:

$$h_\Delta(x,y) = \frac{1}{\pi\theta_\Delta}\exp\left\{-\frac{x^2+y^2}{\theta_\Delta}\right\} \qquad \left(\theta_\Delta \to 0 \text{ if } \Delta \to 0,\; h_\Delta(x,y) \to \delta(x,y)\right) \tag{3}$$

whose Fourier transform is

$$H_\Delta(u,v) = \exp\left[-\theta_\Delta\pi^2(u^2+v^2)\right] \tag{4}$$

The length z is determined by the range between the target and the sensor, and z = mΔ. Considering the attenuation in light transmission, the transfer function associated with a scattering volume of length z is

$$H_z(u,v) = \exp(-\beta z)\exp\left\{-z\rho\pi^2(u^2+v^2)\right\} \qquad (\rho = \theta_\Delta/\Delta) \tag{5}$$
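The transfer function (5) is straightforward to evaluate on a discrete frequency grid. The sketch below assumes unit pixel sampling, so u and v are expressed in cycles per pixel via np.fft.fftfreq; note that H_z equals the m-fold product of H_Δ when z = mΔ, consistent with the layered cascade above:

```python
import numpy as np

def transfer_function_Hz(shape, z, beta, rho):
    """Transfer function of a scattering volume of length z, Eq. (5):
    H_z(u,v) = exp(-beta*z) * exp(-z*rho*pi^2*(u^2+v^2))."""
    u = np.fft.fftfreq(shape[0])[:, None]   # cycles per pixel
    v = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-beta * z) * np.exp(-z * rho * np.pi**2 * (u**2 + v**2))
```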

Defining k(x, y) as the unit-length scattering coefficient, the received back scattered light intensity generated by layer Δ = dz at distance z is

$$dn_z(x,y) = \left\{k(x,y)\,I\,dz\right\} * h_z(x,y) \tag{6}$$

where h_z(x, y) is the spatial-domain PSF associated with H_z(u, v), and I is the illumination intensity at the layer at distance z. In the atmosphere, environmental illumination projects equal intensity onto each layer; the illumination intensity at each layer is thus I = A exp(−βz) + A(1 − exp(−βz)) = A. In the frequency domain, we can write (6) as

$$dN_z(u,v) = K(u,v)\,A\,dz\,H_z(u,v) \tag{7}$$

Integrating (7) over all z yields

$$N_z(u,v) = A K(u,v)\int_0^z e^{-\left[\beta+\rho\pi^2(u^2+v^2)\right]z}\,dz = \frac{A K(u,v)}{\beta+\rho\pi^2(u^2+v^2)}\left(1-e^{-\left[\beta+\rho\pi^2(u^2+v^2)\right]z}\right) \tag{8}$$

Let k(x, y) = k₀ + γ, where k₀ denotes the mean component and γ the random component. γ is taken to be white Gaussian noise with variance σ², since the scattering generated by particles within a sufficiently thin layer is assumed to be spatially uncorrelated. The mean component of the total back scattering intensity, associated with k₀, is

$$S_d = N_z(0,0) = \frac{k_0 A}{\beta}\left\{1-\exp(-\beta z)\right\} \tag{9}$$

The power spectrum of the random component associated with γ can be represented as

$$P_n(u,v) = \frac{A^2\sigma^2}{\left[\beta+\rho\pi^2(u^2+v^2)\right]^2}\left(1-\exp\left\{-\left[\beta z + z\rho\pi^2(u^2+v^2)\right]\right\}\right)^2 \tag{10}$$

The essential statistics of the back scattering noise of haze are thus given by (9) and (10).

3. Algorithm of propagating deconvolution

3.1 Principle of propagating deconvolution

The model above implies that restoration based on the layered sub-system image formation model can be implemented as a successive iterative process. Supposing the original image f passes through m layers whose total length is z, the received degraded image g can be formulated as

$$\begin{aligned} g &= f^{(m-1)} * h_m + s_d^{(m)} \\ f^{(m-1)} &= f^{(m-2)} * h_{m-1} + s_d^{(m-1)} \\ &\;\;\vdots \\ f^{(1)} &= f * h_1 + s_d^{(1)} \end{aligned} \tag{11}$$

The expressions above suggest that the restoration of the original image f can be obtained through successive layer-based deconvolutions, each of which is a knowledge-based solution once the PSF of a single layer has been predetermined. Without knowing the entire scattering volume, we can approach the final solution progressively, i.e., by propagating deconvolution in the range dimension.
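Structurally, the back-tracking scheme of (11) reduces to a simple loop. The following Python skeleton is a sketch of that control flow only; deconv_layer and stop are placeholders for the single-layer Wiener step of Section 4 and the entropy criterion of Section 3.2:

```python
def propagating_deconvolution(g, deconv_layer, stop, max_layers=50):
    """Back-track through the layered cascade of Eq. (11): starting from
    the observed image g, peel off one scattering layer per iteration."""
    f = g
    for m in range(max_layers):
        f_next = deconv_layer(f, m)   # one knowledge-based single-layer step
        if stop(f_next, f):           # e.g., blur measure stopped improving
            return f
        f = f_next
    return f
```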

Obviously, the single-layered PSF depends on the thickness of the layer as well as the IOP parameters, as represented in (3). In the following, we show that instead of quantifying the range metric, the PSF formulation can be configured in the parameter space.

Suppose the scattering volume is equally divided into a set of layers. For the mth layer, of thickness Δ, the PSF with respect to this layer follows from (5) as

$$H_\alpha(u,v) = \exp\left\{-\alpha\left(1+\frac{\rho}{\beta}\pi^2(u^2+v^2)\right)\right\} \qquad (\alpha = \beta\Delta) \tag{12}$$

The scattered light intensity arising within this layer is derived by integrating expressions (9) and (10) from z to z + Δ, giving the mean level s_d^{(m)} and the power spectrum of the random component P_n^{(m)} of this layer, respectively:

$$s_d^{(m)} = k_0 A\int_z^{z+\Delta}\exp\{-\beta z\}\,dz = \frac{k_0 A}{\beta}\exp\{-\beta z\}\left(1-\exp\{-\beta\Delta\}\right) \tag{13}$$
$$P_n^{(m)}(u,v) = A^2\sigma^2\left(\int_z^{z+\Delta}\exp\left\{-\beta z - z\rho\pi^2(u^2+v^2)\right\}dz\right)^2 = \frac{A^2\sigma^2}{\left(\beta+\rho\pi^2(u^2+v^2)\right)^2}\left[\exp\left\{-\beta z-\rho z\pi^2(u^2+v^2)\right\}\left(1-\exp\left\{-\beta\Delta-\rho\Delta\pi^2(u^2+v^2)\right\}\right)\right]^2 \tag{14}$$

Further defining the auxiliary variables η₁ = ρ/β, η₂ = Aσ/β, and η₃ = k₀A/β, and remembering z = mΔ, we obtain

$$H_\alpha(u,v) = \exp\left\{-\alpha\left(1+\eta_1\pi^2(u^2+v^2)\right)\right\} \tag{15}$$
$$s_d^{(m)} = \eta_3\exp\{-m\alpha\}\left(1-\exp\{-\alpha\}\right) \tag{16}$$
$$P_n^{(m)}(u,v) = \frac{\eta_2^2}{\left[1+\eta_1\pi^2(u^2+v^2)\right]^2}\left[\exp\left\{-m\alpha\left(1+\eta_1\pi^2(u^2+v^2)\right)\right\}\left(1-\exp\left\{-\alpha\left(1+\eta_1\pi^2(u^2+v^2)\right)\right\}\right)\right]^2 \tag{17}$$
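Expressed in code, the parameter-space quantities (15)–(17) depend only on (η₁, η₂, η₃, α) and the layer index m. The sketch below evaluates them on the same fftfreq grid as before; the function name and interface are our own:

```python
import numpy as np

def model_terms(shape, alpha, eta1, eta2, eta3, m):
    """Single-layer transfer function and back-scatter statistics,
    Eqs. (15)-(17), in the (eta1, eta2, eta3, alpha) parameter space."""
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    q = 1.0 + eta1 * np.pi**2 * (u**2 + v**2)
    H_alpha = np.exp(-alpha * q)                              # Eq. (15)
    s_d = eta3 * np.exp(-m * alpha) * (1 - np.exp(-alpha))    # Eq. (16)
    P_n = (eta2**2 / q**2) * (np.exp(-m * alpha * q)
                              * (1 - np.exp(-alpha * q)))**2  # Eq. (17)
    return H_alpha, s_d, P_n
```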

It is noted that in an underwater medium, active illumination substitutes for the environmental illumination of the atmosphere. The above representations are therefore modified to account for the attenuation of the illumination intensity during transmission. Supposing the illumination source is located at the sensor underwater, the light intensity projected onto the layer of the scattering medium at distance z is I = A exp(−βz). Without loss of generality, the formulation of (7) is then reformulated as (18):

$$dN_z'(u,v) = K(u,v)\,A\exp(-\beta z)\,dz\,H_z(u,v) \tag{18}$$

The corresponding underwater formulations are obtained by substituting 2β for β in (9), (10), (13), and (14).

When we obtain a patch of the image showing only sky or water, which means z → ∞ and m → ∞, there remains a pure scattering background. Fitting the observed scattering background to S_d and P_n gives an in situ estimate of η₁, η₂, and η₃.
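The paper does not spell out the fitting procedure, but one plausible sketch is the following: as z → ∞, (9) reduces to S_d = η₃ and (10) to P_n = η₂²/(1 + η₁π²(u²+v²))², so the patch mean yields η₃ and a least-squares fit of the residual's periodogram yields η₁ and η₂ (here via scipy.optimize.curve_fit, with arbitrary initial guesses):

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_etas(patch):
    """In-situ parameter estimation from a pure-scattering patch (z -> inf)."""
    eta3 = patch.mean()                    # S_d -> k0*A/beta = eta3
    resid = patch - eta3
    P = np.abs(np.fft.fft2(resid))**2 / resid.size   # periodogram of residual
    u = np.fft.fftfreq(patch.shape[0])[:, None]
    v = np.fft.fftfreq(patch.shape[1])[None, :]
    r2 = (u**2 + v**2).ravel()
    model = lambda r2, e1, e2: e2**2 / (1 + e1 * np.pi**2 * r2)**2
    mask = r2 > 0                          # exclude DC bin (removed with the mean)
    (eta1, eta2), _ = curve_fit(model, r2[mask], P.ravel()[mask],
                                p0=(1.0, patch.std()))
    return eta1, eta2, eta3
```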

3.2 Control of the propagating process

As with most iterative deconvolution techniques, the stopping of the iterations must be controlled by a defined metric. The proposed algorithm allows the convergence to be controlled through an evaluation of the image restoration, i.e., a blur measure. The maximum entropy approach has been successfully applied to image restoration [9]. In our experiments, the entropy-based blur measure agreed well with perceptual evaluation of the proposed method, and it can therefore be used to control the iterations. The entropy of image f is defined as [9]:

$$S = -\sum_{i=1}^{L} p_i \ln p_i \tag{19}$$

where pᵢ is the frequency of occurrence of pixels at the ith grey level, i = 1, 2, …, L. The iteration stops when the updated image attains the maximum of S. It should be noted that the value of S at each iteration depends on the selection of the step length, i.e., the preset α.
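A direct transcription of the blur measure (19), as a sketch using a 256-bin histogram and the convention 0·ln 0 = 0:

```python
import numpy as np

def image_entropy(f, levels=256):
    """Blur measure of Eq. (19): S = -sum_i p_i ln p_i over grey levels."""
    hist, _ = np.histogram(f, bins=levels)
    p = hist[hist > 0] / f.size      # frequencies; empty bins contribute 0
    return -np.sum(p * np.log(p))
```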

4. Implementation and experimental results

According to (15)–(17), once the variables η₁, η₂, and η₃ have been estimated and α has been preset, the observed degraded image can be regarded as f^(0)(x, y), and well-developed deconvolution techniques can be applied through an iterative routine. The Wiener filter is widely used in image restoration and is applied here to implement the algorithm. The computational steps are as follows, terminating according to the control criterion of Section 3.2:

$$f^{(m+1)}(x,y) = f^{(m)}(x,y) - s_d^{(m)}, \qquad m = 0, 1, 2, \ldots \tag{20}$$
$$F^{(m+1)}(u,v) = \frac{H_\alpha(u,v)}{\left|H_\alpha(u,v)\right|^2 + P_n^{(m)}(u,v)/P_f^{(m)}(u,v)}\,F^{(m)}(u,v) \tag{21}$$
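One propagating step, combining (20) and (21), might be sketched as follows. Here model_terms is the helper sketched in Section 3.1, and the signal power spectrum P_f^(m) is crudely approximated by the periodogram of the current estimate, since the paper does not specify how P_f^(m) is obtained:

```python
import numpy as np

def propagate_step(f_m, m, alpha, eta1, eta2, eta3):
    """One propagating-deconvolution step, Eqs. (20)-(21): remove the
    layer's back-scatter level, then apply the Wiener filter."""
    H, s_d, P_n = model_terms(f_m.shape, alpha, eta1, eta2, eta3, m)
    f = f_m - s_d                                # Eq. (20)
    F = np.fft.fft2(f)
    P_f = np.abs(F)**2 / f.size                  # rough estimate of P_f^(m)
    # H is real-valued here (Eq. 15), so the conjugate is a no-op:
    W = np.conj(H) / (np.abs(H)**2 + P_n / np.maximum(P_f, 1e-12))
    return np.real(np.fft.ifft2(W * F))          # Eq. (21)
```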

We conducted experiments on images degraded by different scattering media; a per-channel driver is sketched after this paragraph. The propagating deconvolution routine was applied to the R, G, and B channels of the color image separately, and the three results were recomposed into the output image. The recovery of a single foggy image is shown in Fig. 2. The propagating process was properly controlled by the evaluation of the blur measure S in our experiments. However, uncertainties remain in identifying the best recovery of the image, caused either by disagreement between the objective metric and visual inspection or by inappropriate quantification of the parameter α. A practical remedy is to "slow down" the propagating steps when approaching the best evaluation by the defined blur measure, refining the recovered output to agree with visual inspection as far as possible. Thus, we reduce the parameter α at the step just before the maximum of S is reached to obtain the final output image.
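Putting the pieces together, a hypothetical per-channel driver (assembled from the sketches above; none of these names are from the paper) would run the iteration on each color channel and stop at the entropy peak:

```python
import numpy as np

def recover_channel(g, background, alpha=0.3, max_layers=20):
    """Recover one color channel, stopping at the maximum of S (Sec. 3.2)."""
    eta1, eta2, eta3 = estimate_etas(background)  # in-situ parameters
    f, best_S = g, image_entropy(g)
    for m in range(max_layers):
        f_next = propagate_step(f, m, alpha, eta1, eta2, eta3)
        S = image_entropy(f_next)
        if S < best_S:          # entropy peaked: previous image is the output
            return f
        f, best_S = f_next, S
    return f

# R, G and B channels are processed independently and recomposed, e.g.:
# out = np.stack([recover_channel(img[..., c], bg[..., c]) for c in range(3)], -1)
```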

Fig. 2 (a) is an original foggy image; (b)–(d) are the successive recovered images of (a) with α = 0.3 for m = 1, 2, 3, respectively. The propagation stops at (c), associated with the maximum S.

Figure 3 shows results using the methods of Fattal [4], He [5], and our proposed method, respectively. The outputs of each method were evaluated by the blur measure S.

Fig. 3 Column (a): the original images; column (b): results by Fattal [4]; column (c): results by He [5]; column (d): refined results by our method.

For the samples in the first and third rows of Fig. 3, our method performed best according to the evaluation criterion S, while the method of He [5] performed best for the sample in the second row. It is noted that the propagating deconvolution algorithm deblurs the scene at successive depths associated with the propagating steps, i.e., a depth-based local recovery, because each step corresponds to a sub-volume of the scattering medium. Since the blur measure S is in fact computed globally, for a scene with a large depth range, such as the second row of Fig. 3, the globally computed S, though associated with a certain local depth, is inevitably a biased evaluation.

In general, while the method of He [5] performed well in terms of image contrast upon inspection, our method showed advantages in noise suppression and color fidelity. Because the proposed propagating deconvolution method models both the forward scattering and back scattering effects within a linear degradation framework, the deconvolution operation can filter the noise while deblurring the target. Moreover, since the optical properties (transmission and scattering) usually differ across the color channels in a scattering medium, especially underwater, color compensation is an important requirement of image recovery. In the proposed algorithm, the three color channels are processed independently, each employing color-specific parameters estimated from its own channel. We consider this a reasonable approach to color compensation, and the experiments verified its good performance compared with the other methods.

A comparison of a local region recovery at a large depth is given in Fig. 4. This example implies that even with a poor signal-to-noise ratio, the proposed method performs best in terms of noise suppression and color fidelity for depth-dependent local recovery. Furthermore, a global descattering can be expected by fusing the successively recovered images.

Fig. 4 (a) is a patch selected from column (a), second row, of Fig. 3; (b)–(d) are the results recovered by the methods of Fattal [4], He [5], and our method, respectively.

5. Conclusion

This paper presents a propagating deconvolution algorithm for single image recovery, based on a linear degradation model that represents image formation in a scattering medium as successive layer-based outputs. The algorithm was designed for real-time image recovery in unknown environments. It should be noted that the proposed PSF of the layer-based elementary scattering volume is only an approximate model, and the solution is thus sub-optimal. Suitability for real-time applications and computational robustness, particularly for instantaneous imaging tasks in a scattering medium, have been stressed in this work. Nevertheless, the approximate solution can be further refined through well-developed blind deconvolution techniques, now that a sufficiently good initial estimate is available. Finally, the physical interpretation behind the method may stimulate alternative representations for space-varying image recovery and range estimation in a scattering medium.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (60772058) and partially supported by National Education Ministry Engineer Research Center of Ocean Information Technology, Ocean University of China.

References and links

1. S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proceedings of IEEE Conference on CVPR (IEEE, 2000), pp. 598–605.

2. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proceedings of IEEE Conference on CVPR (IEEE, 2001), vol. 1, pp. 325–332.

3. R. Tan, "Visibility in bad weather from a single image," in Proceedings of IEEE Conference on CVPR (IEEE, 2008), pp. 1–8.

4. R. Fattal, "Single image dehazing," ACM Trans. Graph. 27(3), 1–9 (2008).

5. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proceedings of IEEE Conference on CVPR (IEEE, 2009), pp. 1956–1963.

6. W. Hou, D. J. Gray, A. D. Weidemann, and R. A. Arnone, "Comparison and validation of point spread models for imaging in natural waters," Opt. Express 16(13), 9958–9965 (2008).

7. K. Iqbal, R. Abdul Salam, A. Osman, and A. Zawawi Talib, "Underwater image enhancement using an integrated color model," J. Comput. Sci. 34, 2–12 (2007).

8. G. Wang, B. Zheng, and F. F. Sun, "Estimation-based approach for underwater image restoration," Opt. Lett. 36(13), 2384–2386 (2011).

9. M. Pinchas and B. Z. Bobrovsky, "A maximum entropy approach for blind deconvolution," Signal Process. 86(10), 2913–2931 (2006).
