Abstract

In underwater imaging scenarios, scattering media can cause severe image degradation through backscatter veiling and signal attenuation. In this paper, we take into account the polarization of the object radiance and propose a method for retrieving the object radiance based on estimating the polarized-difference image of the target signal. A real-world experiment shows that, by additionally accounting for the polarized-difference image of the target signal, the quality of the underwater image can be effectively enhanced. The method is particularly effective when both the object radiance and the backscatter contribute to the polarization, such as in underwater detection of man-made objects.

© 2016 Optical Society of America

1. Introduction

Underwater image quality is degraded by poor visibility conditions [1]. Particles in the water scatter and absorb the target signal out of the optical path, causing energy loss. Moreover, the particles also scatter undesired light into the optical path, which can veil the object and reduce the image quality. Nevertheless, there is strong demand for underwater detection and vision tasks, such as the exploration of marine resources and the inspection of ship hulls for damage [2,3].

Previous studies have sought to reduce these negative effects in various ways, such as the dark channel prior [4] and the self-tuning restoration filter [5]. In particular, polarimetric imaging has been found to be an effective way to improve underwater vision [6], and it is also used for dehazing [7,8]. Generally speaking, current polarization-based methods for underwater imaging often assume that only the veiling light is polarized [6]. However, this assumption is not appropriate for all cases in practice, because the object radiance can also contribute to the polarization. In particular, when the depolarization degree of the object is low, the light scattered or reflected by the object contributes considerably to the polarization [9,10], and significant estimation errors can therefore be introduced by the previous methods.

In this paper, we investigate the effects that the polarization of both the veiling light and the object radiance has on underwater image formation. We show the limitation of previous polarization-based methods for underwater image recovery, which assume that the object radiance is unpolarized. We then propose a method for underwater image recovery based on estimating the polarized-difference (PD) image of the target signal, which takes into account the contributions of both the object radiance and the veiling light to the polarization of the light received by the detector. In addition, a real-world polarimetric underwater imaging experiment is performed to demonstrate the feasibility and superiority of the proposed method.

2. Optical models of underwater imaging

2.1 Model of underwater image formation

The basic physical model of underwater image formation follows [6,11], in which the irradiance received by the detector comes from the two sources described below.

The first source is the radiance of the objects in the field of view, which is attenuated by absorption and scattering in the water. The image corresponding to this attenuated signal is called the direct transmission or target signal, and is given by

D(x,y) = L(x,y) t(x,y),  (1)
where (x,y) indicates the coordinates of the pixels in the image. L(x,y) is the radiance of the object without being attenuated by the particles in water, and t(x,y) refers to the medium transmittance, which can be expressed as

t(x,y) = exp[−β(x,y) ρ(x,y)].  (2)

The parameter β(x,y) in Eq. (2) is the attenuation coefficient, which accounts for both absorption and scattering in the water. In our case, we assume that the attenuation coefficient is spatially invariant, i.e., β(x,y) = β0. Therefore, the transmittance depends only on the propagation distance ρ(x,y), which refers to the underwater part of the optical path between the object and the detector.

The second source is the light scattered toward the detector by the particles in the water, which is termed the veiling light or backscatter and can be expressed as

B(x,y) = A[1 − t(x,y)],  (3)
where A corresponds to the value of the backscatter for a line of sight that extends to infinity in the water. The scalar A depends on the particles in the water and on the illumination conditions. The backscatter is partially polarized [12], which is taken into account in our image recovery algorithm.

According to the image formation model above, the irradiance image acquired by the detector can be expressed as

I(x,y) = D(x,y) + B(x,y).  (4)

Based on the equations above, the object radiance L(x,y) can be deduced as

L(x,y) = {I(x,y) − A[1 − t(x,y)]} / t(x,y).  (5)

It can be seen from Eq. (5) that A and t(x,y) are the two key unknown parameters for recovering L(x,y). The method for estimating A is described in Section 3.1. A further problem is to estimate the transmittance t(x,y) properly. In the recorded image, pixels at different coordinates correspond to different propagation distances ρ(x,y) and thus, according to Eq. (2), to different transmittance values t(x,y). As indicated in Eqs. (1) and (3), both the attenuation of the signal and the backscatter depend strongly on t(x,y), which makes t(x,y) a key parameter for recovering the underwater image.
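To make the image formation model concrete, the following minimal Python/NumPy sketch implements Eqs. (1)-(5) under the assumptions stated above (spatially invariant β and a known A); the function and variable names are illustrative and not part of the paper.

```python
# A minimal sketch of the image formation model of Eqs. (1)-(5), assuming a
# spatially invariant attenuation coefficient beta0 and a known backscatter
# value A. All images are NumPy arrays normalized to [0, 1].
import numpy as np

def transmittance(rho, beta0):
    """Eq. (2): t(x,y) = exp(-beta0 * rho(x,y))."""
    return np.exp(-beta0 * rho)

def forward_model(L, t, A):
    """Eqs. (1), (3) and (4): I = D + B = L*t + A*(1 - t)."""
    D = L * t              # direct transmission (target signal)
    B = A * (1.0 - t)      # backscatter (veiling light)
    return D + B

def invert_model(I, t, A, t_min=1e-6):
    """Eq. (5): L = (I - A*(1 - t)) / t, with a small floor on t for stability."""
    return (I - A * (1.0 - t)) / np.maximum(t, t_min)
```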

2.2 Polarization effects of underwater imaging

Several works have investigated the behavior of polarized light in turbid water [1,11,12]. In this work, we polarize the illumination beam by placing a linear polarizer in front of the light source, and we image the scene through a rotating linear polarizer (analyzer) at two orthogonal polarization states. In this way, we obtain a co-linear image I||(x,y) with the analyzer at the same polarization state as the illumination, and a cross-linear image I⊥(x,y) with the analyzer at the orthogonal state. In this work, we assume that the co-linear signal is greater than the corresponding cross-linear signal, i.e., I||(x,y) ≥ I⊥(x,y). According to Eq. (4), I||(x,y) and I⊥(x,y) can be expressed as

I||(x,y) = D||(x,y) + B||(x,y),  (6)
I⊥(x,y) = D⊥(x,y) + B⊥(x,y),  (7)
and the total intensity of the acquired irradiance is

I(x,y) = I||(x,y) + I⊥(x,y).  (8)

The polarization orientation of the backscatter (veiling light) is close to that of the incident linearly polarized light [13]; hence, the degree of polarization (DOP) of the backscatter can be written as [14,15]

Pscat(x,y) = [B||(x,y) − B⊥(x,y)] / [B||(x,y) + B⊥(x,y)].  (9)

Strictly speaking, the value of Pscat(x,y) depends on the distance ρ(x,y) because of multiple scattering, and different pixels (x,y) in the image correspond to different distances ρ(x,y), so Pscat(x,y) varies over (x,y). However, for a narrow field of view, Pscat(x,y) does not vary much [6]. Therefore, for simplicity, we assume Pscat(x,y) to be spatially constant in our case and denote it by Pscat. Combining the equations above, the backscatter B(x,y) can be written as

B(x,y) = {[I||(x,y) − I⊥(x,y)] − [D||(x,y) − D⊥(x,y)]} / Pscat = [ΔI(x,y) − ΔD(x,y)] / Pscat,  (10)
where ΔI(x,y) and ΔD(x,y) stand for the polarized-difference (PD) image of the total intensity and the PD image of the target signal, respectively. According to Eqs. (3) and (10), the transmittance t(x,y) can be expressed as

t(x,y) = 1 − [ΔI(x,y) − ΔD(x,y)] / (Pscat A).  (11)

It can be seen from Eq. (11) that two factors determine the estimation of the transmittance t(x,y). First, we need to estimate the global parameters A and Pscat, which are intrinsic to the underwater imaging conditions. Since A and Pscat are assumed to be spatially constant, these two global parameters can be measured directly from a selected background region, as described in Section 3.1. Second, the PD image of the target signal ΔD(x,y) has a strong impact on the accuracy of the estimated transmittance t(x,y), which means that the polarization of the object radiance cannot be ignored. In previous studies, this term was often neglected, both under natural illumination [6,16] and under active polarized illumination [14,15]. However, such methods can fail in image areas where the light reflected or scattered by the object is partially polarized (with a DOP greater than 0). To overcome this problem, we deduce the transmittance t(x,y) and the radiance L(x,y) by proposing a method for estimating ΔD(x,y), which improves the image recovery algorithm.
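As an illustration, a minimal sketch of Eq. (11) is given below; the clipping of t(x,y) to a physically meaningful range is an added safeguard of this sketch, not part of the derivation.

```python
# A minimal sketch of Eq. (11): estimating the transmittance from the PD image
# of the total intensity (delta_I), the PD image of the target signal (delta_D),
# and the global parameters P_scat and A. The clipping step is an added guard.
import numpy as np

def transmittance_from_pd(delta_I, delta_D, P_scat, A):
    t = 1.0 - (delta_I - delta_D) / (P_scat * A)
    return np.clip(t, 1e-6, 1.0)
```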

3. Recovering process for underwater image

3.1 Estimation of A and Pscat

To estimate A and Pscat, we measure the gray levels of the pixels in a background region (with no target) in the images I||(x,y) and I⊥(x,y), respectively; these are taken as the values of A measured at the two orthogonal orientations of the analyzer, denoted A|| and A⊥. Consequently, A and Pscat can be estimated as

Â = A|| + A⊥,  (12)
P̂scat = (A|| − A⊥) / (A|| + A⊥).  (13)

In practice, P̂scat is calculated from a designated background region [15]; however, P̂scat can vary slightly if different background regions are chosen. There can therefore be an error in the estimation of Pscat, which can affect the result of the image recovery. For this reason, we introduce a parameter ε to modify P̂scat to εP̂scat, in order to find an appropriate value of εP̂scat for image recovery; this correction has also been used in previous works [6,14].
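A minimal sketch of this estimation step is shown below; the slice-based background window is an assumption of the sketch, supplied by the user.

```python
# A minimal sketch of Eqs. (12) and (13): estimating A_par, A_per, A and Pscat
# by averaging a user-designated background (target-free) region of the
# co-linear and cross-linear images.
import numpy as np

def estimate_globals(I_par, I_per, bg_rows, bg_cols):
    A_par = I_par[bg_rows, bg_cols].mean()     # A measured with analyzer parallel
    A_per = I_per[bg_rows, bg_cols].mean()     # A measured with analyzer crossed
    A_hat = A_par + A_per                      # Eq. (12)
    P_hat = (A_par - A_per) / (A_par + A_per)  # Eq. (13)
    return A_par, A_per, A_hat, P_hat

# Hypothetical usage with a 50 x 50 background window:
# A_par, A_per, A_hat, P_hat = estimate_globals(I_par, I_per,
#                                               slice(10, 60), slice(10, 60))
```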

3.2 Estimation of the PD image of the signal

The methods proposed in [6,14] assume that the polarization of the light in the field of view is associated only with the backscatter. In other words, the object radiance is assumed to be unpolarized, which means that ΔD(x,y) is taken to be 0. However, the polarization of the object radiance may dominate over the partially polarized backscatter in some cases, for example in image regions corresponding to smooth metal surfaces [10]. It can be seen from Eq. (11) that, in regions corresponding to objects with ΔD(x,y) greater than 0, the transmittance t(x,y) is considerably underestimated if the polarization of the object is ignored. Consequently, the recovered image can be substantially distorted in such situations.

In our study, we take into account the fact that the polarization of the object radiance cannot be ignored. Substituting Eq. (11) into Eq. (5) and replacing P̂scat with εP̂scat, we obtain the new expression of the object radiance as

L(x,y) = {εP̂scat A I(x,y) − A[ΔI(x,y) − ΔD(x,y)]} / {εP̂scat A − [ΔI(x,y) − ΔD(x,y)]},  (14)
which takes into account the polarization effects of both the backscatter and the object radiance. The total intensity I(x,y) and the PD image of the total intensity ΔI(x,y) can be obtained from I||(x,y) and I⊥(x,y) according to Eqs. (8) and (10). Besides, the methods for obtaining the estimators Â and P̂scat have been discussed in Section 3.1.
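A minimal sketch of Eq. (14) is given below; the guard against a vanishing denominator is an addition of the sketch, not part of the paper's expression.

```python
# A minimal sketch of Eq. (14): recovering the object radiance from the total
# intensity, the PD images and the global parameters.
import numpy as np

def recover_radiance(I, delta_I, delta_D, A_hat, P_hat, epsilon):
    pa = epsilon * P_hat * A_hat
    num = pa * I - A_hat * (delta_I - delta_D)
    den = pa - (delta_I - delta_D)
    den = np.where(np.abs(den) < 1e-6, 1e-6, den)   # avoid division by zero
    return num / den
```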

Therefore, in order to deduce the transmittance t(x,y) and the radiance L(x,y) properly, the key issue is to estimate ΔD(x,y). We propose a curve-fitting method to deduce t(x,y) and L(x,y) without much prior knowledge (such as the location and geometric shape of the objects); all that needs to be known in advance is the location of a background region.

Substituting Eq. (3) into Eqs. (6) and (7), we obtain

I||(x,y) = D||(x,y) + A||[1 − t(x,y)],  (15)
I⊥(x,y) = D⊥(x,y) + A⊥[1 − t(x,y)].  (16)

Based on Eqs. (15) and (16), we introduce an intermediate image K(x,y), which is expressed as

K(x,y) = I||(x,y)/A|| − I⊥(x,y)/A⊥ = D||(x,y)/A|| − D⊥(x,y)/A⊥.  (17)

Note that the expression of K(x,y) is similar to that of ΔD(x,y); the only difference lies in the coefficients of D||(x,y) and D⊥(x,y). Besides, A|| and A⊥ in Eq. (17) can be obtained directly from I||(x,y) and I⊥(x,y) in a background region, as mentioned in Section 3.1.
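A one-line sketch of Eq. (17) is shown below, emphasizing that K(x,y) is built only from measured quantities.

```python
# A minimal sketch of Eq. (17): the intermediate image K is built from the two
# measured polarization channels and the background values A_par and A_per.
import numpy as np

def intermediate_image(I_par, I_per, A_par, A_per):
    return I_par / A_par - I_per / A_per
```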

In our case, the intensity values in the image are defined to range from 0 to 1. It can be seen from Eqs. (6) and (7) that the values of D||(x,y) and D⊥(x,y) are lower than the values of I||(x,y) and I⊥(x,y), and thus D||(x,y) and D⊥(x,y) also lie between 0 and 1. Therefore, D||(x,y) and D⊥(x,y) should satisfy the following inequality group

0 ≤ D||(x,y) ≤ 1,
0 ≤ D⊥(x,y) ≤ 1,
D⊥(x,y) ≤ D||(x,y).  (18)

According to Eq. (17), the relation between the image K(x,y) and the PD image of the target signal can be expressed as

ΔD(x,y) = A|| K(x,y) + D⊥(x,y) (A|| − A⊥)/A⊥ = A⊥ K(x,y) + D||(x,y) (A|| − A⊥)/A||.  (19)

Since A|| > A⊥, the second term on the right-hand side of the first form of Eq. (19) is always non-negative, and therefore ΔD(x,y) ≥ A|| K(x,y). Besides, ΔD(x,y) ≤ A⊥ K(x,y) + (A|| − A⊥)/A|| because D||(x,y) ≤ 1. In addition, ΔD(x,y) ≥ 0 because D⊥(x,y) ≤ D||(x,y). Therefore, the range of ΔD(x,y) is given by

ΔD(x,y) ≥ 0, for (A⊥ − A||)/(A|| A⊥) ≤ K(x,y) ≤ 0,
ΔD(x,y) ≥ A|| K(x,y), for 0 < K(x,y) ≤ 1/A||,
ΔD(x,y) ≤ A⊥ K(x,y) + (A|| − A⊥)/A||, for (A⊥ − A||)/(A|| A⊥) ≤ K(x,y) ≤ 1/A||.  (20)

We assume that there is a mathematical relation between K(x,y) and ΔD(x,y) that satisfies the inequalities in Eqs. (18) and (20). We fit this relation between ΔD(x,y) and K(x,y) in the following.

The inequalities in Eq. (20) define the triangular region shown in Fig. 1, which contains all possible relations between ΔD(x,y) and K(x,y). Since the region defined by Eq. (20) is an obtuse-angled triangle, an exponential function is, intuitively, a proper choice for fitting the relation between ΔD(x,y) and K(x,y). In addition, this relation has two characteristics. First, it can be seen from Eq. (19) that ΔD(x,y) generally increases with K(x,y). Second, K(x,y) can be negative, while ΔD(x,y) must always be non-negative. An exponential function fulfills both characteristics. Moreover, the exponential function is flexible and has only two coefficients to optimize, which helps reduce the computation time of the optimization.


Fig. 1 The triangle region defined by the inequalities in Eq. (20), which indicates the possible relation between ΔD and K.


According to the analysis above, the relationship between K(x,y) and ΔD(x,y) can be roughly fitted with an exponential function given by

ΔD̂(x,y) = a exp[b K(x,y)],  (21)
where a and b are two coefficients to be optimized; both should be greater than 0 to preserve the increasing trend of ΔD(x,y) with K(x,y). The role of a is to adjust the scale, while b adjusts the base of the exponential function. The true relation between ΔD(x,y) and K(x,y) lies inside the obtuse-angled triangle in Fig. 1. In order to approximate this true relation by Eq. (21), the coefficients a and b are adjusted so that the curve given by Eq. (21) coincides with this region.
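The following sketch implements the exponential model of Eq. (21) together with an element-wise check of the bounds of Eq. (20); the feasibility check is only an illustration of the triangular region in Fig. 1 and is not required by the optimization itself.

```python
# A minimal sketch of Eq. (21) and of the bounds of Eq. (20).
import numpy as np

def delta_D_model(K, a, b):
    """Eq. (21): exponential fit of the PD image of the target signal."""
    return a * np.exp(b * K)

def inside_region(K, delta_D, A_par, A_per):
    """Check the inequalities of Eq. (20) element-wise."""
    lower = np.maximum(0.0, A_par * K)            # 0 for K <= 0, A_par*K for K > 0
    upper = A_per * K + (A_par - A_per) / A_par   # upper boundary line
    return (delta_D >= lower) & (delta_D <= upper)
```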

3.3 Optimization of image quality

In this section, we describe our algorithm for optimizing the image quality. With the estimates of A, Pscat and the PD image of the target signal ΔD(x,y), we can recover the object radiance according to Eq. (14).

It can be seen from Eqs. (14) and (21) that there are three undetermined coefficients (a, b and ε) that have a significant impact on the quality of the recovered image. In our study, the measure of enhancement (EME) is used to quantify the image quality [17]; it is defined as

EME = |(1/(k1 k2)) Σ_{l=1}^{k2} Σ_{k=1}^{k1} 20 log[i^ω_{max;k,l}(x,y) / (i^ω_{min;k,l}(x,y) + q)]|,  (22)
where the image is split into k1 × k2 blocks indexed by (k, l) in the two dimensions, i^ω_{max;k,l} and i^ω_{min;k,l} are the maximum and minimum values in the block ω with index (k, l), and q (equal to 0.0001) is a small constant that avoids division by zero without considerably modifying the result of the EME. It should be clarified that the value of q we chose is one ten-thousandth of the maximum intensity, which is much smaller than the signal intensity values; therefore, the sensitivity of the result to q is very low.
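A minimal sketch of Eq. (22) is shown below; it assumes an image normalized to [0, 1] whose size is divisible by the block size (border handling is omitted) and uses a base-10 logarithm.

```python
# A minimal sketch of the EME measure of Eq. (22). The image is assumed to be
# normalized to [0, 1] with dimensions divisible by the block size; all-zero
# blocks are assumed not to occur.
import numpy as np

def eme(img, block=8, q=1e-4):
    h, w = img.shape
    k2, k1 = h // block, w // block            # number of blocks per dimension
    total = 0.0
    for l in range(k2):
        for k in range(k1):
            blk = img[l * block:(l + 1) * block, k * block:(k + 1) * block]
            total += 20.0 * np.log10(blk.max() / (blk.min() + q))
    return abs(total / (k1 * k2))
```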

In order to optimize the quality of the recovered image, we perform an exhaustive search [18,19] over the coefficients a, b and ε to find the values that maximize the EME. We then obtain ΔD̂(x,y) from the optimal values of a and b according to Eq. (21), and consequently the object radiance L(x,y) can be retrieved from ΔD̂(x,y) by Eq. (14). It should be clarified that our goal is to recover the object radiance L(x,y) given by Eq. (14), and we assume that L(x,y) corresponds to the recovered image of optimal quality (maximum EME).
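A minimal sketch of such an exhaustive search is shown below. Here `recover` is assumed to be a callable implementing Eqs. (17), (21) and (14) for a given parameter triple, and `eme` a callable implementing Eq. (22); the ranges in the usage example are illustrative only.

```python
# A minimal sketch of the exhaustive search over (a, b, epsilon) that keeps the
# parameter triple maximizing the EME of the recovered image.
import itertools
import numpy as np

def exhaustive_search(recover, eme, a_vals, b_vals, eps_vals):
    best_score, best_params, best_img = -np.inf, None, None
    for a, b, eps in itertools.product(a_vals, b_vals, eps_vals):
        img = recover(a, b, eps)
        score = eme(img)
        if np.isfinite(score) and score > best_score:
            best_score, best_params, best_img = score, (a, b, eps), img
    return best_params, best_img, best_score

# Hypothetical usage:
# params, L_opt, score = exhaustive_search(recover, eme,
#                                          np.arange(0.01, 5.0, 0.1),
#                                          np.arange(0.01, 5.0, 0.1),
#                                          np.arange(0.5, 1.5, 0.05))
```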

The reasons for choosing the EME as the criterion for our application are as follows [17]: 1) the EME essentially measures the Weber contrast for images with complex structures and details, and the Weber contrast is a common criterion for image quality [20]; 2) the EME increases and decreases monotonically with the image quality; 3) the EME has distinct peaks (local maxima), which makes it easy to find the parameters that yield the optimal image quality; 4) the EME algorithm is simple, which is beneficial when optimizing multiple parameters. There are also other criteria for evaluating image quality, such as the entropy [21], and these criteria are largely consistent with one another. In other words, an image with the optimal EME value also tends to have a good (close to optimal) value of entropy or of other criteria.

4. Real-world experiment

Let us now consider a real-world underwater imaging scenario. The experimental setup is shown in Fig. 2. The light source is a He-Ne laser with a wavelength of 632.8 nm. The imaging device is a 14-bit digital CCD camera (AVT Stingray F-033B) with 492 × 656 pixels. A linear polarizer is placed in front of the laser to produce linearly polarized light, and a rotating linear polarizer is placed in front of the CCD as the analyzer. We used a transparent PMMA tank filled with water, and we made the water turbid by adding milk to the clear water.


Fig. 2 Experimental setup for underwater imaging.


In our case, the target consists of a metal coin attached to a plastic cube. The target is immersed in the turbid water, and the intensity image of this scene is shown in Fig. 3, with an EME value of 0.6. It can be seen that the visibility in Fig. 3 is poor and the details of the scene are severely degraded in the intensity image.


Fig. 3 Intensity image of the scene.


We obtain I||(x,y) and I⊥(x,y) at two orthogonal orientations of the polarizer in front of the CCD, as shown in Fig. 4. It can be seen that the co-linear image in Fig. 4(a) is more visible than the intensity image in Fig. 3. In order to estimate A and Pscat, it is necessary to find a target-free region in the image from which to obtain A|| and A⊥. In our experiments, this estimation is realized by averaging the intensity values in the designated region, which is indicated by the yellow rectangle in Figs. 4(a) and 4(b).


Fig. 4 (a) Co-linear image I||(x,y). (b) Cross-linear image I⊥(x,y). The yellow square marks the background region used to estimate the intensity of the backscatter at infinity and the DOP of the backscatter.


An important point to notice in Fig. 4 is that there is a large difference between I||(x,y) and I⊥(x,y) in the region corresponding to the metal coin, while the difference for the plastic cube is much smaller. This indicates that the depolarization degree of the metal coin is low, while that of the plastic cube is high. Moreover, it can be seen that the intensity inside the yellow square in I||(x,y) is greater than that in I⊥(x,y), which also verifies the assumption that B||(x,y) ≥ B⊥(x,y) in our case.

The methods in previous works assumed that the light emanating from the objects in the scene is unpolarized [6,14–16], which means ΔD(x,y) = 0, or equivalently D||(x,y) = D⊥(x,y). It should be clarified that the configuration in Ref. [6] is different from that of this paper: Ref. [6] assumes natural illumination and diffusive objects, while our work uses a polarized illumination source and a metal object to enhance the polarization difference, which demonstrates the feasibility and superiority of our method more clearly. In order to compare our method with previous ones, we first assume that ΔD(x,y) = 0. According to Eq. (11), the transmittance t(x,y) is then substantially underestimated in the image areas corresponding to objects with ΔD(x,y) greater than 0. In particular, the estimation produces negative values of t(x,y) at the pixels corresponding to the coin [as shown in Fig. 5(a)], which contradicts the physical meaning of the transmittance given by Eq. (2). Consequently, negative values of L(x,y) are obtained, which cause the abnormal dark region in the recovered image of L(x,y) shown in Fig. 5(b).


Fig. 5 The deduced (a) transmittance t(x,y) and the recovered (b) radiance of the objects L(x,y), assuming that the light emanating from objects in the scene is unpolarized.


Besides, since the logarithmic function in the EME of Eq. (22) can only handle positive intensity values, the EME of the recovered object radiance L(x,y) in Fig. 5(b) is not available, owing to the negative values of L(x,y) in the coin region.

Let us now take the polarization characteristics of the object radiance into account. As mentioned above, the PD image of the target signal ΔD(x,y) is estimated with an exponential function according to Eq. (21), and the object radiance L(x,y) is then deduced according to Eq. (14). Therefore, L(x,y) is a function of a, b and ε.

To optimize the quality of the recovered image, we search over a, b and ε to maximize the EME given by Eq. (22), and we increase the efficiency of the algorithm by using a varied-step, gradual-iteration method [22]. First, a 5 × 5 × 5 wide region of the (a, b, ε) space is swept with a resolution step of 1, and the highest local maximum is kept. Then, around this local maximum, a 1 × 1 × 1 wide region is swept with a resolution step of 0.1, and the coefficients leading to the highest quality are kept. Finally, around these coefficients, a 0.1 × 0.1 × 0.1 wide region is swept with a resolution step of 0.01, and the state leading to the highest quality is kept. For our Matlab program running on a computer with an Intel Pentium G3220 @ 3.00 GHz processor, the iterative algorithm takes about 70 s to find the optimal values of a, b and ε in our case. The optimal values are found to be a = 0.10, b = 2.53 and ε = 1.09.
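A minimal sketch of this varied-step refinement is given below. `objective(a, b, eps)` is assumed to return the EME of the image recovered with those coefficients; the starting point is an assumption of the sketch, and the positivity constraint on a and b is omitted for brevity.

```python
# A minimal sketch of the coarse-to-fine search: a 5-wide grid at step 1, a
# 1-wide grid at step 0.1, then a 0.1-wide grid at step 0.01, each centered on
# the previous best point.
import itertools
import numpy as np

def coarse_to_fine(objective, start=(2.5, 2.5, 1.0)):
    best = np.asarray(start, dtype=float)
    for width, step in [(5.0, 1.0), (1.0, 0.1), (0.1, 0.01)]:
        axes = [np.arange(c - width / 2.0, c + width / 2.0 + step / 2.0, step)
                for c in best]
        candidates = list(itertools.product(*axes))
        scores = [objective(*p) for p in candidates]
        best = np.asarray(candidates[int(np.argmax(scores))])
    return tuple(best)
```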

Figure 6 shows the retrieved transmittance t(x,y) and the object radiance L(x,y) at the optimal values of a, b and ε, obtained with the method discussed in Section 3. It can be seen that the visibility is greatly improved, and the image quality is enhanced to EME = 2.3. Compared with the intensity image [see Fig. 3], the details of the objects in the scene become much clearer and the veiling is almost removed. Most significantly, our method overcomes the problem of recovering L(x,y) in the area corresponding to the coin shown in Fig. 5(b). Besides, as shown in Fig. 6(b), the radiance of the coin is brighter than that of the plastic cube and the background, which is consistent with the fact that the metal coin has a higher reflectivity than the plastic cube.


Fig. 6 The retrieved (a) transmittance t(x,y) and (b) radiance of the objects L(x,y) by our method discussed in Section 3.


In order to show the details of the image better, Fig. 7 displays enlarged views of parts of Fig. 3 and Fig. 6. It can be seen in Fig. 7 that the images recovered by our method reveal details of the scene that are not visible in the intensity images, both in the area corresponding to the object with a high depolarization degree (plastic cube) and in that with a low depolarization degree (metal coin).


Fig. 7 Enlarged views of parts of the intensity image (Fig. 3), the co-linear image [Fig. 4(a)] and the recovered image [Fig. 6(b)], corresponding to the regions of the plastic cube (a) and the metal coin (b), respectively.


In order to verify the generality of our method for underwater image recovery, we performed another experiment in which the target consists of a metal ruler and a disc with some words on it. The co-linear image, the cross-linear image and the intensity image are shown in Figs. 8(a)-8(c), respectively, while the image recovered by our method is shown in Fig. 8(d). It can be seen that the details are blurred in the intensity image, while those in the recovered image become much clearer, and the image quality is enhanced to EME = 2.8. The experimental result in Fig. 8 further demonstrates the effectiveness of our method.


Fig. 8 (a) Co-linear image. (b) Cross-linear image. (c) The intensity image of the scene. (d) The recovered image of the scene by our method.


5. Conclusion

In this paper, we additionally consider the contribution of the object radiance to the polarization of the light, which is neglected by previous methods, and we propose a method of underwater image recovery based on estimating the polarized-difference (PD) image of the target signal, which successfully addresses the problem of underwater vision. The results of the real-world experiments show that it is feasible to improve and optimize the quality of the recovered image with our method, and that the detail information and visual quality of the recovered image are enhanced. In particular, the experimental results show that our method considerably improves underwater image quality regardless of whether the depolarization degree of the object is high or low, whereas previous methods can fail when the depolarization degree of the object is low. Therefore, the proposed method is particularly effective when both the object radiance and the scattered light contribute to the polarization, such as in underwater detection of man-made objects. Besides, the proposed method does not need any prior information about the scene other than a designated background region.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSFC) (No. 61405140), the National Instrumentation Program (No. 2013YQ030915), the Natural Science Foundation of Tianjin (No. 15JCQNJC02000), and the Project Sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars. Haofeng Hu acknowledges the Fondation Franco-Chinoise pour la Science et ses Applications (FFCSA) and the China Scholarship Council (CSC).

References and links

1. S. Sabbah, A. Lerner, C. Erlick, and N. Shashar, “Under water polarization vision—a physical examination,” Recent Res. Dev. Exp. Theor. Biol. 1, 123–176 (2005).

2. G. N. Bailey and N. C. Flemming, “Archaeology of the continental shelf: marine resources, submerged landscapes and underwater archaeology,” Quat. Sci. Rev. 27(23-24), 2153–2165 (2008).

3. L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. 15(2), 81–93 (1997).

4. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).

5. E. Trucco and A. T. Olmos-Antillon, “Self-tuning underwater image restoration,” IEEE J. Oceanic Eng. 31(2), 511–519 (2006).

6. Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005).

7. J. Liang, L. Ren, H. Ju, W. Zhang, and E. Qu, “Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization,” Opt. Express 23(20), 26146–26157 (2015).

8. J. Liang, L. Ren, H. Ju, E. Qu, and Y. Wang, “Visibility enhancement of hazy images based on a universal polarimetric imaging method,” J. Appl. Phys. 116(17), 173107 (2014).

9. N. Agarwal, J. Yoon, E. Garcia-Caurel, T. Novikova, J.-C. Vanel, A. Pierangelo, A. Bykov, A. Popov, I. Meglinski, and R. Ossikovski, “Spatial evolution of depolarization in homogeneous turbid media within the differential Mueller matrix formalism,” Opt. Lett. 40(23), 5634–5637 (2015).

10. J. C. Stover, Optical Scattering: Measurement and Analysis (SPIE, 1995).

11. J. S. Jaffe, “Computer modeling and the design of optimal underwater imaging systems,” IEEE J. Oceanic Eng. 15(2), 101–111 (1990).

12. S. Sabbah and N. Shashar, “Light polarization under water near sunrise,” J. Opt. Soc. Am. A 24(7), 2049–2056 (2007).

13. J. Cariou, B. L. Jeune, J. Lotrian, and Y. Guern, “Polarization effects of seawater and underwater targets,” Appl. Opt. 29(11), 1689–1695 (1990).

14. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).

15. M. Dubreuil, P. Delrot, I. Leonard, A. Alfalou, C. Brosseau, and A. Dogariu, “Exploring underwater target detection by imaging polarimetry and correlation techniques,” Appl. Opt. 52(5), 997–1005 (2013).

16. Y. Y. Schechner and Y. Averbuch, “Regularized image recovery in scattering media,” IEEE Trans. Pattern Anal. Mach. Intell. 29(9), 1655–1660 (2007).

17. S. S. Agaian, K. Panetta, and A. M. Grigoryan, “Transform-based image enhancement algorithms with performance measure,” IEEE Trans. Image Process. 10(3), 367–382 (2001).

18. M. Boffety, H. Hu, and F. Goudail, “Contrast optimization in broadband passive polarimetric imaging,” Opt. Lett. 39(23), 6759–6762 (2014).

19. B. Huang, T. Liu, J. Han, and H. Hu, “Polarimetric target detection under uneven illumination,” Opt. Express 23(18), 23603–23612 (2015).

20. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032–2040 (1990).

21. A. P. Jacquemin and C. H. Berry, “Entropy measure of diversification and corporate growth,” J. Ind. Econ. 27(4), 359–369 (1979).

22. G. Anna, F. Goudail, and D. Dolfi, “Polarimetric target detection in the presence of spatially fluctuating Mueller matrices,” Opt. Lett. 36(23), 4590–4592 (2011).
