## Abstract

Polarization navigation is a promising orientation-determination method inspired by insects’ foraging behavior that offers the advantages of autonomy and high precision. In this paper, the use of the solar meridian as an azimuth reference is proposed. A model of the distribution pattern of the polarized skylight projected onto an imaging sensor is analyzed, and sufficient features of the solar meridian are proven. Based on these features, an angle algorithm for an imaging polarization navigation sensor built on a machine-vision algorithm is proposed. In consideration of noise in the images, the relation between the measured angle and the image noise is modeled. This model can not only optimize the threshold tolerance *R* in the algorithm but also describe the effects of several primary factors that affect the measuring precision. In the simulation test, the measurement accuracy was better than 0.34°. When the algorithm was tested on the polarization-detection system, the measurement accuracy was better than 0.37°.

© 2015 Optical Society of America

## 1. Introduction

Polarization navigation is a promising navigation method inspired by insects’ foraging behavior. When studying this behavior, scientists discovered that insects can return home along an approximately straight line despite an irregular foraging path extending about 200 m. In the foraging process, an insect's vision system detects and uses scattered polarized skylight as an azimuth reference [1,2]. Since the orientation ability of insects was discovered, the polarization-sensitive structures of insects and the pattern of polarized skylight have been studied in detail. Gábor Horváth measured the pattern of polarized skylight in the atmosphere under different weather conditions and found that the pattern of the polarization angle is stable. Meanwhile, the distribution pattern of polarized skylight was modeled based on Rayleigh's law of scattering [3–6]. Based on these research results, several polarization navigation sensors were developed [7–11]. These prototypes share the similarity that they all use photodiodes as photoelectric converters. Although the structure and working principles of these polarization sensors are simple, when the beams are blocked by obstructions such as clouds, their precision declines or the sensors even fail. Imaging polarization navigation sensors, however, can overcome these problems by detecting the polarized-light-vector field of the sky. With the abundant information from this field, higher precision and stronger interference resistance can be obtained. Imaging polarization navigation is thus the trend for developing new high-performance polarization navigation systems. At present, however, few researchers have studied algorithms for imaging polarization navigation sensors.

In this paper, a model of the distribution pattern of polarized skylight projected onto an imaging sensor is analyzed. Sufficient conditions for the solar meridian, which is the only line through the sun in an angle-of-polarization (AoP) image, are demonstrated. Based on the analysis results, the design and implementation of an angle algorithm for imaging polarization navigation sensors based on the Hough transform (HT) are discussed. To implement the algorithm, the relation between the measured angle and the noise in the images was modeled. The relationship between several factors and the measurement precision is discussed, and a model describing this relationship is given. The experimental results, discussed in the last section, indicate that the performance of this algorithm is better than that of traditional algorithms.

## 2. Azimuth reference in the pattern of polarized skylight

#### 2.1 Pattern of polarized skylight projected onto imaging sensor

Studies have shown that insects can determine their orientation by detecting the pattern of polarized skylight. The polarization of scattered skylight is caused by particles in the atmosphere that scatter sunlight. According to the scale of the particles, scattering in the atmosphere can be divided into Mie scattering and Rayleigh scattering. Mie scattering is produced by particles whose scales are approximately equal to the wavelength of light. Rayleigh scattering is caused by particles whose scales are far smaller than the wavelength of light, such as gas molecules in the atmosphere. Because gas molecules are widespread, Rayleigh scattering is stable and regularly distributed in the atmosphere [3]. This distribution provides a precise orientation reference for polarization navigation.

Figure 1 shows the principle of Rayleigh scattering. The E-vector of the scattered beam is perpendicular to the plane defined by the incident light and the scattered light. In fact, the light received by the observer may be scattered several times. However, as [3] pointed out, the single-scattering Rayleigh (SSR) model can describe a large portion of the sky, and these areas can be utilized for polarization navigation, as shown in Fig. 2 [3]. Thus, we use the SSR model to simplify the distribution pattern of polarized skylight. From spherical trigonometry, the angle *α* of the E-vector of the light, measured clockwise from the local meridian, can be expressed as Eq. (1), where *h _{p}* and *h _{s}* are the elevation of the observation point and the solar elevation, respectively, and *φ _{p}* and *φ _{s}* denote the angular distances of the observation point and the solar meridian from the *x*-axis, respectively. The *x*-axis and *y*-axis are along the *x*-orientation and *y*-orientation of the imaging sensor, respectively, the *z*-axis points along the optical axis of the lens, and the origin of the coordinate system is fixed to the principal point of the sensor. We consider the lens model as a rectangular projection [12].
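The SSR geometry described above can be illustrated numerically: under single scattering, the E-vector is perpendicular to the scattering plane spanned by the view direction and the sun direction, so the AoP of a sky point follows directly from a cross product. The sketch below is a minimal illustration under stated assumptions (a zenith-pointing camera, unit direction vectors in the camera frame); the function name and interface are hypothetical, not the authors' code.

```python
import math

def aop_ssr(view, sun):
    """Angle of polarization (deg) of a sky point under the single-scattering
    Rayleigh (SSR) model, measured from the local meridian.

    view, sun: unit 3-vectors in the camera frame (z along the optical axis,
    which is assumed to point at the zenith).  The E-vector is perpendicular
    to the scattering plane spanned by the view and sun directions.
    """
    # E-vector direction: normal of the scattering plane (view x sun)
    ex = view[1] * sun[2] - view[2] * sun[1]
    ey = view[2] * sun[0] - view[0] * sun[2]
    # Project the E-vector and the local meridian onto the image (x, y) plane.
    # For a zenith-pointing camera the local meridian projects radially,
    # i.e. along (view_x, view_y).
    mx, my = view[0], view[1]
    # Signed angle between the projected E-vector and the projected meridian,
    # folded into [0, 180) degrees
    cross = mx * ey - my * ex
    dot = mx * ex + my * ey
    return math.degrees(math.atan2(cross, dot)) % 180.0
```

For a view direction sharing the sun's azimuth (i.e., a point on the solar meridian), the function returns 90°, consistent with the feature proven in Section 2.2.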

#### 2.2 Sufficient features of solar meridian

It is obvious that the angle of the E-vector of the light from the solar meridian is 90°.

This section will prove that if the angles of the E-vectors of the beams equal 90°, then the projections of these beams are on the solar meridian.

In rectangular projection, the projection vector of vector **h** is ${h}_{xy}={\left[{x}_{h},{y}_{h}\right]}^{\text{T}}$ and the projection vector of vector **s** is ${s}_{xy}={\left[\mathrm{cos}{\phi}_{s},\mathrm{sin}{\phi}_{s}\right]}^{\text{T}}$, which gives Eq. (2). Substituting (2) into (1) yields Eq. (3). Because $\alpha ={90}^{\circ}$ and the numerator is not equal to zero, Eq. (4) follows, which means the point (*x _{h}*, *y _{h}*) is on the solar meridian. Thus, if the angle of the E-vector of a beam equals 90°, the projection of this beam must be on the solar meridian.

Hence, we can conclude three features of the solar meridian: an E-vector angle of 90°, a straight line, and passage through the principal point.

## 3. Design of algorithm

According to Eq. (4), there are three features of the solar meridian, as discussed above. These three features are sufficient conditions to identify a line as the solar meridian. Thus, the problem of solving the azimuth can be divided into two sub-problems: extracting the points whose values are 90° and detecting a line that goes through the principal point. The procedure of the algorithm is shown in Fig. 3. We can use a simple threshold-extraction method to distinguish the points whose values are around 90°. Among line-detection methods, the HT has the advantage of robustness, but at the expense of large computation costs. Fortunately, if the line goes through the principal point, the HT reduces to a 1-D HT, which significantly lowers the computation cost.

The threshold extraction is expressed as Eq. (5), where *p_angle* is the angle of the E-vector image and *imaBin* is the binary image of the solar meridian. Because of noise, there are no exact 90° points, so a tolerance *R* is needed. In addition, the solar meridian consisting of these points in the image should be extracted with the HT, expressed in Eq. (6), because of its good anti-interference performance [13].
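The threshold extraction of Eq. (5) amounts to a per-pixel comparison against the tolerance *R*. A minimal sketch, assuming the AoP image is a 2-D list of values in degrees (the function name is illustrative, not from the paper):

```python
def threshold_extract(p_angle, R):
    """Binarize an angle-of-polarization image (degrees): pixels whose
    E-vector angle lies within tolerance R of 90 deg are set to 1,
    all others to 0 (cf. Eq. (5))."""
    return [[1 if abs(v - 90.0) <= R else 0 for v in row] for row in p_angle]
```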

In Eq. (6) ${X}_{i}=\left[\begin{array}{cccc}{x}_{1i}& {x}_{2i}& \dots & {x}_{Ni}\end{array}\right]$ are the feature points defined in an *N*-dimensional feature space and *i* is the index of the feature point. $\Omega =\left[\begin{array}{cccc}{\omega}_{1}& {\omega}_{2}& \dots & {\omega}_{M}\end{array}\right]$ defines a point in the *M*-dimensional parameter space. The function *k* is the HT kernel [14].

In the algorithm, the feature points are defined in the two-dimensional image space, and there is only one parameter *θ* in the 1-D HT, since the line goes through the principal point. Function *k* is the traditional Hat HT kernel function. ${X}_{i}=\left({x}_{i},{y}_{i}\right)$ consists of the coordinates of all points whose values equal 1 in the image *imaBin(i,j)*. The 2-D HT thus reduces to a 1-D HT with just one parameter to measure, expressed as

Because the solar meridian is the longest line in the image, the maximum point in the parameter space $H\left(\theta \right)$ must be the angle of the solar meridian. The computational complexity of the 1-D HT is far less than that of the 2-D HT for a straight line: for an HT processing *n* binary points with an angle resolution of *s*, the complexity is *n* in the 1-D HT and *n* × *s* in the 2-D HT.
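The 1-D HT described above can be sketched as follows: each foreground pixel votes exactly once, for the angle of the ray joining it to the principal point. This is an illustrative implementation under the stated assumptions (known principal point, angle resolution `step` in degrees); the interface is hypothetical, not the authors' code.

```python
import math

def hough_1d(ima_bin, cx, cy, step=1.0):
    """One-parameter Hough transform for a line through the principal
    point (column cx, row cy): each foreground pixel votes once for the
    angle of the ray joining it to the principal point, quantized with
    resolution `step` (degrees).  Returns (accumulator, best_angle)."""
    nbins = int(round(180.0 / step))
    acc = [0] * nbins
    for i, row in enumerate(ima_bin):
        for j, v in enumerate(row):
            if v and (i != cy or j != cx):
                # angle of the ray through (cx, cy), folded into [0, 180)
                theta = math.degrees(math.atan2(i - cy, j - cx)) % 180.0
                acc[int(round(theta / step)) % nbins] += 1
    best = max(range(nbins), key=lambda k: acc[k])
    return acc, best * step
```

Because every pixel casts a single vote, processing *n* foreground points costs O(*n*) regardless of the angular resolution, matching the complexity argument above.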

## 4. Effects of key parameters on measurement precision metric

Though the algorithm is simple in theory, many factors that could affect it should be taken into consideration. In this section, we discuss the effects and selection criteria of the key parameters, including the solar-zenith angle, the signal-to-noise ratio (SNR) of the image, the maximal degree of linear polarization (DoLP), and the threshold tolerance *R*. Figure 4 shows the relationship between the key parameters and the error-distribution interval. Because the first three parameters are determined by the hardware of the detection system and the weather conditions, we only discuss their effects, to provide guidance on the design of the polarization-detection system. Since the threshold tolerance *R* is a parameter we can adjust in the algorithm, we discuss not only its effect but also its selection criterion. In this process, we also designed a filter to address an “inherent defect” of the HT when the resolution requirement is high. To study these relationships, a model considering the noise level of the input data and the error propagation of the HT is established. We use the probabilities of the measurement values, or the expectation of the number of points collected by bins in the parameter space, to express the precision metric of the algorithm.

#### 4.1 Operating conditions and error model of measurement

### 4.1.1 Operating conditions of algorithm

To study the effect of key parameters on measurement precision, we need a model as a tool, and to run the model, we also need the operating conditions of the algorithm. Based on the achievable performance of the instrument and the distribution of the DoLP in the clear sky [19], we modeled the source image according to Table 1.

### 4.1.2 Noise model of source image

The source image of the algorithm is the image of the angle of the E-vector with noise. For the sake of simplicity, we assume the noise is additive, Gaussian, and white [15]. Therefore, the value of an arbitrary pixel (*i, j*) in the image can be regarded as an independent random variable *X _{i,j}* whose mean is the ideal value of the angle *α _{i,j}* of the E-vector at pixel (*i, j*) and whose variance is expressed in Eq. (10).

First, we analyzed the mean of *X _{i,j}*. Beams received by the detector are described by an SSR model as defined in Eq. (1).

In the SSR model, the beams are defined by *h _{p}* and *φ _{p}*. However, the angle of the E-vector image is expressed in Cartesian coordinates. We used a pinhole camera model to express the geometric relationships of the beams in Cartesian coordinates; the pinhole camera model projects according to the rectangular projection, as shown in Fig. 5. From the pinhole camera model, the angle of the E-vector image is described as follows:

Generally, the polarization-angle image is calculated from a set of intensity images instead of being measured directly. The noise in the angle image is related to the structure of the measurement system, and the noise in the intensity images is independent and Gaussian. Because the Stokes parameters are linear transformations of the intensity images of the incident light, the distribution of the Stokes parameters is Gaussian but not always independent. Reference [15] gives an approximate variance of the noise in an angle image computed by the pseudo-inverse estimator method, Eq. (10), which serves as an approximate reference for estimating the noise level in the source data.

where *P* denotes the DoLP and *N* denotes the number of intensity images. It must be noted that Eq. (10) is only valid when *P _{i,j}* is sufficiently large. When the incident light is depolarized (*P _{i,j}* = 0), *α _{i,j}* follows a Cauchy distribution. In practice, points with a *P _{i,j}* value less than 4% can be avoided in polarization navigation, as suggested in [16]. Thus Eq. (10) can describe the noise in AoP images for polarization navigation.

### 4.1.3 Error propagation in algorithm

The final output of the error-propagation model, which expresses the precision metric of the algorithm under noisy input, is the expectation of all values ${\mu}_{\theta}$ in the parameter space (Eq. (14)) and the probability $p\left({N}_{{\theta}_{0}}\ \text{is the maximum}\right)$ that the parameter point ${\theta}_{0}$ is the maximal point in the parameter space.

The algorithm contains two steps: threshold extraction and HT. The AoP image is the input data of the threshold extraction and the output data of the threshold extraction are the input data of HT.

First, we analyzed the statistical characteristics of the threshold extraction. Threshold extraction is a binarization program that sets values of the AoP in the interval $\left[\frac{\text{\pi}}{2}-R,\frac{\text{\pi}}{2}+R\right]$ to 1 and all other values to 0, and then outputs the binary result. Because the random variables *X _{i,j}* are independent, the event that the value of a point falls in the interval is an independent Bernoulli trial. Consequently, an image called the probability binary (PB) image was generated to express the probability that each binary point is true. The values in the PB image are expressed as

*X _{i,j}*, which is expressed as

Because the value *N _{θ}* in the HT parameter space is a random variable, the sum of the number of points that satisfy Eq. (7) and Eq. (8), with the statistical expression of Eq. (13), it is the result of a set of independent experiments similar to Bernoulli experiments.
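For Gaussian pixel noise, the per-pixel success probability entering the PB image can be computed from the normal CDF. A minimal sketch, assuming *X _{i,j}* ~ N(α, σ²) with all quantities in degrees (function names are illustrative):

```python
import math

def norm_cdf(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def pb_pixel(alpha, sigma, R):
    """Probability that a pixel survives the threshold extraction when its
    AoP value is Gaussian with mean alpha and std sigma (degrees):
    p = P(90 - R <= X <= 90 + R)."""
    return norm_cdf((90.0 + R - alpha) / sigma) - norm_cdf((90.0 - R - alpha) / sigma)
```

A pixel whose ideal angle is exactly 90° survives with probability 2Φ(R/σ) − 1, while pixels far from 90° contribute essentially zero, which is the basis of the Bernoulli-trial view above.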

In fact, *N _{θ}* is not subject to the Bernoulli binomial distribution. By the definition of Bernoulli experiments, the trials must be independent and conducted under identical conditions. However, the stochastic process in this paper is not stationary, because the probabilities in the PB image differ from pixel to pixel. Following [17], this stochastic process can be described by the Poisson binomial distribution. The mean, variance, and skewness of the Poisson binomial distribution of *N _{θ}* are

Because of these statistical characteristics in the parameter space, we calculate the distribution of all *N _{θ}* in the parameter space to obtain the distribution range of the solar azimuth measured by the HT. Though computing the exact Poisson binomial distribution is too costly for our purpose, and the Poisson and normal approximations have too many limitations, the refined normal approximation (RNA) in [18] works well for us. The RNA, which approximates the cumulative distribution function (CDF) of the Poisson binomial distribution, is defined as

For a certain point in the parameter space, $p\left({N}_{{\theta}_{0}}\ \text{is the maximum}\right)$ stands for the probability of the event that ${N}_{{\theta}_{0}}$ is the maximum in the parameter space, where the number of points collected by bin *θ _{0}* of the HT (*θ _{0}* = 1, 2, …, n, where n is the resolution of the HT) is written as ${N}_{{\theta}_{0}}$. Its complementary event is that ${N}_{{\theta}_{0}}$ is not the maximum, the probability of which is written as $p\left({N}_{{\theta}_{0}}\ \text{is not the maximum}\right)$. The event that ${N}_{{\theta}_{0}}$ is not the maximum consists of a set of mutually exclusive events in which ${N}_{{\theta}_{0}}$ is not the maximum under the condition ${Z}_{{\theta}_{0}}=z$, where ${Z}_{{\theta}_{0}}$ is the maximum in the parameter space excluding the point *θ _{0}*. Its probability is written as $p\left({N}_{{\theta}_{0}}\ \text{is not the maximum}\ |\ {Z}_{{\theta}_{0}}=z\right)$ and expressed by

In the 1-D HT, each point (*i,j*) votes just once; the mapping is one-to-one. Consequently, the counts of different bins, ${N}_{{\theta}_{1}}$ and ${N}_{{\theta}_{0}}$ (${\theta}_{1}\ne {\theta}_{0}$), are determined by completely disjoint sets of points in (*i,j*) space. Thus, these counts are independent of each other. From the rules of probability, the CDF of the maximum ${Z}_{{\theta}_{0}}$ in the parameter space excluding the point *θ _{0}* obeys the following rule:

The probability mass function (PMF) of the maximum ${Z}_{{\theta}_{0}}$ except *θ _{0}* is the difference of the CDF of ${Z}_{{\theta}_{0}}$:

According to the law of total probability,

Thus we obtain the probability that ${\theta}_{0}$ is the maximum in the parameter space.
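Using the independence of the bin counts, the probability that bin *θ _{0}* holds the strict maximum can be evaluated directly by the law of total probability. The sketch below is an illustration with explicit per-bin PMFs (not the paper's implementation, which uses the RNA-approximated CDFs):

```python
def prob_is_max(pmfs, idx):
    """Probability that bin `idx` holds the strict maximum count, for
    independent bins whose PMFs are given as lists pmfs[theta][n].
    For each possible count n of bin idx, all other bins must stay
    strictly below n; by independence their joint CDF is a product."""
    others = [pmfs[t] for t in range(len(pmfs)) if t != idx]
    total = 0.0
    for n, p in enumerate(pmfs[idx]):
        if p == 0.0:
            continue
        prod = 1.0
        for q in others:
            prod *= sum(q[:n])  # P(N_theta <= n - 1)
        total += p * prod
    return total
```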

#### 4.2 Effects of solar-zenith angle, SNR of image, and maximal DoLP

The solar-zenith angle, the SNR of the image, and the maximal DoLP are determined by the hardware of the detection system and by aerosols in the atmosphere [19], so here we only discuss their effects to provide guidance for the design and use of the polarization-detection system.

According to Fig. 6, the larger the SNR, maximal DoLP, or solar-zenith angle, the higher the precision. The SNR of the image is one of the essential factors. When the SNR is less than 80 dB, there will not be an effective peak in the parameter space. The SNR of the image can be decomposed into two parts: the SNR of the camera and the mean intensity of the image. The SNR of the camera is defined as $20\mathrm{lg}\raisebox{1ex}{$I$}\!\left/ \!\raisebox{-1ex}{${\sigma}^{2}$}\right.$ [20], where *I* denotes the mean intensity of the image, so the SNR of the image can be expressed as $20\mathrm{lg}\raisebox{1ex}{${I}^{2}$}\!\left/ \!\raisebox{-1ex}{${\sigma}^{2}$}\right.=20\mathrm{lg}\raisebox{1ex}{$I$}\!\left/ \!\raisebox{-1ex}{${\sigma}^{2}$}\right.+20\mathrm{lg}I$. For all cameras, a larger intensity results in a higher image SNR. Thus, a small *F*-number of the optical system and fine adjustment will enhance the performance of the polarization-navigation system.

The DoLP is another important factor that affects the performance of the algorithm. According to Eq. (10), the variance of the E-vector image is determined by the DoLP. Consequently, a larger maximal DoLP raises the global DoLP and decreases the mean noise in the image. A larger solar-zenith angle equates to a larger angular distance from the sun, and areas at larger angular distances have a higher DoLP and lower noise. Thus, a larger solar-zenith angle improves the precision.

#### 4.3 Effect and selection criterion of threshold tolerance R

The threshold tolerance *R* is a critical parameter that determines the performance of the algorithm; too large or too small an *R* will degrade it. The conclusion that the E-vector angle of all points on the solar meridian is 90° is only valid in ideal conditions. If a tolerance of 0° around 90° were chosen, the result of the threshold extraction would be a null image, even in simulation. Thus, a tolerance *R* must be included in the threshold extraction, as in Eq. (5). If *R* is too small, there will be no effective peak, as shown in Fig. 7(a). If *R* is too large, too many interference points will be involved in the calculation to obtain a precise result. To determine the minimal *R*, we tested it under the condition that the maximal DoLP is 35% and the solar-zenith angle is 30°; the minimal *R* should be larger than 0.01°. If *R* is too large, it will cause a serious “false peak” phenomenon. Figure 7 shows the expected result in the parameter space. Figure 7(a) shows the algorithm with an *R* of 0.02°; the expectation curve is smooth. If *R* < 0.02°, there will hardly be a significant peak. Figure 7(b) shows the algorithm with *R* = 0.1°. There is a “false peak” at 129.8°: when *R* reaches 0.1°, the glitch grows extremely high and exceeds the actual peak, so that with high probability (about 100%) the result will be 129.8°. This problem is caused by the quantization of the HT, which is an “inherent defect” of the HT.

Figure 8 shows the result of a special HT that transforms all points in an image, regardless of their values. There are glitches and special peaks, such as at 45°, 90°, and 135°, where 1024 or 1023 points are gathered by the corresponding bins. Because the image lies in an orthogonal coordinate system while the parameter space is expressed in a polar coordinate system, some bins may collect more points, as shown in Fig. 9. The points in Fig. 9 stand for pixels in the PB image. For example, if the true value is −0.1°, the probability of the points gathered by bin −0.1° is 90%, and the probability gathered by bin 0° is 50%. However, bin −0.1° can gather no more than 2 points, whereas bin 0° can gather 4 points. Thus, the expected number collected by bin 0° is 50% × 4 = 2, and the expected number collected by bin −0.1° is 90% × 2 = 1.8. In this example, the false peak is larger than the true one. Therefore, we should limit the peak number to below the mean of the curve in Fig. 8, which is about 45, to eliminate the interference of these points.

In conclusion, *R* must be large enough to ensure that there is an effective peak under the worst conditions; in this paper, 0.02° was chosen as the lower bound of *R*. On the other hand, *R* should be small enough that the expected peak stays below the mean of the bins' collection capacity in the parameter space, to avoid the “false peak” problem.

#### 4.4 Filter in parameter space

In the previous section, the recommendation was to choose a threshold tolerance *R* that is small rather than large. This leads to two problems. First, a small *R* produces a small peak, which is difficult to detect. Second, the “false peak” phenomenon is an “inherent defect” of the HT; we can diminish it but not eliminate it. Thus, a filter in the parameter space is used to diminish the “false peak” phenomenon and make the peak stand out. The filter is simple but effective: we use a mean filter in the parameter space. Figure 10 shows the expected numbers in the parameter space. The peak in Fig. 10(a) is too small to be distinguished from the interference noise, while the peak in Fig. 10(b) is higher and effective. The simulation results (see Fig. 11 and Fig. 12) indicate that the filter can enhance the performance of the algorithm. According to the simulation, given a true value of 130°, the algorithm without the filter has a mean of 129.953° and a variance of 0.0123°, whereas the algorithm with the filter has a mean of 129.989° and a variance of 0.0055°.
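The mean filter in the parameter space can be sketched as a sliding average over the circular accumulator: an isolated quantization glitch is averaged down more than a broad true peak. The window size and the sample data below are hypothetical, chosen only to illustrate the effect.

```python
def mean_filter(acc, w=2):
    """Sliding mean filter over the circular HT parameter space;
    the window spans 2*w + 1 bins and wraps around at 180 deg."""
    n = len(acc)
    return [sum(acc[(k + d) % n] for d in range(-w, w + 1)) / (2 * w + 1)
            for k in range(n)]

# Illustration: an isolated glitch (bin 5) versus a broad true peak (bins 10-12).
acc = [0.0] * 20
acc[5] = 12.0                               # tall but isolated "false peak"
acc[10], acc[11], acc[12] = 8.0, 9.0, 8.0   # lower but broad true peak
raw_argmax = max(range(20), key=lambda k: acc[k])            # picks the glitch
flt_argmax = max(range(20), key=lambda k: mean_filter(acc, 1)[k])  # picks the peak
```

Before filtering, the argmax lands on the glitch; after a 3-bin mean filter, the broad peak wins, mirroring the improvement reported in the simulation above.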

## 5. Verification experiment

#### 5.1 Simulation

The simulation was implemented for different solar azimuths over the full scale under the worst conditions (solar zenith 30° and maximal DoLP 20%) to estimate the error bounds (see Fig. 13 and Fig. 14). The maximal error was within 0.34°.

#### 5.2 Experiment results

The experiment was implemented with a detection system using a continuously spinning polarization analyzer as shown in Fig. 15 [21], and its performance index is given in Table 2. The optical parameters of the detection system were calibrated based on Zhang’s method given in [22].

The experiment was conducted in the morning of October 23, 2013 at 40° 0ʹ 22ʺ N, 116° 19ʹ 9ʺ E at Tsinghua University, Beijing. Seven sets of data were acquired every half hour from 7:30 am to 10:30 am at different solar-zenith angles and solar azimuths, resulting in 49 data points in total (see Fig. 17). The error of these data was less than 0.37°, as shown in Fig. 16.

## 6. Summary

In this paper, using the solar meridian as an azimuth reference was proposed for polarization navigation. From the analysis results, three features of the solar meridian were found: a 90° E-vector, a straight line, and passage through the principal point. Taking advantage of these features, we designed an algorithm to solve the solar azimuth, which contains two parts: threshold extraction and a 1-D HT. An error model for testing the effects of key parameters was established. According to the model, the effects of the SNR of the system, the maximal DoLP in the sky, and the solar zenith were discussed. The principle for choosing the threshold tolerance *R*, a key parameter, was also proposed. To overcome the “inherent defect” of the HT for high-precision measurement, a simple mean filter was introduced into the parameter space. The proposed algorithm was verified through both numerical simulation and experimental tests. The simulation results showed that the measurement accuracy is better than 0.34°, and the experimental results fall within 0.37° before any error compensation. This measurement accuracy is better than that of current polarization prototypes.

## Acknowledgments

This work is partially supported by a grant from the National High Technology Research and Development Program of China (863 Program) (No. 2013AA122601). The laboratory experiments were performed at the State Key Laboratory of Precision Measurement Technology and Instruments at Tsinghua University. We gratefully acknowledge the support of both institutions. We would like to thank Dr. Chuan Luo for manuscript revision. We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

## References and links

**1. **S. Rössel and R. Wehner, “Polarization vision in bees,” Nature **323**(6084), 128–131 (1986). [CrossRef]

**2. **T. Labhart and K. Keller, “Fine structure and growth of the polarization-sensitive dorsal rim area in the compound eye of larval crickets,” Naturwissenschaften **79**(11), 527–529 (1992). [CrossRef]

**3. **B. Suhai and G. Horváth, “How well does the Rayleigh model describe the E-vector distribution of skylight in clear and cloudy conditions? A full-sky polarimetric study,” J. Opt. Soc. Am. A **21**(9), 1669–1676 (2004). [CrossRef] [PubMed]

**4. **I. Pomozi, G. Horváth, and R. Wehner, “How the clear-sky angle of polarization pattern continues underneath clouds: full-sky measurements and implications for animal orientation,” J. Exp. Biol. **204**(Pt 17), 2933–2942 (2001). [PubMed]

**5. **R. Hegedüs, S. Åkesson, and G. Horváth, “Polarization patterns of thick clouds: overcast skies have distribution of the angle of polarization similar to that of clear skies,” J. Opt. Soc. Am. A **24**(8), 2347–2356 (2007). [CrossRef] [PubMed]

**6. **R. Hegedüs, S. Åkesson, R. Wehner, and G. Horváth, “Could Vikings have navigated under foggy and cloudy conditions by skylight polarization? On the atmospheric optical prerequisites of polarimetric Viking navigation under foggy and cloudy skies,” P. Roy. Soc. A: Math. Phy. **463**(2080), 1081–1095 (2007). [CrossRef]

**7. **J. Chahl and A. Mizutani, “Biomimetic attitude and orientation sensors,” IEEE Sens. J. **12**(2), 289–297 (2012). [CrossRef]

**8. **J. Chu, K. Zhao, T. Wang, and Q. Zhang, “Research on a novel polarization sensor for navigation,” in Proceedings of IEEE Conference on Information Acquisition (IEEE, 2007), pp. 241–246. [CrossRef]

**9. **J. Chu, K. Zhao, Q. Zhang, and T. Wang, “Construction and performance test of a novel polarization sensor for navigation,” Sens. Actuators A Phys. **148**(1), 75–82 (2008). [CrossRef]

**10. **D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, and R. Wehner, “A mobile robot employing insect strategies for navigation,” Robot. Auton. Syst. **30**(1), 39–64 (2000). [CrossRef]

**11. **W. Stürzl and N. Carey, “A fisheye camera system for polarisation detection on UAVs,” in *Proceedings of Computer Vision–ECCV 2012. Workshops and Demonstrations *(Springer, 2012), pp. 431–440.

**12. **D. Miyazaki, M. Ammar, R. Kawakami, and K. Ikeuchi, “Estimating sunlight polarization using a fish-eye lens,” IPSJ J. **49**, 1234–1246 (2008).

**13. **D. H. Ballard, “Generalizing the Hough transform to detect arbitrary shapes,” Pattern Recognit. **13**(2), 111–122 (1981). [CrossRef]

**14. **J. Princen, J. Illingworth, and J. Kittler, “Hypothesis testing: a framework for analyzing and optimizing Hough transform performance,” IEEE Trans. Pattern Anal. Mach. Intell. **16**(4), 329–341 (1994). [CrossRef]

**15. **F. Goudail and A. Bénière, “Estimation precision of the degree of linear polarization and of the angle of polarization in the presence of different sources of noise,” Appl. Opt. **49**(4), 683–693 (2010). [CrossRef] [PubMed]

**16. **H. Lu, K. Zhao, and Z. You, “Automatic detection system for skyligth polarized pattern,” Opt. Precis. Eng. **21**(2), 239–245 (2013). [CrossRef]

**17. **L. Le Cam, “An approximation theorem for the Poisson binomial distribution,” Pac. J. Math. **10**(4), 1181–1197 (1960). [CrossRef]

**18. **Y. Hong, “On computing the distribution function for the Poisson binomial distribution,” Comput. Stat. Data Anal. **59**, 41–51 (2013). [CrossRef]

**19. **J. Chu, W. Wang, Y. Cui, W. Zhi, and Q. Gao, “Measurement for influence of aerosols on polarized sky radiance,” Opt. Precis. Eng. **20**(3), 520–526.

**20. **EMVA, “EMVA Standard 1288, Release 3.0,” http://www.emva.org/cms/upload/Standards/Stadard_1288/EMVA1288-3.0.pdf (accessed 6 June 2014).

**21. **H. Lu, K. Zhao, Z. You, and Q. Ma, “Design and implementation of detection system for skylight polarized pattern using continuously spinning polarization analyzer,” J. Astronaut. **35**(9), 1087–1094.

**22. **Z. Zhang, “A flexible new technique for camera calibration,” IEEE T. Pattern Anal. **22**(11), 1330–1334 (2000). [CrossRef]