Abstract
The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation method has practical value in computer vision. It establishes a bridge between images and physical environmental information, e.g., time, location, and weather conditions.
© 2016 Optical Society of America
1. Introduction
Solar irradiance is attenuated by atmospheric transmittance effects including absorption, reflection, and scattering, which cause the spectral power distribution (SPD) of the light that ultimately reaches the Earth’s surface to vary with time and atmospheric conditions. As shown in Fig. 1, atmospheric transmittance effects give rise to many natural phenomena relevant to computer vision applications, such as the variation of sun and sky appearance, twilight, shadow, haze/fog, and cloud. Therefore, in computer vision, an SPD calculation method for outdoor light sources is useful for dealing with the problems caused by different times (or zenith angles) and weather conditions.
Natural light modeling and image illumination processing have received a lot of attention in the computer vision and computer graphics communities. Based on the fact that luminance varies across different parts of the sky, a physically-based sky luminance model is employed to infer the camera azimuth in [1] and to recover the camera focal length in [2]. Lalonde et al. [3] showed how to geolocate a camera using the sky appearance and sun position annotated in images. Liu et al. [4] proposed a method to estimate the lighting condition of outdoor scenes by learning from a set of images captured at the same sun position. Sunkavalli et al. [5] showed that sunlight and skylight SPDs can be recovered by analyzing a time-lapse video of outdoor scenes. Haber et al. [6] presented an approach to compute the colors of the sky during the twilight periods before sunrise and after sunset, based on the theory of light scattering by small particles. Perez et al. [7] proposed a popular sky model that has been widely used in computer vision and graphics. In general, most of these methods focus on modeling or applying the sky’s spatial radiance distribution rather than its spectral information. Spectral information, i.e., the SPD knowledge of light sources, is useful in some computer vision tasks, such as reflectance recovery [8], camera spectral sensitivity estimation [9], color constancy [10], relighting [11], image rendering [12], and augmented reality [13].
In the computer vision community, two methods are usually applied to estimate illumination SPDs. The first employs Planck’s blackbody radiation law to approximately calculate the SPDs of outdoor light sources: Finlayson et al. [14] and Tian et al. [15] applied it to estimate the SPDs of daylight and skylight for deriving shadow-invariant images and for detecting shadows, respectively. The second employs daylight characteristic vectors to recover the SPDs of outdoor light sources. Judd et al. [16] performed a characteristic vector analysis on 622 samples of daylight measured at 10 nm intervals over the visible range; their results suggested that most daylight samples can be approximated accurately by a linear combination of three fixed characteristic vectors. Based on the Perez sky model [7], Preetham et al. [17] proposed an improved sky model, which approximates the five distribution coefficients in the Perez model as linear functions of a single parameter, turbidity, by fitting the Perez formulas to reference images. With one more parameter, the sun angle, the absolute sky luminance Y as well as the CIE chromaticities x and y of a sky element can be calculated, and the outputs can be converted to spectral radiance data (see Appendix 5 in [17]). This analytic model of spectral sky-dome radiance is widely used. Using images captured during a single day, Jung et al. [18] presented an outdoor photometric stereo method via skylight estimation according to the Preetham sky model. Given sky images and the viewing geometry as input, Kawakami et al. [19] proposed a method to estimate the turbidity of the sky by fitting image intensities to the Preetham sky model, from which the sky spectrum can be calculated; having the sky RGB values and their corresponding spectra, the method then estimates the camera spectral sensitivities and the white balance setting.
In the experimental section, we will analyze and compare our method with Planck’s blackbody radiation and the Preetham sky model.
In meteorology, there are SPD calculation methods based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere. The latest versions of two representative wavelength-resolved SPD calculation models are SMARTS2 [20] and MODTRAN [21]. The SPDs calculated by these two methods approximate real measured ones well. However, they may be too complex to be used in computer vision, because they contain many parameters to be determined, and unfortunately most of these meteorological parameters are not readily available for computer vision applications. Bird et al. [22] proposed a simpler method to calculate the SPD of outdoor light sources. Though Bird’s method is much simpler than those in [20, 21], it is still difficult to use in computer vision. In general, the existing methods were not developed for computer vision: they consider the full spectrum and involve many factors and parameters that have no effect in the visible spectrum, and none of them takes the characteristics of human vision into account. Therefore, they may be more suitable for research in meteorology than in computer vision. Because most computer vision tasks concentrate on analyzing images captured in the visible spectrum, it is possible to develop a simpler and effective SPD calculation method that can be easily applied in computer vision tasks, such as illumination processing, shadow removal, and image relighting.
The main contribution of this paper is a very simple method to calculate the SPDs of sunlight and skylight in the visible spectrum. Different from previous calculations, we first calculate the atmosphere’s absorption, since some light emitted from the sun never reaches the Earth’s surface, either as direct sunlight or as diffuse skylight, because of the atmosphere’s absorption. We propose a simple new absorption calculation method and provide a new “total absorption coefficient”. During the development of our method, we pay particular attention to the wavelengths near the peaks of the human color matching functions (CMFs), to guarantee the accuracy of color reproduction when our calculated SPDs are substituted for the real ones.
2. A simple method for computing the spectral power distribution of sunlight and skylight
2.1. Light and image
Because our SPD calculations are intended for imaging and computer vision, we develop our method based on image formation theories. Figure 2 shows the image formation procedure in outdoor environments. Since both our eyes and conventional cameras are sensitive only to the visible spectrum, all the calculations and analysis in this paper are done within the spectrum 400–700 nm. Given the illumination SPD E(λ) and an object’s reflectance S(λ), the tristimulus values ρH are,

ρH = ∫ QH(λ) E(λ) S(λ) dλ,   (1)

where QH(λ) denotes the XYZ or sRGB color matching functions in the three channels. In this paper, tristimulus values in XYZ will be used in Section 2.4 and those in sRGB will be used in the experiments in Section 4. Details about the XYZ to sRGB conversion can be found in [23].

2.2. Absorption of extraterrestrial irradiance passing through the atmosphere
When the extraterrestrial irradiance passes through the atmosphere, it is attenuated by absorption, reflection, and scattering processes, and its spectral composition is thereby changed considerably. Some solar radiation never reaches the Earth’s surface, either as direct sunlight or as diffuse skylight, because of absorption by ozone, nitrogen dioxide, mixed gases, and water vapor. We denote by Eoλ the extraterrestrial irradiance at the mean earth–sun distance for wavelength λ. After absorption, the irradiance becomes,
where Toλ, TNλ, Twλ, and Tuλ denote the transmittance functions for molecular ozone, nitrogen dioxide, water vapor, and mixed gases, respectively. Extraterrestrial irradiance also varies with the sun–earth distance, which is determined by the day number of the year. Since this factor is independent of wavelength, it can be taken outside the integration and treated as part of the exposure time in Eq. (1); it can therefore be neglected for computer vision applications. Toλ and TNλ can be represented by,
where aoλ and anλ are the absorption coefficients of molecular ozone and nitrogen dioxide, respectively. In [22, 24], Twλ and Tuλ are given by,
where awλ and auλ are the absorption coefficients of water vapor and mixed gases, respectively. W is the water vapor amount, with a typical value of 1.6, and m is the air mass calculated by,
where θ denotes the zenith angle in this paper. Unlike Toλ and TNλ, which take noticeable effect in the visible spectrum, Tuλ mainly affects wavelengths longer than 1000 nm and, within the visible spectrum, only 687–691 nm [26]; Twλ mainly affects wavelengths longer than 800 nm and, within the visible spectrum, only 690–700 nm [27]. Thus Tuλ and Twλ have little effect in the visible spectrum, yet their computations are more complex than those of Toλ and TNλ. More importantly, Tuλ and Twλ have different forms from Toλ and TNλ, which makes it difficult to merge the four terms. We therefore seek simple and effective expressions for Tuλ and Twλ. Since Eq. (5) and Eq. (6) have similar formulas, we first merge Tuλ into Twλ, i.e.,
where ε can be determined by,
The result is ε = 4.3. We have,
The two attenuations caused by water vapor and mixed gases are thus merged into a single attenuation coefficient, which we define as,
For different m, we find that the data satisfy a power function, so in the following we write the merged term in the same form as Toλ and TNλ. We define a new transmittance function of water vapor and mixed gases as,
with,
The results of Eq. (13) are p = −0.055 and q = 0.56. We apply a simple exhaustive search to solve the optimization problems in Eq. (9) and Eq. (13), setting the sampling step size of λ to 1 nm and the step size of ε, p, and q to 0.01. The new attenuation expression for water vapor and mixed gases is thus approximated as a decreasing exponential function of the combined coefficient.
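As a minimal sketch of the exhaustive search described above (the power-law model form and the reference data are illustrative assumptions here, since the exact objective of Eq. (13) is not reproduced):

```python
def exhaustive_fit(xs, ys, p_grid, q_grid):
    """Brute-force scan over (p, q), minimizing the maximum absolute
    error of a power-law model y = p * x**q against reference data.
    The model form itself is an assumption for illustration."""
    best_p, best_q, best_err = None, None, float("inf")
    for p in p_grid:
        for q in q_grid:
            err = max(abs(p * x**q - y) for x, y in zip(xs, ys))
            if err < best_err:
                best_p, best_q, best_err = p, q, err
    return best_p, best_q, best_err
```

The same scan structure applies to ε in Eq. (9); only the inner objective changes.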
Since all the absorption factors can now be expressed as decreasing exponential functions, we rewrite Eq. (2) as,
where,
The results in Fig. 3 show that our new simple expression produces results similar to the combined effects of the original two terms in the visible spectrum. We name the proposed Tλ the “total absorption transmittance function” and τ′(λ) the “total absorption coefficient”, which is represented by,
The total absorption coefficient can be found in the Appendix (Table 4) and is plotted in Fig. 4, in which aoλ, anλ, auλ, and awλ are derived from [20] with aoλ = Aoλ, anλ = Anλ, auλ = 100Agλ, and awλ = 100Awλ. The SPD of extraterrestrial irradiance and an example of its result after absorption at m = 2 is shown in Fig. 5.
Using our proposed total absorption coefficient and transmittance function, the absorption calculation of ozone, nitrogen dioxide, water vapor, and mixed gases can be considered all together. Eq. (16) is much simpler to use.
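A minimal sketch of the total absorption step of Eq. (16) follows. The Kasten-style air-mass formula is the one commonly paired with [22, 24] and is an assumption here; the τ′(λ) samples would come from Table 4 in the Appendix.

```python
import math

def air_mass(zenith_deg):
    """Relative optical air mass for zenith angle θ in degrees.
    The Kasten-style formula commonly used with [22, 24] is assumed."""
    return 1.0 / (math.cos(math.radians(zenith_deg))
                  + 0.15 * (93.885 - zenith_deg) ** -1.253)

def absorbed_irradiance(e0, tau_total, zenith_deg):
    """Eq. (16) sketch: E_in(λ) = E_0(λ) * exp(-τ'(λ) * m),
    applied per wavelength sample over 400-700 nm."""
    m = air_mass(zenith_deg)
    return [e * math.exp(-tau * m) for e, tau in zip(e0, tau_total)]
```

For example, at θ = 60° the air mass is close to 2, which matches the m = 2 example of Fig. 5.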
2.3. Computing direct sunlight
The direct irradiance at Earth’s surface normal to the direction of the sun can be calculated by,
where Ein is the SPD of extraterrestrial irradiance after absorption, and Trλ and Taλ denote the transmittance functions of Rayleigh scattering and aerosol scattering, respectively. The transmittance functions for Rayleigh scattering and for aerosol scattering in [28] are adopted in this paper,
where α is the wavelength exponent, for which the value 1.3 is commonly employed, and β is the cleanliness index (turbidity), which varies from 0.0 to 0.4; for clear, turbid, and very turbid weather, its value is about 0.1, 0.2, and 0.3, respectively [28].

2.4. Computing diffuse skylight
The calculations of diffuse skylight in other works are complicated; here we propose a very simple method. From Eq. (19) and Eq. (20), we know that Trλ and Taλ describe the direct transmittance functions of Rayleigh and aerosol scattering, respectively. Thus (1 − TrλTaλ) describes the total scattered light, i.e., both the part that reaches the Earth’s surface and the part lost in the atmosphere. The key point then becomes determining the proportion of the scattered light that reaches the ground. The diffuse skylight on a horizontal surface can therefore be calculated by,
where κ accounts for the proportion of the scattered light that reaches the ground. As shown in Fig. 6, κ should be zenith-dependent: the longer the path of light through the atmosphere, the more light is lost, i.e., the less scattered light reaches the ground. We determine κ by fitting the CIE ΔELab between the SPDs calculated by Eq. (21) and those measured by an SOC710 spectrometer. In detail, the ideal white reflectance S(λ) = 1 is used in the calculation. For the SPD calculated by our method, the XYZ tristimulus values are,
For the measured SPD, the XYZ tristimulus values are,
The XYZ tristimulus values are converted to the CIELAB color space, and ΔELab is used to evaluate the error between the calculated Esλ and the measured SPD.
κ can be determined by,
The reason we use XYZ values and ΔELab to determine κ is to keep higher accuracy near the peaks of the CMFs. The values of κ for different zenith angles are tabulated in Table 1.
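The scattering and skylight computations of Sections 2.3–2.4 can be sketched as follows. The Rayleigh expression is a Bird–Riordan style approximation (λ in micrometers) assumed from [28], the aerosol term is the Ångström turbidity law with α = 1.3, and any additional geometric factor in Eq. (21), as well as the κ values of Table 1, are not reproduced here.

```python
import math

def rayleigh_transmittance(lam_um, m):
    """Rayleigh scattering transmittance; a Bird-Riordan style
    approximation with wavelength in micrometers is assumed [28]."""
    return math.exp(-m / (lam_um**4 * (115.6406 - 1.335 / lam_um**2)))

def aerosol_transmittance(lam_um, m, beta, alpha=1.3):
    """Angstrom turbidity law: T_a = exp(-beta * lam**(-alpha) * m)."""
    return math.exp(-beta * lam_um**-alpha * m)

def direct_sunlight(e_in, lam_um, m, beta):
    """Eq. (18): E_d(λ) = E_in(λ) * T_r(λ) * T_a(λ)."""
    return e_in * rayleigh_transmittance(lam_um, m) * aerosol_transmittance(lam_um, m, beta)

def diffuse_skylight(e_in, lam_um, m, beta, kappa):
    """Eq. (21) sketch: the fraction kappa of all scattered light
    (1 - T_r * T_a) that reaches the ground."""
    t = rayleigh_transmittance(lam_um, m) * aerosol_transmittance(lam_um, m, beta)
    return kappa * (1.0 - t) * e_in
```

Note that, per wavelength, the directly transmitted part plus the full scattered part sums to the absorbed irradiance; κ then keeps only the ground-reaching share of the scattered part.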
3. Experiments and comparisons
Figure 7 compares the SPDs calculated by our method with those by the SMARTS2 method [20] and Bird’s method [22] in clear weather, i.e., β = 0.1. The results show no significant discrepancies between the SPDs calculated by our simple method and by the other two complex meteorological ones, especially for direct sunlight. To evaluate the influence of the discrepancies on imagery, we simulate sRGB values (see the 2nd and 4th columns in Fig. 7) of the X-Rite ColorChecker, which contains 24 common colors from daily life. From the results, it is hard to find differences with the naked eye among the images simulated under the SPDs calculated by the three methods. The last column in Fig. 7 shows the quantitative discrepancies in pixel values of the simulated images: for each color patch, in three channels, the maximum differences over the maximum values are plotted as percent errors.
The results in Fig. 7 also show that the SPDs do not vary significantly from 20° to 40°, which indicates that the outdoor light sources are quite stable around noon. The variation in intensity is larger than that in spectrum; because the intensity variation can be taken outside the integration and absorbed into the exposure time in Eq. (1), it has no effect on imagery.
Figure 8 compares the SPDs calculated by our method with those by SMARTS2 and Bird’s method under different turbidities. The consistency of the three methods on sunlight is better than that on skylight. For very clear weather, β = 0, the skylight outputs of the three methods are quite similar; the similarity degrades with increasing β, and for very turbid weather, β = 0.3, the three outputs are obviously different. Overall, our skylight results are closer to those of the SMARTS2 method.
The formula expressions of our method are compared with those of SMARTS2 and Bird’s method in Table 2. The table shows that our expressions are much simpler than those of the other two methods. More importantly, the other two methods involve many parameters that may be hard to determine in computer vision.
3.1. Comparison with blackbody radiation
Planck’s blackbody radiation law is usually applied to calculate the SPDs of outdoor light sources. It is well known that extraterrestrial solar irradiance can be approximated by blackbody radiation at 5777 K. However, Fig. 9 shows that the extraterrestrial solar irradiance roughly follows the 5777 K blackbody radiation in the near-infrared spectrum but does not follow the blackbody radiation law well in the visible region. Figure 10 shows the chromaticities of blackbody radiation and of sunlight, skylight, and daylight. The chromaticity of daylight lies very near the Planckian locus, so daylight can be accurately approximated by blackbody radiation. In contrast, the chromaticities of sunlight and skylight deviate noticeably from the Planckian locus, which indicates that noticeable errors occur when blackbody radiation is employed to model the SPDs of sunlight and skylight.
3.2. Comparison with Preetham sky model
The Preetham model [17] also requires only the zenith angle and turbidity as varying parameters. Using Judd’s characteristic vectors, this model can produce the radiance spectra of all points on the sky hemisphere; details about recovering spectral radiance from sky chromaticity values based on Judd’s characteristic vectors can be found in Appendix 5 of [17]. After integrating the incoming light over all directions in the hemisphere, the SPD of skylight irradiance can be calculated. Figure 11 compares our method, SMARTS2, Bird’s method, and the Preetham method. Compared with our method, the Preetham method is not accurate enough to match the two meteorological methods. This is because errors arise when recovering full spectra from sky X, Y, Z values using only three principal bases. Another disadvantage of using the Preetham sky model to recover SPDs is that it can only recover the skylight SPD; it cannot recover the SPDs of direct sunlight or daylight.
4. Applications
4.1. The recovery of camera capture information
We first demonstrate estimating the illumination, zenith angle, turbidity, and camera spectral sensitivities from a single 24-patch color checker image. Rewriting E(λ) in Eq. (1), we have,
Jiang et al. [29] show that camera spectral sensitivities (CSS) can be decomposed as QH = σH·CH, where σH = [σH,1, σH,2] and CH = [cH,1, cH,2]T is the eigenvector matrix precomputed from a dataset of different CSS. Given a 24-patch color checker image and the reflectance spectra shown in Fig. 12, the parameters σH, θ, and T can be recovered by iteratively minimizing the following RMS, after which the illumination SPDs and the CSS can be recovered.
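The minimization over the discrete parameters can be sketched as a grid search; `simulate` is a hypothetical stand-in for the forward pipeline of Eq. (1) with the current CSS estimate, not the paper's actual implementation.

```python
def recover_params(observed, simulate, theta_grid, beta_grid):
    """Grid search over zenith angle and turbidity, minimizing the RMS
    between observed ColorChecker values and simulated ones.
    simulate(theta, beta) is a hypothetical callable standing in for
    rendering the 24 patches under the calculated SPDs."""
    best, best_rms = None, float("inf")
    for theta in theta_grid:
        for beta in beta_grid:
            sim = simulate(theta, beta)
            rms = (sum((a - b) ** 2 for a, b in zip(sim, observed))
                   / len(observed)) ** 0.5
            if rms < best_rms:
                best, best_rms = (theta, beta), rms
    return best, best_rms
```

In the full method, an inner least-squares step over σH would be nested inside each evaluation.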
Compared with Jiang’s method, which recovers the CSS and the SPD of one illumination, our method, as shown in Fig. 13, recovers the CSS and the SPDs of three lights, i.e., sunlight, daylight, and skylight, simultaneously. More importantly, our method recovers the sun angle and turbidity from the image, while Jiang’s method can only recover a color temperature.
4.2. Shadow features for shadow detection
Shadow is an active research topic in computer vision; its generation and characteristics are closely related to direct sunlight and diffuse skylight. Tian et al. [30] show that the pixel values of a surface illuminated by daylight (in a non-shadow region) are proportional to those of the same surface illuminated by skylight (in a shadow region). If fH denotes the pixel values of a surface in the shadow area and FH denotes those of the same surface in the non-shadow area, [30] shows,
where,
Since our method can simultaneously calculate the SPDs of daylight and skylight, we can easily obtain KH for any sun angle and turbidity from the calculated SPDs. As shown in Fig. 14, we captured three images at a 43° sun angle in clear weather with turbidity 0.15 (estimated by the method in Section 4.1), and calculated the normalized KH from the Canon 5D Mark II RAW images without gamma correction. These values are quite close to KH = [0.69, 0.57, 0.45] computed from our calculated SPDs, indicating that our SPD calculation method can predict KH in shadowed images.
We also calculated KH under different sun angles and turbidities to find rules useful for shadow detection; these KH values are listed in Table 3. From the table we find three properties of shadows.
- Property 1. KH satisfies KR > KG > KB.
- Property 2. KH decreases as the turbidity increases.
- Property 3. KH decreases as the sun angle increases.
To test Property 1, we collected 563 real-world images, either downloaded from the web or captured by ourselves. In particular, we carefully captured images at different sun angles and under different weather conditions (including clear sky, light overcast, and heavy clouds that did not cover the sun). For each image, shadow regions were manually labeled. We obtained 3672 shadow/non-shadow pairs and calculated KH for all of them. We find that 90.50% of the KH satisfy Property 1. Preliminary analysis shows that overexposure, thin shadows, and overly strong contrast enhancement in image post-processing are the usual causes when the calculated KH does not satisfy Property 1.
Since we usually do not know the sun angle and turbidity under which an image was captured, Properties 2 and 3 are not convenient to use directly. They may, however, be useful for inverse problems, e.g., using detected shadows to infer the sun angle and turbidity: if we know the CSS of a camera, we can build a table like Table 3 and coarsely infer the sun angle and turbidity by comparing the KH in the table with the KH calculated from the detected shadows. The application in this section shows that our SPD calculation is valuable for finding novel shadow features, which may be hard to accomplish with other SPD calculation methods, since they cannot compute Eq. (28) and Table 3.
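Assuming KH is the per-channel ratio of the sensor response of a surface under skylight to that under daylight, as the linearity in [30] describes, the shadow feature can be sketched as:

```python
def shadow_ratio(css, e_sky, e_day, reflectance=None):
    """Per-channel ratio K_H of the response of a surface lit by
    skylight (shadow) to that lit by daylight (non-shadow), following
    the linearity reported in [30]. css holds one per-wavelength
    sensitivity list per channel; all inputs share one wavelength grid."""
    n = len(e_sky)
    if reflectance is None:
        reflectance = [1.0] * n  # ideal white surface
    ratios = []
    for channel in css:
        num = sum(q * e * s for q, e, s in zip(channel, e_sky, reflectance))
        den = sum(q * e * s for q, e, s in zip(channel, e_day, reflectance))
        ratios.append(num / den)
    return ratios
```

With the SPDs calculated by our model at a 43° sun angle and β = 0.15, this is how the predicted KH = [0.69, 0.57, 0.45] of Section 4.2 would be obtained; the inputs here are placeholders, not those SPDs.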
4.3. Deriving intrinsic images
Our SPD calculation method can also be used to derive intrinsic images. If we convert the linear RGB values to gamma-corrected sRGB values (according to [23]), Eq. (28) becomes
From Eq. (30), the following equation holds.
where,
For a pixel in an image, Eq. (31) represents a shadow invariant: for an arbitrary pixel with RGB value vector (vR, vG, vB)T, log(vR + 14) + log(vG + 14) − β1·log(vB + 14) remains the same whether or not the pixel is in a shadow region. A grayscale intrinsic image is thereby obtained; one result is shown in the middle of Fig. 15.
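The grayscale invariant can be written directly from the expression above; β1 is treated as a precomputed constant here, since its closed form (derived from KH) is not reproduced in this excerpt.

```python
import math

def grayscale_intrinsic(pixel, beta1):
    """Shadow-invariant grayscale value per Eq. (31):
    log(vR + 14) + log(vG + 14) - beta1 * log(vB + 14).
    beta1 is the channel exponent derived from K_H, taken as given."""
    v_r, v_g, v_b = pixel
    return (math.log(v_r + 14.0) + math.log(v_g + 14.0)
            - beta1 * math.log(v_b + 14.0))
```

Applying this to every pixel of an sRGB image yields the grayscale intrinsic image.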
To obtain a color intrinsic image, for an RGB value vector (vR, vG, vB)T, we first define the following grayscale intrinsic value I1 according to Eq. (31).
Similar to Eq. (33), we can obtain two other grayscale intrinsic values I2 and I3,
where,
A color intrinsic image can be obtained from the unique particular solution of the equation set generated by Eqs. (31), (34), and (35) (more details can be found in [31]). One color intrinsic image result is shown on the right of Fig. 15. When deriving intrinsic images, KH are the key parameters: a more accurate estimation of KH produces better intrinsic image results. As shown in Fig. 16, using KH computed by the SPD calculation method proposed in this paper, better intrinsic image results are obtained; in [31], KH is only approximated from the mean of some real measured SPDs.
4.4. Lighting conversion
This application converts an image from one illumination to another. In detail, using our SPD calculation method, we first recover the reflectance from the original image using the method in [32], then relight the same scene according to Eq. (1) and convert it to the sRGB color space. As shown in Fig. 17, we first convert the illumination from skylight to daylight at a zenith angle of 60°. Both the color and the intensity of the converted image are quite similar to the real captured one to the naked eye. The errors of the conversion are shown in Fig. 18: the mean pixel-value error is 9, and the maximum error is 36, which occurs at patch No. 15 in the red channel. The error mainly arises because commercial cameras apply post-processing such as tone mapping and gamut mapping to the images; these factors are not considered when we convert RAW images to sRGB JPG images in our calculation.
Figure 19 shows two more results of image lighting conversion. The first row of Fig. 19 shows lighting conversion from afternoon to noon; the color of the converted image is more similar to the scene at noon than the original one. The second row shows lighting conversion from sunset to afternoon; both the color and the clarity of the converted image are better than those of the original.
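A sketch of the relighting step: given the recovered reflectance, integrate it against the target SPD per Eq. (1) and gamma-encode with the sRGB transfer function of [23]. The two-sample inputs in the test are placeholders, not real spectra.

```python
import math

def relight(reflectance, target_spd, cmfs):
    """Render tristimulus values of a surface under a new illumination
    via Eq. (1), using rectangle-rule summation over the sampled
    wavelengths (all inputs share the same wavelength grid)."""
    return [sum(q * e * s for q, e, s in zip(cmf, target_spd, reflectance))
            for cmf in cmfs]

def linear_to_srgb(c):
    """IEC 61966-2-1 transfer function for a linear value in [0, 1]."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Normalizing by the exposure (a wavelength-independent scale, as noted for Eq. (1)) before gamma encoding matches the converted image's intensity to the target.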
5. Conclusion
Illumination is one of the important components of imaging. Understanding the spectral distributions of outdoor light and their dynamic changes at different times and under different atmospheric conditions is of interest to computer vision. In this paper, we proposed a simple method for computing the SPDs of sunlight and skylight for a given zenith angle and turbidity. In the computer vision community, researchers usually apply blackbody radiation or the eigenvectors of daylight [16] to approximate real SPDs. Compared with these two kinds of methods, our method has two advantages. First, the SPDs calculated by our method approximate those calculated by meteorological methods. Second, our method can simultaneously calculate the SPDs of daylight, sunlight, and skylight. These advantages, especially the second, are important for our applications, which are difficult to accomplish with the blackbody or eigenvector models since they cannot simultaneously calculate the corresponding SPDs of daylight, sunlight, and skylight. Consequently, it is difficult to employ those models to simultaneously recover the SPDs of daylight and skylight in Section 4.1, estimate KH in Sections 4.2 and 4.3, and relight the ColorChecker image from skylight to daylight under the same sun angle in Section 4.4.
Our method is much simpler than the existing atmospheric methods and shows its advantages in a number of computer vision applications. Unlike other models, which need many parameters, such as ozone, nitrogen dioxide, mixed gases, and water vapor amounts, that have no close relation to computer vision and are not easily obtained, our model requires only the two parameters most relevant to physically-based computer vision: sun angle (geometry) and turbidity (weather). Our model establishes a bridge between images and physical environmental information, e.g., time, location, and weather conditions. Since we focus on computer vision applications rather than atmospheric physics or meteorology, the basic calculations in atmospheric physics, e.g., the transmittance functions of scattering, are still based on the previous atmospheric sciences literature.
Appendix
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants 61473280 and 61102116.
References and links
1. N. Jacobs, N. Roman, and R. Pless, “Toward fully automatic geo-location and geo-orientation of static outdoor cameras,” in IEEE Workshop on Applications of Computer Vision, (IEEE, 2008), pp. 1–6.
2. J.-F. Lalonde, S. G. Narasimhan, and A. A. Efros, “What does the sky tell us about the camera?” in Proc. ECCV, (Springer, 2008), pp. 354–367.
3. J.-F. Lalonde, S. G. Narasimhan, and A. A. Efros, “What do the sun and the sky tell us about the camera?” Int. J. Comput. Vis. 88, 24–51 (2010). [CrossRef]
4. Y. Liu, X. Qin, S. Xu, E. Nakamae, and Q. Peng, “Light source estimation of outdoor scenes for mixed reality,” The Visual Computer 25, 637–646 (2009). [CrossRef]
5. K. Sunkavalli, F. Romeiro, W. Matusik, T. Zickler, and H. Pfister, “What do color changes reveal about an outdoor scene?” in Proc. CVPR (IEEE, 2008), pp. 1–8.
6. J. Haber, M. Magnor, and H.-P. Seidel, “Physically-based simulation of twilight phenomena,” ACM Trans. Graph. 24, 1353–1373 (2005). [CrossRef]
7. R. Perez, R. Seals, and J. Michalsky, “All-weather model for sky luminance distribution—preliminary configuration and validation,” Sol. Energy 50, 235–245 (1993). [CrossRef]
8. K.-J. Yoon, E. Prados, and P. Sturm, “Joint estimation of shape and reflectance using multiple images with known illumination conditions,” Int. J. Comput. Vis. 86, 192–210 (2010). [CrossRef]
9. D. Wu, J. Tian, B. Li, Y. Wang, and Y. Tang, “Recovering sensor spectral sensitivity from raw data,” J. Electron. Imaging 22, 023032 (2013). [CrossRef]
10. G. D. Finlayson and S. D. Hordley, “Color constancy at a pixel,” J. Opt. Soc. Am. A 18, 253–264 (2001). [CrossRef]
11. X. Xing, W. Dong, X. Zhang, and J.-C. Paul, “Spectrally-based single image relighting,” in Entertainment for Education. Digital Techniques and Systems (Springer, 2010), pp. 509–517. [CrossRef]
12. T. Gierlinger, D. Danch, and A. Stork, “Rendering techniques for mixed reality,” J. Real-Time Image Processing 5, 109–120 (2010). [CrossRef]
13. J. Wither, S. DiVerdi, and T. Höllerer, “Annotation in outdoor augmented reality,” Comp. Graphics 33, 679–689 (2009). [CrossRef]
14. G. D. Finlayson, M. S. Drew, and C. Lu, “Entropy minimization for shadow removal,” Int. J. Comput. Vis. 85, 35–57 (2009). [CrossRef]
15. J. Tian, J. Sun, and Y. Tang, “Tricolor attenuation model for shadow detection,” IEEE Trans. Image Process. 18, 2355–2363 (2009). [CrossRef] [PubMed]
16. D. B. Judd, D. L. MacAdam, G. Wyszecki, H. Budde, H. Condit, S. Henderson, and J. Simonds, “Spectral distribution of typical daylight as a function of correlated color temperature,” J. Opt. Soc. Am. A. 54, 1031–1040 (1964). [CrossRef]
17. A. J. Preetham, P. Shirley, and B. Smits, “A practical analytic model for daylight,” in Proceedings of the 26th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co., 1999), pp. 91–100.
18. J. Jung, J. Lee, and I. Kweon, “One-day outdoor photometric stereo via skylight estimation,” in Proc. CVPR (IEEE, 2015), pp. 4521–4529.
19. R. Kawakami, H. Zhao, R. T. Tan, and K. Ikeuchi, “Camera spectral sensitivity and white balance estimation from sky images,” Int. J. Comput. Vis. 105, 187–204 (2013). [CrossRef]
20. C. Gueymard, SMARTS2: A Simple Model of the Atmospheric Radiative Transfer of Sunshine: Algorithms and Performance Assessment (Florida Solar Energy Center Cocoa, FL, 1995).
21. A. Berk, L. S. Bernstein, and D. C. Robertson, MODTRAN: A Moderate Resolution Model for LOWTRAN, Tech. Rep. (DTIC Document, 1987).
22. R. E. Bird and C. Riordan, “Simple solar spectral model for direct and diffuse irradiance on horizontal and tilted planes at the earth’s surface for cloudless atmospheres,” J. Climate Appl. Meteor. 25, 87–97 (1986). [CrossRef]
23. International Electrotechnical Commission, Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB, Tech. Rep. IEC 61966-2-1 (1999).
24. B. Leckner, “The spectral distribution of solar radiation at the earth’s surface—elements of a model,” Sol. Energy 20, 143–150 (1978). [CrossRef]
25. R. Schroeder and J. Davies, “Significance of nitrogen dioxide absorption in estimating aerosol optical depth and size distributions,” Atmosphere-Ocean 25, 107–114 (1987). [CrossRef]
26. J. H. Pierluissi and C.-M. Tsai, “New lowtran models for the uniformly mixed gases,” App. Opt. 26, 616–618 (1987). [CrossRef]
27. L. Zhou, P. Guo, and Y. Tan, “A new way to study water-vapor absorption coefficient,” Marine Science Bulletin 7 (2005).
28. M. Iqbal, An Introduction to Solar Radiation (Elsevier, 2012).
29. J. Jiang, D. Liu, J. Gu, and S. Susstrunk, “What is the space of spectral sensitivity functions for digital color cameras?,” in IEEE Workshop on Applications of Computer Vision (IEEE, 2013), pp. 168–179.
30. J. Tian and Y. Tang, “Linearity of each channel pixel values from a surface in and out of shadows and its applications,” in Proc. CVPR (IEEE, 2011), pp. 985–992.
31. L. Qu, J. Tian, Z. Han, and Y. Tang, “Pixel-wise orthogonal decomposition for color illumination invariant and shadow-free image,” Opt. Express 23, 2220–2239 (2015). [CrossRef] [PubMed]
32. J. Tian and Y. Tang, “Wavelength-sensitive-function controlled reflectance reconstruction,” Opt. Lett. 38, 2818–2820 (2013). [CrossRef] [PubMed]