Abstract
Using structured light to measure the 3D shape of a high dynamic range (HDR) surface has been always a challenging problem, and fusion of multi-group images with different exposures is recognized as an effective solution. It tends to select the phase with unsaturated and maximum gray intensity as the final phase, which has two problems: 1) the selection criteria are too simple to fully evaluate the phase quality, and 2) it is affected by the image noise, camera’s nonlinear response, local reflection and other factors and the phase with the best quality among the initial phases may also have an error. Aiming to solve these issues, this paper presents a hybrid-quality-guided phase fusion (HPF) model. In this model, a hybrid-quality measure is first proposed to evaluate the phase quality more comprehensively. Then, all initial phases are weighted and fused under the guidance of the hybrid-quality measure to obtain a more accurate phase as the final one. Through this model, a more complete and accurate 3D shape of the HDR surface can be reconstructed, and its validity has been verified by several experiments.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Structured light 3D measurement has been widely used in many aspects of industrial production such as defect detection, waste classification, and parameter optimization because of its good stability and excellent measurement speed [1–5]. However, due to the limitation of the dynamic range of camera device, measuring 3D shape of objects with high dynamic range (HDR) surface, such as car shells and engine blades, is always a challenging conundrum [1,6].
Many HDR 3D measurement methods using structured light technology have been studied in depth. These methods can be classified into two categories: single best measurement (SBM) and multiple measurement fusion (MMF). SBM methods try to get full reconstruction in a single measurement with extra process such as adaptive projection [7–9] and deep learning [10–12]. It can quickly acquire data but take long time for preparation, and the effect is far from satisfactory. MMF methods try to synthesize HDR image by sequence images obtained with different parameters such as exposure [13–17], projection intensity [18–21] and polarization filter [22–24] to achieve 3D reconstruction. Compared with SBM, the measurement time of MMF is longer, but it gets simpler operation and higher accuracy.
In practical applications, the multiple exposure fusion (MEF) method that belongs to MMF is simpler and more effective in measuring surface with variable reflectance, and it tries to select pixel in exposure sequences based on criteria of max and unsaturation to achieve HDR imaging. For the subsequent phase fusion process, the gray value of each pixel in the synthesized HDR fringe images should be selected from the images with the same exposure [13], which means the pixels for phase calculation come from the same exposure. The essence of this operation is to select the phase under a single exposure time as the final fused phase according to the principle of maximum and unsaturation. Therefore, the accuracy of the final phase depends on the selection criteria and the initial phases from multiple exposures.
In order to obtain the accurate phase value, researchers tried finding more suitable quality measure as selection criteria or setting more appropriate exposures to obtain good initial phases. Jiang et al. [14] selected the appropriate exposures by setting a series of initial candidate parameters and projecting uniformly illuminated images. Moreover, a criterion that selected pixels with maximum modulation and unsaturation was proposed. The effect depends on the initial candidate parameters selected by manual experience. Zhong et al. [25] proposed a criterion of selecting pixels within the intensity range from 30 to 220 to reduce the influence of the camera's nonlinear response. Feng et al. [22] proposed an automatic generation algorithm of exposures based on a reflectivity histogram. Rao et al. [26] selected exposures according to the modulation of the histogram. However, the histogram distribution can only be applicable to objects with significant reflectance category distribution, and may produce blocking effects. Zhang [16] determines the optimal exposure through an image under low exposure with a uniform white image projected first, then find the next exposure with the principle of making the last minimum value become the maximum value of this exposure. Its actual effect depends on the set threshold and pre-calibration precision. In general, these methods can help to obtain unsaturated HDR fringe images with maximum intensity, as shown in Fig. 1(a).
Nevertheless, due to the effects of the camera's nonlinear response [27], image noise and local reflections, merely selecting the pixel obtained under unsaturated conditions with maximum gray intensity may also have a significant error in calculating phase. In 2D imaging, lots of weighted fusion methods in the HDR technology, rather than the replacement methods, have been developed to reduce these errors to keep more local detail and contrast in fusion image [28–31]. However, in structured light 3D imaging, directly performing weighted fusion in the HDR image will cause errors in the subsequent phase calculations, as mentioned before.
In this paper, a hybrid-quality-guided phase fusion (HPF) model is proposed by directly perform fusion with different weights at multi-exposure phase maps to obtain a more accurate phase, as shown in Fig. 1(b). In this model, the sources of phase errors are first systematically analyzed to put forward a more comprehensive hybrid-quality measure that encodes desirable measures like well-exposedness and local reflectance for each pixel in the multi-exposure phase maps. Then, all initial phases are guided by the hybrid-quality measure for weighted fusion to obtain a more accurate phase as the final phase. Through this model, more complete and accurate 3D point cloud can be reconstructed with same initial data.
The rest of this paper is organized as follows: Section 2 explains the principles and specific implementation methods that support the proposed model; Section 3 presents various experimental results to verify the performance of the proposed model; Section 4 summarizes this paper.
2. Principles
2.1 Camera-imaging model in structured light
In structured light 3D measurement, the camera-imaging model shown in Fig. 2 can be expressed as
where I is the intensity of image pixels, $\alpha $ is the sensitivity coefficient of the camera, t is the exposure of the camera, $\beta $ is the reflection coefficient of the measured object, ${I^p}$ is the light intensity of the projector, ${I^{{a_1}}}$ is the ambient light reflected by the measured object, ${I^{{a_2}}}$ is the ambient light directly entering the camera, and $\mu $ is the noise error of the camera.Considering that the projected image is a series of sinusoidal fringe images with constant phase-shifting amount, the grayscale distribution of the image can be expressed by simplifying and modifying the aforementioned formula as:
2.2 HPF model
As mentioned previously, ordinary methods only perform pixel replacement at gray level, so the resulting phase-map from HDR fringe images can be expressed as
where ${P^F}$ is the fused phase-map, P is the source phase map from multiple exposures, $Mask$ indicates whether to select phase, the subscript $x,y,k$ refers to pixel $({x,y} )$ in the $k$-th exposure phase-map. When the pixel intensity or modulation optimally meets the defined criteria, $Mas{k_{x,y,k}}$ is $1$, otherwise it is $0$. The method can help to obtain phase under unsaturated conditions with high gray intensity from multiple exposures. However, affected by the camera's nonlinear response, image noise and local reflections, the accuracy and stability of the final phase are still not fully guaranteed.Compared with pixel replacement at raw fringe images, this paper emphasizes directly perform fusion with different weights at phase maps. The weights are given by the hybrid-quality measure including well-exposedness, local reflectance and phase gradient smoothness. Therefore, it would enhance the phase calculation signal noise ratio (SNR), suppress camera’s nonlinear response, reduce the error from local reflections, and guide phase to be smoother. The overall workflow of the HPF model is shown in Fig. 3. From the multi-groups fringe images, modulation maps and phase maps, weights based on different quality measures in the hybrid-quality measure are separately calculated to generate final weights. The final weight W that used to guide fusion process can be expressed as
Well-exposedness: The well-exposedness can be mainly divided into two aspects: image noise and camera’s non-linear response. For image noise, the classic quality measure in structured light technology is the modulation intensity, as explained in Eq. (2). The larger of modulation, the less the image noise will affect the phase calculation. For camera’s nonlinear response, many weight evaluation methods have been proposed in the field of HDR 2D imaging, like hat-shaped function [32], etc. One of these classic methods is to weight each normalized intensity i based on how close it is to 0.5 using a Gauss curve [29]: $\textrm{exp}({ - {{({i - 0.5} )}^2}/2{\sigma^2}} )$, where $\textrm{exp}$ is the natural exponential function, σ is a constant value set by experience. This method is good for evaluating the intensity of a single pixel, but cannot simultaneously evaluating multiple pixels participating in phase calculation. Hence, based on this method, we use the ratio of the number of pixels in the optimal range to all the pixels to measure the camera’s nonlinear response.
So, combining the two aspects, the calculation of well-exposedness can be expressed as
Local reflectance: As shown in Fig. 4, the intensity of a pixel is affected by the light reflection of surrounding area, and the magnitude of influence depends on local position’s shape and light intensity. Moreover, due to the different phases of the projected fringe images, the effects of the area around the same pixel in various images on the current pixel are different. Consequently, error will occur when using ordinary camera-imaging model without consider local reflections for phase calculation, especially when the local position’s light intensity changes significantly (such as border transition positions in over-dark or overexposed areas). To better evaluate the phase quality, we introduce the local reflections into the existing camera-imaging model and analyze its law. In order to evaluate the local reflection, we make two assumptions in advance and simplify the calculation. The assumptions are: (1) the phase of the local area $\Omega $ changes uniformly in the gradient direction $({{n_x},{n_y}} )$, which can be expressed as ${\phi _{x + u,y + v}} - {\phi _{x,y}} = {\phi _{x,y}} - {\phi _{x - u,y - v}}$, where $({x + u,y + v} )$ is the coordinates of the image coordinate system $({x + i,y + j} )$ in the local phase gradient coordinate system. The local phase gradient coordinate system takes $({x,y} )$ as the center point, $({{n_x},{n_y}} )$ as the positive direction of the X axis. (2) The reflection coefficient of the same distance to the pixel in the local area $\Omega $ is the same, which can be expressed as ${r_{x + u,y + v}} = {r_{x - u,y - v}}$, r is the reflection coefficient from the surrounding pixel to the desired pixel.
Through these two assumptions, the actual light intensity ${I_n}^r$ which affected by local area should be rewritten as
Phase gradient smoothness: The two quality measures above are evaluated from the perspective of possible errors when calculating the phase, but lack of the direct evaluation of the phase itself. Generally speaking, if the phase gradient at a certain pixel is inconsistent with surrounding pixels’, it may have occurred a significant calculation error, as shown in Fig. 5. Hence, the phase gradient smoothness should be added into the model to guide smoother final phase gradient in local area. It can be expressed as
where L is the Laplace operator, G is the gauss operator to suppress the influence of noise, $\mathrm{\ast }$ is the symbol of convolution. In order to better describe the changes in the surrounding area, the eight-neighborhood operator is used for L, so L can be expressed as3. Experiment
In order to verify the performance of the proposed model, we developed a measurement system including two CCD cameras (model: Basler ace acA1300-30gm) attached with 8 mm lens (model: RICOH FL 0814A-2M) and a digital-light-processing (DLP) projector (model: DLP4500). The resolution of projector and camera is $912 \times 1140$ and $1296 \times 966$. The weights of the three quality measures are set to ${\omega _M} = 1,{\omega _E} ={-} 0.5,{\omega _C} ={-} 0.5$, respectively. In most experiments, we adopted 20 exposure times as the experimental parameter unless otherwise mentioned. The computer CPU model in this paper is Intel Core i7-10700 K. When the number of exposures is 20, the serial calculation time from original fringe images (images num: 20*12*2 = 480) to the reconstructed point cloud is about 12.2s(HPF) and 2.4s(MEF) in total.
3.1 Accuracy evaluation
To evaluate the measurement accuracy of the proposed model more comprehensively, we use standard ball-bar and step block to evaluate multiple indicators such as the distance, sphericity, flatness, and plane height difference. The standard values of the standard ball-bar and step block are verified by coordinate measuring machine.
First, a standard ball-bar which consists of two ceramic spheres named A and B was used to measure, as shown in Fig. 6(a). The sphericity of A and B are 0.9$\mathrm{\mu m}$ and 1.3 $\mathrm{\mu m}$, respectively. The distance between the center A and B is 200.118 mm. After the standard ball-bar is reconstructed, the least square method is used to perform sphere fitting on all the point clouds of the ball, since the ball point clouds exists independently. And then calculate the distance between the center of the two balls and the sphericity, as shown in Fig. 6(b). The absolute measurement error ${\mathrm{\varepsilon }_{dis}} = |{{L_m} - {L_s}} |$ and standard deviation of sphere fitting ${\mathrm{\varepsilon }_{std}} = \sqrt {\mathop \sum \nolimits_{i = 1}^n {{({di{s_i}} )}^2}/n} $ are used as the evaluation metrics, where ${L_s}$ is the standard distance between center A and B, ${L_m}$ is the measurement distance, $di{s_i}$ is the distance from the $i$-th point to the fitting sphere. In order to ensure the stability of the measurement results, we used three methods including single exposure, MEF and HPF to measure the standard ball-bar from ten different positions, and performed statistical analysis for the measurement results, as shown in Fig. 6(c). Furthermore, the error diagram of sphere fitting is shown in Fig. 7, clearly shows the HPF model can get more accurate data than MEF.
The measurement data of standard ball-bar is shown in Table 1. The mean errors of ${\mathrm{\varepsilon }_{dis}}$ are similar with respect to the three methods, but the proposed model can get the smallest std error of 0.0101. According to the average value of the two balls, the mean error of ${\mathrm{\varepsilon }_{std}}$ by proposed model is approximately reduced to 75% of the MEF method. It can be seen that the proposed model can achieve lower distance error and sphericity error than MEF. Furthermore, the std error of ${\mathrm{\varepsilon }_{std}}$ is approximately reduced to 38% of the MEF method, which demonstrates the proposed model can achieve better stability.
Then, a step block which consists of two step planes named A and B was used to measure, as shown in Fig. 8(a). The step block is made of aluminum alloy, and the flatness of plane A and B are 0.0101 mm and 0.0106 mm, respectively. The height difference between the plane A and B is 20.1095 mm. Unlike standard ball-bar, its reconstruction point clouds of side parts can hardly be removed completely. Directly fitting all the point clouds to the plane will cause significant errors. Therefore, only a part of the point clouds in the middle area are selected for plane fitting, as shown in Fig. 8(b). The absolute plane height difference error ${\varepsilon _{height}} = |{{H_m} - {H_s}} |$ and standard deviation of plane fitting ${\mathrm{\varepsilon }_{std}} = \sqrt {\mathop \sum \nolimits_{i = 1}^n {{({di{s_i}} )}^2}/n} $ are used as the evaluation metrics, where $di{s_i}$ is the distance from the i-th point to the fitting plane, ${H_s}$ and ${H_m}$ are the standard height difference and measurement height difference between the plane A and B, respectively. In this experiment, the plane fitting and height difference analyses are both through Geomagic software. Similar to the standard ball-bar, we also used three methods to measures the step block from ten different positions, and performed statistical analysis for the measurement results, as shown in Fig. 8(c). The error diagram of plane fitting is shown in Fig. 9, clearly reflects the HPF model can get more accurate data than MEF. In addition, we further analyzed the error of single-column reconstruction point clouds, as shown in Fig. 10. It can be seen that the error obtained by our method is smaller and the curve is smoother.
The measurement data of step block is shown in Table 2. Like the standard ball-bar, the mean errors of ${\varepsilon _{height}}$ are similar regarding three methods, but the proposed model can get the smallest std error of 0.0163. The mean error of ${\mathrm{\varepsilon }_{std}}$ by proposed model is approximately reduced to 82% of the MEF method. According to the average flatness of the two planes, the Std error of ${\mathrm{\varepsilon }_{std}}$ is approximately reduced to 45% of the MEF method, which further verifies the stability of the proposed model.
3.2 3D shape measurement
We experimentally validated the proposed idea by measuring car’s sheet metal parts with HDR surface. First, we compared the reconstruction point cloud obtained from MEF and HPF under the same multiple exposure time, as shown in Fig. 11. The captured image under uniform light with 30 ms is shown in Fig. 11(a), and the corresponding reconstruction result is shown in Fig. 11(b), clearly revealing that the result is less than satisfactory under the conditions of over-dark or over-exposure. The measurement results of MEF and HPF with the same multiple exposures are shown in Fig. 11(c) and (d). It can be seen that the both methods can get more complete point clouds by multiple exposures, but the proposed model can better reconstruct the complete 3D point cloud data including darker areas and oversaturated areas compared with MEF. And as shown in Fig. 12, the proposed model is more stable for getting accurate data to ensure clear details.
Then, we changed the number of exposure times under the premise that the shortest and longest exposure time are fixed, and compared the reconstruction point cloud obtained by MEF and HPF, as shown in Fig. 13 and Fig. 14. The images captured under uniform blue light with the shortest and longest exposure time are shown in Fig. 13(a) and (c), respectively. As shown in Fig. 13(b), and (d), when the number of exposures increases, the reconstruction result from MEF method cannot get significant improvement, and some areas may even deteriorate. On the contrary, the reconstruction result from HPF keeps getting better as the number of exposures increases. Correspondingly, when the number is small, such as 4, the HPF can get better results than MEF does, but the improvement is not very significant. When the number of exposures is much bigger, such as 19, the HPF can get more complete and accurate result with significant improvement. This experiment clearly demonstrated that the proposed HPF can achieve better results with fewer exposures than MEF does.
4. Summary
In summary, this paper proposed a hybrid-quality-guided model for full reconstruction of high dynamic range objects through phase fusion. The main values of this model are embodied in following two aspects.
- (1) More comprehensive phase quality measures. This model not only analyzes the relationship between phase error and camera nonlinear response, image noise, and other factors, but also gives specific quality measures, which can evaluate the phase accuracy more directly and comprehensively. Furthermore, in addition to being used in this model, these quality measures can also be used in other methods like MEF, multi-projection fusion, polarization filter, etc.
- (2) More accurate and stable data. Compared with traditional methods, this model can get more accurate and stable phase by weighted fusion under the guidance of the hybrid-quality measure. Consequently, more complete and accurate point cloud can be obtained with the same initial data. Beyond that, under the premise that the shortest and longest exposure times are fixed, this model can also obtain better results with fewer exposures than MEF does.
In the future, we will further explore the influencing factors of phase error, such as over-saturation and texture to improve the phase accuracy.
Funding
National Key Research and Development Program of China (2018YFB1305700); Shenzhen Fundamental Research Program (JCYJ20210324142007022); Excellent Young Program of Natural Science Foundation in Hubei Province (2019CFA045); Key Research and Development Program of Hubei Province (2020BAB137); Major Technology Innovation of Hubei Province (2019AAA073); Fundamental Research Funds for the Central Universities (2021JYCXJJ045).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020). [CrossRef]
2. L. Song, X. Li, Y. Yang, X. Zhu, Q. Guo, and H. Liu, “Structured-light based 3D reconstruction system for cultural relic packaging,” Sensors 18(9), 2981 (2018). [CrossRef]
3. G. Zhan, L. Han, Z. Li, Z. Liu, J. Fu, and K. Zhong, “Identification and Documentation of Auricle Defects using Three-dimensional Optical Measurements,” Sci. Rep. 8(1), 2869 (2018). [CrossRef]
4. L. Han, X. Cheng, Z. Li, K. Zhong, Y. Shi, and H. Jiang, “A Robot-Driven 3D Shape Measurement System for Automatic Quality Inspection of Thermal Objects on a Forging Production Line,” Sensors 18(12), 4368 (2018). [CrossRef]
5. G. Zhan, H. Tang, K. Zhong, Z. Li, Y. Shi, and C. Wang, “High-speed FPGA-based phase measuring profilometry architecture,” Opt. Express 25(9), 10553–10564 (2017). [CrossRef]
6. Z. Song, H. Jiang, H. Lin, and S. Tang, “A high dynamic range structured light means for the 3D measurement of specular surface,” Opt. Lasers Eng. 95, 8–16 (2017). [CrossRef]
7. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24(7), 7703 (2016). [CrossRef]
8. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22(8), 9887 (2014). [CrossRef]
9. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection,” Meas. Sci. Technol. 29(5), 055203 (2018). [CrossRef]
10. H. Yu, D. Zheng, J. Fu, Y. Zhang, C. Zuo, and J. Han, “Deep learning-based fringe modulation-enhancing method for accurate fringe projection profilometry,” Opt. Express 28(15), 21692 (2020). [CrossRef]
11. L. Zhang, Q. Chen, C. Zuo, and S. Feng, “High-speed high dynamic range 3D shape measurement based on deep learning,” Opt. Lasers Eng. 134, 106245 (2020). [CrossRef]
12. X. Liu, W. Chen, H. Madhusudanan, J. Ge, C. Ru, and Y. Sun, “Optical Measurement of Highly Reflective Surfaces From a Single Exposure,” IEEE Trans. Ind. Inf. 17(3), 1882–1891 (2021). [CrossRef]
13. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009). [CrossRef]
14. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012). [CrossRef]
15. S. Feng, Q. Chen, C. Zuo, and A. Asundi, “Fast three-dimensional measurements for dynamic scenes with shiny surfaces,” Opt. Commun. 382, 18–27 (2017). [CrossRef]
16. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Lasers Eng. 128, 106029 (2020). [CrossRef]
17. V. Suresh, Y. Wang, and B. Li, “High-dynamic-range 3D shape measurement utilizing the transitioning state of digital micromirror device,” Opt. Lasers Eng. 107, 176–181 (2018). [CrossRef]
18. C. Waddington and J. Kofman, “Saturation avoidance by adaptive fringe projection in phase-shifting 3D surface-shape measurement,” 2010 International Symposium on Optomechatronic Technologies (2010), pp. 1–4.
19. C. Waddington and J. Kofman, “Camera-independent saturation avoidance in measuring high-reflectivity-variation surfaces using pixel-wise composed images from projected patterns of different maximum gray level,” Opt. Commun. 333, 32–37 (2014). [CrossRef]
20. H. Sheng, J. Xu, and S. Zhang, “Dynamic projection theory for fringe projection profilometry,” Appl. Opt. 56(30), 8452 (2017). [CrossRef]
21. L. Zhang, Q. Chen, C. Zuo, and S. Feng, “High dynamic range 3D shape measurement based on the intensity response function of a camera,” Appl. Opt. 57(6), 1378 (2018). [CrossRef]
22. S. Feng, Y. Zhang, Q. Chen, C. Zuo, R. Li, and G. Shen, “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Opt. Lasers Eng. 59, 56–71 (2014). [CrossRef]
23. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064 (2014). [CrossRef]
24. T. Chen, H. P. A. Lensch, C. Fuchs, and H.-P. Seidel, “Polarization and Phase-Shifting for 3D Scanning of Translucent Objects,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
25. K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, “Enhanced phase measurement profilometry for industrial 3D inspection automation,” Int. J. Adv. Manuf. Technol. 76(9-12), 1563–1574 (2015). [CrossRef]
26. L. Rao and F. Da, “High dynamic range 3D shape determination based on automatic exposure selection,” J. Vis. Commun. Image Represent. 50, 217–226 (2018). [CrossRef]
27. Z. Zheng, J. Gao, J. Mo, L. Zhang, and Q. Zhang, “A Fast Self-Correction Method for Nonlinear Sinusoidal Fringe Images in 3-D Measurement,” IEEE Trans. Instrum. Meas. 70, 1–9 (2021). [CrossRef]
28. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion,” in 15th Pacific Conference on Computer Graphics and Applications (PG’07) (IEEE, 2007), pp. 382–390.
29. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography,” Comput. Graph. Forum 28(1), 161–171 (2009). [CrossRef]
30. A. Galdran, “Image dehazing by artificial multiple-exposure image fusion,” Sig. Process. 149, 135–147 (2018). [CrossRef]
31. Z. Li, Z. Wei, C. Wen, and J. Zheng, “Detail-enhanced multi-scale exposure fusion,” IEEE Trans. on Image Process. 26(3), 1243–1252 (2017). [CrossRef]
32. P. Debevec and J. Malik, Recovering high dynamic range radiance maps from photographs (ACM Press/Addison-Wesley Publishing Co, 1997), pp. 369–378.