
A systematic non-uniformity correction method for correlation-based ToF imaging

Open Access

Abstract

Correlation-based time-of-flight (ToF) imaging enables a diverse range of applications owing to its high frame rate, high resolution and low cost. However, the non-uniformity of the sensor significantly affects the flat-field accuracy of the ToF imaging system. In this paper, we analyze the sources of the non-uniformity and propose a systematic non-uniformity correction (NUC) method. The method utilizes the amplitude image, which directly reflects the non-uniformity characteristics of the ToF sensor, to conduct NUC. Based on the established NUC system, the effectiveness and feasibility of the proposed method are verified. Compared with traditional methods, the RMSE is significantly reduced, while the SNR and PSNR are effectively improved. We believe this study provides new insights into the understanding of noise in correlation-based ToF imaging systems, and also provides effective references for the NUC of three-dimensional measuring instruments.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Over the past years, real-time three-dimensional imaging has played a critical role in advanced driver assistance systems (ADAS), augmented reality (AR), virtual reality (VR), face recognition, etc. [1–3]. It mainly comprises three approaches: structured light imaging, stereo vision imaging and ToF imaging [4]. Among them, ToF imaging possesses the advantages of stable performance, strong anti-interference capability and low cost, and has found wide application. ToF imaging can be classified into two main categories: impulse ToF imaging [5], based on a pulsed wave, and correlation-based ToF imaging [6], based on a continuous wave. In impulse ToF imaging, the round-trip time is accurately recorded by single photon avalanche diodes (SPADs) to calculate the range information [7]. Limited by SPAD manufacturing, it is hard to improve the resolution of an impulse ToF detector [8]. In correlation-based ToF imaging, the sensor is fabricated with conventional CMOS technology, which offers higher resolution and lower cost [9]. Compared with other techniques such as structured light imaging and stereo vision imaging, correlation-based ToF imaging has the advantages of high frame rate, high robustness to illumination changes and compactness [10].

However, measurement errors arise during the operation of the ToF imaging system, due to internal limitations such as the modulation frequency and the radiant power, as well as external disturbances such as the ambient light, the reflectivity and the imaging range. Meanwhile, pixel response non-uniformity exists because of manufacturing limitations. For these two reasons, a flat-field error inevitably exists. Thus, it is necessary to carry out NUC of all pixels of the correlation-based ToF imaging system.

To eliminate the non-uniformity of the ToF imaging system, several efforts have been made. Cheon et al. [11] utilized a modified joint bilateral filter to reduce the non-uniformity; however, the method is limited because the causes of the error were not analyzed. Jung et al. [12] proposed a joint correction method based on a ToF camera and an RGB camera; the method achieved good results with the extra camera, but increased the complexity of the system. Huang et al. [13] proposed an integration-time auto-adaptation method based on amplitude data; however, only the integration time was considered, leading to poor robustness. Reinhardt et al. [14] presented a photo-response NUC method, but it does not consider the sources of the non-uniformity, leading to poor adaptability. In summary, these works can be divided into two categories: target-based methods [11,12,14–18] and scene-based methods [13,19]. The target-based methods utilize a reflecting plate to calibrate all pixels. The scene-based methods compensate the non-uniformity by introducing an extra device, e.g., an RGB camera.

However, the existing studies use “posteriori data”, e.g., the depth image or the point cloud, to conduct NUC, leading to poor robustness and environmental adaptivity, so the compensation effect is limited. In detail, the “posteriori data” are acquired through a cross-correlation operation [17] during demodulation and therefore cannot directly reflect the non-uniformity characteristics of the ToF sensor.

To address this problem, we propose a systematic non-uniformity correction method for correlation-based ToF imaging. In this method, the amplitude data, which can be read out directly from the ToF sensor, are chosen as the “priori data” to correct the pixels because they reflect the received signal intensity of each pixel. Focusing on the non-uniformity of the ToF imaging system, the error sources are systematically analyzed and compensated. The main contributions include:

  • 1. To reduce the influence of non-ideal imaging quality caused by an inappropriate integration time, an image quality control strategy is proposed. The difference between the current mean amplitude and a reference amplitude is calculated. Taking this difference as the evaluation criterion, a “squeeze” method is used to obtain the proper integration time interval.
  • 2. To eliminate the flat-field error caused by pixel response non-uniformity, a two-point NUC method based on least-squares fitting (LSF) is proposed. Based on the optimal amplitude obtained by the LSF, the two-point method is used to obtain the gain/offset coefficients of all pixels.
  • 3. To obtain the best integration time in the interval, a signal-to-noise ratio (SNR) optimization scheme based on recursive maximum likelihood estimation (RMLE) is proposed. The scheme is a multi-step method; in each step, the SNR is used to evaluate the effectiveness.

The paper is organized as follows: Section 2 introduces the principle of the ToF imaging system and analyzes the error sources of the non-uniformity. Section 3 describes the proposed NUC method in detail. Section 4 presents the experiments and discussions. Finally, Section 5 concludes the paper.

2. System and error sources analysis

2.1 Foundation of the ToF imaging system

The ToF imaging system mainly consists of the emitting unit, the receiving unit, the processing unit and the power unit, as shown in Fig. 1. The emitting unit emits array-modulated infrared radiation to illuminate the object. In this unit, four vertical-cavity surface-emitting lasers (VCSELs) are used to generate a modulated, uniform laser spot. The receiving unit receives the light reflected from the object and demodulates the received signal. It mainly contains an objective lens and a ToF sensor. When the received light arrives at the sensor through the lens, each pixel of the sensor controls two capacitors that synchronously accumulate charge in multiple phase windows. In this way the raw data are acquired. The raw data are then transmitted to the processing unit, where the flight time is demodulated and the distance is calculated. The processing unit simultaneously generates the amplitude image and the grayscale image. At last, the images are transmitted to the upper computer. The power unit supplies DC power to the other three units.

Fig. 1. Foundation of the ToF imaging system.

The distance demodulation establishes the basis of the ToF imaging system, as shown in Fig. 2. Under each pixel of the ToF sensor, there are two capacitors (CA, CB) and two integral windows with a phase difference of π (Fig. 2(a)). In one sampling period, the capacitors accumulate charge multiple times and the signal is demodulated from the sampling results (Fig. 2(b)). This process is called differential correlation sampling (DCS). Taking the 4-DCS method as an example, the capacitors sample the signal four times at the phases of 0°, 90°, 180° and 270° (Fig. 2(c)). The sampling results (Fig. 2(d)) are then used to demodulate the phase shift between the emitted signal and the received signal. Finally, the distance is calculated from the phase shift.

Fig. 2. Distance demodulation process. (a) The structure diagram under each pixel of the sensor. (b) The two phase windows of the two capacitors. (c) The principle of the 4-DCS method. (d) The result of the 4-DCS.

Assume that the sampling signal is $Q = A_0\cos(\omega t_i + \varphi) + B_0$ ($i = 0,1,2,3$), where $\omega$ is the modulation frequency, $A_0$ is the amplitude of the emitted signal, $B_0$ is the noise signal and $\varphi$ is the phase difference. From [20], the amplitude, intensity and phase are derived from the quadrature components,

$$X = \frac{DCS_2 - DCS_0}{2}\,,\qquad Y = \frac{DCS_3 - DCS_1}{2}$$

The amplitude can be calculated with,

$$A = \sqrt{X^2 + Y^2}$$

The intensity can be calculated with,

$$B = \frac{DCS_0 + DCS_1 + DCS_2 + DCS_3}{4}$$

The phase shift can be calculated with,

$$\varphi = \arctan\left(\frac{Y}{X}\right)$$

The distance can be calculated based on the phase shift,

$$D = \frac{c}{2} \cdot \frac{1}{2\pi f} \cdot \varphi + D_{offset}$$
where $f$ represents the modulation frequency, $c$ is the speed of light and $D_{offset}$ is the initial offset, which arises from the duration of the demodulation in the ToF sensor.
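
To make the demodulation chain concrete, the following minimal Python sketch implements Eqs. (1)–(5). The function name, array shapes and example modulation frequency are our own illustrative assumptions, not part of the original system.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def demodulate_4dcs(dcs, f_mod, d_offset=0.0):
    """Demodulate four DCS samples into amplitude, intensity, phase, distance.

    dcs      : sequence of the four correlation samples DCS0..DCS3,
               each a scalar or an (H, W) array.
    f_mod    : modulation frequency f in Hz.
    d_offset : system offset D_offset in meters.
    """
    dcs0, dcs1, dcs2, dcs3 = (np.asarray(d, dtype=float) for d in dcs)
    x = (dcs2 - dcs0) / 2.0                          # Eq. (1)
    y = (dcs3 - dcs1) / 2.0
    amplitude = np.hypot(x, y)                       # Eq. (2)
    intensity = (dcs0 + dcs1 + dcs2 + dcs3) / 4.0    # Eq. (3)
    # Eq. (4); arctan2 resolves the quadrant that arctan(Y/X) leaves ambiguous
    phase = np.arctan2(y, x) % (2.0 * np.pi)
    distance = (C / 2.0) * phase / (2.0 * np.pi * f_mod) + d_offset  # Eq. (5)
    return amplitude, intensity, phase, distance
```

For instance, with `f_mod = 20e6` (20 MHz), the unambiguous range $c/(2f)$ is about 7.5 m.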

Increasing the number of samples helps improve the measurement accuracy, while the computational cost increases simultaneously. Based on the principle of DCS, the distance demodulation process can be extended to N-quad,

$$X = -\sum\nolimits_{i = 0}^{N-1} DCS_i \cos\left(\frac{2\pi i}{N}\right)\,,\qquad Y = -\sum\nolimits_{i = 0}^{N-1} DCS_i \sin\left(\frac{2\pi i}{N}\right)$$

Accordingly, Eqs. (2) and (3) become,

$$A = \frac{2}{N}\sqrt{X^2 + Y^2}$$
$$B = \frac{1}{N}\sum\nolimits_{i = 0}^{N - 1} DCS_i$$
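
Under the same assumptions as the previous sketch, the N-quad extension of Eqs. (6)–(8) could look as follows; the list `dcs` now holds $N$ samples.

```python
import numpy as np

def demodulate_nquad(dcs):
    """Amplitude and intensity from N DCS samples (Eqs. (6)-(8))."""
    n = len(dcs)
    samples = np.stack([np.asarray(d, dtype=float) for d in dcs])
    i = np.arange(n)
    # Eq. (6): correlate the N samples against the two quadrature references
    x = -np.tensordot(np.cos(2.0 * np.pi * i / n), samples, axes=1)
    y = -np.tensordot(np.sin(2.0 * np.pi * i / n), samples, axes=1)
    amplitude = (2.0 / n) * np.hypot(x, y)   # Eq. (7)
    intensity = samples.mean(axis=0)         # Eq. (8)
    return amplitude, intensity
```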

2.2 Analysis of non-uniformity sources

The ToF imaging system suffers from multiple error sources, which significantly affect its accuracy. Error sources such as stray light, reflectivity and temperature drift, which are independent of the system, are classified as “depth-related sources”. These sources have been discussed in our previous work [21] and will not be analyzed in this paper. Other sources, including the integration time, the non-uniformity between pixels caused by manufacturing, the ADC read-out differences and the row addressing variation, are classified as “flat-field related sources”; they introduce significant flat-field error into the sensor, resulting in non-uniformity of the depth image. Thus, analyzing the flat-field error is necessary to suppress the non-uniformity and improve the performance of the ToF imaging system.

2.2.1 Integration time

Compared with other 3D measurement technologies, the sensitivity of a ToF sensor is relatively low, so a period of time is needed to capture a sufficient number of photons for accurate distance calculation. This period is called the integration time. Generally, the integration time is set by the operator's subjective judgment. This empirical way is inefficient and unreliable. When the integration time is too short, insufficient photon accumulation leads to local information loss or unreliable measurements. Conversely, when the integration time is too long, excessive photon accumulation results in local saturation. Figures 3(a)–(c) show the depth images under different integration times. The measurements in this section and the following subsections of Section 2 are conducted with a standard reflecting plate. The red boxes show the depth distributions in 3D mode; the x and y axes represent the x and y directions respectively, while the z axis represents the depth value. In Fig. 3(a) the integration time is too short, resulting in a distinct fluctuation. In Fig. 3(c) the integration time is too long, resulting in a “high-in-surrounding-regions and low-in-middle-region” distribution pattern; meanwhile, part of the values become invalid because of oversaturation. Only when the integration time is proper are the depth values reliable, as shown in Fig. 3(b).

Fig. 3. Depth images under different integration times. (a) Too short. (b) Normal. (c) Too long.

2.2.2 Fixed-pattern noise (FPN)

Due to manufacturing limitations, the photon detection efficiency inevitably differs between individual pixels, leading to response inconsistency among pixels. When illuminated with uniform light, the output image of the sensor shows FPN. Additionally, there exists the dark current effect: the leakage current adds random offsets on the sensor even without external illumination, further increasing the FPN. It can be seen from Fig. 4 that the FPN distinctly affects the image quality of both the amplitude image and the depth image. The red boxes show the value distributions in 3D mode; the x and y axes represent the x and y directions respectively, while the z axis represents the amplitude value in Fig. 4(a) and the depth value in Fig. 4(b). It can be observed in the red boxes that both the amplitude values and the depth values are non-uniform in distribution.

Fig. 4. Fixed-pattern noise. (a) The amplitude image. (b) The depth image.

2.2.3 Column ADC variation

Generally, the read-out of the ToF sensor is undertaken by a series of analog-to-digital converters (ADCs) arrayed along the columns, through which the voltage signals are converted into digital signals. Depending on the sensor design, one or more columns are linked to one ADC. Due to manufacturing limitations, there inevitably exist differences between individual ADCs, leading to inconsistent outputs between columns. As a result, vertical-stripe-like error values are added to the output images. Additionally, the differences in the row addresses of the ADCs induce extra errors, as shown in Fig. 5. The red boxes in Fig. 5(a) and Fig. 5(b) represent the amplitude and depth distributions along the x direction in Line 1 and Line 2, respectively. It can be observed in the red boxes that both the amplitude values and the depth values show a zig-zag pattern, which is induced by the differences between the column ADCs.

Fig. 5. Column ADC variation. (a) The amplitude image. (b) The depth image.

2.2.4 Row address variation

Besides the column ADC variation, there exists a transmission difference between rows, due to the address variation. Each row has a certain location relative to the demodulator; the closer the location to the demodulator, the smaller the delay induced by the control circuit. Thus, a row nearer the demodulator has more accurate outputs, and vice versa, as shown in Fig. 6. The red boxes in Fig. 6(a) and Fig. 6(b) represent the amplitude and depth distributions along the y direction in Line 1 and Line 2, respectively. It can be observed in the red boxes that both the amplitude values and the depth values show a zig-zag pattern, which is induced by the differences between the row addresses.

Fig. 6. Row address variation. (a) The amplitude image. (b) The depth image.

3. NUC method

Traditional methods proposed in the literature mostly compensate the non-uniformity based on “posteriori data”, which limits the compensation effect. In contrast, the amplitude data, which can be read out directly from the ToF sensor, are suitable as the “priori data” to correct the pixels because they reflect the received signal intensity of the pixels [17]. Based on this idea, this paper proposes a systematic non-uniformity correction method for correlation-based ToF imaging. Combined with our previous works, a more complete and robust calibration method for the ToF imaging system is presented.

The scheme of the NUC method is shown in Fig. 7.

Fig. 7. Scheme of the proposed NUC method.

The three main parts of the method and their logical relations are illustrated as follows. Firstly, the ToF imaging system suffers from low image quality when the integration time is inappropriate. Thus, we propose an image quality control strategy. The strategy acquires the proper interval of integration time, with $t_{low}$ as the start point and $t_{high}$ as the end point, for high image quality. The strategy effectively suppresses the “overall” non-uniformity caused by an improper integration time. Other errors, e.g., the non-uniformity between pixels caused by manufacturing, the ADC read-out differences and the row addressing variation, induce “local” non-uniformity, which further affects the range accuracy. Therefore, we then propose a two-point NUC method based on LSF, which achieves all-pixel NUC. At last, an SNR optimization scheme based on RMLE is proposed to obtain the optimal image in the interval. Because traditional methods struggle to accurately calculate the true SNR value, especially when the environmental disturbance is non-negligible, the proposed scheme establishes a combined amplitude/phase/intensity error model in statistical terms and is able to acquire the correct SNR value.

3.1 Image quality control strategy

As an “overall” non-uniformity source, the integration time has a significant influence on the image quality. Therefore, it is necessary to obtain the appropriate integration time interval before correcting all pixels. Traditionally, the integration time is set empirically, which is unreliable and inaccurate [20,22]. For this reason, an image quality control strategy is proposed. The specific steps are as follows (a minimal code sketch follows the list),

  • (1) Set the initial integration time as $t_0 = 0\,\mu s$.
  • (2) Calculate the mean amplitude $\overline{A_k}$ by,
    $$\overline{A_k} = \frac{1}{M \times N}\sum\limits_{i = 1}^M \sum\limits_{j = 1}^N \sqrt{X_{ij}^2 + Y_{ij}^2}$$

Then calculate the deviation between the mean amplitude and the reference amplitude $A^\ast$, $\Delta A^\ast = \overline{A_k} - A^\ast$. If $|\Delta A^\ast| \le \varepsilon$, conduct step (3); otherwise conduct step (5). Generally, the reference amplitude $A^\ast$ is empirically determined in the range of 200–1500 LSB, and the tolerance $\varepsilon$ is empirically set to 100 LSB [17].

  • (3) Calculate the effective pixel ratio $\eta$ with,
    $$\eta = \frac{V_{pixel}}{A_{pixel}} \times 100\%$$
    where $V_{pixel}$ is the number of effective pixels (with values in the range of 200–1500 LSB) and $A_{pixel}$ is the total pixel number. If $\eta > \eta^\ast$, conduct step (4); otherwise turn to step (5).
  • (4) Capture the mean amplitude $\overline{A_{k+1}}$ of frame $k+1$ and calculate the difference of the mean amplitudes between frames $k$ and $k+1$, $\Delta A = |\overline{A_{k+1}} - \overline{A_k}|$. If $\Delta A \le \varepsilon$, turn to step (6); otherwise turn to step (5).
  • (5) Calculate the renewed integration time with,
    $$t_{k+1} = -\alpha \Delta A^\ast + t_k$$
    where $\alpha$ is the amplitude coefficient, $t_k$ is the current integration time and $t_{k+1}$ is the modified integration time. Then return to step (2).
  • (6) Set the current integration time as $t_{low}$. Correspondingly, $t_{high}$ is obtained by the same process as steps (1)–(5); the only difference is that the integration time is renewed from a high value (at which the image is oversaturated).
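
As an illustration, the following sketch condenses steps (1)–(6) into a search loop for $t_{low}$. Here `capture_amplitude` is a hypothetical callback standing in for a frame grab, and the default thresholds follow the empirical values quoted above.

```python
import numpy as np

def find_t_low(capture_amplitude, a_ref=800.0, eps=100.0, eta_ref=0.95,
               alpha=0.5, t0=0.0, max_iter=100):
    """Squeeze search for the lower endpoint t_low of the integration
    time interval; times in microseconds, amplitudes in LSB.

    capture_amplitude(t) is a hypothetical callback returning the
    amplitude image captured at integration time t.
    """
    t, prev_mean = t0, None
    for _ in range(max_iter):
        amp = capture_amplitude(t)
        mean_a = float(amp.mean())                    # Eq. (9)
        dev = mean_a - a_ref                          # deviation from A*
        if abs(dev) <= eps:
            # Eq. (10): share of pixels inside the valid 200-1500 LSB band
            eta = float(np.mean((amp >= 200) & (amp <= 1500)))
            if eta > eta_ref and prev_mean is not None \
                    and abs(mean_a - prev_mean) <= eps:
                return t                              # step (6): t_low found
        prev_mean = mean_a
        t = -alpha * dev + t                          # Eq. (11): renew t
    raise RuntimeError("integration time search did not converge")
```

$t_{high}$ is found the same way, starting from a deliberately long (oversaturating) integration time so that Eq. (11) walks the time downward.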

3.2 Two-point NUC method based on LSF

After solving the “overall” problem, the “local” problem, i.e., the non-uniformity caused by the response differences between pixels, must be corrected [23]. For this problem, a two-point NUC method based on LSF is proposed. The specific steps are as follows (a minimal code sketch follows the list),

  • (1) To prevent local abnormal values from affecting the overall correction result, the initial amplitude image with $M \times N$ pixels is partitioned into small windows of size $a \times b$ with step size $d$, as shown in Fig. 8. In this way a reshaped image of size $m \times n$ is obtained, where $m = (M - a + 1)/d$ and $n = (N - b + 1)/d$.
  • (2) The optimal amplitude $\bar{A}$ is obtained by LSF, minimizing the residual sum of squared errors $S$.
    $$S = \min \left\{ \sum\limits_{i = 1}^m \sum\limits_{j = 1}^n {\left[ A(i,j) - \bar{A} \right]}^2 \right\}$$
    where $A(i,j)$ represents the amplitude value at the point $(i,j)$.
  • (3) Take the two endpoints of the integration time interval, $t_{low}$ and $t_{high}$, as the feature points. Capture the amplitude images at these points and calculate the gain and offset coefficients with,
    $$\bar{A}_{t_{low}} = k(i,j)A_{t_{low}}(i,j) + b(i,j)\,,\qquad \bar{A}_{t_{high}} = k(i,j)A_{t_{high}}(i,j) + b(i,j)$$
    $$k(i,j) = \frac{\bar{A}_{t_{low}}(i,j) - \bar{A}_{t_{high}}(i,j)}{A_{t_{low}}(i,j) - A_{t_{high}}(i,j)}\,,\qquad b(i,j) = \frac{A_{t_{low}}(i,j)\bar{A}_{t_{high}}(i,j) - \bar{A}_{t_{low}}(i,j)A_{t_{high}}(i,j)}{A_{t_{low}}(i,j) - A_{t_{high}}(i,j)}$$
    where $\bar{A}_{t_{low}}$ and $\bar{A}_{t_{high}}$ are the optimal amplitudes of all pixels at integration times $t_{low}$ and $t_{high}$ respectively, and $k(i,j)$ and $b(i,j)$ are the gain and offset coefficients of pixel $(i,j)$.
  • (4) After obtaining the appropriate integration time $t_i$, all pixels are corrected by,
    $$\hat{A}_{t_i}(i,j) = k(i,j)A_{t_i}(i,j) + b(i,j)$$
    where $\hat{A}_{t_i}(i,j)$ and $A_{t_i}(i,j)$ represent the amplitudes of pixel $(i,j)$ at integration time $t_i$ after and before correction, respectively.
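
A compact sketch of steps (2)–(4) follows, under the simplifying assumption that the LSF target is taken over the whole frame (the paper first averages over the $a \times b$ windows of step (1) to reject local outliers). Note that the minimizer of Eq. (12) by least squares is simply the mean of the (windowed) amplitude image.

```python
import numpy as np

def two_point_nuc_coeffs(amp_low, amp_high):
    """Per-pixel gain/offset from two amplitude images (Eqs. (13)-(14)).

    amp_low, amp_high: amplitude images captured at t_low and t_high.
    """
    a_bar_low = amp_low.mean()     # LSF-optimal target amplitude at t_low
    a_bar_high = amp_high.mean()   # LSF-optimal target amplitude at t_high
    denom = amp_low - amp_high
    k = (a_bar_low - a_bar_high) / denom                       # gain
    b = (amp_low * a_bar_high - a_bar_low * amp_high) / denom  # offset
    return k, b

def apply_nuc(amp, k, b):
    """Eq. (15): correct an amplitude image with the stored coefficients."""
    return k * amp + b
```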

3.3 SNR optimization scheme based on RMLE

Although all pixels have been compensated over the integration time interval, a best integration time exists within the interval. Therefore, an SNR optimization scheme based on RMLE is put forward to obtain the optimal image. Traditional methods struggle to accurately calculate the true SNR value, especially when the environmental disturbance is non-negligible. For this problem, the proposed scheme establishes a combined amplitude/phase/intensity error model in statistical terms and is able to acquire the correct SNR value. Note that the environmental disturbance generally includes variations of the reflectivity, the temperature and the ambient light.

In the 4-quad method, the ideal sampling in a single period, $\widetilde{DCS}_i$ ($i = 0,1,2,3$), is related to the number of received photons and to the measured values over multiple frames. The actual value $\widetilde{DCS}_i$ is determined by the total number of photons collected by the sensor at the given integration time. The statistical distributions of the amplitude, phase and intensity can be approximately regarded as normal [24].

Assume that $A, \varphi, B$ are the estimated values of the amplitude, phase and intensity, and $\tilde{A}, \tilde{\varphi}, \tilde{B}$ are the measured values. According to the normal distribution,

$$\tilde{X} \sim N\left( A\cos\varphi, B/2 \right)\,,\qquad \tilde{Y} \sim N\left( A\sin\varphi, B/2 \right)$$

From Eq. (16), the joint density function of the amplitude and the phase is given by,

$$P\left( \tilde{A},\tilde{\varphi}\,|\,A,\varphi,\sqrt{B/2} \right) = \frac{\tilde{A}}{\pi B}\exp\left\{ -\frac{1}{B}\left[ A^2 + \tilde{A}^2 - 2A\tilde{A}\cos\left( \tilde{\varphi} - \varphi \right) \right] \right\}$$

Then the marginal probability density of $\tilde{A}$ can be calculated by,

$$P\left( \tilde{A}\,|\,A,\sqrt{B/2} \right) = \int_0^{2\pi} P\left( \tilde{A},\tilde{\varphi} \right) d\tilde{\varphi} = \frac{\tilde{A}}{B/2}\exp\left( -\frac{A^2 + \tilde{A}^2}{B} \right) I_0\left( \frac{A\tilde{A}}{B/2} \right)$$
where $I_0$ is the zeroth-order modified Bessel function [25]; i.e., the measured amplitude $\tilde{A}$ follows a Rician distribution.
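
Numerically, Eq. (18) is the Rician density with $\nu = A$ and $\sigma^2 = B/2$. A small sketch using SciPy's exponentially scaled Bessel function (for stability at large arguments) is:

```python
import numpy as np
from scipy.special import i0e

def amplitude_pdf(a_meas, a, b):
    """Marginal density of the measured amplitude, Eq. (18).

    Uses I0(x) = i0e(x) * exp(x) so the exponentials combine into
    exp(-(a - a_meas)**2 / b), avoiding overflow for large A*A_meas/B.
    """
    sigma2 = b / 2.0
    x = a * a_meas / sigma2
    return (a_meas / sigma2) * np.exp(-(a - a_meas) ** 2 / b) * i0e(x)
```

Integrating this density over $\tilde{A} \in [0, \infty)$ returns 1 to numerical precision, which is a quick sanity check of the model.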

As a sum of normally distributed variables, the intensity $B$ has the probability density,

$$P\left( \tilde{B} \right) = \frac{1}{\sqrt{2\pi \sum\limits_{i = 0}^3 \sigma_i^2}}\exp\left( -\frac{\left[ B - \sum\limits_{i = 0}^3 \mu_i \right]^2}{2\sum\limits_{i = 0}^3 \sigma_i^2} \right)$$
where $\mu_i$ represents the $i$-th DCS value and $\sigma_i^2$ is one quarter of the $i$-th DCS value.

For the ToF imaging system, the range error is related to multiple sources and can be given by [26],

$$\sigma = \frac{c}{4\sqrt{2}\,\pi f} \cdot \frac{\sqrt{B + A}}{c_d A}$$
where ${c_d}$ is the modulation contrast.

Traditionally, the SNR of the received signal can be given by [27],

$$SNR = \frac{A}{\sqrt{B/2}}$$

Low reflectivity of the object, improper temperature or a long distance can lead to low SNR. Increasing the integration time is one solution, but this can result in motion blur and oversaturation [24]. To solve this problem, we combine multiple intensity and amplitude images while considering the statistical properties of the signals,

$$SNR = \frac{\sqrt{2}}{\sqrt{K}}\frac{\sum\nolimits_{i = 1}^K \tilde{A}}{\sqrt{\sum\nolimits_{i = 1}^K \tilde{B}}}$$
where K is the number of the acquired frames.
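
A direct transcription of Eq. (22), assuming per-frame amplitude and intensity samples (e.g., ROI means) are already available:

```python
import numpy as np

def snr_multiframe(amp_frames, int_frames):
    """Multi-frame SNR of Eq. (22); both inputs are length-K sequences."""
    amp_frames = np.asarray(amp_frames, dtype=float)
    int_frames = np.asarray(int_frames, dtype=float)
    k = len(amp_frames)
    return np.sqrt(2.0 / k) * amp_frames.sum() / np.sqrt(int_frames.sum())
```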

However, this estimator is not valid, especially in low-SNR situations, where the model of $\tilde{A}$ becomes non-Gaussian, leading to serious calculation errors [28]. For this problem, we propose an iterative method to update the error values. Compared with setting an initial value, this method converges faster and has better real-time performance.

Assume that $\tilde{A}$ represents a set of noisy amplitude samples. The maximum likelihood function of the amplitude is given by,

$$L\left( \tilde{A}\,|\,A,\sqrt{B/2} \right) = \prod\limits_{i = 1}^K P\left( \tilde{A}(i)\,|\,A,\sqrt{B/2} \right) = \prod\limits_{i = 1}^K \frac{\tilde{A}(i)}{B/2}\exp\left( -\frac{A^2 + \tilde{A}(i)^2}{B} \right) I_0\left( \frac{A\tilde{A}(i)}{B/2} \right)$$

The maximum likelihood estimate is defined by $\tilde{A}_{ML} = \arg\max(\log L)$ [29]; it is obtained by setting the partial derivative of $\log L$ with respect to $A$ to zero,

$$\left. \frac{\partial \log L}{\partial A} \right|_{\tilde{A}_{ML}} = 0$$

To obtain precise error distribution parameters, the accumulation of multi-frame amplitude errors is utilized to correct the estimate of the next frame [30,31]. In detail,

$$\tilde{A}(i + 1) = \frac{1}{n}\sum\limits_{i = 1}^n \tilde{A}(i) + \delta\left( A(i) - \tilde{A}(i) \right)$$
where $\delta$ is the modification coefficient of the amplitude. The estimates $\tilde{A}_{ML}$ and $\tilde{B}_{ML}$ are then given by,
$$\tilde{A}_{ML} - \frac{1}{K}\sum\nolimits_{i = 1}^K \tilde{A}(i)\,\frac{I_1\left( \tilde{A}(i)\tilde{A}_{ML}/(\tilde{B}_{ML}/2) \right)}{I_0\left( \tilde{A}(i)\tilde{A}_{ML}/(\tilde{B}_{ML}/2) \right)} = 0$$
where $I_1$ is the first-order modified Bessel function and $\tilde{B}_{ML} = \frac{1}{K}\sum\nolimits_{i = 1}^K \tilde{B}(i)$.

Therefore, the estimation of SNR is given by,

$$\widetilde{SNR} = \frac{\sqrt{2}\,\tilde{A}_{ML}}{\sqrt{\tilde{B}_{ML}}}$$

The SNR estimation method above is used to calculate the SNR values at different integration times within the interval. The optimal integration time is then obtained by finding the highest SNR. Finally, the NUC of the ToF imaging system is achieved.
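
The following sketch shows how the RMLE estimate could be computed, solving the fixed point of Eq. (26) with the numerically stable Bessel ratio $I_1/I_0$ and returning Eq. (27); frame acquisition and the correction step of Eq. (25) are abstracted into the input samples, and the initialization is our own choice.

```python
import numpy as np
from scipy.special import i0e, i1e

def rmle_snr(amp_frames, int_frames, n_iter=100, tol=1e-6):
    """ML amplitude estimate under the Rician model and the resulting SNR.

    amp_frames, int_frames: K per-frame amplitude/intensity samples.
    Iterates A <- mean( A_i * I1(x_i)/I0(x_i) ), x_i = A_i*A/(B/2),
    the fixed point of Eq. (26); i1e/i0e cancel the exponential scaling.
    """
    a = np.asarray(amp_frames, dtype=float)
    b_ml = float(np.mean(int_frames))       # intensity ML estimate
    a_ml = float(a.mean())                  # initial guess
    for _ in range(n_iter):
        x = a * a_ml / (b_ml / 2.0)
        a_new = float(np.mean(a * i1e(x) / i0e(x)))
        if abs(a_new - a_ml) < tol:
            break
        a_ml = a_new
    return np.sqrt(2.0) * a_ml / np.sqrt(b_ml)   # Eq. (27)
```

Sweeping this estimate over the integration times in $[t_{low}, t_{high}]$ and picking the maximum yields the optimal integration time.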

4. Experiments and discussions

To verify the effect of the NUC method, we design the implementation scheme and establish the experimental system.

4.1 NUC experimental scheme

4.1.1 Experimental description

The NUC experimental system mainly includes the ToF imaging system, the reflecting plate, the DC power supply, the rail and the PC, as shown in Fig. 9. As described in Section 2, the ToF imaging system is composed of the emitting unit, the receiving unit, the processing unit and the power unit. A standard reflecting plate is fixed on the rail and chosen as the calibration board.

Fig. 8. The diagram of the partition process.

Fig. 9. The NUC experimental system.

4.1.2 NUC process

The detailed implementation process of the NUC is as follows,

  • (1) Set the initial distance between the ToF imaging system and the reflecting plate. Set the initial integration time and capture the amplitude/depth images.
  • (2) Calculate the mean amplitude and the reference amplitude, as well as the deviation.
  • (3) Calculate the effective pixel ratio and renew the integration time. Repeat steps (1)–(3) until the proper integration time interval is captured.
  • (4) Reshape the current amplitude image into small blocks and calculate the optimal amplitude with LSF.
  • (5) Capture the amplitude images at the integration times obtained in (3) and calculate the gain and offset coefficients of the two-point model.
  • (6) Calculate the SNR at different integration times in the interval based on RMLE. Then find the highest SNR and the corresponding optimal integration time.

4.1.3 Correction data analysis

In the correction process, a reflecting plate with a reflectivity of 80% is chosen as the measuring object. The amplitude and depth images before and after correction are obtained at integration times of 500 µs and 1500 µs respectively. The results are shown in Figs. 10 and 11. Central regions are chosen as the regions of interest (ROI) to help observe the details of the images. From Figs. 10(a) and 11(a), it is obvious that the quality of the amplitude images before correction is relatively low. Due to the flat-field effect, the central regions of the images are darker and the surrounding regions are brighter; meanwhile, there is distinct stripe-like noise. The non-uniform effect is effectively eliminated after correction, as shown in Figs. 10(b) and 11(b). Compared with the amplitude images, the flat-field errors are more obvious in the depth images, as shown in Figs. 10(c) and 11(c): the depth values fluctuate distinctly. After correction, the errors are effectively eliminated, as shown in Figs. 10(d) and 11(d).

Fig. 10. Images before/after correction under an integration time of 500 µs. Panels (a) and (b) are the amplitude images before/after correction. Panels (c) and (d) are the depth images before/after correction.

Fig. 11. Images before/after correction under an integration time of 1500 µs. Panels (a) and (b) are the amplitude images before/after correction. Panels (c) and (d) are the depth images before/after correction.

To better illustrate the effect of the method, all pixel values in Fig. 10 and Fig. 11 are extracted for a statistical analysis, as shown in Fig. 12. Figures 12(a) and (b) are the results under integration times of 500 µs and 1500 µs respectively. We first plot all pixel values along the x direction, where the values before and after correction are colored in gray and black respectively. Then, to better explain the distribution features, we plot several curves based on the points. The results under 500 µs and 1500 µs show similar features, so we choose 500 µs (Fig. 12(a)) for analysis. For the amplitude data, the values before correction are discrete, fluctuating in a large range of about 2500–3300 LSB, while after correction the amplitude data are more concentrated, in a range of about 2680–2720 LSB. The blue curves are the amplitude fitting curves. The fitting curve does not change significantly before and after correction, which means FPN occupies the main part of the error: the FPN does not change the mean value but enlarges the dispersion of the data. For the depth data, the values before correction are higher in the middle region (up to about 1300 mm) and lower on both sides (down to about 800 mm). The fitting curve further verifies this distribution pattern, which means the column ADC variation occupies the main part of the error: the ADC variation brings distinct variation between columns. After correction, this phenomenon is effectively eliminated, and the values are concentrated within a range of about 1120–1140 mm.

Fig. 12. The statistical analysis. (a) The results under an integration time of 500 µs. (b) The results under an integration time of 1500 µs.

4.2 NUC experimental verification

To verify the effect of the NUC method in real scenes, we choose four different 3D scenes for the verification experiments. The testing objects are a set of plaster castings, including a ball (used to test the effect on a continuous surface), a cube (continuous surface), a statue of Alexander (irregular surface) and a plaster assemblage (irregular surface), as shown in Fig. 13.

Fig. 13. The testing objects, where (a)-(d) represent the ball, the cube, the statue of Alexander and the plaster assemblage respectively.

The amplitude/depth images before/after correction are captured and analyzed. During the whole process, the test environment and the other parameters remain the same.

Figure 14 shows the amplitude images obtained before and after correction, where Figs. 14(a)-(d) are the images before correction and (e)-(h) are the corresponding images after correction. For the continuous surfaces, the amplitude images before correction (Figs. 14(a) and (b)) show distinct vertical-stripe-like non-uniformity, and the FPN is non-negligible. After correction (Figs. 14(e) and (f)), the vertical-stripe pattern and the FPN are effectively eliminated. For the irregular surfaces, the amplitude images before correction (Figs. 14(c) and (d)) show low contrast, fuzzy outlines and ambiguous details. After correction (Figs. 14(g) and (h)), the ambiguity of the details is significantly eliminated, which brings a distinct improvement of the contrast.

Fig. 14. Amplitude images before/after correction. Panels (a)-(d) are the images before correction and (e)-(h) are the corresponding images after correction.

Similarly, the results for the depth images are shown in Fig. 15. For the continuous surfaces, the depth images before correction (Figs. 15(a) and (b)) contain obvious noise points as well as invalid points. After correction (Figs. 15(e) and (f)), the number of noise and invalid pixels is distinctly decreased. For the irregular surfaces, it is hard to distinguish the features of the objects (e.g., the nose and the eyes) in the depth images before correction (Figs. 15(c) and (d)) due to the flat-field error. After correction (Figs. 15(g) and (h)), the images are clearer and more detailed; meanwhile, the depth resolving ability is significantly improved.

Fig. 15. Depth images before/after correction. Panels (a)-(d) are the images before correction. Panels (e)-(h) are the corresponding images after correction.

To better illustrate the changes before and after correction, a statistical analysis is conducted, as shown in Fig. 16, where (a)-(d) represent the amplitude value distributions in the four scenes and (e)-(h) are the corresponding depth value distributions. Data before/after correction are colored in blue/orange respectively. The standard deviation (STD), which helps evaluate the dispersion of the data, is calculated and labeled. For the continuous surfaces (Figs. 16(a), (b), (e) and (f)), the distributions after correction are more concentrated and the STDs are smaller, which indicates that the flat-field error is effectively corrected. For the irregular surfaces (Figs. 16(c), (d), (g) and (h)), the details of the images are better restored after correction, resulting in a wider range of values; therefore, the values are more dispersed, with higher STDs.

Fig. 16. The distribution histograms under different scenes. Panels (a)-(d) show the amplitude value distributions in four scenes and panels (e)-(h) show the corresponding depth value distributions.

Further, a quantitative analysis is conducted. The root-mean-square error (RMSE), signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) are chosen to evaluate the quality of the depth images.

The RMSE is defined as the square root of the mean squared error, calculated from the difference between the depth image and the reference image. The PSNR is the ratio between the maximum possible power of the image and the power of the corrupting noise. The RMSE and PSNR are expressed as:

$$RMSE = \sqrt{\frac{1}{MN}\sum\limits_{i = 1}^M \sum\limits_{j = 1}^N \left[ D_R - D_M \right]^2}$$
$$PSNR = 20\log_{10}\left( \frac{\max\{D_M\}}{RMSE(D_R, D_M)} \right)$$
where $D_R$ is the reference depth data, $D_M$ is the measured depth data, and $M$, $N$ are the numbers of pixels in the column and row directions respectively. In addition, the SNR is calculated by Eq. (27).
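
For reference, the two image metrics can be computed as follows (a minimal sketch; the reference depth image is assumed to be available):

```python
import numpy as np

def rmse(d_ref, d_meas):
    """Eq. (28): root-mean-square error between two depth images."""
    d_ref, d_meas = np.asarray(d_ref, float), np.asarray(d_meas, float)
    return float(np.sqrt(np.mean((d_ref - d_meas) ** 2)))

def psnr(d_ref, d_meas):
    """Eq. (29): peak signal-to-noise ratio in dB."""
    return 20.0 * np.log10(float(np.max(d_meas)) / rmse(d_ref, d_meas))
```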

The results are shown in Table 1. It is obvious that after correction the RMSEs are significantly reduced (taking scene 3 as an instance, the RMSE is reduced from 27.5 to 13.7 mm), which means the non-uniformity of the images is effectively eliminated. The SNRs are improved (from 32.4 to 39.0 dB), which means the noise disturbance is suppressed. Meanwhile, the PSNRs are improved (from 32.9 to 39.3 dB), which means the image quality of the depth images is effectively improved. The quantitative analysis shows that the proposed method significantly reduces the influence of the improper integration time, the FPN, the column ADC variation and the row address variation on the depth data, and effectively improves the range accuracy of the ToF imaging system.

Table 1. RMSE, SNR and PSNR results

To better illustrate the effectiveness of the proposed method, a comparison study is conducted. Several methods relevant to ToF depth data have been applied to NUC, including non-local means (NLM) [32], block matching over 3-D (BM3D) [33], cross-bilateral filtering with a depth hypothesis bilateral regularizer (CBF-HBR) [34] and local polynomial approximation with the intersection of confidence intervals rule (LPA-ICI) [35]. Scene 3 and scene 4 are chosen as the testing objects, and the results are shown in Table 2. It is clear that the proposed method has the best NUC performance compared with the methods in [32–35].

Table 2. Detailed performance compared with existing methods

5. Conclusion

In correlation-based ToF imaging, the non-uniformity reduces the imaging quality, which significantly affects the range accuracy. In this paper, we analyze the sources of the non-uniformity in the ToF imaging system and propose a systematic non-uniformity correction method for correlation-based ToF imaging. We use the amplitude data, which can be read out directly from the ToF sensor, to acquire the gain and offset coefficients of the two-point NUC model. All-pixel correction is then achieved with the model.

To verify the effectiveness and practicability of the proposed NUC method, we establish the ToF NUC system. The experiments are conducted with four different scenes. The characteristics of the targets in both the depth images and the amplitude images become clearer after correction, which means the image quality is improved by the method. In addition, we conduct a quantitative analysis with the data chosen in the ROI. The RMSE of the depth image is significantly reduced after compensation (from 27.5 to 13.7 mm for scene 3), while the SNR (from 32.4 to 39.0 dB) and the PSNR (from 32.9 to 39.3 dB) are distinctly improved. The results show that the proposed method effectively eliminates the non-uniformity of the ToF imaging system. Compared with the existing methods, the proposed method offers a more precise solution for NUC in ToF imaging systems (e.g., a 10% RMSE decrease relative to the method in [35]).

Based on the theoretical analysis and experimental results, the particularities of the non-uniformity in the ToF imaging system are discovered and discussed. We believe this study provides new insights into the understanding of noise, and also provides effective guidance for researchers realizing NUC for other three-dimensional measuring instruments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Nguyen, T. Tan, Y. Wang, and Z. Wang, “Three-dimensional shape reconstruction from single-shot speckle image using deep convolutional neural networks,” Opt. Lasers Eng. 143(1), 106639 (2021). [CrossRef]  

2. E. L. Francois, A. Griffiths, J. D. Mckendry, H. C. Chen, D. D. U. Li, R. K. Henderson, J. Herrnsdorf, M. D. Dawson, and M. J. Strain, “Combining time of flight and photometric stereo imaging for 3d reconstruction of discontinuous scenes,” Opt. Lett. 46(15), 3612–3615 (2021). [CrossRef]  

3. W. Wang, P. Liu, R. Ying, J. Wang, J. Qian, J. Jia, and J. Gao, “A High-Computational Efficiency Human Detection and Flow Estimation Method Based on TOF Measurements,” Sensors 19(3), 729 (2019). [CrossRef]  

4. Y. He and S. Y. Chen, “Recent Advances in 3D Data Acquisition and Processing by Time-of-Flight Camera,” IEEE Access 7, 12495–12510 (2019). [CrossRef]  

5. M. J. Sun, M. Edgar, G. Gibson, B. Q. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

6. Y. J. Fang, X. Wang, Z. B. Sun, B. H. Su, and J. W. Xue, “Method to improve the accuracy of depth images based on differential entropy,” Opt. Eng. 60(03), 033105 (2021). [CrossRef]  

7. D. Stoppa, L. Pancheri, M. Scandiuzzo, L. Gonzo, and A. Simoni, “A CMOS 3-D imager based on single photon avalanche diode,” IEEE Trans. Circuits Syst. I 54(1), 4–12 (2007). [CrossRef]  

8. E. Conca, I. Cusini, F. Severini, R. Lussana, and F. A. Villa, “Gated SPAD Arrays for Single-Photon Time-Resolved Imaging and Spectroscopy,” IEEE Photonics J. 11(6), 1–10 (2019). [CrossRef]  

9. Y. He, B. Liang, Z. Yu, J. He, and J. Yang, “Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras,” Sensors 17(1), 92 (2017). [CrossRef]  

10. S. Foix, G. Alenya, and C. Torras, “Lock-in Time-of-Flight (ToF) Cameras: A Survey,” IEEE Sensors J. 11(9), 1917–1926 (2011). [CrossRef]  

11. C. Lee, S. Y. Kim, B. Choi, Y. M. Kwon, and Y. S. Ho, “Depth error compensation for camera fusion system,” Opt. Eng. 52(7), 073103 (2013). [CrossRef]  

12. J. Jung, J. Y. Lee, Y. Jeong, and I. S. Kweon, “Time-of-flight sensor calibration for a color and depth camera pair,” IEEE Trans. Pattern Anal. Mach. Intell. 37(7), 1501–1513 (2015). [CrossRef]  

13. T. Huang, K. Qian, and Y. Li, “All Pixels Calibration for ToF Camera,” in 2018 IOP Conference Series: 2nd International Symposium on Resource Exploration and Environmental Science (IOP, Ordos, China, 2018), pp. 170–178.

14. A. Reinhardt, C. Bradley, A. Hecht, and P. Mcmanamon, “Windowed region-of-interest non-uniformity correction and range walk error correction of a 3d flash lidar camera,” Opt. Eng. 60(02), 023103 (2021). [CrossRef]  

15. M. Lindner, I. Schiller, A. Kolb, and R. Koch, “Time-of-Flight sensor calibration for accurate range sensing,” Comput. Vis. Image Underst. 114(12), 1318–1328 (2010). [CrossRef]  

16. D. D. Lichti, X. Qi, and T. Ahmed, “Range camera self-calibration with scattering compensation,” ISPRS-J. Photogramm. Remote Sens. 74, 101–109 (2012). [CrossRef]  

17. M. Georgiev, R. Bregovic, and A. Gotchev, “Fixed-pattern noise modeling and removal in time-of-flight sensing,” IEEE Trans. Instrum. Meas. 65(4), 808–820 (2016). [CrossRef]  

18. Y. He and S. Chen, “Error correction of depth images for multi-view time-of-flight vision sensors,” Int. J. Adv. Robot. Syst. 17(4), 172988142094237 (2020). [CrossRef]  

19. I. Schiller, C. Beder, and R. Koch, “Calibration of a PMD-camera using a planar calibration pattern together with a multi-camera setup,” ISPRS Int. J. Geo-Inf. 21, 297–302 (2008).

20. S. Hussmann, F. Knoll, and T. Edeler, “Modulation method including noise model for minimizing the wiggling error of tof cameras,” IEEE Trans. Instrum. Meas. 63(5), 1127–1136 (2014). [CrossRef]  

21. X. Q. Wang, P. Song, and W. Y. Zhang, “An Improved Calibration Method for Photonic Mixer Device Solid-State Array Lidars Based on Electrical Analog Delay,” Sensors 20(24), 7329 (2020). [CrossRef]  

22. H. Shim and S. Lee, “Hybrid exposure for depth imaging of a time-of-flight depth sensor,” Opt. Express 22(11), 13393–13402 (2014). [CrossRef]  

23. M. Feigin, R. Whyte, A. Bhandari, A. Dorington, and R. Raskar, “Modeling wiggling as a multi-path interference problem in AMCW ToF imaging,” Opt. Express 23(15), 19213–19225 (2015). [CrossRef]  

24. Y. J. Fang, X. Wang, Z. B. Sun, K. Zhang, and B. H. Su, “Study of the depth accuracy and entropy characteristics of a tof camera with coupled noise,” Opt. Lasers Eng. 128(5), 106001 (2020). [CrossRef]  

25. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover Publications, 1972).

26. G. Bouquet, J. Thorstensen, K. Bakke, and P. Risholm, “Design tool for ToF and SL based 3D cameras,” Opt. Express 25(22), 27758 (2017). [CrossRef]  

27. F. Mufti and R. Mahony, “Statistical analysis of signal measurement in time-of-flight cameras,” ISPRS-J. Photogramm. Remote Sens. 66(5), 720–731 (2011). [CrossRef]  

28. A. N. Morabito, D. B. Percival, J. D. Sahr, Z. M. P. Berkowitz, and L. E. Vertatschitsch, “Rician parameter estimation using phase information in low SNR environments,” IEEE Commun. Lett. 12(4), 244–246 (2008). [CrossRef]  

29. J. M. Bonny, J. P. Renou, and M. Zanca, “Optimal measurement of magnitude and phase from MR data,” J. Magn. Reson. Ser. B 113(2), 136–144 (1996). [CrossRef]  

30. Y. M. Kim, D. Chan, C. Theobalt, and S. Thrun, “Design and calibration of a multiview TOF sensor fusion system,” in 2008 Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR) (IEEE, Anchorage, Alaska, USA, 2008), pp. 1–7.

31. T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27(13), 18858 (2019). [CrossRef]  

32. M. Georgiev, A. Gotchev, and M. Hannuksela, “De-noising of distance maps sensed by time-of-flight devices in poor sensing environment,” in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, Vancouver, BC, Canada, 2013), pp. 1533–1537.

33. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16(8), 2080–2095 (2007). [CrossRef]

34. S. Smirnov, A. Gotchev, and K. Egiazarian, “Methods for depth-map filtering in view-plus-depth 3D video representation,” EURASIP J. Adv. Signal Process. 2012(1), 25 (2012). [CrossRef]  

35. V. Katkovnik, J. Astola, and K. Egiazarian, “Phase local approximation (PhaseLa) technique for phase unwrap from noisy data,” IEEE Trans. Image Process. 17(6), 833–846 (2008). [CrossRef]  
