Abstract

Recursive estimation of nonlinear functions of the return power in a lidar system entails the use of a nonlinear filter; this also permits processing of returns in the presence of multiplicative noise (speckle). The use of the extended Kalman filter for estimation of return power, log power, and speckle noise (which is regarded as a system rather than a measurement component) is assessed here with simulated data and with coherent lidar returns. Reiterative processing of data samples, using system models comprising a random walk signal together with an uncorrelated speckle term, leads to self-consistent estimation of the parameters.

© 1989 Optical Society of America


References

  1. B. J. Rye, R. M. Hardesty, “Time Series Identification and Kalman Filtering Techniques for Doppler Lidar Velocity Estimation,” Appl. Opt. 28, 879–891 (1989).
  2. A. Gelb, Ed., Applied Optimal Estimation (MIT Press, Cambridge, MA, 1974).
  3. A. P. Sage, J. L. Melsa, Estimation Theory with Applications to Communications and Control (McGraw-Hill, New York, 1971).
  4. An alternative approach to the measurement Eq. (4) is logarithmic transformation of the measurement: using y = ln[Y − U], x = ln[S], and w = ln[W], in the absence of additive noise we obtain a linear measurement equation like that of Eq. (2) but with w appearing as the additive noise term. A complication arises if w does not have zero mean. If x is known to be constant and the statistics of w are stationary, the problem is simply one of determining the resulting bias in the average14; otherwise it is necessary to show that the variation of the bias is negligible (e.g., less than other sources of error) over the range of parameters encountered. For differential log ratio measurements of the form x = ln[S1/S2], y = ln[(Y1 − U1)/(Y2 − U2)], the problem is mitigated because w = ln[W1/W2] does have zero mean provided the statistics of W1 and W2 are identical. This removes bias in the absence of extra additive noise and leads7 to relatively small bias provided the variances R1 and R2 of this noise are small or S1/S2 ∼ R1R2. (A numerical illustration of this log-transform bias is sketched after this list.)
  5. R. E. Warren, “Adaptive Kalman-Bucy Filter for Differential Absorption Lidar Time Series Data,” Appl. Opt. 26, 4755–4760 (1987).
  6. Strictly, P can be interpreted as the covariance of the estimate only for a linear filter with a known system model. For nonlinear filters, including adaptive filters designed to determine the properties of an unknown system model, P should at best be regarded as a useful approximation to the covariance matrix; here we use the term estimate covariance matrix for brevity.
  7. B. J. Rye, “Power Ratio Estimation in Incoherent Backscatter Lidar: Heterodyne Receiver with Square Law Detection,” J. Climate Appl. Meteorol. 22, 1899–1913 (1983).
  8. Because the approximation described makes the linear filter slightly suboptimal and might arguably lead to results that are prejudiced against it, the process was repeated with the power, rather than the log power, generated using a random walk [using Eqs. (6a) and (2)] and filtered optimally with the constant value for Q1 from the simulation; the log power was then filtered suboptimally using a variable Q1 generated by Eq. (14). The conclusions drawn from the results were unaffected by these changes.
  9. Inspection of Eqs. (13c) and (13d) indicates that, if the variance terms are normalized12 to Q1, the unknowns can be combined to leave only two, Q1m and Q1R. It is believed that physical interpretation calls for knowledge of all three despite the additional computational burden entailed.
  10. J. A. Nelder, R. Mead, “A Simplex Method for Function Minimization,” Comput. J. 7, 308–313 (1965).
  11. R. K. Mehra, “On the Identification of Variances and Adaptive Kalman Filtering,” IEEE Trans. Autom. Control AC-15, 175–184 (1970).
  12. R. H. Jones, “Maximum Likelihood Fitting of ARMA Models to Time Series with Missing Observations,” Technometrics 22, 389–395 (1980).
  13. B. J. Rye, “A Wavelength Switching Algorithm for Single Laser Differential Absorption Lidar Systems,” Proc. Soc. Photo-Opt. Instrum. Eng. 1062, 267–273 (1989).
  14. D. S. Zrnic, “Mean Power Estimation with a Recursive Filter,” IEEE Trans. Aerosp. Electron. Syst. AES-13, 281–289 (1977).
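The complication noted in reference 4, that w = ln[W] need not have zero mean, can be illustrated numerically. The following is a minimal sketch (not part of the paper) assuming unit-mean, exponentially distributed single-shot speckle intensity; for that model E[ln W] = −γ (Euler's constant), so the log-transformed measurement is biased even though E[W] = 1, while the bias cancels for the log ratio of two identically distributed speckle terms. The paper's averaged speckle of order m would instead follow a gamma (chi-square) law, which changes the bias value but not the point being illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumption: unit-mean exponential intensity (single-shot speckle).
W1 = rng.exponential(scale=1.0, size=n)
W2 = rng.exponential(scale=1.0, size=n)

print("E[W]        ~", W1.mean())                   # ~1.0, so ln E[W] ~ 0
print("E[ln W]     ~", np.log(W1).mean())           # ~ -0.5772, i.e., biased
print("-gamma       =", -np.euler_gamma)            # analytic value of E[ln W]
print("E[ln W1/W2] ~", np.log(W1 / W2).mean())      # ~ 0: bias cancels for the log ratio
```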


Figures (4)

Fig. 1

Block diagrams of various measurement models: (a) a linear measurement equation [Eq. (2)]; (b) the system variable is the logarithm of the measurement variable [Eq. (3)]; (c) the system variables are a random walk and multiplicative noise modeled using the linear approximation [Eqs. (6) and (7a)]; z⁻¹ is the backward shift operator, i.e., z⁻¹[x(k)] = x(k − 1); (d) the logarithm of the first system component is modeled as a random walk, and the second is multiplicative noise [Eqs. (6) and (7b)]; (e) as (d), except that the multiplicative noise is modeled using the exponential approximation.

Fig. 2

Comparison of (1) the logarithm of a power estimate that is determined from a linear filter and (2) the log power estimate obtained directly from a nonlinear filter: (a) log power simulated as a random walk with Q1 = 1.1 × 10⁻⁴ (arbitrary units); (b) simulated power measurement data degraded by additive noise; the variance of the latter is R = 10⁵; (c) estimated standard deviation (square root of the covariance estimate) obtained from the nonlinear filter; (d) difference between the nonlinear filter estimate for the log power and the true (simulated) log power [Fig. 2(a)], as a fraction of the standard deviation estimate from the nonlinear filter [Fig. 2(c)]; (e) difference between the logarithm of the linear filter estimate for the power and the true log power, as a fraction of the standard deviation estimate from the nonlinear filter.

Fig. 3

Evaluation of signal covariance estimates obtained using synthetic data generated with a random walk model for the power signal together with multiplicative noise generated using the linear model [see Table I and Eq. (6)]: (a) simulated signal including multiplicative noise; (b) estimated standard deviation of log power; (c) difference between simulated data and estimated values as a fraction of the estimated standard deviation. (A sketch of how such synthetic data could be generated follows the figure captions.)

Fig. 4

Data sample from a range of 3 km, processed for power and log power estimates: (a) measured return superimposed on the estimated return signal power obtained from the filter for power; (b) log power estimate from the filter for log power; (c) histogram of the power filter estimates of the speckle and the chi-square density function of order 14; the latter is normalized to make the areas under the two curves identical.
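The synthetic data used for Figs. 2 and 3 and Table I are described as a random walk in (log) power combined with multiplicative speckle and additive noise. The sketch below shows one way such a record could be generated; it is an illustration only, with Q1 = 1.1 × 10⁻⁴ and R = 10⁵ taken from the Fig. 2 caption, a speckle order m = 14 taken from Fig. 4, and the record length and initial power chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 1000        # record length (arbitrary choice)
Q1 = 1.1e-4     # random-walk variance of the log power (Fig. 2 caption)
R = 1.0e5       # variance of the additive noise (Fig. 2 caption)
m = 14          # speckle order, i.e., number of independent speckles averaged (Fig. 4)
S0 = 1.0e3      # initial mean power, arbitrary units (illustrative)

# Log power as a random walk: x1(k) = x1(k-1) + w1(k), with Var[w1] = Q1.
x1 = np.log(S0) + np.cumsum(rng.normal(0.0, np.sqrt(Q1), K))
S = np.exp(x1)

# Multiplicative speckle with unit mean and variance 1/m (gamma model).
W = rng.gamma(shape=m, scale=1.0 / m, size=K)

# Measurement: Y(k) = S(k) W(k) + V(k); the offset U is omitted here.
Y = S * W + rng.normal(0.0, np.sqrt(R), K)
```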

Tables (3)

Table I. Nonlinear Filter Output Using a Simulated Data Sequence

Table II. Nonlinear Filter Output Using a Real Data Sequence

Table III. Nonlinear Filter Output Obtained Using a Set of Real Data Sequences

Equations (33)

Y(k) = S(k) + V(k) + U,
y(k) = x_1(k) + υ(k),
y(k) = exp[x_1(k)] + υ(k),
Y(k) = S(k) W(k) + V(k) + U,
y(k) = x_1(k) W(k) + υ(k).
y(k) = x_1(k)[1 + w(k)] + υ(k),   x_1 = S,
y(k) = exp[x_1(k)][1 + w(k)] + υ(k),   x_1 = ln[S],
x_1(k) = x_1(k − 1) + w_1(k),
x_2(k) = 1 + w_2(k),
y(k) = x_1(k) x_2(k) + υ(k),   x_1 = S,
y(k) = exp[x_1(k)] x_2(k) + υ(k),   x_1 = ln[S].
x(k) = f[x(k − 1)] + w(k),
y(k) = h[x(k)] + υ(k).
x_p(k) = f[x_e(k − 1)],
P_p(k) = F(k) P(k − 1) F^T(k) + Q(k),
V(k) = H(k) P_p(k) H^T(k) + R(k),
K(k) = P_p(k) H^T(k) V(k)^{−1},
x_e(k) = x_p(k) + K(k) e(k),
P(k) = [I − K(k) H(k)] P_p(k),
h[x(k)] = { x_1(k) x_2(k),  x_1 = S;   exp[x_1(k)] x_2(k),  x_1 = ln[S] }.
h(k) = x_{e1}(k − 1),   H_1(k) = 1,   H_2(k) = x_{e1}(k − 1),   x_1 = S,
h(k) = H_1(k) = H_2(k) = exp[x_{e1}(k − 1)],   x_1 = ln[S],
y(k) = Y(k) − N,
e(k) = y(k) − h(k),
V(k) = H_1(k)^2 [P(k − 1) + Q_1] + H_2(k)^2/m + R,
K_1(k) = [P(k − 1) + Q_1] H_1(k)/V(k),
x_{e1}(k) = x_{e1}(k − 1) + K_1(k) e(k),
P(k) = [1 − K_1(k) H_1(k)][P(k − 1) + Q_1],
K_2(k) = H_2(k)/[m V(k)],
x_{e2}(k) = 1 + K_2(k) e(k).
Q_1[x_1]/x_1^2 ∼ Q_1[ln(x_1)],
e(k)^2 > 9 V(k),   e(k) > 0,
e(k)^2 > 4 V(k),   e(k) < 0.
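Collected together, the scalar recursion above (prediction of the log power by a random walk, the linearized measurement sensitivities, the innovation variance, the two gains, and the asymmetric innovation test) can be written as a short filter loop. The sketch below is for the log-power form x_1 = ln[S] with Q_1, m, and R assumed known; it is an illustration of the listed equations rather than the authors' implementation, and samples failing the innovation test are simply excluded from the measurement update (one possible choice, not specified in this excerpt).

```python
import numpy as np

def ekf_log_power(y, Q1, m, R, x1_0, P0):
    """Extended Kalman filter for x1 = ln[S] with multiplicative speckle x2
    (unit mean, variance 1/m) and additive measurement noise of variance R."""
    x1, P = x1_0, P0
    x1_est, x2_est, P_est = [], [], []
    for yk in y:
        # Predicted measurement and linearized sensitivities: h = H1 = H2 = exp(x1).
        h = H1 = H2 = np.exp(x1)
        e = yk - h                                  # innovation
        V = H1**2 * (P + Q1) + H2**2 / m + R        # innovation variance
        # Asymmetric innovation (outlier) test, as listed above.
        reject = (e > 0 and e**2 > 9.0 * V) or (e < 0 and e**2 > 4.0 * V)
        if not reject:
            K1 = (P + Q1) * H1 / V                  # gain for the log power
            x1 = x1 + K1 * e
            P = (1.0 - K1 * H1) * (P + Q1)
            x2 = 1.0 + (H2 / (m * V)) * e           # speckle estimate
        else:
            P = P + Q1                              # time update only (a choice made here)
            x2 = 1.0
        x1_est.append(x1); x2_est.append(x2); P_est.append(P)
    return np.array(x1_est), np.array(x2_est), np.array(P_est)
```

Applied to the synthetic record generated in the sketch after the figure captions (with y = Y there, since no offset was added), the filter returns per-sample estimates of the log power and speckle together with the approximate covariance P.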
