
Forecasting the atmospheric refractive index structure constant profile with an altitude-time correlations-inspired deep learning model

Abstract

An accurate forecast of the atmospheric refractive index structure constant ($C_n^2$) is vital to analyzing in advance the influence of atmospheric turbulence on laser transmission. In this paper, we propose a novel method to forecast the atmospheric refractive index structure constant $C_n^2$ profile, inspired by a characteristic of turbulence, namely its altitude-time correlations. A deep convolutional neural network (DCNN) is adopted so that its stacked convolutional layers can abstract the altitude-time correlations of $C_n^2$ and thereby accurately forecast the $C_n^2$ profile in the near future from accumulated historical measurement data. A sliding window algorithm is introduced to segment the measured time series of $C_n^2$ profiles into input-output pairs for training and testing. Experimental results demonstrate high forecast accuracy: the root mean square error and the correlation coefficient are 0.515 and 0.956 in the one-step-ahead $C_n^2$ profile forecast case, and 0.753 and 0.9046 in the 36-step-ahead case, respectively. Moreover, the forecast accuracy versus altitude and its relationship with the distribution of $C_n^2$ against altitude are analyzed. Most importantly, through a series of experiments with various input feature sizes, the appropriate sliding window width for $C_n^2$ forecasting is explored, and the short-term correlation of $C_n^2$ is verified.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Atmospheric turbulence causes laser intensity fluctuation [1], beam spread [2], and arrival angle fluctuation [1] during transmission, which increases the bit error rate of the communication system and can even interrupt communication under strong turbulence. The atmospheric refractive index structure constant $C_n^2$ is an important parameter to characterize the atmospheric turbulence intensity [3]. Therefore, forecasting the atmospheric refractive index structure constant in the future (denoted as ${ {C_n^2} |^{_{\textrm{future}}}}$) can help to predict the channel [4], design the receiver system [5,6], and compensate for the error in laser communication [7].

Researchers have developed empirical models of the atmospheric refractive index structure constant $C_n^2$ based on experimental data (such as the SLC model [8], AFGL AMOS model [9], and CLEAR I model [10]), parametric models based on meteorological parameters (such as the Hufnagel model [11], VanZandt model [12], and Tatarski model [13]), and numerical simulation models (such as the MESO-NH simulation model [14], MM5 model [15], and WRF model [16]). The above models can characterize the average trend of $C_n^2$. However, they struggle to capture and reconstruct the real-time dynamic fluctuation patterns of turbulence in real scenarios. Therefore, measurement is still needed to obtain local atmospheric refractive index structure constant data in real time [17,18].

In recent years, owing to its superior capabilities of adaptive feature extraction and nonlinear representation, machine learning (ML), a data-driven statistical learning approach, has made remarkable achievements in many fields, such as image recognition [19] and protein structure prediction [20]. ML has also been introduced to predict the atmospheric refractive index structure constant $C_n^2$ and has achieved striking performance. In 2016, Wang et al. proposed a multilayer perceptron (MLP) artificial neural network (ANN) architecture for estimating the optical turbulence $C_n^2$ in the atmospheric surface layer with measured meteorological data at Mauna Loa, Hawaii [21]. In 2020, Su et al. combined the backpropagation neural network and the adaptive niche-genetic algorithm (AGA-BP) to forecast $C_n^2$ with measured meteorological data in the Antarctic [22]. The above two prediction models, i.e., MLP ANN and AGA-BP, depend on real-time meteorological data. Beyond the capability of adaptively characterizing the relationship between $C_n^2$ and meteorological parameters, ANNs can also be used to study the time-varying characteristics (the reflection of time correlations) of $C_n^2$. Ma et al. proposed using two time series forecast models, i.e., the recurrent neural network (RNN) and long short-term memory (LSTM), to forecast $C_n^2$ at the next time step from previous observations, without the need for measured meteorological data [23]. Notably, in addition to the temporal correlations, the spatial correlations are also essential features of $C_n^2$ and should not be ignored.

Space-time correlations quantify how turbulent fluctuations at one location and one instant covary with those at another location and another instant, and thus describe the dynamic behaviors of turbulent fluctuations across spatial and temporal scales. In [24], the authors reviewed space-time correlations in turbulent flows, involving Taylor’s frozen-flow model and the elliptic approximation (EA) model, and discussed the application of space-time correlations to the development of time-accurate subgrid-scale (SGS) models for large-eddy simulation of turbulence-generated noise and particle-laden turbulence. It has also been pointed out that space-time correlations are fundamental for the analyses of experimental and direct numerical simulation turbulence data [25]. The atmospheric refractive index structure constant is an important parameter to characterize turbulence, so it also has space-time correlations. Studying its spatiotemporal dynamics through these space-time correlations is beneficial to making accurate $C_n^2$ forecasts.

Inspired by the space-time correlations [24] (precisely, the altitude-time correlations), we propose a sliding window algorithm-assisted deep convolutional neural network (SW-DCNN) for accurately forecasting the atmospheric refractive index structure constant $C_n^2$ profile. The main contributions of this work are three-fold:

  • 1) Based on atmospheric turbulence theory and the measured data of $C_n^2$ profiles, we demonstrate that the variation of $C_n^2$ has altitude-time correlations. Accordingly, we propose a deep learning-based method to adaptively abstract these correlations, since they are complex, nonlinear, and implicit. To the best of our knowledge, little work has been reported on combining the space-time correlations of turbulence with deep learning to forecast the $C_n^2$ profile.
  • 2) To effectively abstract these deep-rooted correlations and make an accurate forecast, we first convert the measured $C_n^2$ data into images ordered by altitude and time, representing the altitude-time correlations as image features. Then, we propose a 14-layer DCNN model with 6 skip connections for the $C_n^2$ forecast task, as DCNNs have distinct advantages in image feature extraction and nonlinear representation. Besides, a global average pooling layer is integrated into the proposed DCNN to make it applicable to input data of various time window widths in practice.
  • 3) The sliding window algorithm is introduced to segment the measured time-series data of the $C_n^2$ profiles into input-output pairs. On this basis, the appropriate sliding window width for network learning is explored, and the short-term correlation of turbulence is clearly verified. That is, our method provides a new path to explore the characteristics of turbulence.

The remainder of this paper is organized as follows. In Section 2, the measured data are described in detail, the existence of altitude-time correlations is theoretically analyzed and confirmed by measured data, and the SW-DCNN model for forecasting the $C_n^2$ profile is presented. In Section 3, experiments are performed to evaluate the one- and multi-step-ahead forecast performance of the proposed method, to analyze the error distribution, and to explore the appropriate sliding window width. Section 4 concludes this paper.

2. Data acquisition and forecasting method

In this section, we introduce our method in detail from the following three aspects: 1) the devices and systems used for $C_n^2$ profile measurement; 2) the theoretical analysis and data validation of the altitude-time correlations of $C_n^2$, as well as the data preprocessing; 3) the architecture of the proposed DCNN model. Figure 1 illustrates the idea of the proposed altitude-time correlations-inspired deep learning method to forecast the $C_n^2$ profile. The space-time correlations [24] support the altitude-time correlations of $C_n^2$ profiles, which means that the variations of $C_n^2$ profiles with altitude and time are not independent of each other. This provides a theoretical foundation for using historical $C_n^2$ profile measurements to forecast future $C_n^2$ profiles. Therefore, a deep convolutional neural network (DCNN) is used to fully mine the altitude-time correlations of the $C_n^2$ profile data and build a surrogate model for forecasting future $C_n^2$ profiles. In this paper, the historical measured $C_n^2$ profile data are divided into a training set and a test set with a ratio of 4:1. After training on the training set, the DCNN model can be used for one- and multi-step-ahead forecasts of future $C_n^2$ profiles. The test set is used to assess the forecast accuracy and the error distribution of the trained DCNN model.

Fig. 1. Idea of the proposed altitude-time correlations-inspired deep learning method to forecast the $C_n^2$ profile.

2.1 Measured data by TWP3 boundary layer wind profile radar

We adopt a TWP3 boundary layer wind profile radar to measure the atmospheric refractive index structure constant $C_n^2$; it obtains $C_n^2$ precisely by processing the scattered echo of electromagnetic waves [26,27]. The radar is located in Mianzhu City, Sichuan Province ($31^\circ 22^{\prime}$N, $104^\circ 07^{\prime}$E, altitude 724 m) and is mainly composed of array antennas with a mask, a radio acoustic sounding system, a transmitting-receiving system, and a high-speed signal processing system, as shown in Fig. 2. Table 1 lists its main operation parameters.

Fig. 2. TWP3 boundary layer wind profile radar located in Sichuan.

Table 1. Main operation parameters of the TWP3 boundary layer wind profile radar

According to the operation parameters and the signal-to-noise ratio ${S_{NR}}$ of the TWP3 boundary layer wind profile radar, the refractive index structure constant $C_n^2$ $({\textrm{m}^{ - {2 / 3}}})$ can be obtained by [28]

$$C_n^2 = \frac{{{k_B}{T_0}B{N_F}}}{{5.4 \times {{10}^{ - 5}}{\lambda ^{{5 / 3}}}{P_t}({l / 2})G{L^2}}}{d^2} \cdot {S_{NR}},$$
where ${k_B} = 1.38 \times {10^{ - 23}}$ is the Boltzmann constant in ${\textrm{J} / \textrm{K}}$, ${T_0}$ represents the absolute temperature in $\textrm{K}$, B is the noise bandwidth in MHz, ${N_F}$ is the noise factor in dB, $\lambda $ denotes the radar wavelength in m, ${P_t}$ is the radar transmitted power in dBm, l is the transmitted pulse length in ${\mathrm{\mu} \mathrm{s}}$, G is the gain of the array antennas in dB, L represents the radar system loss in dB, d is the range to the target in m. Figure 3 illustrates the source data file, including the radar location, the sampling heights, and the measured $C_n^2$ values.
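For illustration, the following minimal Python sketch simply evaluates Eq. (1) once all quantities have been converted to consistent linear units (several of the quantities above are quoted in dB, dBm, MHz, or μs and must be converted first). The default parameter values are hypothetical placeholders, not the radar's actual settings from Table 1.

```python
def cn2_from_snr(snr, d, *, kB=1.38e-23, T0=290.0, B=1.0e6, NF=2.0,
                 lam=0.58, Pt=200.0, l=0.4e-6, G=1000.0, L=1.26):
    """Evaluate Eq. (1): Cn2 in m^(-2/3) from the radar signal-to-noise ratio.

    All arguments are assumed to be in consistent linear SI units; the
    defaults are illustrative placeholders only (Table 1 lists the real
    operation parameters, some of which are quoted in dB/dBm/MHz).
    snr : linear signal-to-noise ratio;  d : range to the target in m.
    """
    return (kB * T0 * B * NF
            / (5.4e-5 * lam ** (5 / 3) * Pt * (l / 2) * G * L ** 2)
            * d ** 2 * snr)
```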

Fig. 3. Illustration of the source data file.

2.2 Altitude-time correlation analysis and data preprocessing

To intuitively characterize the altitude (height)-time correlations of the atmospheric refractive index structure constant $C_n^2$, we rearrange the measured $C_n^2$ data into two-dimensional images ordered by altitude and time. Figure 4 shows three exemplary pseudo-color images of the atmospheric refractive index structure constant $C_n^2$. It can be seen that the distribution of $C_n^2$ shows several altitude-time evolution patterns, such as cluster-like, strip-like, and wave-like patterns (marked with boxes), which verify the altitude-time correlations.

Fig. 4. Exemplary illustrations (marked with boxes) of the local space-time patterns that the atmospheric refractive index structure constant $C_n^2$ exhibits.

The altitude-time correlations of $C_n^2$ are not only verified by the measured data but also supported by atmospheric turbulence theory. The atmospheric refractive index structure constant $C_n^2$ is usually used to characterize the intensity of atmospheric turbulence; therefore, the variation of $C_n^2$ is closely related to the motion of turbulence. According to Taylor’s frozen-flow hypothesis, the space-time correlation of turbulent motions can be expressed as [24]

$$R(r,\tau ) \equiv \left\langle {u({\boldsymbol x},t)u({\boldsymbol x} + {\boldsymbol r},t + \tau )} \right\rangle = R(r - U\tau ,0),$$
where $u({\boldsymbol x},t)$ is the streamwise velocity component at location ${\boldsymbol x} = ({x_1},{x_2},{x_3})$ and time t, $u({\boldsymbol x} + {\boldsymbol r},t + \tau )$ is the velocity at the downstream location ${\boldsymbol x} + {\boldsymbol r} = ({x_1} + r,{x_2},{x_3})$ and the later time $t + \tau$ ($r$ is the space separation and $\tau$ is the time delay), U is a constant that denotes the convection velocity, and $\left\langle \cdot \right\rangle$ denotes an ensemble average. For the altitude-dependent case, i.e., letting $r = \hat{h}$ ($\hat{h}$ is the space separation in altitude), we obtain
$$R(\hat{h},\tau ) = R(\hat{h} - U\tau ,0).$$

Based on the measured data and the above theoretical analysis, we infer that $C_n^2$ exhibits altitude-time correlations, that is,

$$\exists \textrm{ }{R_{C_n^2}}(\hat{h},\tau ) \equiv \left\langle {C_n^2(h,t)C_n^2(h + \hat{h},t + \tau )} \right\rangle .$$
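For illustration, the raw correlation in Eq. (4) can be estimated directly from a measured record. Below is a minimal Python sketch, assuming the data are stored as a 2-D numpy array indexed by altitude and time; the ensemble average is approximated by averaging over all valid positions, and normalization (e.g., mean removal) is omitted to mirror the definition above.

```python
import numpy as np

def altitude_time_correlation(cn2, dh, dt):
    """Empirical estimate of Eq. (4): <Cn2(h, t) * Cn2(h + dh, t + dt)>.

    cn2    : 2-D array indexed as cn2[altitude, time]
    dh, dt : separations in altitude bins and time steps
    """
    p, T = cn2.shape
    a = cn2[:p - dh, :T - dt]   # Cn2(h, t)
    b = cn2[dh:, dt:]           # Cn2(h + dh, t + dt)
    return np.mean(a * b)       # average over all valid (h, t)
```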

Exploring the altitude-time correlations of $C_n^2$ may help us accurately forecast the $C_n^2$ profile in the future. Given that these correlation relationships are usually complex, nonlinear, and implicit, we propose a deep learning-based model to automatically abstract and represent the altitude-time correlations and eventually make accurate $C_n^2$ profile forecasts.

Data is the foundation of deep learning. Hence, we conducted a 34-day (from December 3, 2020, to January 5, 2021, in UTC) measurement program in Mianzhu, China. The $C_n^2$ values at the altitudes of $[0.824,0.884, \cdots ,3.524]\textrm{ km}$ (sampling heights are $[0.1,0.16, \cdots ,2.8]\textrm{ km}$; the site altitude is 0.724 km) were measured every 5 minutes. Then, the measured $C_n^2$ data were segmented by the sliding window algorithm [29] to generate the input-output pairs for training and testing. The detailed procedure is as follows:

Step 1: Set the sliding window width q. As depicted in Fig. 5, the sliding window (the green box) covers q time steps of historical $C_n^2$ data, which are used as the input feature. The $C_n^2$ data measured at the following time step (the blue line) are used as the output. Together they form an input-output pair.

Fig. 5. Illustration of the sliding window algorithm for data preparation.

Step 2: Slide the window one step to the right to cover another q time steps of measured $C_n^2$ data, and combine them with the $C_n^2$ data at the following time step to form a new input-output pair.

Step 3: Repeat Step 2 until all the measured data are segmented.

In total, we generate 8228 input-output pairs (34 days, 242 pairs per day) for the validation experiments. These pairs are randomly divided into a training set and a test set with a ratio of 4:1.
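A minimal Python sketch of this segmentation and the 4:1 split is given below; `cn2_record` is a hypothetical (46, T) numpy array holding the measured record, with one column every 5 minutes.

```python
import numpy as np

def sliding_window_pairs(cn2, q=46):
    """Sliding window segmentation (Steps 1-3).

    cn2 : 2-D array indexed as cn2[altitude, time].
    q   : sliding window width in time steps.
    Returns inputs of shape (n, p, q) and outputs of shape (n, p).
    """
    p, T = cn2.shape
    X = np.stack([cn2[:, s:s + q] for s in range(T - q)])  # historical windows
    y = np.stack([cn2[:, s + q] for s in range(T - q)])    # next-step profiles
    return X, y

# Random 4:1 split into training and test sets.
X, y = sliding_window_pairs(cn2_record)
idx = np.random.permutation(len(X))
n_tr = int(0.8 * len(X))
X_tr, y_tr = X[idx[:n_tr]], y[idx[:n_tr]]
X_te, y_te = X[idx[n_tr:]], y[idx[n_tr:]]
```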

2.3 Proposed DCNN architecture for $C_n^2$ forecast

DCNN has the capabilities of adaptive feature extraction and nonlinear representation [30]. The stacked convolutional layers enable a DCNN to extract image features from shallow to deep levels, so it can process the images converted from $C_n^2$ profile data and mine the latent altitude-time correlations. DCNN also has translation invariance, which allows it to detect the same or similar features at different positions of an image. This characteristic is very important for image recognition, as well as for the extraction and characterization of the altitude-time correlations of $C_n^2$ profiles. In Fig. 4, we observe that the images converted from $C_n^2$ profile data have similar local features at different periods on the same day, at the same periods on different days, and at different periods on different days. In addition, DCNN does well in adaptively reconstructing the nonlinear mapping between input and output. Therefore, we can use a unified DCNN model to forecast the $C_n^2$ profiles in different seasons and regions: one only needs to train on the $C_n^2$ profile data measured in the corresponding scenario, without redesigning the forecast model based on prior knowledge. Hence, we use a DCNN to automatically capture the potential altitude-time correlations (see Fig. 4) of $C_n^2$ and make accurate forecasts. Specifically, we hope that during training the DCNN can adaptively establish the relationship between the altitude-dependent $C_n^2$ data measured at the previous q time steps and the altitude-dependent $C_n^2$ values at the next time step, that is,

$$\left[ {\begin{array}{c} {C_n^2|{_{{t_q}}^{{h_{p - 1}}}} }\\ \vdots \\ {C_n^2|{_{{t_q}}^{{h_0}}} } \end{array}} \right] = \tilde{f}\left( {\left[ {\begin{array}{ccc} {C_n^2|{_{{t_0}}^{{h_{p - 1}}}} }& \ldots &{C_n^2|{_{{t_{q - 1}}}^{{h_{p - 1}}}} }\\ \vdots & \ddots & \vdots \\ {C_n^2|{_{{t_0}}^{{h_0}}} }& \cdots &{C_n^2|{_{{t_{q - 1}}}^{{h_0}}} } \end{array}} \right]} \right).$$

In Eq. (5), $C_n^2|{_{{t_q}}^{{h_{p - 1}}}} $ denotes the $C_n^2$ value at the altitude ${h_{p - 1}}$ and the time step ${t_q}$, $\tilde{f}({\cdot} )$ is the nonlinear mapping to be learned. In this work, $p = q = 46$, $\Delta t = {t_q} - {t_{q - 1}} = \ldots = {t_1} - {t_0} = 5\textrm{ min}$, $\Delta h = {h_{p - 1}} - {h_{p - 2}} = \ldots = {h_1} - {h_0} = 0.06\textrm{ km}$.

Figure 6 illustrates the architecture of our DCNN model. First, a convolutional layer with 64 $3 \times 3$ kernels is constructed to extract the input feature map. Then, 6 residual blocks [19] comprising 12 convolutional layers are used to avoid the vanishing gradient problem caused by the deep architecture. Lastly, a linear mapping layer connects to the output layer. In addition, batch normalization (BN) is adopted to reduce the internal covariate shift and accelerate the training of the proposed DCNN [31]. The rectified linear unit (ReLU) [32] is chosen as the activation function for its advantages in accelerating learning and making the model sparse to reduce generalization error. To retain more useful features while reducing parameters, average pooling (AvgPool) is used [33]. Most importantly, to make the proposed DCNN applicable to input data of various time window widths (not limited to $q = 46$), we introduce a global average pooling (GlobalAvgPool) layer [34] before the linear mapping layer. For the input feature maps ${{\boldsymbol X}_{C \times H \times W}}$, the output features ${{\boldsymbol Z}_{C \times 1}}$ can be obtained by

$${Z_{k,1}} = GlobalAvgPool({\boldsymbol X}[k]) = \frac{{\sum\limits_{i = 1}^H {\sum\limits_{j = 1}^W {X(k,i,j)} } }}{{H \times W}},k = 1, \cdots ,C,$$
where C, H, and W are the number, height, and width of the input feature maps, respectively, ${\boldsymbol X}[k]$ denotes the $k\textrm{ - th}$ feature map, ${Z_{k,1}}$ is the $k\textrm{ - th}$ output feature, and i and j index the rows and columns of a feature map.
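The description above fixes the overall layout (one input convolution with 64 $3 \times 3$ kernels, 6 residual blocks containing 12 convolutional layers, a GlobalAvgPool layer, and a final linear mapping) but not every detail. The following PyTorch sketch is one plausible realization under those constraints; the channel widths, the omission of intermediate AvgPool layers, and other hyperparameters are our assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with BN/ReLU and a skip connection [19]."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # skip connection

class Cn2DCNN(nn.Module):
    """Sketch of the 14-layer DCNN: 1 input conv + 6 residual blocks
    (12 convs) + global average pooling (Eq. (6)) + 1 linear mapping."""
    def __init__(self, p=46, ch=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(6)])
        self.gap = nn.AdaptiveAvgPool2d(1)   # one value per feature map,
                                             # so any window width q works
        self.fc = nn.Linear(ch, p)           # linear mapping to p altitudes

    def forward(self, x):                    # x: (batch, 1, p, q)
        z = self.gap(self.blocks(self.head(x))).flatten(1)
        return self.fc(z)                    # (batch, p) profile forecast
```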

Fig. 6. Proposed DCNN model for forecasting the atmospheric refractive index structure constant $C_n^2$ profile.

For learning, the loss function is the mean square error with an L2 regularization term, defined as

$$\mathop {\min }\limits_{\boldsymbol w} L({\boldsymbol w}) = \frac{1}{{{N_{\textrm{tr}}} \times p}}\sum\limits_{\upsilon = 1}^{{N_{\textrm{tr}}}} {\sum\limits_{m = 0}^{p - 1} {{{(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} )}^2}} } + {R_e}{\boldsymbol w}{{\boldsymbol w}^\textrm{T}},$$
where ${\boldsymbol w}$ is the vector composed of all the weights to be optimized, ${N_{\textrm{tr}}}$ is the number of training data, ${R_e}$ is the regularization coefficient to avoid overfitting, $\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} $ and $C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} $ are the DCNN-forecasted and measured values of the $\upsilon \textrm{ - th}$ sample, respectively.

3. Model evaluation and result analysis

In this part, we mainly evaluate our method in the following three aspects: 1) One-step-ahead forecast accuracy; 2) Multi-step-ahead forecast accuracy; 3) Impact of sliding window width on forecast accuracy.

3.1 One-step-ahead forecast performance

To comprehensively evaluate the forecast performance of the proposed SW-DCNN model, experiments based on measured $C_n^2$ data are carried out. First, the proposed SW-DCNN model is trained on the training set with the gradient-based optimizer AdamW [35]. The learning rate, the regularization coefficient ${R_e}$, and the maximum number of training epochs are set to 0.05, 1.73 × 10−4, and 500, respectively. Then, the performance of the trained SW-DCNN model, including the forecast accuracy and the error distribution, is assessed on the test set.
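A minimal training-loop sketch under these settings is shown below; `train_loader` is a hypothetical DataLoader yielding batches of window/target pairs. Note that AdamW's decoupled weight decay plays the role of the ${R_e}{\boldsymbol w}{{\boldsymbol w}^\textrm{T}}$ term in Eq. (7), so only the mean square error appears explicitly.

```python
import torch

model = Cn2DCNN(p=46)                    # architecture sketched in Sec. 2.3
opt = torch.optim.AdamW(model.parameters(), lr=0.05, weight_decay=1.73e-4)
loss_fn = torch.nn.MSELoss()             # data term of Eq. (7)

for epoch in range(500):                 # maximum training epoch
    for xb, yb in train_loader:          # xb: (batch, 1, 46, q) windows,
        opt.zero_grad()                  # yb: (batch, 46) next-step profiles
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```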

3.1.1 Forecast accuracy

Three examples in the test set are shown in Fig. 7. It can be seen that, although the input images (Fig. 7(a)) show various patterns, the forecasted $C_n^2$ profiles of the trained SW-DCNN model are in good agreement with the measured ones, as shown in Fig. 7(b). In Fig. 7(c), we observe that the absolute percentage errors (APEs) at different altitudes fluctuate to some extent, but are generally less than 6.037%. The formula for APE is as follows:

$$\textrm{AP}{\textrm{E}_\upsilon }({h_m}) = \left|{\frac{{\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}{{C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}} \right|\times 100\%.$$

Fig. 7. One-step-ahead $C_n^2$ forecast examples with the trained SW-DCNN model. (a) Network input. (b) Comparison between the forecasted values (red dotted line) and the measured values (black solid line). (c) Absolute percentage error (APE) versus altitude.

We also quantitatively evaluate the reliability of the $C_n^2$ values forecasted by our method on the whole test set. Two commonly used metrics for performance evaluation, i.e., the root mean square error (RMSE) and the correlation coefficient R on the whole test set, are adopted and defined as follows:

$$\left\{ \begin{array}{l} \textrm{RMSE} = \sqrt {\frac{1}{{{N_{\textrm{te}}} \times p}}\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\sum\limits_{m = 0}^{p - 1} {{{(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} )}^2}} } } \\ R = \frac{{\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\sum\limits_{m = 0}^{p - 1} {(C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - \bar{C}_n^2)(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - \bar{\tilde{C}}_n^2)} } }}{{\sqrt {\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\sum\limits_{m = 0}^{p - 1} {{{(C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - \bar{C}_n^2)}^2}} } } \sqrt {\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\sum\limits_{m = 0}^{p - 1} {{{(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - \bar{\tilde{C}}_n^2)}^2}} } } }} \end{array} \right.,$$
where $\bar{C}_n^2$ and $\bar{\tilde{C}}_n^2$ represent the averages of the measured and forecasted values, respectively, and ${N_{\textrm{te}}}$ is the number of test data. Figure 8 depicts the correlation between the forecasted and measured values. We can see that the data points are concentrated near the blue benchmark line (where the forecasted $C_n^2$ value equals the measured one). The small RMSE (0.515) and high correlation coefficient (0.956) demonstrate that our method can accurately forecast the $C_n^2$ profile.
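A compact numpy sketch of Eq. (9), assuming `pred` and `meas` are arrays of ${\log _{10}}(C_n^2)$ values of shape $({N_{\textrm{te}}},p)$:

```python
import numpy as np

def rmse_and_r(pred, meas):
    """RMSE and Pearson correlation coefficient R of Eq. (9),
    computed over all test samples and altitudes at once."""
    rmse = np.sqrt(np.mean((pred - meas) ** 2))
    r = np.corrcoef(pred.ravel(), meas.ravel())[0, 1]
    return rmse, r
```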

Fig. 8. One-step-ahead $C_n^2$ forecast case: the correlation of ${\log _{10}}(C_n^2)$ between the forecasted and measured values.

In addition, the forecast accuracy of our proposed model is compared with that of the AGA-BP model [22], the WRF model [16], and the MLP ANN model [21]. It should be pointed out that the data in [22] and [16] were measured 0.002 km above the ground at the Chinese Antarctic Taishan Station, whose altitude above mean sea level (amsl) is 2.621 km; the altitude of the measurement position is thus 2.623 km. The data in [21] were measured 0.002 km above the ground at the Mauna Loa Observatory (3.397 km amsl); the altitude of the measurement position is thus 3.399 km. To compare the prediction performance of these methods at the corresponding altitudes, we define the correlation coefficient at altitude ${h_m}$ as follows:

$$R({h_m}) = \frac{{\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {(C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - {{ {\bar{C}_n^2} |}^{{h_m}}})(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - {{ {\bar{\tilde{C}}_n^2} |}^{{h_m}}})} }}{{\sqrt {\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {{{(C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - {{ {\bar{C}_n^2} |}^{{h_m}}})}^2}} } \sqrt {\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {{{(\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - {{ {\bar{\tilde{C}}_n^2} |}^{{h_m}}})}^2}} } }},$$
where ${ {\bar{C}_n^2} |^{{h_m}}}$ and ${ {\bar{\tilde{C}}_n^2} |^{{h_m}}}$ represent the averages of the measured and forecasted values at altitude ${h_m}$, respectively.

Table 2 lists the correlation coefficients of the different methods and the corresponding altitudes. It can be seen that at an altitude of about 2.620 km, the correlation coefficient of our method is 0.958, while those of AGA-BP and WRF are 0.932 and 0.700, respectively. At an altitude of about 3.400 km, the correlation coefficient of our method is 0.918, while that of MLP ANN is 0.787. The correlation coefficient R of our method on the whole test set (covering 46 altitudes) is 0.956, which is still higher than their results. These results fully demonstrate that the proposed approach has significant advantages in forecasting the $C_n^2$ profile.

Table 2. Quantitative evaluation between different models

3.1.2 Distribution of forecast accuracy (MAPE) over altitude

To evaluate the forecast performance of our method at different altitudes, we adopt the mean absolute percentage error (MAPE) at every altitude as follows:

$$\textrm{MAPE(}{h_m}\textrm{)} = \frac{1}{{{N_{\textrm{te}}}}}\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\left|{\frac{{\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}{{C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}} \right|\times 100\%} .$$
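Assuming the same $({N_{\textrm{te}}},p)$ arrays as before, the altitude-wise MAPE of Eq. (11) reduces to a single numpy expression:

```python
import numpy as np

def mape_per_altitude(pred, meas):
    """Eq. (11): MAPE at each altitude, averaged over the test samples."""
    return 100.0 * np.mean(np.abs((pred - meas) / meas), axis=0)
```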

Figure 9 shows the altitude-dependent MAPEs obtained by the trained SW-DCNN model on the whole test set. The MAPEs at different altitudes are in the range of 1.94% to 3.55%. Notably, the MAPE gradually decreases with increasing altitude. This trend is inextricably linked with the distribution of the $C_n^2$ values at different altitudes in the training set. The violin plot, which combines the box plot and the density trace, is commonly used to show the distribution and probability density of multiple data sets [36,37]. We therefore adopt a violin plot (see Fig. 10) to show the distribution of $C_n^2$ in the training set at different altitudes. We observe that the fluctuation of the median value (white point at center) of $C_n^2$ with altitude is generally consistent with that of the MAPE curve in Fig. 9. Moreover, at lower altitudes, the variances of $C_n^2$ are relatively large, that is, the $C_n^2$ value varies sharply, which brings greater uncertainty to the forecast and results in greater MAPEs. In contrast, at higher altitudes, the $C_n^2$ value stays relatively stable, so the uncertainty of the $C_n^2$ profile forecast with the trained SW-DCNN model is relatively small, which results in smaller MAPEs. Notably, the maximum MAPE is only 3.55%, which fully illustrates the high forecast accuracy of the proposed SW-DCNN model.

Fig. 9. Mean absolute percentage error (MAPE) of ${\log _{10}}C_n^2$ versus altitude when testing the trained SW-DCNN model on the whole test set.

Fig. 10. Distributions of ${\log _{10}}C_n^2$ in the training set (output) at different altitudes.

3.2 Multi-step-ahead forecast performance

Considering that the one-step-ahead $C_n^2$ forecast (the time interval is only 5 minutes) may not meet the needs of some actual scenarios, we explore the performance of our method in multi-step-ahead $C_n^2$ forecasting in this subsection. Specifically, we analyze the performance of the trained DCNN model in forecasting the $C_n^2$ profiles at the next 36 time steps (3 hours) based on the $C_n^2$ profiles measured at the previous 46 time steps. The 36-step-ahead $C_n^2$ forecast scheme with the trained DCNN model is shown in Fig. 11.
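A minimal sketch of this recursive scheme, assuming the trained model maps a (1, 1, p, q) window of ${\log _{10}}(C_n^2)$ values to the next (1, p) profile (as in the architecture sketch of Sec. 2.3):

```python
import torch

@torch.no_grad()
def forecast_ahead(model, window, steps=36):
    """Recursive multi-step forecast (Fig. 11): predict the next profile,
    slide it into the input window, and repeat.

    window : tensor of shape (1, 1, p, q) with the last q measured profiles.
    Returns a (p, steps) tensor of forecasted profiles.
    """
    outs = []
    for _ in range(steps):
        nxt = model(window)                    # (1, p) next-step profile
        outs.append(nxt.squeeze(0))
        col = nxt.reshape(1, 1, -1, 1)         # new rightmost time column
        window = torch.cat([window[..., 1:], col], dim=-1)
    return torch.stack(outs, dim=-1)
```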

Fig. 11. 36-step-ahead $C_n^2$ profile forecast scheme with the trained DCNN model.

Figure 12 shows three examples of using the trained DCNN model to make 36-step-ahead $C_n^2$ forecasts according to the scheme in Fig. 11. It can be seen that the forecasted $C_n^2$ profiles show image features similar to those of the measured ones, which indicates that the trained DCNN model can be used for continuous forecasting over a period of time. To quantify the similarity between them, we introduce the cosine similarity defined by [38]

$$\textrm{similarity} = \frac{{\sum\limits_{q = 46}^{81} {\sum\limits_{m = 0}^{45} {(\tilde{C}_n^2|{_{{t_q}}^{{h_m}}} \times C_n^2|{_{{t_q}}^{{h_m}}} )} } }}{{\sqrt {\sum\limits_{q = 46}^{81} {\sum\limits_{m = 0}^{45} {{{(\tilde{C}_n^2|{_{{t_q}}^{{h_m}}} )}^2}} } } \sqrt {\sum\limits_{q = 46}^{81} {\sum\limits_{m = 0}^{45} {{{(C_n^2|{_{{t_q}}^{{h_m}}} )}^2}} } } }}.$$
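Eq. (12) amounts to the cosine of the angle between the two flattened $46 \times 36$ blocks; a one-function numpy sketch:

```python
import numpy as np

def cosine_similarity(pred, meas):
    """Eq. (12): cosine similarity between the forecasted and measured
    blocks of Cn2 profiles (e.g., arrays of shape (46, 36))."""
    return np.sum(pred * meas) / (np.linalg.norm(pred) * np.linalg.norm(meas))
```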

Fig. 12. 36-step-ahead $C_n^2$ forecast examples with the trained SW-DCNN model. (a) Network input. (b) Ground truth of the $C_n^2$ profiles measured at the next 36 time steps. (c) Forecasted $C_n^2$ profiles at the next 36 time steps.

The cosine similarities of the three examples are 0.9994, 0.9996, and 0.9997, respectively. This means that the forecasted $C_n^2$ profiles at the next 36 time steps are in good agreement with the measured results.

Figure 13 illustrates the 36-step-ahead $C_n^2$ forecast performance of our method on the whole test set. Data points are concentrated near the blue benchmark line. The RMSE and the correlation coefficient R of our method are 0.753 and 0.9046, respectively. Compared with the one-step-ahead forecast case, the RMSE increases slightly by 0.238 and the correlation coefficient R decreases slightly by 5.17%. This indicates that the trained DCNN model has the potential to be used in multi-step-ahead $C_n^2$ profile forecasting. Figure 14 plots the correlation coefficient R versus the forecast time step. As the forecast time step increases, the correlation coefficient R decreases increasingly fast (especially beyond 24 steps). At 72 time steps (6 hours), the correlation coefficient R has dropped below 0.8. Long-term forecasting of the $C_n^2$ profile is challenging, because cumulative errors are generated and increasingly amplified when the trained DCNN model is repeatedly called for one-step-ahead forecasts. Therefore, further research is desired.

Fig. 13. 36-step-ahead $C_n^2$ forecast case: the correlation of ${\log _{10}}C_n^2$ between the forecasted and measured values.

Fig. 14. Relationship between the correlation coefficient R of multi-step-ahead $C_n^2$ profile forecast and the forecast time step.

3.3 Impact of sliding window width on forecast accuracy

For the proposed SW-DCNN model, the sliding window width q determines the timescale of the input feature. With training data of different timescales, the altitude-time correlations that the SW-DCNN model can learn will differ, finally resulting in different forecast accuracies. Here, to quantitatively analyze the impact of the sliding window width q on the forecast accuracy, we define the overall mean absolute percentage error as:

$$\textrm{MAP}{\textrm{E}_{\textrm{overall}}} = \frac{1}{{{N_{\textrm{te}}} \times p}}\sum\limits_{\upsilon = 1}^{{N_{\textrm{te}}}} {\sum\limits_{m = 0}^{p - 1} {\left|{\frac{{\tilde{C}_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} - C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}{{C_n^2|{_{{t_{q,\upsilon }}}^{{h_m}}} }}} \right|\times 100\%} } .$$

The same procedure, including data segmentation with the sliding window algorithm and SW-DCNN training and testing, is applied for $q = 6,14,22,30,38,54,62$, and $70$ (the case of $q = 46$ was analyzed in the previous experiment). The results are shown in Fig. 15. As the sliding window gradually widens, the forecast error generally shows a slow downward trend, reaching the minimum $\textrm{MAP}{\textrm{E}_{\textrm{overall}}}$ of 2.57% when $p = q = 46$. Therefore, to balance forecast accuracy and efficiency, we suggest setting the sliding window width q equal to the number of altitude values p, as the temporal and spatial scales are equally important for SW-DCNN. Significantly, the $\textrm{MAP}{\textrm{E}_{\textrm{overall}}}$ decreases only from 2.71% to 2.57%. Such a small difference (only 0.14%) suggests that the time correlations of $C_n^2$ are more likely short-term, as the typical coherence times of turbulence are around ten or tens of milliseconds [39]. It means that higher forecast accuracy of $C_n^2$ may be obtained on more densely sampled data.

Fig. 15. Relationship between the forecast error and the sliding window width.

Due to the limitation of experimental equipment, we cannot obtain more densely sampled $C_n^2$ profile observation data. Here, we introduce interpolation to enrich our limited data. Specifically, we use cubic spline interpolation [40] to obtain $C_n^2$ profile data every minute from the data measured every five minutes, as shown in Fig. 16. On these interpolated $C_n^2$ data, we conduct verification experiments similar to those above, with the results shown in Fig. 17. Obviously, the $C_n^2$ profile forecast accuracy of the DCNN model is significantly improved. When $q = 46$, the MAPE is only 1.20% (less than half of 2.57%). In addition, as the sliding window widens, the MAPE curve still shows a downward trend, but the trend is significantly steeper than in Fig. 15. This shows that when the $C_n^2$ data density increases, the space-time correlations between the data are enhanced, which is more conducive to accurate forecasting with the DCNN model. Therefore, it inspires us to collect $C_n^2$ profile data as densely as possible in future work. Although the MAPE is not minimal at $q = 46$, the downward trend of the curve indicates that $q = 46$ is still a good choice for our experiments.
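A minimal sketch of this densification with SciPy's cubic spline, applied independently at each altitude along the time axis; `cn2` is a hypothetical (p, T) array sampled every 5 minutes.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t5 = np.arange(cn2.shape[1]) * 5.0            # original sample times (min)
t1 = np.arange(0.0, t5[-1] + 1.0, 1.0)        # 1-minute target grid
cn2_dense = CubicSpline(t5, cn2, axis=1)(t1)  # shape (p, 5*(T-1) + 1)
```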

Fig. 16. Comparison between the original $C_n^2$ profile data and the interpolated ones.

Fig. 17. Relationship between the forecast error and the sliding window width after the data are enriched with cubic spline interpolation.

4. Conclusion

In this paper, a sliding window algorithm and deep convolutional neural network-based method (SW-DCNN) for forecasting the $C_n^2$ profile is proposed. Experiments based on measured data demonstrate that the proposed SW-DCNN model can effectively capture the altitude-time correlations of $C_n^2$ and make accurate forecasts (the forecasted and measured values are close to each other). In the one-step-ahead $C_n^2$ profile forecast case, the RMSE and the correlation coefficient R of our method are 0.515 and 0.956, respectively, which are better than those obtained by other related works. At different altitudes, the MAPE between the forecasted and measured values ranges from 1.94% to 3.55%: the higher the altitude (in the atmospheric boundary layer), the more stable $C_n^2$ is, and the higher the forecast accuracy (smaller MAPE). In the 36-step-ahead case, the correlation coefficient of our method is 0.9046 when forecasting the $C_n^2$ profiles over 3 hours, which means that our method is applicable to forecasting the $C_n^2$ profiles over several hours. In addition, as the sliding window width increases from 6 to 70, the overall MAPE decreases from 2.69% to 2.57%. Such a small difference demonstrates that the time correlations of $C_n^2$ are more likely short-term. When the sampling interval of the data becomes denser, the space-time correlations of the $C_n^2$ data are enhanced, and the forecast accuracy is further improved. As the temporal and spatial scales of features are of equal importance, the SW-DCNN model often achieves a smaller overall MAPE when the sliding window width q equals the feature length p in altitude.

Beyond the space-time correlations, $C_n^2$ has other exploitable features, such as its correlations with meteorological parameters. Previous studies have shown that measured meteorological data can be used to estimate $C_n^2$. Therefore, combining space-time correlations with meteorological parameter-based prediction models is expected to further improve the forecast accuracy of $C_n^2$ and satisfy more application scenarios. Meanwhile, $C_n^2$ shows different variation behaviors in different seasons and regions, so it is important to conduct long-term observations to obtain sufficient data for the study of $C_n^2$ forecasting. The short-term correlation of $C_n^2$ also tells us that we should measure the $C_n^2$ profile densely in time to obtain data with strong space-time correlations in our future work, providing high-quality data for accurate $C_n^2$ forecasts. Accurate long-term $C_n^2$ forecasting is meaningful and challenging; how to improve the forecast accuracy of $C_n^2$ remains open for further research.

Funding

National Natural Science Foundation of China (61771375; 92052106).

Acknowledgments

We sincerely acknowledge the researchers of the Chengdu Meteorological Office for their help in the measurements.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

The data presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Consortini, Y. Y. Sun, C. Innocenti, and Z. Li, “Measuring inner scale of atmospheric turbulence by angle of arrival and scintillation,” Opt. Commun. 216(1-3), 19–23 (2003).

2. C. Luo and X. Han, “Evolution and beam spreading of arbitrary order vortex beam propagating in atmospheric turbulence,” Opt. Commun. 460, 124888 (2020).

3. Y. Gu and G. Gbur, “Measurement of atmospheric turbulence strength by vortex beam,” Opt. Commun. 283(7), 1209–1212 (2010).

4. K.-Y. Chien, “Predictions of channel and boundary-layer flows with a low-Reynolds-number turbulence model,” AIAA J. 20(1), 33–38 (1982).

5. D. K. Borah, A. C. Boucouvalas, C. C. Davis, S. Hranilovic, and K. Yiannopoulos, “A review of communication-oriented optical wireless systems,” J. Wirel. Comm. Network 2012(1), 91 (2012).

6. D. Jiang, Y. Yang, J. Huang, Z. Yao, B. Zhu, and K. Qin, “A variable aperture method to simultaneously estimate atmospheric extinction coefficient and refractive index structure constant,” Opt. Commun. 320, 138–144 (2014).

7. S. Li, S. Chen, C. Gao, A. E. Willner, and J. Wang, “Atmospheric turbulence compensation in orbital angular momentum communications: Advances and perspectives,” Opt. Commun. 408, 68–81 (2018).

8. R. R. Beland, “A decade of balloon microthermal probe measurements of optical turbulence,” in Adaptive Optics for Large Telescopes, 1992 Technical Digest Series (Optica Publishing Group, 1992), paper AMB1.

9. R. Good, R. Beland, E. Murphy, J. Brown, and E. Dewan, “Atmospheric models of optical turbulence,” Proc. SPIE 0928, 165–186 (1988).

10. R. R. Beland and J. H. Brown, “A deterministic temperature model for stratospheric optical turbulence,” Phys. Scr. 37(3), 419–423 (1988).

11. R. Hufnagel and N. Stanley, “Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54(1), 52–61 (1964).

12. T. VanZandt, J. Green, K. Gage, and W. Clark, “Vertical profiles of refractivity turbulence structure constant: Comparison of observations by the Sunset radar with a new theoretical model,” Radio Sci. 13(5), 819–829 (1978).

13. V. I. Tatarski, Wave Propagation in a Turbulent Medium (Courier Dover Publications, 2016).

14. E. Masciadri and P. Jabouille, “Improvements in the optical turbulence parameterization for 3D simulations in a region around a telescope,” Astron. Astrophys. 376(2), 727–734 (2001).

15. O. Cuevas, “Estimation of the optical turbulence and seeing from MM5 data in paranal/armazones site,” Rev. Mex. Astron. Astr. 41, 16–19 (2011).

16. C. Qing, X. Wu, H. Huang, Q. Tian, W. Zhu, R. Rao, and X. Li, “Estimating the surface layer refractive index structure constant over snow and sea ice using Monin-Obukhov similarity theory with a mesoscale atmospheric model,” Opt. Express 24(18), 20424–20436 (2016).

17. A. Tunick, “Optical turbulence parameters characterized via optical measurements over a 2.33 km free-space laser path,” Opt. Express 16(19), 14645–14654 (2008).

18. A. Tunick, “Experiment to characterize optical turbulence along a 2.33 km free-space laser path via differential image motion measurements,” Proc. SPIE 7463, 746303–23 (2009).

19. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

20. A. W. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. Qin, A. Žídek, A. W. R. Nelson, A. Bridgland, H. Penedones, S. Petersen, K. Simonyan, S. Crossan, P. Kohli, D. T. Jones, D. Silver, K. Kavukcuoglu, and D. Hassabis, “Improved protein structure prediction using potentials from deep learning,” Nature 577(7792), 706–710 (2020).

21. Y. Wang and S. Basu, “Using an artificial neural network approach to estimate surface-layer optical turbulence at Mauna Loa, Hawaii,” Opt. Lett. 41(10), 2334–2337 (2016).

22. C. Su, X. Wu, T. Luo, S. Wu, and C. Qing, “Adaptive niche-genetic algorithm based on backpropagation neural network for atmospheric turbulence forecasting,” Appl. Opt. 59(12), 3699–3705 (2020).

23. S. Ma, S. Hao, Q. Zhao, C. Xu, and J. Xiao, “Prediction of atmospheric turbulence refractive index structure constant based on deep learning,” Proc. SPIE 11717, 67–74 (2020).

24. G. He, G. Jin, and Y. Yang, “Space-time correlations and dynamic coupling in turbulent flows,” Annu. Rev. Fluid Mech. 49(1), 51–70 (2017).

25. J. M. Wallace, “Space-time correlations in turbulent flow: A review,” Theor. Appl. Mech. Lett. 4(2), 022003 (2014).

26. D. Chen, G. Sun, L. Zhu, H. Zhang, K. Zhang, N. Weng, and X. Li, “Estimation of boundary-layer turbulence parameters in Hefei based on wind profile radar,” Proc. SPIE 12169, 545–2512 (2022).

27. Y. Han, P. Gao, J. Huang, T. Zhang, J. Zhuang, M. Hu, and Y. Wu, “Ground-based synchronous optical instrument for measuring atmospheric visibility and turbulence intensity: Theories, design and experiments,” Opt. Express 26(6), 6833–6850 (2018).

28. H. Wang, S. Su, H. Tang, L. Jiao, and Y. Li, “Atmospheric duct detection using wind profiler radar and RASS,” J. Atmos. Ocean. Tech. 36(4), 557–565 (2019).

29. L. Mozaffari, A. Mozaffari, and N. L. Azad, “Vehicle speed prediction via a sliding-window time series analysis and an evolutionary least learning machine: A case study on San Francisco urban roads,” Eng. Sci. Technol. 18(2), 150–162 (2015).

30. P. Li, Z. Chen, L. T. Yang, Q. Zhang, and M. J. Deen, “Deep convolutional computation model for feature learning on big data in Internet of Things,” IEEE Trans. Ind. Inf. 14(2), 790–798 (2018).

31. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on Machine Learning (Proceedings of Machine Learning Research, 2015), pp. 448–456.

32. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (JMLR Workshop and Conference Proceedings, 2011), pp. 315–323.

33. Y.-L. Boureau, F. Bach, Y. LeCun, and J. Ponce, “Learning mid-level features for recognition,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2559–2566.

34. H. Lee, J. Park, and J. Y. Hwang, “Channel attention module with multiscale grid average pooling for breast cancer segmentation in an ultrasound image,” IEEE T. Ultrason. Ferr. 67(7), 1344–1353 (2020).

35. I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in International Conference on Learning Representations (ICLR, 2018).

36. H. Kwon and S. J. Hong, “Use of optical emission spectroscopy data for fault detection of mass flow controller in plasma etch equipment,” Electronics 11(2), 253 (2022).

37. J. L. Hintze and R. D. Nelson, “Violin plots: A box plot-density trace synergism,” Am. Stat. 52(2), 181–184 (1998).

38. P. Xia, L. Zhang, and F. Li, “Learning similarity with cosine similarity ensemble,” Inf. Sci. (N. Y.) 307, 39–52 (2015).

39. S. M. Walsh, S. F. E. Karpathakis, A. S. McCann, B. P. Dix-Matthews, A. M. Frost, D. R. Gozzard, C. T. Gravestock, and S. W. Schediwy, “Demonstration of 100 Gbps coherent free-space optical communications at LEO tracking rates,” Sci. Rep. 12(1), 18345–12 (2022).

40. E. Bouhoubeiny and P. Druault, “Note on the POD-based time interpolation from successive PIV images,” C. R. Mec. 337(11-12), 776–780 (2009).
