
Satellite-derived bathymetry based on machine learning models and an updated quasi-analytical algorithm approach


Abstract

Retrieving water depth from satellite imagery is a rapid and effective way to map underwater terrain. In optically shallow waters, the bottom signal strongly affects the water-leaving radiance and is therefore related to water depth, and the spatial distribution of the water quality parameters derived by the updated quasi-analytical algorithm (UQAA) is highly correlated with the bottom brightness. Because the bottom reflection signal is strongly correlated with the spatial distribution of water depth, these derived water quality parameters may be helpful inputs for optical satellite-derived bathymetry, so the influence of the UQAA-derived inherent optical properties (IOPs) on bathymetry retrieval is worth examining. In this article, machine learning algorithms were tested with and without the UQAA outputs added to the remote sensing reflectance at in situ water depth points, and their accuracy was evaluated using WorldView-2 multispectral imagery and airborne laser sounding data. A backpropagation (BP) neural network, an extreme learning machine (ELM), random forest (RF), AdaBoost, and support vector regression (SVR) models were used to retrieve the water depth around Ganquan Island in the South China Sea. The results show that bathymetry computed from the UQAA outputs together with remote sensing reflectance is better than that computed from remote sensing reflectance alone, with overall improvements of 1 cm to 5 cm in the root mean square error (RMSE) and 1% to 5% in the mean relative error (MRE). They indicate that the UQAA outputs can be used as additional water depth estimation features to increase estimation accuracy.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Bathymetry data is a key factor in marine environmental exploration and an important part of hydrographic surveying and chart making. It is of great significance to research on coastal areas and to marine engineering construction, and it provides important information for maritime transportation and shipping. Detailed knowledge of shallow water depths also contributes to the management and monitoring of coral reefs and the conservation of ecosystems [1,2]. Timely and accurate bathymetry data is essential for developing effective resource policy and management in coastal areas and for ensuring a safe and comfortable environment for humans [3].

There are many ways to obtain water depth. Single-beam and multibeam echo sounders provide a precise and valid depth estimation method [4]. However, because of their high cost, low speed, and weather dependence, large survey ships are unsuitable for shallow-water operations. Airborne bathymetric LiDAR is another useful method whose applicability has been proven in coastal regions [5]. The LiDAR method is fast and is not limited to navigable waters, but tests have shown that it does not perform well in turbid waters [6]. Satellite-derived bathymetry (SDB) is becoming an affordable method that can quickly and efficiently provide wide-area, high-resolution maps and is an efficient supplement to conventional bathymetric methods [7]. The principle of deriving water depth from multispectral satellite images is that light at different wavelengths penetrates the water column to different extents as a function of the characteristics of the seawater. Although SDB accuracy depends on the method, the prediction error is typically approximately 10% of the water depth. There are two simple but widely used models for SDB, although they have limitations [8]. The first model (the linear band model constructed by Lyzenga [9]) assumes that the bottom type is uniform and that the water column properties are similar over the entire coverage area. The second model (the log-transformed band ratio model presented by Stumpf et al. [10]) overcomes this disadvantage, but it has no physical basis, and its parameters are calculated through an empirical fitting process [11]. To build a general model for water depth estimation, machine learning is also considered a useful approach. The data recorded by multispectral sensors have multidimensional characteristics, and although it is not easy to construct a model that explains the relationship between the multidimensional eigenvalues and the water depth under different observation conditions, machine learning can automatically learn the numerical model and provide an optimal solution [12]. In recent years, different kinds of machine learning algorithms have been applied to water depth retrieval, including neural networks [13], random forests [11,14], support vector machines [15], and others [16]. Another approach combines multispectral imagery with deep learning, using more complex algorithms to obtain higher water depth inversion accuracy [17]; many scholars also fully exploit the information in the image and the ground truth data to achieve higher-precision water depth results [17].

Several studies have shown that IOPs can be introduced as additional candidate depth predictors to improve remote sensing based satellite-derived bathymetric performance. Chen et al. used the QAA and a Kd algorithm to estimate the sum of the diffuse attenuation coefficients of the green band for upwelling and downwelling light and thereby retrieve the water depth [18]. Li et al. used an algorithm that adaptively adjusts the depth estimator according to the water column attenuation conditions to estimate water depth [19]. Zhang et al. developed an inherent optical parameter linear model (IOPIM) for estimating shallow-water depth from high-spatial-resolution multispectral images [20], and the IOPIM results showed that IOPs can indeed increase the accuracy of water depth estimation. Huang et al. developed an updated quasi-analytical algorithm (UQAA) and applied it to analytical water depth retrieval to obtain high-precision retrieval results [21]. To date, there is no proof that IOPs are direct eigenvalues of water depth, but introducing IOPs as candidate depth predictors may improve bathymetric performance.

In this paper, we tested whether IOPs can be used as additional inputs for satellite-derived bathymetry and whether bathymetric performance can thereby be improved. An experiment was designed to test the benefit of the newly established model. Two kinds of bathymetric models were established: one uses both remote sensing reflectance and IOPs as input data, while the other uses only remote sensing reflectance. Five machine learning algorithms, namely, a backpropagation neural network (BPNN), an extreme learning machine (ELM), random forest (RF), AdaBoost, and support vector regression (SVR), were used to establish the relationship between the different input data and the same depths and, at the same time, to test the benefit of the newly established model.

2. Methods

In this paper, an updated quasi-analytical algorithm (UQAA) is proposed (see 2.1, Table 1), and five machine learning algorithms are used for satellite-derived bathymetry (see 2.2). Preprocessing of the satellite images included radiometric calibration, sun glint removal, and atmospheric correction (see 2.3). The UQAA outputs added to the bathymetry model were the chlorophyll-a concentration C and the CDOM absorption coefficient at 440 nm, ag(440) (see 2.1). The new approach applies the machine learning models to the Rrs of the blue, green, red, and near-infrared bands together with C and ag(440), whereas the traditional bathymetry model applies the same machine learning models to the Rrs of the four bands only; performance was evaluated by comparison with LiDAR bathymetry data and with the results obtained from the Rrs data alone. The machine learning models include artificial neural networks (a BP neural network and an extreme learning machine), decision tree ensembles (random forest and an AdaBoost regressor), and support vector regression (SVR) (see 2.2). We used the RMSE and MRE to evaluate the bathymetry against the LiDAR data and drew a map after tide correction (Fig. 1).

2.1 Updated quasi-analytical algorithm (UQAA)

The quasi-analytical algorithm (QAA) is a semi-analytical model that employs the bio-optical model proposed by Lee et al. [22] to calculate the total absorption and backscattering coefficients of a water body. Most prevalent multispectral satellite images provide only 3–4 visible wavebands; therefore, the number of unknown parameters in the quasi-analytical algorithm must be reduced as much as possible to prevent an ill-conditioned and incorrect semi-analytical model. Based on the research of Lee et al. [22] and Huang et al. [21], the phytoplankton pigment absorption coefficient at 440 nm and the particulate backscattering coefficient at 550 nm can be described in terms of the chlorophyll-a concentration C as:

$${a_\phi }(440) = 0.06 \ast {C^{0.65}},$$
$${b_{bp}}({550} )= 0.0111 \ast {C^{0.62}},$$
$${b_{bp}}(\lambda )= {b_{bp}}({550} ){({550/\lambda } )^Y},$$
where Y = 0.67875.

In clear water:

$${a_\textrm{g}}(\lambda )\textrm{ = }{a_\textrm{g}}({440} )\ast \exp ({ - 0.015 \ast ({\lambda - 440} )} ),$$
$${a_{phy}}(\lambda )= [{{a_0}(\lambda )+ {a_1}(\lambda )\ln ({{a_\phi }(440)} )} ]{a_\phi }(440),$$
$$a(\lambda )= {a_w}(\lambda )+ {a_{phy}}(\lambda )+ {a_g}(\lambda ),$$
$$b(\lambda )= {b_w}(\lambda )+ {b_{bp}}(\lambda ),$$
$${u_m}(\lambda )= \frac{{b(\lambda )}}{{a(\lambda )+ b(\lambda )}},$$

While in the quasi-analytical algorithm (QAA):

$${r_{rs}}(\lambda )= \frac{{{R_{rs}}(\lambda )}}{{0.52 + 1.7{R_{rs}}(\lambda )}},$$
$${u_0}(\lambda )= \frac{{ - {g_0} + {{[{{g_0}^2 + 4{g_1}{r_{rs}}(\lambda )} ]}^{1/2}}}}{{2{g_1}}},$$
where g0 and g1 are constants equal to 0.08945 and 0.1247, respectively; a is the total absorption coefficient of the water column; aphy is the phytoplankton absorption coefficient; ag is the absorption coefficient of gelbstoff (CDOM); aw is the absorption coefficient of pure water; bw is the backscattering coefficient of pure water; bbp is the particulate backscattering coefficient; and λ is the central wavelength of each band. The coefficients a0 and a1 were taken from Lee (1994) [23]. The difference between the estimated and actual subsurface remote sensing reflectance of optically deep water should be as small as possible, that is, the difference between um and u0 should be minimized, and the optimal values of ag(440) and C were determined with the Levenberg-Marquardt method [24].
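
For illustration, the following minimal Python sketch shows how this per-pixel optimization can be organized. It is not the authors' implementation: the band centres, the pure-water coefficients aw and bw, and the Lee (1994) coefficients a0 and a1 below are placeholder values and should be replaced with those in Table 1.

import numpy as np
from scipy.optimize import least_squares

# Placeholder band set and coefficients -- replace with the values in Table 1.
WL = np.array([480.0, 545.0, 660.0])        # band centres (nm), illustrative
AW = np.array([0.0145, 0.0565, 0.4100])     # pure-water absorption a_w, illustrative
BW = np.array([0.0026, 0.0019, 0.0006])     # pure-water backscattering b_w, illustrative
A0 = np.array([0.0263, 0.0180, 0.0152])     # a0(lambda), illustrative
A1 = np.array([0.0490, 0.0310, 0.0272])     # a1(lambda), illustrative
G0, G1, Y = 0.08945, 0.1247, 0.67875

def u_model(c, ag440):
    """Forward bio-optical model u_m(lambda) for chlorophyll-a C and ag(440)."""
    a_phi440 = 0.06 * c ** 0.65
    bbp = 0.0111 * c ** 0.62 * (550.0 / WL) ** Y
    ag = ag440 * np.exp(-0.015 * (WL - 440.0))
    aphy = (A0 + A1 * np.log(a_phi440)) * a_phi440
    a_tot, b_tot = AW + aphy + ag, BW + bbp
    return b_tot / (a_tot + b_tot)

def u_observed(rrs_above):
    """u_0(lambda) computed from above-surface Rrs via the QAA relations."""
    rrs = rrs_above / (0.52 + 1.7 * rrs_above)
    return (-G0 + np.sqrt(G0 ** 2 + 4.0 * G1 * rrs)) / (2.0 * G1)

def invert_uqaa(rrs_above, x0=(0.2, 0.05)):
    """Retrieve (C, ag440) per pixel by Levenberg-Marquardt on u_m - u_0.
    The unknowns are optimized in log space so that they stay positive."""
    u0 = u_observed(np.asarray(rrs_above, dtype=float))
    fun = lambda p: u_model(np.exp(p[0]), np.exp(p[1])) - u0
    res = least_squares(fun, np.log(x0), method="lm")
    return np.exp(res.x)                    # [C (mg m^-3), ag(440) (m^-1)]

# Example with illustrative Rrs values for one blue/green/red pixel:
print(invert_uqaa([0.012, 0.010, 0.003]))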

Table 1. Parameters used in the optimization algorithm.

2.2 Water depth retrieval model

2.2.1 Artificial neural network

2.2.1.1 Backpropagation (BP) neural network

As a conventional artificial neural network, the backpropagation (BP) neural network mimics the way neurons learn from feedback, and its structure consists of an input layer, an output layer, and one or more hidden layers. The interconnections between the neurons of adjacent layers are established by weights and biases tuned during the learning phase. The input of each neuron is the weighted sum of the outputs of the previous layer, and a sigmoid (S-shaped) function is applied to obtain the output [25].

2.2.1.2 Extreme learning machine

As a machine learning algorithm, an extreme learning machine (ELM) is designed for single-layer feedforward neural networks (SLFNs). Its fundamental property is that the parameters of the hidden layer nodes can be chosen randomly or artificially without any adjustment, and the training phase only requires determining the output weights. The network structure of the ELM is the same as that of the BP network with a single hidden layer, but the calculation model of the weights connecting the neurons in the ELM is different [26].

This section compares the two models with the machine learning algorithms, where all models were implemented in MATLAB. For the BP neural network and the ELM, the hidden layer contained 30 nodes. The BP neural network experiment was set up as follows: tansig and purelin were chosen as the transfer functions of the hidden and output layers, respectively, and trainlm was chosen as the training function; the maximum number of training epochs, learning rate, momentum coefficient, and target training error were 1000, 0.05, 0.9, and 10−5, respectively. The extreme learning machine, which served as the second neural network algorithm in the comparison, was configured with 30 hidden neurons and a sigmoidal transfer function. Image preprocessing, bathymetric retrieval, and result validation were then performed to obtain the experimental results of the BP neural network and the ELM; the preprocessing procedures, computing configuration, and result evaluations were the same for both networks.
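
As an illustration of the two network configurations described above, the following Python sketch shows a comparable setup. It is not the authors' MATLAB code: scikit-learn has no Levenberg-Marquardt (trainlm) solver, so LBFGS is used instead, and the minimal ELM class only mirrors the idea that the hidden layer is random and only the output weights are solved.

import numpy as np
from sklearn.neural_network import MLPRegressor

# BP-style network: one hidden layer of 30 tanh units, linear output.
bp_net = MLPRegressor(hidden_layer_sizes=(30,), activation="tanh",
                      solver="lbfgs", max_iter=1000, tol=1e-5, random_state=0)

class SimpleELM:
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed at random
        self.b = self.rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))            # sigmoid hidden layer
        self.beta = np.linalg.pinv(H) @ y                           # only weights trained
        return self
    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta

# X: per-pixel features (Rrs bands, optionally C and ag(440)); y: LiDAR depths.
# bp_net.fit(X_train, y_train); SimpleELM().fit(X_train, y_train).predict(X_test)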

2.2.2 Decision tree

2.2.2.1 Random forest (RF)

The random forest (RF) model is an ensemble supervised learning model. In this algorithm, multiple prediction models are generated simultaneously, and their predictions are combined to improve the overall accuracy. The random forest algorithm samples both the training data and the variables to generate a large number of decision trees. Each tree is built from a bootstrap sample, and the error is estimated from the out-of-bag samples. When each decision tree is grown, a random subset of the variables is considered at each split.

2.2.2.2 AdaBoost regressor

AdaBoost is a boosting algorithm that enhances the performance of weak classifiers by focusing learning on inaccurately classified samples. The AdaBoost algorithm has been extensively used to update the weights of weak classifiers with a classification error function [27]. Thus, the classifier is expressed as follows:

$$h(x )= \left\{ \begin{array}{ll} 1 & \textrm{if } \sum\limits_{t = 1}^T {{a_t}{h_t}(x)} \ge \textrm{threshold}\\ 0 & \textrm{otherwise} \end{array} \right.,$$
where $h(x ) = 1$ indicates that the sample belongs to the positive class. AdaBoost models [28] can achieve an accuracy above random chance on a classification problem. One-level decision trees are an appropriate weak learner for the AdaBoost approach; since such trees contain a single classification level, they are referred to as decision stumps. Each sample in the learning dataset carries a weight.

For the random forest and the AdaBoost regressor, the initial training dataset contained six features (Rrs of the red, green, blue, and NIR bands, C, and ag(440)) with the UQAA and four features (Rrs of the red, green, blue, and NIR bands) without the UQAA. The random forest experiment was set up as follows: the number of decision trees (nTree) was 300, and the number of variables (mtry) was 6 with the UQAA and 4 without the UQAA. The AdaBoost regressor experiment, which served as the second ensemble learning algorithm in this comparison, was set up as follows: the weak learner was a decision tree, its maximum depth was six with the UQAA and four without the UQAA, and the maximum number of iterations was 300.
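
A hedged Python sketch of these two ensemble configurations, using scikit-learn equivalents of the reported MATLAB settings (variable names are illustrative), is:

from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

n_features = 6   # Rrs(blue, green, red, NIR) + C + ag(440); set to 4 without the UQAA

# Random forest: nTree = 300 decision trees, mtry = number of input features.
rf = RandomForestRegressor(n_estimators=300, max_features=n_features, random_state=0)

# AdaBoost regressor: decision-tree weak learner, maximum depth 6 (4 without the
# UQAA), at most 300 boosting iterations. (Older scikit-learn versions use the
# keyword base_estimator instead of estimator.)
ada = AdaBoostRegressor(estimator=DecisionTreeRegressor(max_depth=n_features),
                        n_estimators=300, random_state=0)

# rf.fit(X_train, y_train); ada.fit(X_train, y_train)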

2.2.3 Support vector regression (SVR)

Support vector regression (SVR) is the main representative of statistical models in the field of artificial intelligence [29]. For a linear case, regression is performed directly by the decision function; for a nonlinear case, linear regression is realized by constructing the decision function in a high-dimensional space, which makes SVR suitable for building multidimensional regression models from small samples [30]. In this experiment, the LIBSVM toolbox in MATLAB was used to perform the numerical calculations [31].

For the SVR, the initial training dataset contained six features (Rrs of the red, green, blue, and NIR bands, C, and ag(440)) with the UQAA and four features (Rrs of the red, green, blue, and NIR bands) without the UQAA. The radial basis function was chosen as the kernel function of the support vector machine to obtain the optimal solution of the water depth retrieval model, and the kernel parameter γ and the penalty parameter were left at the defaults of the MATLAB toolbox.
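
The corresponding Python stand-in for the MATLAB LIBSVM call, with the RBF kernel and the library defaults for γ and the penalty parameter, might look as follows (illustrative only):

from sklearn.svm import SVR

# RBF kernel; gamma and the penalty parameter are left at the library defaults,
# mirroring the use of the MATLAB toolbox defaults described above.
svr = SVR(kernel="rbf")
# svr.fit(X_train, y_train); depth_pred = svr.predict(X_test)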

2.3 Image preprocessing and tidal correction

In this paper, the WorldView-2 blue, green, red, and near-infrared 1 bands were selected. On the one hand, these bands are similar to those of other high-resolution (<10 m) satellites, which makes it easier to verify the general applicability of the UQAA, and the near-infrared 1 band facilitates sun glint removal and atmospheric correction. On the other hand, limiting the analysis to these bands reduces the data cost.

To obtain the above-surface remote sensing reflectance Rrs required for estimating shallow water depth from high-spatial-resolution multispectral images, several image processing steps, including radiometric calibration, sun glint removal, and atmospheric correction, were used to convert the DN data into Rrs data.

Radiometric calibration and sun glint correction: radiometric calibration converts the raw digital numbers of each pixel to top-of-atmosphere reflectance. Sun glint on rough water surfaces is a strong source of disturbance and must be effectively suppressed, so methods for eliminating the impact of sun glint are an important research topic. In this article, we make use of the linear relationship between the reflectance of the near-infrared (NIR) and visible bands [32]:

$$\rho _{TOA}^{\deg }(\lambda )= {\rho _{TOA}}(\lambda )- {b_\lambda }(\lambda )({{\rho_{NIR}} - MIN{\rho_{NIR}}} ),$$
where $\rho _{TOA}^{deg}(\lambda )$ is the surface reflectance after the sun glint is removed for a given visible band, ${\rho _{TOA}}(\lambda )$ is the reflectance at the top of atmosphere (TOA), ${\rho _{NIR}}$ is the NIR reflectance, $MIN{\rho _{NIR}}$ is the minimum NIR reflectance for a given scene, and ${b_\lambda }(\lambda )$ is the regression constant relating the visible and NIR reflectance in the scene. λ is the center wavelength of different bands [32].
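
A minimal Python sketch of this deglinting step is shown below (not the authors' code; the function and variable names are illustrative, and the regression slope is fitted over a user-supplied sample of optically deep, glint-affected pixels):

import numpy as np

def deglint(vis_band, nir_band, sample_mask):
    """Remove sun glint from one visible band using the NIR-based regression above.
    vis_band, nir_band: 2-D TOA reflectance arrays; sample_mask: boolean array
    marking the deep-water pixels used to fit the slope b_lambda and the minimum NIR."""
    slope, _ = np.polyfit(nir_band[sample_mask], vis_band[sample_mask], 1)
    nir_min = nir_band[sample_mask].min()
    return vis_band - slope * (nir_band - nir_min)

# deglinted_blue = deglint(rho_blue_toa, rho_nir_toa, deep_water_mask)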

Atmospheric correction: the 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) approach [33] was used for atmospheric correction because its radiative transfer code simulates the reflection of solar radiation by a coupled atmosphere-surface system over a wide range of atmospheric, spectral, and geometric conditions [34]. Four sample points were selected, and the results of the atmospheric correction are shown in Fig. 3; the corrected remote sensing reflectance in deep water areas differed little from MODIS data acquired in the same period.

Tide correction is also crucial for ocean bathymetry. In the current study, since the image acquisition time is known, the height difference between the tide level at that time and the highest tide level can be extracted from the tidal table, and this height difference was used to correct the image-based sounding data for the tide [35].

2.4 Model validation

2.4.1 Accuracy validation

The depth estimation accuracy of all models can be obtained using the following errors:

$$MRE = \left( {\frac{1}{n}\sum\limits_{i = 1}^n {\frac{{|{{h_i} - {{\hat{h}}_i}} |}}{{{h_i}}}} } \right) \ast 100\%,$$
$$RMSE = {\left( {\frac{1}{n}\sum\limits_{i = 1}^n {{{({{h_i} - {{\hat{h}}_i}} )}^2}} } \right)^{1/2}},$$
where hi represents the measured depth, $\hat{h}_i$ is the estimated depth, and n is the number of validation points. The accuracy evaluation was based on the root mean square error (RMSE), the mean relative error (MRE), and the coefficient of determination (R2); smaller RMSE and MRE values reflect a higher bathymetric retrieval accuracy.

The current work also adopts the relative bathymetric error (RBE) to describe the error at a specific position. The |RBE| for a specific validation point is described as

$$|{RBE} |= (|{{h_i} - {{\hat{h}}_i}} |/{h_i})\ast 100\%$$
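
These three measures can be computed directly from the validation points; a small Python sketch (array names are illustrative) is:

import numpy as np

def mre(h_true, h_pred):
    """Mean relative error in percent, as defined above."""
    return float(np.mean(np.abs(h_true - h_pred) / h_true) * 100.0)

def rmse(h_true, h_pred):
    """Root mean square error in metres."""
    return float(np.sqrt(np.mean((h_true - h_pred) ** 2)))

def abs_rbe(h_true, h_pred):
    """Per-point absolute relative bathymetric error |RBE| in percent."""
    return np.abs(h_true - h_pred) / h_true * 100.0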

3. Test and results

3.1 Study site

The Xisha Islands, located on the western continental slope of the South China Sea, form a remarkable archipelago in the study region (8 km2), comprising more than 30 islands, reefs, and cays, including Xuande Atoll, Yongle Atoll, Huaguang Atoll, Dongdao Atoll, and several smaller islands [36]. Ganquan Island (111.60°E, 16.5°N) is part of the Xisha Islands of China. The island has a tropical monsoon climate; it is nearly 500 m wide from east to west and 700 m long from north to south, with an area of approximately 0.3 km2. The water near Ganquan Island is clean and limpid and is not significantly influenced by human activities. The study region and the multispectral image are presented in Fig. 2(a) and 2(b), respectively [37].

Fig. 1. Workflow processing steps of the presented methodology for detecting water depth from satellite images by using different models.

Fig. 2. (a) The geographic location of Ganquan Island is marked with a red star; (b) Multispectral image of Ganquan Island and the sampling points for atmospheric correction validation.

Fig. 3. (a) Spectrum comparison before atmospheric correction; (b) Spectrum comparison after atmospheric correction.

3.2 Measured data

3.2.1 WorldView-2 data

The WorldView-2 satellite provides a panchromatic band and a multispectral image with 0.5-m and 2-m spatial resolutions, respectively. The passive optical image employed in this paper was multispectral (see Fig. 2(b)). The multispectral bands used were blue (450–510 nm), green (510–580 nm), red (630–690 nm), and near-infrared 1 (770–895 nm). The image acquisition time was 03:33:31 a.m. (Coordinated Universal Time, UTC) on 2 April 2014.

3.2.2 LiDAR data

In this paper, the LiDAR data were laser sounding data of Ganquan Island measured by the Leica Hawk-Eye system. The airborne laser point cloud data in LAS format were processed with the TerraScan and Terrabatch modules of Terrasolid, including flight strip splicing, clipping, outlier removal, and island point cloud extraction. The density of the point cloud was 2.7 points per square meter, and the total number of points was more than 20 million. The average water surface elevation at points 100 m away from the island was taken as the mean sea surface at the time of the measurement. Through elevation filtering, the elevation points below the water surface were obtained with decimeter-level accuracy, and they served as the reference and verification data for the remote sensing water depth retrieval. A total of 30,000 water depth points were extracted, 1000 of which were used for the accuracy test, as shown in Fig. 4. The tide data for that day were queried from http://ocean.cnss.com.cn/ and used for correction.

Fig. 4. LiDAR data maps for the (a) calibration datasets and (b) validation datasets—depths in meters (m).

3.3 Results of the updated quasi-analytical algorithm

As shown in Fig. 4 and Fig. 5, the chlorophyll-a concentration C and the CDOM absorption coefficient at 440 nm, ag(440), around Ganquan Island were related to the water depth. The chlorophyll-a concentration C ranged from 0.1 to 0.45 mg·m−3, with an average of 0.23 mg·m−3, and ag(440) ranged from 0.025 to 0.09 m−1, with an average of 0.07 m−1. In optically shallow waters, the spatial distribution of the water quality parameters derived by the UQAA is highly correlated with the bottom brightness: the deeper the water, the lower the derived chlorophyll-a concentration C and CDOM absorption coefficient ag(440). Not only the water depth but also the bottom type influenced the results. The derived maps therefore contain obvious topographic information, which can be used in water depth retrieval.

Fig. 5. (a) Chlorophyll-a concentration C (mg·m−3) and (b) CDOM absorption coefficient at 440 nm, ag(440) (m−1).

3.4 Model validation of five models

When searching for cloud-free imagery, the absence of an acquisition with minimal cloud cover close to the survey date introduces a temporal offset between the image and the reference data. Nevertheless, the bathymetric LiDAR dataset was chosen in this paper because it provides comprehensive coverage and overlap of the whole reef for evaluation. It is sufficiently precise for SDB comparisons, as similar temporal offsets between satellite imagery and reference LiDAR have been used successfully in previous SDB studies [38]. The LiDAR data can also be used to check the consistency of the images, especially the consistency of the water depth interval.

From Fig. 6, when the number of training points is greater than 4000, the depth retrieval results tend to be stable, and the accuracy with UQAA input is clearly greater than that without UQAA input. The RMSEs and MREs of the extreme learning machine and BP neural network algorithms changed only slightly, with a reduction of only about 2%. The RMSEs and MREs of the random forest algorithm were of the same order as those of the AdaBoost regressor; both decreased slightly when the number of training points was above 6000 and decreased significantly when it was below 6000, and the values of the AdaBoost regressor were lower than those of the extreme learning machine and BP neural network algorithms. The RMSE and MRE of the SVR algorithm decreased significantly: the RMSE decreased by 0.5 m and the MRE by 5%.
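
The experiment summarized in Fig. 6 can be reproduced, in outline, by training each model on increasingly large random subsets of the LiDAR calibration points and scoring a fixed validation set. The Python sketch below is illustrative only (the random forest is used as an example) and assumes feature matrices with and without the UQAA columns are available.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def learning_curve(X_cal, y_cal, X_val, y_val, sizes, seed=0):
    """Return (n_train, RMSE, MRE) for each training-set size in `sizes`."""
    rng = np.random.default_rng(seed)
    scores = []
    for n in sizes:
        idx = rng.choice(len(y_cal), size=n, replace=False)
        model = RandomForestRegressor(n_estimators=300, random_state=0, n_jobs=-1)
        model.fit(X_cal[idx], y_cal[idx])
        pred = model.predict(X_val)
        rmse = float(np.sqrt(np.mean((y_val - pred) ** 2)))
        mre = float(np.mean(np.abs(y_val - pred) / y_val) * 100.0)
        scores.append((n, rmse, mre))
    return scores

# Run once with the 4-band feature matrix and once with the 6-column
# (Rrs + C + ag(440)) matrix to obtain the with/without-UQAA curves of Fig. 6.
# sizes = [500, 1000, 2000, 4000, 6000, 8000]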

Fig. 6. Model validation of the two models with machine learning algorithms. The bathymetric retrieval accuracies of the two models for the five machine learning algorithms are compared.

With UQAA input, the RMSE and MRE of the SVR algorithm were significantly reduced, and its depth retrieval accuracy was markedly improved. Relative to the BP neural network, the retrieval accuracy of the extreme learning machine changed little. Where there were few training points (fewer than 500), the accuracy of the BP neural network was relatively stable; numerically, the accuracy with the UQAA was higher than that without it, but the improvement was not significant. Compared with the depth retrieval results without UQAA input, the AdaBoost regressor significantly improved the MRE, while the change in RMSE was not obvious. The newly established method with RF showed better bathymetric performance when sufficient training samples were lacking.

The RMSE and MRE of the extreme learning machine algorithm were of the same order as those of the BP neural network algorithm (Fig. 6); they first increased and then decreased to the same level. Numerically, the accuracy of the BP neural network was higher than that of the extreme learning machine. The RMSE of the random forest algorithm was of the same order as that of the AdaBoost regressor; the RMSE of both first increased and then decreased to the same level, but the variation of the random forest algorithm was smaller than that of the AdaBoost regressor. Numerically, the accuracy of the random forest was higher than that of the AdaBoost regressor, while its MRE was of the same order. The MRE of the random forest algorithm increased with the number of training points, whereas that of the AdaBoost regressor decreased. The RMSE and MRE of the SVR algorithm were of the same order of magnitude as those of the other models.

In this paper, a large amount of LiDAR data was used in the experiment, but sufficient field calibration data are difficult to obtain in practical applications, so the results in that case (for example, fewer than 1000 training points) are worth discussing. From Fig. 6, when the number of training points is less than 1000, the behavior of the models differs: the BP neural network, AdaBoost, and SVR models are relatively stable, whereas the errors of the ELM and RF methods decline steeply as the number of measured points increases. There are many reasons for this behavior, such as the settings of the input parameters and the spatial locations of the training samples, and different numbers of training points yield different inversion results. It is difficult to enumerate every combination of training set size and resulting accuracy, and the best number of training points should be chosen on reasonable grounds. However, the overall results show that the machine learning algorithms with the UQAA inputs have higher accuracy than those without. Since the goal of this paper is not to determine which machine learning algorithm is best but to test whether the UQAA inputs improve accuracy, the optimization of each machine learning algorithm is not discussed further.

In terms of SDB prediction performance, when the number of training points was greater than 4000, the RMSE and MRE tended to level off; thus, 4000 training sample points were used for the test. The BP neural network and ELM were more accurate than the other three models, while the SVR was the worst (Table 2). In Table 2, the differences in RMSE between the models without and with the UQAA were 1 cm (BP), 3 cm (ELM), 8 cm (RF), 4 cm (AdaBoost), and 62 cm (SVR). All five machine learning models overpredicted in shallow waters and underpredicted in deeper waters (Table 2). The different machine learning algorithms were evaluated over the whole test site. When the number of training points reached 4000, the retrieval accuracy became smooth and steady, and when it exceeded 4000, the random forest algorithm showed excellent accuracy. The following are the depth retrieval results of the random forest algorithm with and without the UQAA when the number of training points is 4000.

Table 2. Model validation of SDB with 4000 training points.

The bathymetric retrieval accuracies of the two models with the RF machine learning algorithm are compared in Fig. 7 and Fig. 8. Figure 7 shows the digital depth models (DDMs) of Ganquan Island, and Fig. 8 shows the corresponding scatter plots.

Fig. 7. Digital depth model (DDM) of Ganquan Island: (a) LiDAR; (b) RF without UQAA; and (c) RF with UQAA.

Fig. 8. Bathymetric retrieval results derived from the selective image data of Ganquan Island: (a) RF without UQAA and (b) RF with UQAA.

The DDM in Fig. 7(b) was extracted from the chosen image data using the 4000 training points without the UQAA input training data, and the DDM in Fig. 7(c) was extracted using the same points with the UQAA input training data. As expected, the DDM extracted with the UQAA input training data fits the validation data better than that without it, particularly in the areas where the water depth changes significantly.

The RMSE of the DDM extracted from the selected image with UQAA input was lower than that without UQAA input because, with the UQAA training data, there were more training features, the input data were richer, and the training results were more accurate (Fig. 8). The results show that when the UQAA training data were used for a given image, the depth retrieval results for the entire image were better than those obtained without these data. This is why the UQAA outputs were added to the input data during the satellite depth retrieval.

The MRE of the DDM derived from the selected image with UQAA input was smaller, and its R2 higher, than those without UQAA input (Fig. 8). The MRE and R2 values can be used to understand where bathymetric retrieval was enhanced and how much the results varied among image data sources. The MRE and R2 values of the first 4000 validation points with and without UQAA input were computed to measure and verify the spatial distribution of the bathymetric retrieval errors and their change among the results. In the scatter diagrams, the training dataset with UQAA input produced better convergence and less noise than the dataset without the UQAA.

4. Discussion

In the experiments, we found that many parameters influence the results of machine learning depth inversion. The chlorophyll-a concentration and CDOM absorption coefficient results were reliable for this study because their spatial distribution characteristics were highly related to the bottom brightness, which is related to the water depth. Apart from adding the UQAA outputs to the input data as proposed in this paper, the factors most affecting the water depth retrieval accuracy were the selection of training sample points and the choice of machine learning model parameters. We compared the water depth results obtained with the same number of training sample points at different spatial locations, and, using the same training samples and the same machine learning algorithm, we compared the inversion results obtained with different model parameters. This section analyses which factors led to differences in the sounding results and why the proposed method provides good water depth inversion results.

4.1 Influence of bottom brightness on chlorophyll-a concentration and CDOM absorption coefficients

The QAA was designed for optically deep waters, so its behavior in optically shallow waters is worth exploring. The QAA has been applied successfully and can correctly reflect the IOPs in shallow, turbid water. Le et al. (2009) [21] validated the QAA for the highly turbid, eutrophic water of Meiliang Bay in Taihu Lake, demonstrating the effectiveness of a locally tuned QAA for shallow and highly turbid water: the percent difference between the retrieved and measured absorption coefficients was lower than 20% for all 13 samples in the 2007 dataset, and lower than 10% for most of them. Joshi and D'Sa (2018) [39] used a suite of synthetic data and in situ measurements to assess the QAA-V in optically complex and shallow estuarine waters. The QAA-V-derived total absorption and backscattering coefficients were validated for waters ranging from highly absorbing and turbid to relatively clear shelf waters and showed suitable performance on a HydroLight-simulated synthetic dataset (R2 > 0.87, MRE < 17%), on in situ estuarine and nearshore datasets (R2 > 0.70, MRE < 35%), and on the NOMAD dataset (R2 > 0.90, MRE < 30%). In optically shallow waters, the spatial distribution of the water quality parameters derived by the updated quasi-analytical algorithm (UQAA) is highly correlated with the bottom brightness [40]. Because the bottom reflection signal is strongly correlated with the spatial distribution of water depth, the derived water quality parameters may be helpful and applicable for optical remote sensing based satellite-derived bathymetry [9].

4.2 Sampling of different training points

The spatial distribution of the RBEs helps identify how the water depth retrieval accuracy can be improved and how the retrieval results of different algorithms differ. To obtain and verify the spatial distribution of the water depth inversion error and its changes between the different results, we used the random forest algorithm with 4000 training points for both configurations, calculated the RBE of each test point, and displayed the RBE classes as colored symbols on the spatial location map.

The distribution diagrams (Fig. 9 and Fig. 10) show that the accuracy of the new algorithm is clearly improved. Without UQAA input, the number of points with absolute RBEs in the 10%–20% range increases noticeably, whereas with UQAA input more points fall in the 0%–10% range. For absolute RBEs above 20%, the two distributions are similar, and the accuracy of the new algorithm is higher overall. This is consistent with the conclusions drawn from the results.

Fig. 9. Spatial distribution of RBEs with different algorithms: (a) without UQAA input; (b) with UQAA input. For the RBE map, blue symbols indicate absolute RBEs greater than or equal to 20%, green symbols indicate absolute RBEs in the 10%–20% interval, and red symbols indicate absolute RBEs less than 10%.

Fig. 10. Sampling of different training points: (a) the first 4000 points; (c) the second 4000 points; (b) RBEs for the first 4000 points in (a); and (d) RBEs for the second 4000 points in (c); selective image data. For the RBE map, blue symbols indicate absolute RBEs greater than or equal to 20%, green symbols indicate absolute RBEs in the 10%–20% interval, and red symbols indicate absolute RBEs less than 10%.

The spatial distribution of the RBEs also helps identify how the water depth retrieval results differ between different water depth point data sources. To acquire and verify the spatial distribution of the water depth retrieval error and its changes between the different results, we used the random forest algorithm with UQAA input and 4000 training points drawn from two different sets of spatial locations, calculated the RBE of each test point, and displayed the RBE classes as colored symbols on the spatial location map.

The purpose of this experiment was to verify the influence of the UQAA results on water depth retrieval while weakening the influence of other factors, which could be achieved by selecting training points with similar spatial distributions. Although the spatial distributions of the two sets of 4000 training sample points differed, they were similar in their depth gradient sampling intervals and could both reflect the changes in water depth well; it was therefore reliable to use them as training samples. In general, from the perspective of the RBEs, the difference between the results produced by the two training sets was small in spatial distribution, and the points with larger errors (|RBE| > 20%) were concentrated at shallower depths. In the northern sea region, where the water depth varies gradually and is shallower, the error was smaller; in the southern sea region, where the water depth varies drastically and is deeper, the error was larger. On the whole, although the two results differed, the difference was not apparent.

Different training sample points had an impact on the water depth results, but this impact produced only a slight difference in value and little difference in space. This effect is worth considering in depth inversion algorithms, but compared with the proposed approach, it was not the main factor affecting the depth inversion results obtained with the random forest algorithm.

4.3 Different machine learning model retrieval settings

It is obvious that, for machine learning algorithms, different model parameter settings lead to different water depth inversion results. To observe and analyze the influence of the model parameter settings on the inversion results, we ran the random forest algorithm with different numbers of decision trees (100 and 10,000) using 4000 training points at the same spatial locations, both with and without the UQAA training data.
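
A compact Python sketch of this sensitivity test (illustrative only; the feature matrix, depth vector, and index arrays are assumed to exist) is:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_count_sensitivity(X, y, train_idx, test_idx, tree_counts=(100, 10000)):
    """Fit RF on the same 4000 training points with different numbers of trees
    and return the validation RMSE for each setting."""
    out = {}
    for n_trees in tree_counts:
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=0, n_jobs=-1)
        rf.fit(X[train_idx], y[train_idx])
        pred = rf.predict(X[test_idx])
        out[n_trees] = float(np.sqrt(np.mean((y[test_idx] - pred) ** 2)))
    return out

# Call once with the 4-band matrix and once with the 6-column (Rrs + C + ag(440))
# matrix to obtain the four cases compared in Fig. 11 and Table 3.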

From the water depth retrieval results, all methods could reflect the changes in water depth (Fig. 11 and Table 3). Although different model parameters produced different inversion results, the error changed very little under the same conditions unless the UQAA training data were added. In the northern waters, where the water depth changes gradually and is shallow, the error was small; in the southern waters, where the water depth changes drastically and is deep, the error was larger. In the scatter plots, under the same training dataset, the results with 100 decision trees were worse than those with 10,000 decision trees, with larger RMSE and MRE. In general, although different machine learning parameter settings led to different inversion results, the differences were small compared with the effect of the proposed method: the bathymetry obtained using the UQAA together with remote sensing reflectance was better than that obtained using only remote sensing reflectance, with overall improvements of 1 cm and 2 cm (for RF + 100 trees and RF + 10,000 trees). In Table 2, the differences in the overall RMSE used to compare the impact of the number of RF trees and the UQAA were 9 cm (RF + 100 trees) and 8 cm (RF + 10,000 trees). The main conclusion is that the difference between using 100 and 10,000 trees is not significant.

Fig. 11. Bathymetric retrieval results: (a) RF + 100 trees without UQAA; (b) RF + 10,000 trees without UQAA; (c) RF + 100 trees with UQAA; and (d) RF + 10,000 trees with UQAA.

Table 3. Model validation of SDB with 4000 different sampling training points.

Different machine learning parameter settings yield different inversion results, but the differences are not obvious compared with the effect of the method proposed in this paper: the depth inversion results obtained with the UQAA training data had smaller errors and better quality. This effect differed slightly in value and little in space. It is worth considering for depth inversion algorithms, but compared with the presented approach, it was not the main factor affecting the inversion results of the random forest algorithm.

5. Conclusions

In this study, by implementing the updated quasi-analytical algorithm (UQAA), we mapped the chlorophyll-a concentration C and the CDOM absorption coefficient at 440 nm, ag(440); these maps contain topographic information and are strongly related to water depth. We used the WorldView-2 remote sensing reflectance together with the UQAA results, two neural network models, two ensemble learning models, and the SVR model to estimate the water depth, and we discussed the accuracy of satellite bathymetry for different training samples. For the satellite-derived depth prediction, the five model structures and the resulting water depths were evaluated with 4000 training samples.

Using the water depth map derived from LiDAR as the ground reference data, we drew the following conclusions. On the whole, when the number of training points was greater than 4000, the RMSE and MRE tended to level off. The bathymetry obtained using the UQAA and remote sensing reflectance was better than that obtained using only remote sensing reflectance, with overall improvements of 1 cm and 2 cm (for RF + 100 trees and RF + 10,000 trees). The spatial distribution and values of the chlorophyll-a concentration C and the CDOM absorption coefficient at 440 nm, ag(440), were closely related to water depth, which improves the accuracy of water depth retrieval, and they can therefore be used as features for water depth estimation from multispectral remote sensing imagery.

Regarding the results, it seems that only in the SVR model is the difference (62 cm) large enough to affirm that, statistically, the application of the UQAA positively affects the model; in the other models, the positive impact of the UQAA is much harder to appreciate. Since the improvement is slight, one way to confirm it would be to carry out the same study in other areas (of a different nature: more sunlight, turbulence, etc.) and to significantly increase the amount of training and test data.

Funding

High Resolution Earth Observation Systems of National Science and Technology Major Projects (05-Y30B01-9001-19/20-2); Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (GML2019ZD0602); National Natural Science Foundation of China (61991454); National Key Research and Development Program of China (2016YFC1400901).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. F. Moberg and C. Folke, “Ecological goods and services of coral reef ecosystems,” Ecological economics 29(2), 215–233 (1999). [CrossRef]  

2. H. S. Cesar, “Coral reefs: their functions, threats and economic value,” Collected essays on the economics of coral reefs 14, (2002).

3. J. Hedley, C. Roelfsema, I. Chollett, A. Harborne, S. Heron, S. Weeks, W. Skirving, A. Strong, C. Eakin, T. Christensen, V. Ticzon, S. Bejarano, and P. Mumby, “Remote Sensing of Coral Reefs for Monitoring and Management: A Review,” Remote Sens. 8(2), 118 (2016). [CrossRef]  

4. J. Horta, A. Pacheco, D. Moura, and Ó. Ferreira, “Can recreational echosounder-chartplotter systems be used to perform accurate nearshore bathymetric surveys?” Ocean Dynamics 64(11), 1555–1567 (2014). [CrossRef]  

5. G. Chust, M. Grande, I. Galparsoro, A. Uriarte, and A. Borja, “Capabilities of the bathymetric Hawk Eye LiDAR for coastal habitat mapping: A case study within a Basque estuary,” Estuarine, Coastal Shelf Sci. 89(3), 200–213 (2010). [CrossRef]  

6. S. Coveney and X. Monteys, “Integration potential of INFOMAR airborne LIDAR bathymetry with external onshore LIDAR data sets,” J. Coast. Res. 62, 19–29 (2011). [CrossRef]  

7. C. Cahalane, A. Magee, X. Monteys, G. Casal, J. Hanafin, and P. Harris, “A comparison of Landsat 8, RapidEye and Pleiades products for improving empirical predictions of satellite-derived bathymetry,” Remote Sens. Environ. 233, 111414 (2019). [CrossRef]  

8. E. Vahtmäe and T. Kutser, “Airborne mapping of shallow water bathymetry in the optically complex waters of the Baltic Sea,” J. Appl. Remote Sens 10(2), 025012 (2016). [CrossRef]  

9. D. R. Lyzenga, “Passive remote sensing techniques for mapping water depth and bottom features,” Appl. Opt. 17(3), 379–383 (1978). [CrossRef]  

10. R. P. Stumpf, K. Holderied, and M. Sinclair, “Determination of water depth with high-resolution satellite imagery over variable bottom types,” Limnol. Oceanogr. 48(1part2), 547–556 (2003). [CrossRef]  

11. H. Mohamed, A. Negm, M. Zahran, and O. C. Saavedra, “Bathymetry determination from high resolution satellite imagery using ensemble learning algorithms in Shallow Lakes: Case study El-Burullus Lake,” Int. J. Environ. Sci. Dev. 7(4), 295–301 (2016). [CrossRef]  

12. T. Sagawa, Y. Yamashita, T. Okumura, and T. Yamanokuchi, “Satellite Derived Bathymetry Using Machine Learning and Multi-Temporal Satellite Images,” Remote Sens. 11(10), 1155 (2019). [CrossRef]  

13. Ö. Ceyhun and A. Yalçın, “Remote sensing of water depths in shallow waters via artificial neural networks,” Estuarine, Coastal Shelf Sci. 89(1), 89–96 (2010). [CrossRef]  

14. M. D. M. Manessa, A. Kanno, M. Sekine, M. Haidar, K. Yamamoto, T. Imai, and T. Higuchi, “Satellite-derived bathymetry using random forest algorithm and worldview-2 Imagery,” Geoplanning J Geomatics Plan 3(2), 117–126 (2016). [CrossRef]  

15. A. Misra, Z. Vojinovic, B. Ramakrishnan, A. Luijendijk, and R. Ranasinghe, “Shallow water bathymetry mapping using Support Vector Machine (SVM) technique and multispectral imagery,” Int. J. Remote Sens. 39(13), 4431–4450 (2018). [CrossRef]  

16. M. El-Diasty, “Satellite-Based Bathymetric Modeling Using a Wavelet Network Model,” ISPRS International Journal of Geo-Information 8(9), 405 (2019). [CrossRef]  

17. R. Benshila, G. Thoumyre, M. A. Najar, G. Abessolo, R. Almar, E. Bergsma, G. Hugonnard, L. Labracherie, B. Lavie, and T. Ragonneau, “A deep learning approach for estimation of the nearshore bathymetry,” J. Coast. Res. 95(sp1), 1011–1015 (2020). [CrossRef]  

18. B. Chen, Y. Yang, D. Xu, and E. Huang, “A dual band algorithm for shallow water depth retrieval from high spatial resolution imagery with no ground truth,” ISPRS J. Photogramm. Remote Sens. 151, 1–13 (2019). [CrossRef]  

19. J. Li, D. E. Knapp, S. R. Schill, C. Roelfsema, S. Phinn, M. Silman, J. Mascaro, and G. P. Asner, “Adaptive bathymetry estimation for shallow coastal waters using Planet Dove satellites,” Remote Sens. Environ. 232, 111302 (2019). [CrossRef]  

20. X. Zhang, Y. Ma, and J. Zhang, “Shallow Water Bathymetry Based on Inherent Optical Properties Using High Spatial Resolution Multispectral Imagery,” Remote Sens. 12(18), 3027 (2020). [CrossRef]  

21. R. Huang, K. Yu, Y. Wang, J. Wang, L. Mu, and W. Wang, “Bathymetry of the coral reefs of Weizhou island based on multispectral satellite images,” Remote Sens. 9(7), 750 (2017). [CrossRef]  

22. Z. Lee, K. Carder, and R. Arnone, “Deriving Inherent Optical Properties from Water Color: a Multiband Quasi-Analytical Algorithm for Optically Deep Waters,” Appl. Opt. 41(27), 5755–5772 (2002). [CrossRef]  

23. Z. Lee, Visible-infrared remote sensing model and applications for ocean waters, (University of South Florida, 1994).

24. D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963). [CrossRef]  

25. H. Ren and S. Y. Huang, “Water depth estimation from WorldView-2 image with back propagation neural network in coastal area,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (IEEE, 2018), pp. 7863–7865.

26. G.-B. Huang, “What are extreme learning machines? Filling the gap between Frank Rosenblatt’s dream and John von Neumann’s puzzle,” Cognit Comput. 7(3), 263–278 (2015). [CrossRef]  

27. J. C. Maxwell, A treatise on electricity and magnetism (Clarendon University, 1873), Vol. 1.

28. I. Jacobs, “Fine particles, thin films and exchange anisotropy,” Magnetism, 271–350 (1963).

29. A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statistics and computing 14(3), 199–222 (2004). [CrossRef]  

30. J.-Y. Zhang, J. Zhang, Y. Ma, A.-N. Chen, J. Cheng, and J.-X. Wan, “Satellite-derived bathymetry model in the Arctic waters based on support vector regression,” J. Coastal Res. 90(sp1), 294–301 (2019). [CrossRef]  

31. C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Trans. Intell. Syst. Technol. 2(3), 1–27 (2011). [CrossRef]  

32. S. Kay, J. D. Hedley, and S. Lavender, “Sun Glint Correction of High and Low Spatial Resolution Images of Aquatic Scenes: a Review of Methods for Visible and Near-Infrared Wavelengths,” Remote Sens. 1(4), 697–730 (2009). [CrossRef]  

33. E. F. Vermote, D. Tanré, J. L. Deuze, M. Herman, and J.-J. Morcette, “Second simulation of the satellite signal in the solar spectrum, 6S: An overview,” IEEE Trans. Geosci. Remote Sensing 35(3), 675–686 (1997). [CrossRef]  

34. F. Eugenio, J. Marcello, and J. Martin, “High-Resolution Maps of Bathymetry and Benthic Habitats in Shallow-Water Environments Using Multispectral Remote Sensing Imagery,” IEEE Trans. Geosci. Remote Sensing 53(7), 3539–3549 (2015). [CrossRef]  

35. Y. Liu, R. Deng, Y. Qin, B. Cao, Y. Liang, Y. Liu, J. Tian, and S. Wang, “Rapid estimation of bathymetry from multispectral imagery without in situ bathymetry data,” Appl. Opt. 58(27), 7538–7551 (2019). [CrossRef]  

36. N. Zhao, D. Shen, and J.-W. Shen, “Formation Mechanism of Beach Rocks and Its Controlling Factors in Coral Reef Area, Qilian Islets and Cays, Xisha Islands, China,” J. Earth Sci. 30(4), 728–738 (2019). [CrossRef]  

37. Z. Zhang, J. Zhang, Y. Ma, H. Tian, and T. Jiang, “Retrieval of Nearshore Bathymetry around Ganquan Island from LiDAR Waveform and QuickBird Image,” Appl. Sci. 9(20), 4375 (2019). [CrossRef]  

38. P. Vinayaraj, V. Raghavan, and S. Masumoto, “Satellite-Derived Bathymetry using Adaptive Geographically Weighted Regression Model,” Mar. Geod. 39(6), 458–478 (2016). [CrossRef]  

39. I. D. Joshi and E. J. D’Sa, “An estuarine-tuned quasi-analytical algorithm (QAA-V): assessment and application to satellite estimates of SPM in Galveston Bay following Hurricane Harvey,” Biogeosciences 15(13), 4065–4086 (2018). [CrossRef]  

40. J. Li, Q. Yu, Y. Q. Tian, B. L. Becker, P. Siqueira, and N. Torbick, “Spatio-temporal variations of CDOM in shallow inland waters from a semi-analytical inversion of Landsat-8,” Remote Sens. Environ. 218, 189–200 (2018). [CrossRef]  
