
Research Article | Open Access


Zhongping Lee, Mingjia Shangguan, Rodrigo A. Garcia, Wendian Lai, Xiaomei Lu, Junwei Wang, Xiaolei Yan, "Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data", Journal of Remote Sensing, vol. 2021, Article ID 9841804, 16 pages, 2021.

Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data

Received: 01 Nov 2020
Accepted: 08 Mar 2021
Published: 28 Apr 2021


With the advancement of Lidar technology, the bottom depth (H) of optically shallow waters (OSW) can be measured accurately with an airborne or space-borne Lidar system (H_Lidar hereafter), but this data product comes in a line format rather than the desired charts or maps, particularly when the Lidar system is on a satellite. Meanwhile, radiometric measurements from multiband imagers can also be used to infer H (H_img hereafter) of OSW with variable accuracy, and a map of bottom depth can be obtained. It is logical and advantageous to use the two data sources from collocated measurements to generate a more accurate bathymetry map of OSW, for which image-specific empirical algorithms are usually developed and applied. Here, after an overview of both the empirical and semianalytical algorithms for the estimation of H from multiband imagers, we emphasize that the uncertainty of H_img varies spatially, although it is straightforward to draw regressions between H_Lidar and radiometric data for the generation of H_img. Further, we present a prototype system to map the confidence of H_img pixel-wise, which has been lacking until today in the practice of passive remote sensing of bathymetry. We advocate the generation of a confidence measure in parallel with H_img, which is important and urgent for broad user communities.

1. Introduction

The ocean depth is a geophysical property that has puzzled humans for thousands of years. The answer not only satisfies curiosity but is also important for many aspects of human activity and scientific study, including navigation, ecosystem management, sustainable economic development, and ocean dynamic modeling [1]. To determine ocean bathymetry, the ancient Greeks (circa 80 BCE) used the “line sounding” method and obtained depth measurements down to ~2000 m in the Mediterranean Sea, while James Clark Ross obtained a depth of 4893 m in the 1840s. All these measurements were based on ship surveys; hence, it is unsurprising that the Challenger expedition (1872-1876), an oceanic voyage explicitly targeting ocean bathymetry, obtained just 492 soundings of the Atlantic Ocean. Only after the development of sonar around World War II were great achievements made in mapping ocean bathymetry, or bottom topography. Still, because the ocean bottom is covered by a thick layer of water that is generally opaque to electromagnetic radiation, we know much less about the seafloor than we do about the surface of the Moon or Mars [2].

Since the launch and operation of satellites, our capability to observe and measure ocean bathymetry has significantly improved; in particular, sea surface altimetry has been successfully used to infer bathymetry indirectly [3, 4]. However, this approach cannot resolve small-scale variations and can only detect large seamounts that alter the earth’s gravitational field and subsequently the sea surface altimetry. One direct and precise measurement of bathymetry from airborne or space-borne platforms is Lidar (light detection and ranging) [5], which uses the time elapsed between the emission of photons and the reception of those reflected by the bottom (or a target) to calculate the distance the photons traveled. This time-based technique can be used to accurately calculate the bottom depth of clear oceanic waters, at present down to about 40 m [6, 7]. We commonly term this technique active remote sensing of bottom depth and here use H_Lidar to represent the product obtained (see Table 1 for the major symbols and acronyms used in this article). Although Lidar is only feasible for relatively shallow and clear waters, due to the significance of such regions, many airborne Lidar systems have been developed specifically for bathymetry [5, 8]. One extremely exciting and valuable development is the ICESat-2 satellite system [9], which transmits a laser at 532 nm, has a vertical resolution of about 0.17 m for bathymetry, and can potentially provide H_Lidar for various nearshore regions of the world [7]. This Lidar system, due to its space-borne nature, obtains measurements as points along lines (dictated by the Lidar system and satellite orbit), not as the desired bathymetry map.

Symbol or acronym | Definition | Unit
a(440) | Total absorption coefficient at 440 nm | m^-1
H | Bottom depth | m
H_Lidar | Bottom depth measured by a Lidar system | m
H_img | Bottom depth derived from an imager | m
ρ_toa | Top-of-atmosphere reflectance | dimensionless
R_rs | Remote sensing reflectance | sr^-1
R_rs^mea | R_rs from a measurement | sr^-1
R_rs^mod | R_rs from a model | sr^-1
R_rs^pool | R_rs in the matchup data pool | sr^-1
R_rs^tar | R_rs from a water target | sr^-1
CSS | Confidence score system |
EBA | Empirically based approach or algorithm |
EEA | Explicit empirical approach |
ICESat-2 | Ice, Cloud and land Elevation Satellite-2 |
IEA | Implicit empirical approach or algorithm |
IOPs | Inherent optical properties |
Lidar | Light detection and ranging |
MBVA | Multiband value algorithm |
MLA | Machine learning approach or algorithm |
MLARrs | MLA based on R_rs |
MLAρtoa | MLA based on ρ_toa |
OSW | Optically shallow water |
SAA | Semianalytical approach or algorithm |
TBRA | Two-band ratio algorithm |

A completely different approach to the optical measurement of bathymetry is based on radiative transfer, where a shallow bottom affects the radiance emerging from below the sea surface. A quick and simple example is the shallow vs. deep ends of a swimming pool, which appear as different colors to humans. Algorithms were developed in the 1970s to use radiometric data from multiband imagers to estimate the bottom depth of shallow clear waters [10–12]. This image-based remote sensing of the bottom depth (H_img hereafter) is commonly termed passive remote sensing. Although such estimates of depth are not precise, a significant advantage is that a map of H_img can be produced, especially with the launch and operation of more advanced sensors [1]. In recent decades, with an improved understanding of radiative transfer in optically shallow waters (OSW), more sophisticated algorithms based on radiative transfer were developed [13–19], resulting in the creation of more bathymetry maps from imagers [19–22]. For those algorithms based on the physics of radiative transfer, a priori depths are not required for algorithm development, and H_img can be produced as long as the input reflectance spectrum is highly reliable and has an adequate spectral resolution. However, since both the water and bottom properties affect the reflectance spectrum, there are still various uncertainties in the H_img product derived from the reflectance spectrum.

In view of the availability of concurrent or collocated, highly accurate satellite-based H_Lidar and high-spatial-resolution multiband imagers, it is logical to develop schemes to generate bathymetry maps of OSW through the fusion of the two data sources [23, 24]. Figure 1 shows an example of measurements captured by ICESat-2 (red dashed line) and the Landsat-8 Operational Land Imager (OLI) over the Great Bahama Bank, where it is desired to expand the ICESat-2 bathymetry to the entire shallow region covered by the Landsat-8 acquisition. As demonstrated in many studies, when collocated measurements of both H_Lidar and radiometric properties are available, the generation of H_img through explicit empirical regressions [25–28] or neural networks [29–31] becomes straightforward. However, these products [25, 32] lack a representation of the confidence or quality of the product at each pixel, although schemes to estimate the impact of radiometric noise on H_img have been developed [21, 33]. Usually, an averaged root mean square error or mean relative error of an algorithm is provided, but such measures represent the performance over a data pool and do not imply the same error or confidence at every pixel or location [34]. Since the first demonstration of passive remote sensing of bottom depth [10, 12], what has limited the broad-scale application of the product is not a lack of algorithms but rather the lack of a pixel-wise confidence measure for such products. Brando et al. [35] suggested using the closure between the measured and modeled remote sensing reflectance to infer the quality of H_img. However, as this closure is an index for numerical solutions of a complex remote sensing function (see Section 2.3), a good closure does not necessarily indicate high confidence in the derived H_img. At present, the ability to generate a pixel-wise measure of the quality or confidence of H_img through theoretical modeling is lacking, despite the necessity of such a measure, as confidence would vary spatially even within clear waters.

In this work, after reviewing and demonstrating traditional passive schemes for bathymetry, we present an original prototype system to objectively classify the confidence of H_img. We advocate the generation of such confidence products in parallel with remotely sensed bathymetry, as a confidence measure would be essential to promote the use of H_img by the broader community.

2. Overview of Passive Remote Sensing Algorithms for Shallow-Water Bathymetry

Detailed descriptions of the derivation of H_Lidar from a Lidar system can be found in Guenther [5], where the key is to obtain precise measurements of the time lapses of photons reflected by the sea bottom. For a water body with a shallow bottom, the remote sensing reflectance (R_rs, in sr^-1), defined as the ratio of the radiance emerging from below the surface to the downwelling irradiance just above the surface, can be expressed as [36–38]

R_rs = R_rs^dp [1 − exp(−(K_d + K_u^C) H)] + (ρ/π) exp(−(K_d + K_u^B) H)    (1)

Here, R_rs^dp is the remote sensing reflectance of the same water body but with no impact from the bottom (i.e., optically deep); ρ is the bottom reflectance modified by air-sea transmittance; K_d is the diffuse attenuation coefficient of downwelling irradiance, with K_u^C and K_u^B the diffuse attenuation coefficients of upwelling photons generated by scattering in the water column and by bottom reflection, respectively. K_d, K_u^C, and K_u^B can be parameterized with the water inherent optical properties [13, 36, 39]. Thus, while R_rs depends on the water optical properties and bottom reflectance, it is also a function of the bottom depth (H). Hence, various passive remote sensing algorithms have been developed to retrieve H from this spectral signal [11, 13, 21, 40]. These algorithms can be grouped into three approaches: two belong to the empirically based approach (EBA), and the third is a semianalytical approach (SAA). The following briefly describes the essence of these three schemes.
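As an illustration, the two-path structure of Equation (1) can be sketched numerically. The function below and all input values are hypothetical choices for demonstration, not the parameterization used in the paper.

```python
import numpy as np

def rrs_shallow(rrs_dp, rho, Kd, Ku_C, Ku_B, H):
    """Two-path shallow-water remote sensing reflectance (Eq. (1)).

    rrs_dp : deep-water reflectance of the same water column (sr^-1)
    rho    : bottom reflectance modified by air-sea transmittance
    Kd     : diffuse attenuation of downwelling irradiance (m^-1)
    Ku_C, Ku_B : diffuse attenuation of upwelling photons from water-column
                 scattering and from bottom reflection (m^-1)
    H      : bottom depth (m)
    """
    column = rrs_dp * (1.0 - np.exp(-(Kd + Ku_C) * H))   # water-column path
    bottom = (rho / np.pi) * np.exp(-(Kd + Ku_B) * H)    # bottom-reflected path
    return column + bottom

# As H grows, the bottom term vanishes and Rrs approaches rrs_dp.
print(rrs_shallow(0.005, 0.2, 0.1, 0.12, 0.14, 100.0))
```

Note how a bright bottom raises R_rs above the deep-water value at shallow depths, which is the signal exploited by all the passive algorithms that follow.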

2.1. Explicit Empirical Approach (EEA)
2.1.1. Multiband Value Algorithm (MBVA)

With collocated measurements of bottom depth and multiband radiometric data, Polcyn et al. [10] proposed the first empirical algorithm for H based on the difference between the shallow- and deep-water R_rs, with the algorithm further refined by Lyzenga [12]. Generally, H_img can be written as

H_img = a_0 + a_1 X    (2)

Here, X is the logarithm of (R_rs − R_rs^dp) (or of (ρ_toa − ρ_toa^dp)) and is calculated for a specific spectral band, while a_0 and a_1 are empirical coefficients tuned using collocated R_rs (or ρ_toa) and H_Lidar. This algorithm can be improved with the use of additional bands:

H_img = a_0 + Σ_{i=1}^{N} a_i X_i    (3)

N is the number of bands that are available and feasible from an imager; thus, there are N + 1 algorithm coefficients (a_i) to be tuned. Since Equations (2) and (3) are empirical, there is the potential that R_rs is smaller than R_rs^dp, e.g., over dark seagrass regions, which then makes the logarithm in X mathematically invalid; this formula can therefore be modified as [41]

H_img = a_0 + Σ_{i=1}^{N} a_i ln(R_rs(λ_i))    (4)

Since bottom depth is related directly to the value of R_rs, we term this empirical model for H_img a multiband value approach (MBVA). Equation (4) and Equations (2) and (3) are essentially the same, except that the shallow-minus-deep difference (R_rs − R_rs^dp) in Equations (2) and (3) is replaced by R_rs itself in Equation (4).
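As a sketch of how the N + 1 MBVA coefficients of Equation (4) could be tuned, the snippet below fits them by ordinary least squares on synthetic matchups; all values are illustrative, not real OLI/ICESat-2 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matchups: 4 visible bands of Rrs (sr^-1) and Lidar depths (m),
# generated from a known coefficient set plus noise (illustrative only).
n, nbands = 200, 4
rrs = rng.uniform(0.002, 0.05, size=(n, nbands))
true_a = np.array([2.0, -1.5, 0.8, -0.3, 0.5])        # a0..a4, illustrative
X = np.column_stack([np.ones(n), np.log(rrs)])        # [1, ln Rrs(λ1..λ4)]
h_lidar = X @ true_a + rng.normal(0.0, 0.05, n)

# Tune the N + 1 MBVA coefficients of Eq. (4) by ordinary least squares.
a_hat, *_ = np.linalg.lstsq(X, h_lidar, rcond=None)
h_img = X @ a_hat
```

The same design-matrix construction works for Equations (2) and (3) by substituting the logarithm of the shallow-minus-deep difference for ln R_rs.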

2.1.2. Two-Band Ratio Algorithm (TBRA)

Also recognizing that the difference between R_rs and R_rs^dp could be negative over dark bottom substrates, Stumpf et al. [25] developed an empirical approach to estimate H_img from the ratio of two bands:

H_img = m_1 [ln(n R_rs(λ_1)) / ln(n R_rs(λ_2))] − m_0    (5)

The three model coefficients (m_0, m_1, and n) are also tuned using pairs of collocated R_rs and H_Lidar. The value of n ranges from 500 to 1500 and is usually fixed at 1000 [25, 42], while Traganos and Reinartz [34] indicated that a different value of n works fine for a seagrass environment. As demonstrated in many studies [24, 25, 42], maps of bathymetry can be generated following this scheme. It is also possible to use the logarithm of the ratio of two bands for the empirical estimation of H_img [43]. However, Traganos et al. [41] found that its performance is worse than that of the formulation given by Equation (5); hence, its discussion is omitted here. Because this approach employs a ratio of R_rs at two bands, we term this scheme a two-band ratio algorithm (TBRA), although Equation (5) can be expanded to include more available bands. The algorithms following Equations (2)–(5) are data-based (empirical), where the algorithm relationships and coefficients are explicit, and the coefficients are driven by pairs of known R_rs and H_Lidar. Note that if R_rs from more bands is required, higher demands are placed on atmospheric correction, especially at the longer wavelengths over optically shallow regions, where presently the OLI R_rs value in the red band is sometimes invalid.
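A minimal sketch of TBRA (Equation (5)) tuning, assuming n is fixed at 1000 and using synthetic (not real) matchup data, so that m_1 and m_0 follow from a simple linear fit of depth on the band ratio:

```python
import numpy as np

def tbra(rrs_b2, rrs_b3, m1, m0, n=1000.0):
    # Eq. (5): depth from the ratio of log-transformed bands,
    # with the scaling constant n usually fixed at 1000.
    return m1 * np.log(n * rrs_b2) / np.log(n * rrs_b3) - m0

rng = np.random.default_rng(1)

# Synthetic matchups (illustrative): blue and green Rrs and Lidar depths
# generated from a known ratio-depth relation plus noise.
rrs_b2 = rng.uniform(0.01, 0.05, 300)
rrs_b3 = rrs_b2 * rng.uniform(0.5, 0.7, 300)
ratio = np.log(1000.0 * rrs_b2) / np.log(1000.0 * rrs_b3)
h_lidar = 25.0 * ratio - 25.0 + rng.normal(0.0, 0.1, 300)

# With n fixed, m1 and -m0 follow from a linear fit of H_Lidar on the ratio.
m1_hat, intercept = np.polyfit(ratio, h_lidar, 1)
h_img = tbra(rrs_b2, rrs_b3, m1_hat, -intercept)
```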

2.2. Implicit Empirical Approach (IEA)
2.2.1. Estimation from

Different from the explicit empirical algorithms shown above, the machine learning approach (MLA, which in this manuscript collectively stands for neural networks, machine learning, and deep learning) is another data-based approach for the estimation of H_img from remote sensing measurements [29, 30, 44, 45]. Unlike in an EEA, the algorithm relationships and coefficients of an MLA are hidden in the computer programming architecture (various layers and neurons), so it is not obvious how H_img varies with R_rs or spectral radiance. As there are no explicit equations or parameters for such an approach, conceptually the algorithm can be expressed as

H_img = MLA_Rrs(R_rs(λ_1), R_rs(λ_2), …, R_rs(λ_N))    (7)

Here, N is the number of bands available and feasible for the estimation of H_img (e.g., usually Bands 1-4 for Landsat-8 OLI), although there are no specific restrictions on the bands that can be used.

2.2.2. Estimation from Top-of-Atmosphere Reflectance

Given that machine learning is empirical, another way to utilize an MLA for H_img estimation is to bypass atmospheric correction [46], thereby estimating H_img and/or water properties directly from the top-of-atmosphere reflectance (ρ_toa) [47]. Like Equation (7), this scheme can conceptually be defined as

H_img = MLA_ρtoa(ρ_toa(λ_1), ρ_toa(λ_2), …, ρ_toa(λ_N))    (8)

To implicitly account for the contribution of the atmosphere to ρ_toa, the bands range from the visible to the NIR-SWIR (e.g., Bands 1-7 for Landsat-8 OLI). Similar to the algorithms in Section 2.1, when sufficient pairs of ρ_toa and H_Lidar are available, an MLAρtoa can be developed for the remote sensing of H_img from ρ_toa. While an EEA like Equation (5) could be developed with 5-10 data points, an MLA requires much more data (usually hundreds of pairs or more) in the training phase. In addition, an MLA has a much more complex computer architecture than the simple mathematical formulations presented in Equations (2)–(5).
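Conceptually, an MLARrs of the kind listed in Table 2 (1 hidden layer, 5 neurons) could be sketched with scikit-learn as below. The data are synthetic stand-ins for matchup pairs, and the standardization step is our own addition for numerical stability, not a detail from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic stand-in for a training pool: 4-band Rrs spectra (sr^-1)
# paired with Lidar depths (m); not real matchup data.
rrs = rng.uniform(0.002, 0.05, size=(500, 4))
h_lidar = 2.0 - 40.0 * (rrs[:, 1] - rrs[:, 3]) + rng.normal(0.0, 0.05, 500)

# One hidden layer with 5 neurons, as in the MLARrs row of Table 2.
mla = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), activation="relu",
                 max_iter=5000, random_state=0),
)
mla.fit(rrs, h_lidar)
h_img = mla.predict(rrs)
```

An MLAρtoa sketch is identical in structure: the input array simply becomes ρ_toa over Bands 1-7 instead of R_rs over Bands 1-4.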

2.3. Semianalytical Approach (SAA)

A completely different set of algorithms for H_img is based on the radiative transfer equation. After parameterizing the spectra of the inherent optical properties (IOPs) and the bottom reflectance in Equation (1), an R_rs spectrum of shallow water can be simplified with five variables and expressed as [13]

R_rs(λ) = F(P, G, X, B, H)    (9)

Here, P and G represent the absorption coefficients of phytoplankton (a_ph) and detritus-gelbstoff (a_dg), X is the particle backscattering coefficient (b_bp), and B is the bottom reflectance, all set at a reference wavelength, such as 440 nm. The five variables can be solved numerically through spectral optimization (or minimization) by minimizing a cost function computed between the measured and modeled spectra, defined as

err = [Σ_λ (R_rs^mea(λ) − R_rs^mod(λ))²]^(1/2) / [Σ_λ (R_rs^mea(λ))²]^(1/2)    (10)

Provided there are a sufficient number of spectral bands and the R_rs spectrum is of high quality, H_img can be generated from image-based spectrometers without a priori data pairs of R_rs and H [35, 48, 49]. In addition, the bottom substrate class and the water optical properties can also be generated from this process [14, 15, 19]. The SAA is extremely valuable for measurements where only R_rs is available, but the retrievals depend on the quality of the R_rs spectrum and the number of spectral bands [21, 33, 35, 50]. For multiband imagery with a limited number of spectral bands in the visible domain, such as Landsat-8 OLI and Sentinel-2 MSI, modifications to the SAA variables and processing are necessary [51]. Additionally, its computational load is significantly greater than that of an EBA because an SAA solves 4-5 variables simultaneously for a given spectrum; fortunately, this demand can be met with greatly improved computer technology.

3. Data and Processing

Landsat-8 OLI and ICESat-2 measurements are used here to demonstrate the generation of H_img maps from collocated Lidar data and multiband imagery, along with a pixel-wise confidence score.

3.1. Landsat-8 Data

Landsat-8 is an extension of the earlier Landsat series [52] and was launched on February 11, 2013. Its OLI has seven bands in the ~440–2200 nm domain to take measurements of the earth-atmosphere system. In particular, the spatial resolution of Landsat-8 OLI is 30 m, which resolves detailed features of coastal regions where bottom depth can be highly heterogeneous. The Landsat-8 OLI Level-1 data processed by the Level-1 Product Generation System (LPGS) can be downloaded from the USGS website. The image data were processed using the SeaDAS package (v7.5) [53], and the atmospheric correction algorithm proposed by Bailey et al. [54] was adopted for the generation of R_rs. A low threshold of 0.0003 sr^-1 is used for R_rs when the obtained R_rs is negative. Here, a few images over Florida Bay (24.76-24.89°N, 80.75-80.77°W; June 9 and October 15), the Great Bahama Bank (23.96-25.14°N, 76.80-76.93°W; March 7 and May 26), and the Great Barrier Reef (23.18-23.57°S, 151.68-151.93°E; August 8 and September 17), all captured in 2019, were processed and used as examples.

3.2. ICESat-2 Data

ICESat-2 was launched on September 15, 2018. It is a follow-on mission to Ice, Cloud and land Elevation Satellite (ICESat) and provides global altimetry and atmospheric measurements with emphasis on surface elevation changes in polar regions [9]. The sole instrument onboard ICESat-2 is the Advanced Topographic Laser Altimeter System (ATLAS), a green (532 nm) wavelength, photon-counting laser altimeter with a 10 kHz pulse repetition rate [9, 55]. ATLAS uses photomultiplier tubes (PMTs) as detectors in the photon-counting mode so that a single photon reflected back to the receiver triggers a detection within the ICESat-2 data acquisition system. This single-photon-sensitive detection technique used by ATLAS to measure photon time of flight provides a very high vertical resolution required to detect small temporal changes in polar ice elevations [56, 57], as well as the bottom depth of optically shallow waters [7].

3.3. Data Matchup and Statistical Measures

Matchup datasets between the Landsat-8 OLI and ICESat-2 measurements were organized with the following processing steps: Landsat-8 OLI pixels with dense clouds were first discriminated and removed based on a threshold of the Rayleigh reflectance at the SWIR band (1238 nm) [58]. Meanwhile, pixels with low-quality R_rs were removed based on the standard Level-2 quality flags included in SeaDAS: ATMFAIL (atmospheric correction failure), LAND (land pixel), CLDICE (probable cloud or ice contamination), HILT (very high or saturated observed radiance), and HIGLINT (strong sun glint contamination).

The ICESat-2 bathymetry results presented in this work use geolocated photon data, contained in the ATL03 data product, which are segmented into granules that span about 1/14th of an orbit [59]. Both OLI and ATL03 photon products include latitude and longitude information within the WGS-84 coordinate reference system.

Considering that the variation of H (after tidal correction) is negligible within a short period, the time constraint for “concurrent” Landsat-8 OLI and ICESat-2 data is set as ±2 weeks, and the ICESat-2 H_Lidar product is adjusted to match the tidal stage of the Landsat-8 acquisition, where the classical tidal harmonic analysis model T_TIDE was applied to calculate tide information at the locations of interest [60]. To match ICESat-2 H_Lidar with OLI R_rs, we first located the ICESat-2 track within the OLI image. For an ICESat-2 data point, a Landsat-8 pixel was first selected based on the closest distance. Since the spatial resolution of ICESat-2 along the orbit track is 0.7 m, while the footprint of OLI is 30 m, there are many ICESat-2 measurements within one OLI pixel. Therefore, for this OLI pixel, all ICESat-2 points within a radius of 15 m are averaged, and this mean H_Lidar is considered the matchup for this OLI R_rs.
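The matchup averaging step described above could be sketched as follows, assuming the photon and pixel coordinates have already been projected to a metric grid; the function and variable names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def matchup_depths(pixel_xy, photon_xy, photon_depth, radius=15.0):
    """For each OLI pixel center (x, y in meters), average the Lidar depths
    of all ICESat-2 photons within `radius` meters.
    Returns the mean H_Lidar per pixel (NaN where no photons fall)."""
    tree = cKDTree(photon_xy)
    groups = tree.query_ball_point(pixel_xy, r=radius)
    return np.array([np.mean(photon_depth[idx]) if idx else np.nan
                     for idx in groups])

# Illustrative: one pixel at the origin, photons every 0.7 m along-track.
photons_x = np.arange(-14.0, 14.1, 0.7)
photon_xy = np.column_stack([photons_x, np.zeros_like(photons_x)])
depths = np.full(len(photons_x), 5.0)
print(matchup_depths(np.array([[0.0, 0.0]]), photon_xy, depths))
```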

To measure the deviation or error of the H_img product, in addition to the coefficient of determination (R²) between any two sets of data, the root mean square error (RMSE) and the mean absolute relative error (MARE) between H_img and H_Lidar are calculated as

RMSE = [ (1/n) Σ (H_img − H_Lidar)² ]^(1/2)
MARE = (1/n) Σ ( |H_img − H_Lidar| / H_Lidar )

where n is the total number of pairs used in the analyses. Note that the term “error,” rather than “difference,” is used in these analyses. This is because the uncertainty of H_Lidar is very low (a few centimeters); thus, H_Lidar can be considered the ground “truth,” and any difference between H_img and H_Lidar most likely originates in the system used to produce H_img.
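These two metrics can be written directly in code; the sample depths below are illustrative.

```python
import numpy as np

def rmse(h_img, h_lidar):
    """Root mean square error between imager and Lidar depths (m)."""
    return float(np.sqrt(np.mean((h_img - h_lidar) ** 2)))

def mare(h_img, h_lidar):
    """Mean absolute relative error, with H_Lidar as the reference truth."""
    return float(np.mean(np.abs(h_img - h_lidar) / h_lidar))

h_lidar = np.array([2.0, 4.0, 8.0])   # illustrative "truth" depths (m)
h_img = np.array([2.2, 3.8, 8.4])     # illustrative imager retrievals (m)
print(rmse(h_img, h_lidar), mare(h_img, h_lidar))
```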

4. Predictability of Empirical Schemes

For a robust empirical scheme, the first aspect to check is whether there are strong correlations between the input and the desired output, which is termed predictability here and measured by the coefficient of determination (R²) in linear regression. A value of R² = 1 indicates 100% predictability or certainty. For the case of bathymetry, the output is the bottom depth, while the input is the spectral information (spectra of R_rs or ρ_toa here) or values after mathematical transformation (e.g., the parameters X or the band ratio in Equations (2)–(5)). In the following, we use the compiled matchup datasets to show the different predictability of the abovementioned empirical schemes (TBRA, MBVA, MLARrs, and MLAρtoa).

4.1. Predictability with Data from One Image

Many publications [24, 26, 41] have shown strong predictability (high R²) of an EEA (TBRA or MBVA) for the estimation of H_img from R_rs. Such high predictability is not always the case, however [28] (also see Table 2). Figure 2 shows matchup measurements (>3500 pairs) over the Great Bahama Bank, an environment with generally clear water and shallow depths [43, 61, 62]. The OLI R_rs was obtained on March 7, 2019, and H_Lidar (obtained on March 16, 2019) ranges over ~1.5–9.0 m after tide correction. The R² value between the two-band ratio (OLI Band 2 to Band 3) and H_Lidar is ~0.36 (Figure 2(b)), which drops to ~0.20 when R_rs is replaced by ρ_toa. These values indicate that such a ratio explains at most <40% of the variance for this dataset, even though the radiometric measurements came from the same image and the points (see the red dashed line in Figure 2(a)) span a distance of ~110 km. Most of the remaining variance (>60%) likely comes from the water column and bottom properties (i.e., assuming that uncertainties from sun-sensor geometry and atmospheric properties can be omitted), and these variations could not be resolved by the two-band ratio. These R² values are significantly lower than those reported in previous studies [24, 26, 63], indicating a high data or environmental dependence of TBRA and its algorithm coefficients (m_0, m_1, n). The use of 20 or so data pairs to obtain a stable set of coefficients [63, 64] is likely a special case rather than a common situation. This also echoes the findings of earlier studies [25, 27, 65] that one set of empirical coefficients cannot satisfy all pixels, even within the same image, unless the threshold for acceptable uncertainty is relaxed.

Table 2: Performance of the MLA schemes tuned with data from one image.

Algorithm type | Number of free variables in the algorithm | R² | MARE | RMSE (m)
MLARrs | 1 hidden layer, 5 neurons | 0.91 | 0.05 | 0.39
MLAρtoa | 1 hidden layer, 8 neurons | 0.90 | 0.05 | 0.40
MLARrs | 3 hidden layers; 128, 32, and 16 neurons | 0.97 | 0.04 | 0.33
MLAρtoa | 3 hidden layers; 128, 32, and 16 neurons | 0.97 | 0.04 | 0.31

The R² value increases to 0.88 (Figure 2(c)) if MBVA (Equation (4)) is used for the same dataset, indicating significantly higher predictability of MBVA for this dataset or environment. The results are even better (R² of 0.91), although not by much, if an MLA with 1 hidden layer and 5 neurons (for an R_rs spectrum containing four visible bands) is used (Figure 2(d) and Table 2). These results highlight the importance of using more bands [66], explicitly (MBVA) or implicitly (MLA), to account for the likely changes of water properties and bottom substrates across an image.

In an MLA, each neuron is similar to a free variable in a multivariate nonlinear regression; thus, more free variables tend to improve regressions. For the MLA with 1 hidden layer and 5 neurons (for R_rs), the number of free variables is equivalent to that of MBVA; therefore, the results suggest an improved capability of MLA to pick up hidden relationships between H and R_rs. This predictability is further improved with a deep-learning architecture using 3 hidden layers and more neurons (see Table 2), indicating a great potential of machine learning for the empirical estimation of bottom depth. Further, the statistical measures are nearly the same (see Table 2) between using R_rs and using ρ_toa (1 hidden layer with 8 neurons, i.e., the number of spectral bands plus 1) as the input to an MLA. These results suggest that, through a nonlinear scheme like an MLA, it is feasible to bypass the atmospheric correction step and retrieve H_img directly from the top-of-atmosphere measurements [46].

4.2. Predictability with Data from Multiple Images

To further observe the impact of data on the predictability of using two bands or multiple bands, especially the tolerance of MLA, a total of 5172 pairs of collocated Landsat-8 OLI and ICESat-2 data covering waters of the Bahamas (23.96-25.14°N, 76.80-76.93°W), Florida Bay (24.76-24.89°N, 80.75-80.77°W), and the Great Barrier Reef (23.18-23.57°S, 151.68-151.93°E) were compiled. For H_Lidar, corrected to match the tidal stage of the OLI measurements and ranging over ~1.0-11.0 m, TBRA, MBVA, and MLA produced R² values of 0.48, 0.84, and 0.91, respectively (see Table 3). MLAρtoa performs slightly better than MLARrs across these multiple images with various atmospheric properties, further supporting the concept of obtaining H_img from ρ_toa when an MLA is applicable. The improved predictability of MBVA and MLA is echoed by the MARE and RMSE values (see Tables 2 and 3), which are calculated after the model coefficients are determined through tuning or training. For instance, the MARE value is ~10% with an RMSE of 0.54 m for MBVA, and the MARE is ~8% with an RMSE of 0.52 m for MLA. However, TBRA yields a MARE of 27% with an RMSE of 1.25 m (see Table 3), roughly three times the values obtained with MBVA and MLA. These evaluations indicate the improved predictability gained by using more bands, rather than information from just two bands, for the estimation of H_img.

Table 3: Performance of the MLA schemes tuned with data from multiple images.

Algorithm type | Number of free variables in the algorithm | R² | MARE | RMSE (m)
MLARrs | 1 hidden layer, 5 neurons | 0.91 | 0.08 | 0.52
MLAρtoa | 1 hidden layer, 8 neurons | 0.92 | 0.08 | 0.51

Such a result is expected because, as shown by Equation (9), R_rs of shallow water is governed by at least 4-5 variables; thus, a ratio of R_rs at two bands cannot resolve all of the unknowns unless some of them are nearly constant or covary with each other over the region of interest. However, even for this region of the Bahamas, as shown by Barnes et al. [61] and Garcia et al. [62], the R_rs, IOPs, and bottom substrates vary spatially. This is further evidenced by the spatial variation of a(440) (see Figure 3) derived from HOPE for the matchup data in Figure 2, where a(440) varied from ~0.03 to 0.09 m^-1, showing limited correlation with H_Lidar (R² of 0.18, with an inverse relationship). For such wide variability in a(440) (with no covariance with H), which plays a key role in the spectral variation of an R_rs spectrum, more bands and more free variables in an algorithm improve the predictability. Note that a(440) in Figure 3 is derived from HOPE with H fixed as H_Lidar (after tidal correction) and one further variable fixed at 0.002 m^-1 (see Section 5.2 for details), so only three variables remain to be solved in Equation (9); therefore, the resulting a(440) values are more reliable after this reduction of variables.

5. Applicability and Confidence Measure

The ultimate goal of any algorithm is to apply it to new measurements, i.e., data not used in the tuning or training, in order to obtain the desired remotely sensed product. While an empirical algorithm for H_img can easily be developed from collocated imagery and H_Lidar data, the extent to which such an algorithm can be applied to new data is unknown. It has been demonstrated that the model coefficients (e.g., a_i and m_i in Equations (2)–(5)) developed from one image cannot be applied to another image if low uncertainties are the goal [26]. The scatter in the regression shown in Figure 2 indicates that these empirical coefficients may not be applicable even to locations within the same image, unless larger uncertainties are acceptable.

Conventionally, the applicability of an algorithm is assessed by evaluating its performance on an independent dataset, with the reported RMSE and/or MARE values as justification [25-27]. It is necessary to keep in mind that such averages, although informative, depend on the data pool and do not represent the error or uncertainty of each pixel [34]. Because different users have different tolerances for the uncertainty of H_img (e.g., the high accuracy required for navigation), the average error is insufficient to inform all users of H_img. It is necessary and important to provide a confidence measure for H_img products at each grid cell or pixel. The following addresses the confidence associated with both EBA and SAA, with a first-ever attempt to provide a pixel-wise confidence measure for H_img.

5.1. Applicability of EBA and Measure of Confidence
5.1.1. Issues of the H_img Map from Landsat-8 OLI

Following the practices commonly presented in the literature [23, 25], a 30 m resolution H_img map (see Figure 4) over the Great Bahama Bank was generated from a Landsat-8 OLI image (May 26, 2019) with a TBRA tuned using matchup data generated from this image and ICESat-2 bathymetry (May 25, 2019; the red dashed line in this map; a total of 1707 pairs of data). As shown in the literature [23, 25], and as desired, the discrete, line-type bathymetry product from ICESat-2 is now expanded to form a bathymetry map. Overall, for the western side of Andros Island, Great Bahama Bank, the bottom depth ranged from ~2.0 to 8.0 m. This is consistent with our general understanding and with depth retrievals from other observations and methods [61, 62, 67]. On average, the difference is ~28.0% when compared with H_img derived from MERIS [62], consistent with values reported in the literature [26, 28]. Obviously, however, there are erroneous outputs: the derived bathymetry is ~15.0 m for the Tongue of the Ocean (TOTO), which is known to be ~2000 m deep. In other words, a “false positive” shallow bottom is derived from TBRA (a similar “shallow bottom” for TOTO is also observed from MBVA and MLA; results not shown here). Such false positives can also be found in Caballero and Stumpf [26] for waters around the Dry Tortugas, Key West (see Fig. 6d of Caballero and Stumpf [26]). These false positives result from two inherent limitations of empirical algorithms for H_img:

(1) Empirical algorithms for H_img (e.g., Equations (2)–(8)) are developed using data from optically shallow waters, as only such data contain an optical signal from the bottom.

(2) By design, an empirical algorithm is data-driven; i.e., it can only be applied to measurements with characteristics similar to those of the training data.

When an EBA, such as the TBRA, is applied to multiband imagery, however, these two basic requirements or assumptions are hardly tested or evaluated a priori. In other words, an HRS map is generated by blindly assuming that the algorithm is applicable to any Rrs in the image, not just the Rrs used during algorithm tuning. Consequently, an erroneous bottom depth over TOTO was generated (see Figure 4). For this image, we know TOTO is optically deep, so such false products can easily be ignored or masked out. However, we do not know a priori the optical properties of every location or pixel within an image; thus, it is not certain whether the environment is optically shallow or not, and it is not straightforward to mask out optically deep waters with empirical algorithms such as Equations (2)–(8). As a result, retrievals of deeper depths are usually masked out manually, and arbitrarily (e.g., [68]).

5.1.2. Criteria to Check the Applicability of an Empirical Algorithm for HRS

To confidently apply an EBA to a new spectrum, at the least, it is important and necessary to check whether the Rrs from the target location meets the following two criteria:

(1) Criterion 1 (Cr1). Whether the location is optically shallow; and

(2) Criterion 2 (Cr2). Whether there is an identical or similar Rrs spectrum in the training pool.

These two criteria are presently omitted or ignored in practices related to EBA for HRS, although an SAA can separate optically deep from optically shallow waters during data processing [20, 35].

Given that the EBA formulation for deriving depth cannot indicate whether a pixel is optically deep or shallow, a neural network (NNOSW) based on a Multilayer Perceptron (MLP) was developed to aid the determination of Cr1. An MLP is a class of feedforward artificial neural network (ANN) composed of one input layer, one or more hidden layers, and one output layer. Since Landsat data are used here, the Rrs values of the four visible bands are the input, while the output is optically deep (assigned a value of 0) or shallow (assigned a value of 1). The number of hidden layers and the number of neurons in each layer were determined following the concept of minimum loss, a common approach in developing deep learning systems. Training data came from known optically deep (Landsat measurements in Massachusetts Bay, Chesapeake Bay, and TOTO) and optically shallow (Great Bahama Bank, Florida Keys) environments. After many training attempts, two hidden layers with 32 and 16 neurons were found to provide the best performance for this separation. The Rectified Linear Unit (ReLU) is employed as the activation function of the hidden layers, which largely avoids vanishing gradients; since this is a binary classification, the activation function of the output layer is a sigmoid. Training was stopped when the loss function converged. Figure 5 shows an example of the OSW classification after applying NNOSW to the Landsat-8 OLI image displayed in Figure 1. Although the deep vs. shallow separation may not be perfect at this stage, the waters of TOTO are optically deep and clearly separated. Since all neural network-based algorithms are data-driven, we envision that this initial NNOSW will be updated as more optically deep and shallow data become available.
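The architecture described above can be sketched as follows, using scikit-learn's MLPClassifier as a stand-in; the synthetic four-band Rrs data are illustrative only, whereas the paper trained on Landsat-8 measurements from known deep and shallow sites.

```python
# Minimal sketch of an NN_OSW-style classifier: two hidden layers
# (32 and 16 neurons), ReLU activations, sigmoid output. The synthetic
# data below are an assumption for illustration, not the paper's data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
# Toy four-band spectra: "shallow" pixels are brighter in the green/red
# bands than "deep" pixels, mimicking a bottom-reflectance signal.
deep = rng.normal([0.004, 0.005, 0.003, 0.0005], 0.0005, size=(n, 4))
shallow = rng.normal([0.006, 0.010, 0.015, 0.004], 0.0015, size=(n, 4))
X = StandardScaler().fit_transform(np.vstack([deep, shallow]))
y = np.array([0] * n + [1] * n)  # 0 = optically deep, 1 = optically shallow

# For binary targets, scikit-learn applies a logistic (sigmoid) output.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

In practice the trained classifier would be applied pixel by pixel to the image before any EBA is used, masking pixels classified as optically deep.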

Further, a similarity index (SIMRrs) is designed for Cr2, such that the higher the SIMRrs value, the more similar the target spectrum is to the training data, and therefore the more likely the target spectrum was "learned" during algorithm development. In this effort, SIMRrs is defined and calculated as follows.

A target spectrum (RrsT) is evaluated against each spectrum in the training pool (Rrsi):

Here, MARDRrs represents a mean absolute relative difference between two spectra, computed for each Rrsi in the training pool. Equations (12) and (13) show two ways of quantifying MARD: a small difference in Rrs at a single band plays a bigger role in MARD1Rrs, while the large values in Rrs play a larger role in MARD2Rrs. Since there are many Rrsi in the data pool (1707 pairs in this case), many MARD1Rrs and MARD2Rrs values are obtained for a given RrsT; the minimum of the equally weighted combination of MARD1Rrs and MARD2Rrs is selected for the quantification of SIMRrs (Equation (14)).

The use of 50% for both MARD1Rrs and MARD2Rrs is a compromise between the two ways of evaluating spectral differences.
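Since Equations (12)–(14) are not reproduced here, the sketch below is a plausible formalization consistent with the description above: MARD1 averages band-wise relative differences (so small-magnitude bands weigh more), MARD2 normalizes the summed absolute difference by the summed spectral magnitude (so large-magnitude bands dominate), and the two are combined with the 50% weights before taking the minimum over the pool. All three definitions are assumptions, not the paper's exact equations.

```python
# Hypothetical implementation of SIM_Rrs, assuming the two MARD variants
# and the 50/50 combination described in the text.
import numpy as np

def mard1(rrs_t, rrs_i):
    # Band-wise relative difference, averaged over bands:
    # small-magnitude bands contribute strongly.
    return float(np.mean(np.abs(rrs_t - rrs_i) / ((rrs_t + rrs_i) / 2)))

def mard2(rrs_t, rrs_i):
    # Summed absolute difference normalized by summed magnitude:
    # large-magnitude bands dominate.
    return float(np.sum(np.abs(rrs_t - rrs_i)) / np.sum((rrs_t + rrs_i) / 2))

def sim_rrs(rrs_t, pool):
    # 50/50 combination of the two MARDs, minimized over the training pool.
    scores = [0.5 * mard1(rrs_t, r) + 0.5 * mard2(rrs_t, r) for r in pool]
    return 1.0 - min(scores)

pool = [np.array([0.005, 0.008, 0.012, 0.003]),
        np.array([0.004, 0.006, 0.010, 0.002])]
print(sim_rrs(pool[0], pool))  # identical spectrum in pool -> prints 1.0
```

Under these assumed definitions, an identical spectrum in the pool yields SIMRrs = 1, matching the interpretation used in the next subsection.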

5.2. Confidence Score System for HRS

We thus propose to use this similarity measure to gauge the likely quality of HRS. For instance, if SIMRrs = 1 for an Rrs, there is an identical Rrs in the training pool. Further, we know the absolute relative error of HRS (AREH) for each Rrs in the data pool (see Figure 6 for an example); thus, an AREH identical to that of the matching Rrs in the data pool is expected for this Rrs. Therefore, based on the value of AREH, the confidence or quality of each HRS can be classified as detailed below. Here, AREH is the absolute difference between HRS and HLidar relative to HLidar, i.e., AREH = |HRS − HLidar|/HLidar × 100%.

While a low SIMRrs value indicates low confidence in HRS (i.e., the Rrs is likely outside the data range used in training), a high SIMRrs value does not automatically guarantee high confidence or high accuracy of HRS. As shown in Figure 6, although the mean AREH is ~5% (R2 = 0.69 between HRS and HLidar) for the entire dataset, this does not mean it is 5% for each point. For some data points, AREH can be as high as 20%. Thus, if the target Rrs matches an Rrs in the pool having an AREH of 20%, the HRS of this Rrs is expected to have a similarly large relative error from this algorithm.

Following the above indications, we designed a preliminary confidence score system (CSS) based on both SIMRrs and AREH in the data pool to classify the quality of the HRS product. At this initial stage, the CSS coarsely classifies the confidence of HRS into three classes, low, medium, or high, determined with a decision tree (see Figure 7 for details). With this tree, HRS from an Rrs spectrum having both a high SIMRrs value and a low AREH value can be considered to have high confidence.
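The decision tree can be sketched as below. The exact branching and thresholds are given in the paper's Figure 7 (not reproduced here); the SIMRrs threshold of 0.9 and the AREH cutoffs of 10% and 25% are illustrative assumptions, with 25% echoing the low-confidence criterion mentioned in the evaluation that follows.

```python
# Hypothetical sketch of the EBA confidence score system (CSS).
# Thresholds are assumptions, not the values of the paper's Figure 7.
SIM_MIN = 0.9      # below this, the spectrum was likely never "learned"
ARE_HIGH = 10.0    # matched error (%) at or below which confidence is high
ARE_LOW = 25.0     # matched error (%) above which confidence is low

def css_eba(sim_rrs: float, are_h_pct: float) -> str:
    """Classify the confidence of an EBA-derived depth as low/medium/high."""
    if sim_rrs < SIM_MIN or are_h_pct > ARE_LOW:
        return "low"       # spectrum unfamiliar, or matched error too large
    if are_h_pct <= ARE_HIGH:
        return "high"      # familiar spectrum with a small matched error
    return "medium"

print(css_eba(0.99, 5.0))   # -> high
print(css_eba(0.99, 18.0))  # -> medium
print(css_eba(0.50, 5.0))   # -> low
```

A function of this shape would be evaluated per pixel, producing the confidence map in parallel with the bathymetry map.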

Figure 8 shows a map of confidence for the HRS product presented in Figure 4. Generally, for pixels not too far from the track of ICESat-2 and with depths of ~6.0-8.0 m, the confidence of HRS is high, a result of characteristics similar to the data and environments used in training. For pixels near the coast of Andros Island and in the Exumas region (east of TOTO), where the retrieved HRS is generally under ~4.0 m, the confidence of HRS is low. This is because the data pool used to develop the empirical algorithm has a depth range of ~5.0-8.0 m (see Figure 6); thus, the algorithm did not "learn" the spectral characteristics of Rrs at depths shallower than ~4.0 m or with very different bottom and/or water properties (see Figure 3 for the wide variation of Rrs). This low confidence is further confirmed using HLidar obtained on March 16, 2019 (the black line in Figure 8). After adjusting for the tidal cycle to match the image time of May 26, 2019, and assuming no significant change of bottom topography over the intervening ~70 days, AREH was found to be generally around 40% or more (see Figure 9), above the 25% criterion for low confidence. The overall accuracy of classifying low-confidence pixels is 80.2%, while the accuracy of classifying medium- and high-confidence pixels is ~1%. This extremely low accuracy for medium- and high-confidence pixels is due to the small number of such data points (see Figure 3) and is thus not statistically significant. The less-than-perfect classification also results, in part, from a few (~200) measurements where HLidar values are less than 4.0 m but AREH values are under 10%; i.e., they belong to the high-confidence category. This excellent performance of the TBRA for these pixels deserves further study, as the range of HLidar used in TBRA development was ~5.0-8.0 m (see Figure 6).
Nevertheless, these results (Figures 6, 8, and 9) highlight that, unlike Sonar- or Lidar-produced bathymetry, where the measurement uncertainty is generally uniform, the uncertainty or confidence of the HRS product is far from uniform [34]; thus, it is important and necessary to have a pixel-wise measure of the quality of HRS. Further, the >80% success rate suggests that the CSS does provide a good indication of the confidence of HRS, although there is room for improvement.

Caballero and Stumpf [26] suggested using multiple acquisitions to assess the performance of an algorithm, with the assumption that bottom depth remains the same (after tidal correction) over a short period of time. These multiple observations are useful and important [51], but they cannot overcome systematic biases embedded in an empirical algorithm. One example is the "shallow bottom" of TOTO (Figure 4); such a "shallow bottom" will repeat itself whenever similar empirical algorithms are applied to new multiband images.

5.3. Applicability of SAA
5.3.1. Example of HRS from Landsat-8 OLI

An SAA is not data-driven; its applicability depends on the Rrs spectrum itself, as well as on the bio-optical models and the simplified expression for Rrs [13, 21, 35, 50]. As articulated in many studies, an SAA requires a highly accurate Rrs spectrum as input, as errors in Rrs propagate into the retrieved IOPs and/or HRS [21, 33, 35]. While empirical algorithms can overcome some systematic errors in Rrs during the tuning or training phase, an SAA, at least in its present form, cannot. In addition, the number of wavelengths plays an important role in the retrieval of HRS [50]. This is because, within an SAA, the IOPs and bottom properties are assumed to be independent variables, while empirical algorithms (especially MLA) may find, and remedy to some extent, hidden relationships among them, thereby transferring systematic biases or relationships into the algorithm coefficients (explicitly or implicitly).

To demonstrate the retrieval of HRS from Landsat-8 OLI data with an SAA, the default HOPE algorithm was applied to the data pairs shown in Figure 2(a) (the red line), a data pool with a wide range of bottom depths (HLidar is ~1.5-9.0 m) and dynamic water properties (with absorption in a range of ~0.03-0.09 m-1). Because Landsat-8 OLI has only four usable bands for shallow-water remote sensing (the 865 nm band carries nearly no information about the water and bottom for most water bodies), Equation (9) is underdetermined. Considering that the spectral shapes of the phytoplankton and gelbstoff absorption components in the 440-561 nm range are similar, and that they do not contribute significantly to the total absorption at 561 nm, the phytoplankton absorption term in Equation (9) was fixed at 0.002 m-1 in order to process Landsat-8 Rrs; this modified version is termed HOPELS8. The fixed value of 0.002 m-1 simply reflects that, for waters in this region, it is close to the lowest value observed at 440 nm [61]. Also note that HOPELS8 is certainly subject to refinement, but that is not the focus here.

Figure 10(a) compares the HRS profile from HOPELS8 with HLidar from ICESat-2, which are two independent determinations; an R2 value of 0.66 was obtained. Figure 10(a) also shows the AREH profile for each pair, which ranges from 0.2% to 100%, with a median of 17.7%. Although a generally consistent bathymetry pattern is obtained from HOPELS8, these statistical measures do suggest that substantially more effort is required if high-confidence HRS is to be retrieved by HOPELS8 from such multiband Rrs.

There could be many sources contributing to the moderate performance of the retrieved HRS. These include the sensor's calibration, the atmospheric correction, the bio-optical models used in HOPELS8, and the number of available bands. It is not within the scope of this effort to address the impact of those elements or the refinement of HOPELS8, where algorithm improvement is constantly ongoing. Here, we focus on the necessity and development of a CSS to measure the pixel-wise confidence of HRS retrieved from an SAA (such as HOPELS8). Brando et al. [35] developed a system to classify the quality of HRS into two categories (good or bad) based on the err value (Equation (10)), assuming that HRS has high confidence when a low err value (i.e., good closure between the measured and modeled spectra) is obtained. However, err is determined by various components and many sources; the same err can correspond to different retrieval results. For instance, when the bio-optical models are modified, a different HRS would be retrieved, but the err value can remain the same. Thus, as demonstrated in Figure 10(b) and previous studies [13, 35, 49], there is no relationship between err and AREH (R2 is ~0.1); err alone is insufficient to indicate the quality of HRS when it is retrieved with an SAA. A small err, by definition, indicates only close agreement between the measured and modeled spectra.

5.3.2. Confidence Score System for SAA-Derived HRS

Following the CSS scheme for EBA, a prototype CSS for HRS from HOPE (CSSHOPE) was also developed and is presented in Figure 11. Since an SAA determines a set of solutions by minimizing err, a first-order decision can be based on the value of err [13, 35]. If err is higher than a threshold, the closure between the input and output spectra is insufficient; thus, the retrieved HRS could be questionable [13, 35]. Here, we tentatively set this threshold at 0.02, as most err values are smaller than this (see Figure 10(b)) for the data pool shown in Figure 2. When no collocated and reliable HLidar data are available, the maximum relative contribution of the bottom to the total Rrs is used as an indicator to gauge the confidence of the estimated HRS [20, 35]. Too low a value (usually the threshold is 20% [20]) suggests a low contribution from the bottom and low confidence in the retrieved bottom properties [35]. Since there are matchups between HRS and HLidar here, the CSS developed for EBA can be employed for pixels with err values below the threshold, as illustrated in Figure 11. Thus, for HRS derived from HOPELS8, a companion confidence score can also be produced.
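The decision flow just described can be sketched as follows: reject on poor spectral closure first, then on low bottom contribution, and only then apply an EBA-style classification. The err threshold of 0.02 and the 20% bottom-contribution threshold come from the text above; the thresholds in the final EBA-style step are illustrative assumptions.

```python
# Hypothetical sketch of CSS_HOPE. The err (0.02) and bottom-contribution
# (20%) thresholds follow the text; the SIM/ARE cutoffs are assumptions.
def css_hope(err, bottom_contrib_pct, sim_rrs, are_h_pct):
    """Classify the confidence of an SAA-derived depth as low/medium/high."""
    if err > 0.02:                 # poor closure between measured and modeled Rrs
        return "low"
    if bottom_contrib_pct < 20.0:  # bottom contributes little to total Rrs
        return "low"
    # EBA-style step, usable when Lidar matchups exist (thresholds assumed)
    if sim_rrs < 0.9 or are_h_pct > 25.0:
        return "low"
    return "high" if are_h_pct <= 10.0 else "medium"

print(css_hope(0.01, 45.0, 0.99, 5.0))   # -> high
print(css_hope(0.05, 45.0, 0.99, 5.0))   # -> low (closure too poor)
print(css_hope(0.01, 10.0, 0.99, 5.0))   # -> low (bottom signal too weak)
```

Because the first two tests use only quantities produced by the SAA itself, this scheme degrades gracefully when no Lidar matchups are available: the final step is simply skipped.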

As an example, Figure 12 shows a map of HRS obtained from HOPELS8 (Figure 12(a)) and its confidence map (CSSHOPE, Figure 12(b)). Similar to the bathymetry map obtained from the TBRA, the depth west of Andros Island obtained from HOPELS8 also ranges over ~4.0-8.0 m, a pattern generally consistent with earlier observations. For the waters of TOTO, the map shows a depth of 20 m, which is basically the upper boundary preselected within the HOPELS8 system; the contribution from the bottom is actually negligible when processed with HOPELS8, so these waters can easily be marked as optically deep, as in Lee et al. [20] and Brando et al. [35]. More importantly, the pixel-wise quality of HRS (CSSHOPE) shown in Figure 12(b) provides a clearer indication of the confidence in the HRS product, pixel by pixel. As in Figure 8, higher confidence of HRS is found for pixels around the ICESat-2 track, and low confidence is found for locations near the coast. Evaluation using HLidar (March 16, 2019; the black dashed line in Figure 12(b)) indicates a success rate of ~99% in identifying low-confidence pixels. In addition, there are differences in the confidence distributions between the two HRS products (see Figures 4 and 12), a clear indication of the different performance of the two approaches to bathymetry. On the other hand, because HRS from an SAA (e.g., HOPELS8 here) is determined independently of HLidar, an SAA offers an opportunity to check the consistency of the two measurements, which is not possible with an EBA.

6. Conclusions and Future Perspective

Through many decades of effort, there is no shortage of remote sensing products from multiband or hyperspectral imagers, but there is a shortage of remote sensing products attached with a confidence measure; this is especially true for the remote sensing of bathymetry. Compared to active measurements of bottom depth by Sonar or Lidar, the retrieved HRS still faces difficulties in its application by the broader community, where a key limiting factor until now has been the lack of pixel-wise confidence for the HRS product.

To fill this void, a prototype confidence score system (CSS) for HRS is proposed for the first time, which at present classifies all pixels in an HRS map of OSW into three categories (low, medium, and high) with a preliminary set of criteria. Since this CSS involves both the algorithm coefficients and the data used for the development of empirical algorithms, it is logical that not only the algorithm function and model coefficients be reported but also that the data pool used for algorithm development be deposited in a common data portal. In the future, while it is always necessary and important to continue refining these algorithms, it is also important, and urgent, to develop, revise, and refine such systems to measure the confidence of the resulting HRS pixel by pixel. Specifically, this includes refining the quality classes, thresholds, and criteria settings, as well as the desired statistical measures. Only HRS products of high confidence from multiple images should be merged to form a reliable map for the broad user communities. We call on the ocean color community to refine such schemes or to develop brand-new systems, so that a mature and widely endorsed system can be implemented to clearly measure the quality of HRS, a critical parallel product of remotely sensed bathymetry. To reach this goal, it is also urgent and important to compile, by the community and for the community, an inclusive data pool of collocated or concurrent measurements of HLidar and high-quality Rrs spectra spanning a wide range of depths and environments.

Data Availability

The satellite data used to support the findings of this study are publicly available. Landsat-8 data can be downloaded from the USGS website, while ICESat-2 data, including the geolocated photon data (ATL03), can be downloaded from the National Snow and Ice Data Center (NSIDC).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Authors’ Contributions

ZL conceptualized the study and drafted and finalized the manuscript; MS helped in analyzing Lidar data; RG helped in Landsat-8 data processing and finalizing the manuscript; WL developed machine learning algorithms; XL helped in ICESat-2 data processing; JW processed Landsat-8 data; XY helped in data matching and empirical algorithms.

Acknowledgments

Financial support by the Chinese Ministry of Science and Technology through the National Key Research and Development Program of China (#2016YFC1400904 and #2016YFC1400905) and the National Natural Science Foundation of China (#41941008, #41890803, and #41830102), the Joint Polar Satellite System (JPSS) funding for the NOAA ocean color calibration and validation (Cal/Val) project, and the University of Massachusetts Boston is greatly appreciated. The authors would like to thank the NASA ICESat-2 team for providing the data used in this study. The ICESat-2 data are publicly available through the National Snow and Ice Data Center (NSIDC). The geolocated photon data (ATL03) are available online, with descriptions in the cited reference of Neumann et al. [60].


  1. T. Kutser, J. Hedley, C. Giardino, C. Roelfsema, and V. E. Brando, “Remote sensing of shallow waters – a 50 year retrospective and future directions,” Remote Sensing of Environment, vol. 240, article 111619, 2020. View at: Publisher Site | Google Scholar
  2. J. Copley, “Just how little do we know about the ocean floor?” The Conversation, vol. 9, 2014. View at: Google Scholar
  3. D. Sandwell, “Bathymetry from space is now possible,” Eos, vol. 84, no. 5, pp. 37–44, 2003. View at: Publisher Site | Google Scholar
  4. D. T. Sandwell, W. H. F. Smith, S. Gille et al., “Bathymetry from space: rationale and requirements for a new, high-resolution altimetric mission,” Comptes Rendus Geoscience, vol. 338, no. 14-15, pp. 1049–1062, 2006. View at: Publisher Site | Google Scholar
  5. G. C. Guenther, Digital Elevation Model Technologies and Applications: The DEM Users Manual, D. F. Maune, Ed., vol. 2, Asprs Publications, 2007.
  6. C. W. Wright, C. Kranenburg, T. A. Battista, and C. Parrish, “Depth Calibration and Validation of the Experimental Advanced Airborne Research Lidar, EAARL-B,” Journal of Coastal Research, vol. 76, pp. 4–17, 2016. View at: Publisher Site | Google Scholar
  7. C. E. Parrish, L. A. Magruder, A. L. Neuenschwander, N. Forfinski-Sarkozi, M. Alonzo, and M. Jasinski, “Validation of ICESat-2 ATLAS bathymetry and analysis of ATLAS’s bathymetric mapping performance,” Remote Sensing, vol. 11, no. 14, p. 1634, 2019. View at: Publisher Site | Google Scholar
  8. R. C. Hilldale and D. Raff, “Assessing the ability of airborne LiDAR to map river bathymetry,” Earth Surface Processes and Landforms, vol. 33, no. 5, pp. 773–783, 2008. View at: Publisher Site | Google Scholar
  9. T. Markus, T. Neumann, A. Martino et al., “The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2): science requirements, concept, and implementation,” Remote Sensing of Environment, vol. 190, pp. 260–273, 2017. View at: Publisher Site | Google Scholar
  10. F. C. Polcyn, W. L. Brown, and I. J. Sattinger, The Measurement of Water Depth by Remote-Sensing Techniques, University of Michigan, Ann Arbor, 1970.
  11. D. R. Lyzenga, “Passive remote sensing techniques for mapping water depth and bottom features,” Applied Optics, vol. 17, no. 3, pp. 379–383, 1978. View at: Publisher Site | Google Scholar
  12. D. R. Lyzenga, “Remote sensing of bottom reflectance and water attenuation parameters in shallow water using aircraft and Landsat data,” International Journal of Remote Sensing, vol. 2, pp. 71–82, 1981. View at: Publisher Site | Google Scholar
  13. Z. P. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization,” Applied Optics, vol. 38, no. 18, pp. 3831–3843, 1999. View at: Publisher Site | Google Scholar
  14. R. Garcia, Z.-P. Lee, and E. J. Hochberg, “Hyperspectral Shallow-Water Remote Sensing with an Enhanced Benthic Classifier,” Remote Sensing, vol. 10, p. 147, 2018. View at: Publisher Site | Google Scholar
  15. J. D. Hedley and P. J. Mumby, “A remote sensing method for resolving depth and subpixel composition of aquatic benthos,” Limnology and Oceanography, vol. 48, no. 1, part2, pp. 480–488, 2003. View at: Publisher Site | Google Scholar
  16. J. Hedley, C. Roelfsema, and S. R. Phinn, “Efficient radiative transfer model inversion for remote sensing applications,” Remote Sensing of Environment, vol. 113, no. 11, pp. 2527–2532, 2009. View at: Publisher Site | Google Scholar
  17. W. M. Klonowski, P. R. Fearns, and M. J. Lynch, “Retrieving key benthic cover types and bathymetry from hyperspectral imagery,” Journal of Applied Remote Sensing, vol. 1, article 011505, 2007. View at: Publisher Site | Google Scholar
  18. J. Hedley, B. Russell, K. Randolph, and H. Dierssen, “A physics-based method for the remote sensing of seagrasses,” Remote Sensing of Environment, vol. 174, pp. 134–147, 2016. View at: Publisher Site | Google Scholar
  19. C. D. Mobley, L. K. Sundman, C. O. Davis et al., “Interpretation of hyperspectral remote-sensing imagery by spectrum matching and look-up tables,” Applied Optics, vol. 44, no. 17, pp. 3576–3592, 2005. View at: Publisher Site | Google Scholar
  20. Z. P. Lee, K. L. Carder, R. F. Chen, and T. G. Peacock, “Properties of the water column and bottom derived from Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data,” Journal of Geophysical Research, vol. 106, no. C6, pp. 11639–11651, 2001. View at: Publisher Site | Google Scholar
  21. R. A. Garcia, P. R. Fearns, and L. I. McKinna, “Detecting trend and seasonal changes in bathymetry derived from HICO imagery: a case study of Shark Bay, Western Australia,” Remote Sensing of Environment, vol. 147, pp. 186–205, 2014. View at: Publisher Site | Google Scholar
  22. J. A. Goodman and S. L. Ustin, “Classification of benthic composition in a coral reef environment using spectral unmixing,” Journal of Applied Remote Sensing, vol. 1, article 011501, 2007. View at: Publisher Site | Google Scholar
  23. D. R. Lyzenga, “Shallow-water bathymetry using combined lidar and passive multispectral scanner data,” International Journal of Remote Sensing, vol. 6, pp. 115–125, 1985. View at: Publisher Site | Google Scholar
  24. Y. Ma, N. Xu, Z. Liu et al., “Satellite-derived bathymetry using the ICESat-2 lidar and Sentinel-2 imagery datasets,” Remote Sensing of Environment, vol. 250, article 112047, 2020. View at: Publisher Site | Google Scholar
  25. R. P. Stumpf, K. Holderied, and M. Sinclair, “Determination of water depth with high-resolution satellite imagery over variable bottom types,” Limnology and Oceanography, vol. 48, no. 1part2, pp. 547–556, 2003. View at: Publisher Site | Google Scholar
  26. I. Caballero and R. P. Stumpf, “Retrieval of nearshore bathymetry from Sentinel-2A and 2B satellites in South Florida coastal waters,” Coastal and Shelf Science, vol. 226, article 106277, 2019. View at: Publisher Site | Google Scholar
  27. D. R. Lyzenga, N. P. Malinas, and F. J. Tanis, “Multispectral bathymetry using a simple physically based algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 8, pp. 2251–2259, 2006. View at: Publisher Site | Google Scholar
  28. G. Casal, X. Monteys, J. Hedley, P. Harris, C. Cahalane, and T. McCarthy, “Assessment of empirical algorithms for bathymetry extraction using Sentinel-2 data,” International Journal of Remote Sensing, vol. 40, no. 8, pp. 2855–2879, 2019. View at: Publisher Site | Google Scholar
  29. Z. P. Lee, M. R. Zhang, K. L. Carder, and L. O. Hall, “A neural network approach to deriving optical properties and depths of shallow waters,” in Proceedings, Ocean Optics XIV., Kona, HI, 1998. View at: Google Scholar
  30. S. Liu, L. Wang, H. Liu, H. Su, X. Li, and W. Zheng, “Deriving bathymetry from optical images with a localized neural network algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5334–5342, 2018. View at: Publisher Site | Google Scholar
  31. A. Collin, S. Etienne, and E. Feunteun, “VHR coastal bathymetry using WorldView-3: colour versus learner,” Remote Sensing Letters, vol. 8, no. 11, pp. 1072–1081, 2017. View at: Publisher Site | Google Scholar
  32. J. D. Hedley, C. Roelfsema, V. Brando et al., “Coral reef applications of Sentinel-2: coverage, characteristics, bathymetry and benthic mapping with comparison to Landsat 8,” Remote Sensing of Environment, vol. 216, pp. 598–614, 2018. View at: Publisher Site | Google Scholar
  33. J. Hedley, C. Roelfsema, and S. Phinn, “Propagating uncertainty through a shallow water mapping algorithm based on radiative transfer model inversion,” in Proceedings of the Ocean Optics XX, Anchorage, AK, USA, 2010. View at: Google Scholar
  34. D. Traganos and P. Reinartz, “Mapping Mediterranean seagrasses with Sentinel-2 imagery,” Marine Pollution Bulletin, vol. 134, pp. 197–209, 2018. View at: Publisher Site | Google Scholar
  35. V. E. Brando, J. M. Anstee, M. Wettle, A. G. Dekker, S. R. Phinn, and C. Roelfsema, “A physics based retrieval and quality assessment of bathymetry from suboptimal hyperspectral data,” Remote Sensing of Environment, vol. 113, no. 4, pp. 755–770, 2009. View at: Publisher Site | Google Scholar
  36. Z. P. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters I A semianalytical model,” Applied Optics, vol. 37, no. 27, pp. 6329–6338, 1998. View at: Publisher Site | Google Scholar
  37. S. Maritorena, A. Morel, and B. Gentili, “Diffuse reflectance of oceanic shallow waters: influence of water depth and bottom albedo,” Limnology and Oceanography, vol. 39, no. 7, pp. 1689–1703, 1994. View at: Publisher Site | Google Scholar
  38. A. Albert and C. D. Mobley, “An analytical model for subsurface irradiance and remote sensing reflectance in deep and shallow case-2 waters,” Optics Express, vol. 11, no. 22, pp. 2873–2890, 2003. View at: Publisher Site | Google Scholar
  39. H. R. Gordon, O. B. Brown, R. H. Evans et al., “A semianalytic radiance model of ocean color,” Journal of Geophysical Research, vol. 93, no. D9, article 10909, 1988. View at: Publisher Site | Google Scholar
  40. R. P. Stumpf, M. E. Culver, P. A. Tester et al., “Monitoring Karenia brevis blooms in the Gulf of Mexico using satellite ocean color imagery and other data,” Harmful Algae, vol. 2, no. 2, pp. 147–160, 2003. View at: Publisher Site | Google Scholar
  41. D. Traganos, D. Poursanidis, B. Aggarwal, N. Chrysoulakis, and P. Reinartz, “Estimating satellite-derived bathymetry (SDB) with the Google Earth Engine and Sentinel-2,” Remote Sensing, vol. 10, no. 6, p. 859, 2018. View at: Publisher Site | Google Scholar
  42. I. Caballero and R. P. Stumpf, “Towards routine mapping of shallow bathymetry in environments with variable turbidity: contribution of Sentinel-2A/B satellites mission,” Remote Sensing, vol. 12, no. 3, p. 451, 2020. View at: Publisher Site | Google Scholar
  43. H. M. Dierssen, R. C. Zimmerman, R. A. Leathers, T. V. Downes, and C. O. Davis, “Ocean color remote sensing of seagrass and bathymetry in the Bahamas Banks by high-resolution airborne imagery,” Limnology and Oceanography, vol. 48, no. 1part2, pp. 444–455, 2003. View at: Publisher Site | Google Scholar
  44. J. C. Sandidge and R. J. Holyer, “Coastal bathymetry from hyperspectral observations of water radiance,” Remote Sensing of Environment, vol. 65, no. 3, pp. 341–352, 1998. View at: Publisher Site | Google Scholar
  45. B. Ai, Z. Wen, Z. Wang et al., “Convolutional neural network to retrieve water depth in marine shallow water area from remote sensing images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 2888–2898, 2020. View at: Publisher Site | Google Scholar
  46. T. Kutser, I. Miller, and D. L. B. Jupp, “Mapping coral reef benthic substrates using hyperspectral space-borne images and spectral libraries,” Coastal and Shelf Science, vol. 70, no. 3, pp. 449–460, 2006. View at: Publisher Site | Google Scholar
  47. R. Doerffer and H. Schiller, “The MERIS Case 2 water algorithm,” International Journal of Remote Sensing, vol. 28, pp. 517–535, 2007. View at: Publisher Site | Google Scholar
  48. B. B. Barnes, C. Hu, B. A. Schaeffer, Z. Lee, D. A. Palandro, and J. C. Lehrter, “MODIS-derived spatiotemporal water clarity patterns in optically shallow Florida Keys waters: a new approach to remove bottom contamination,” Remote Sensing of Environment, vol. 134, pp. 377–391, 2013. View at: Publisher Site | Google Scholar
  49. A. G. Dekker, S. R. Phinn, J. Anstee et al., “Intercomparison of shallow water bathymetry, hydro-optics, and benthos mapping techniques in Australian and Caribbean coastal environments,” Limnology and Oceanography-Methods, vol. 9, no. 9, pp. 396–425, 2011. View at: Publisher Site | Google Scholar
  50. Z. P. Lee and K. L. Carder, “Effect of spectral band numbers on the retrieval of water column and bottom properties from ocean color data,” Applied Optics, vol. 41, no. 12, pp. 2191–2201, 2002. View at: Publisher Site | Google Scholar
  51. J. Wei, M. Wang, Z. Lee et al., “Shallow water bathymetry with multi-spectral satellite ocean color sensors: leveraging temporal variation in image data,” Remote Sensing of Environment, vol. 250, article 112035, 2020. View at: Publisher Site | Google Scholar
  52. D. P. Roy, M. A. Wulder, T. R. Loveland et al., “Landsat-8: science and product vision for terrestrial global change research,” Remote Sensing of Environment, vol. 145, pp. 154–172, 2014. View at: Publisher Site | Google Scholar
  53. B. A. Franz, S. W. Bailey, N. Kuring, and P. J. Werdell, “Ocean color measurements with the Operational Land Imager on Landsat-8: implementation and evaluation in SeaDAS,” Journal of Applied Remote Sensing, vol. 9, article 096070, 2015. View at: Publisher Site | Google Scholar
  54. S. W. Bailey, B. A. Franz, and P. J. Werdell, “Estimation of near-infrared water-leaving reflectance for satellite ocean color data processing,” Optics Express, vol. 18, no. 7, pp. 7521–7527, 2010. View at: Publisher Site | Google Scholar
  55. L. Magruder and K. Brunt, “Performance analysis of airborne photon-counting lidar data in preparation for the ICESat-2 mission,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 5, pp. 2911–2918, 2018. View at: Publisher Site | Google Scholar
  56. T. A. Neumann, A. J. Martino, T. Markus, S. Bae, M. R. Bock, and A. C. Brenner, “The Ice, Cloud, and Land Elevation Satellite-2 mission: a global geolocated photon product derived from the Advanced Topographic Laser Altimeter System,” Remote Sensing of Environment, vol. 233, article 111325, 2019. View at: Publisher Site | Google Scholar
  57. S. C. Popescu, T. Zhou, R. Nelson et al., “Photon counting LiDAR: an adaptive ground and canopy height retrieval algorithm for ICESat-2 data,” Remote Sensing of Environment, vol. 208, pp. 154–170, 2018. View at: Publisher Site | Google Scholar
  58. M. Wang and W. Shi, “Cloud masking for ocean color data processing in the coastal regions,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 11, pp. 3196–3205, 2006. View at: Publisher Site | Google Scholar
  59. T. Neumann, A. Brenner, D. Hancock, J. Robbins, J. Saba, and K. Harbeck, Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) Project Algorithm Theoretical Basis Document (ATBD) for Global Geolocated Photons ATL03, NASA Goddard Space Flight Center, Greenbelt, Maryland, 2018.
  60. R. Pawlowicz, B. Beardsley, and S. Lentz, “Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE,” Computers & Geosciences, vol. 28, no. 8, pp. 929–937, 2002. View at: Publisher Site | Google Scholar
  61. B. B. Barnes, R. Garcia, C. Hu, and Z. Lee, “Multi-band spectral matching inversion algorithm to derive water column properties in optically shallow waters: an optimization of parameterization,” Remote Sensing of Environment, vol. 204, pp. 424–438, 2018. View at: Publisher Site | Google Scholar
  62. R. Garcia, Z. Lee, B. Barnes, C. Hu, H. Dierssen, and E. Hochberg, “Benthic classification and IOP retrievals in shallow water environments using MERIS imagery,” Remote Sensing of Environment, vol. 249, article 112015, 2020. View at: Publisher Site | Google Scholar
  63. I. Caballero and R. P. Stumpf, “Atmospheric correction for satellite-derived bathymetry in the Caribbean waters: from a single image to multi-temporal approaches using Sentinel-2A/B,” Optics Express, vol. 28, no. 8, pp. 11742–11766, 2020. View at: Publisher Site | Google Scholar
  64. I. Caballero, R. P. Stumpf, and A. Meredith, “Preliminary assessment of turbidity and chlorophyll impact on bathymetry derived from Sentinel-2A and Sentinel-3A satellites in South Florida,” Remote Sensing, vol. 11, no. 6, p. 645, 2019. View at: Publisher Site | Google Scholar
  65. N. T. O'Neill and J. R. Miller, “On calibration of passive optical bathymetry through depth soundings: analysis and treatment of errors resulting from the spatial variation of environmental parameters,” International Journal of Remote Sensing, vol. 10, pp. 1481–1501, 1989. View at: Publisher Site | Google Scholar
  66. Y. Liu, D. Tang, R. Deng et al., “An adaptive blended algorithm approach for deriving bathymetry from multispectral imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 801–817, 2021. View at: Publisher Site | Google Scholar
  67. Z.-P. Lee, C. Hu, B. Casey, S. L. Shang, H. Dierssen, and R. Arnone, “Global shallow-water bathymetry from satellite ocean color data,” Eos, Transactions American Geophysical Union, vol. 91, no. 46, pp. 429–430, 2010. View at: Publisher Site | Google Scholar
  68. S. M. Hamylton, J. D. Hedley, and R. J. Beaman, “Derivation of high-resolution bathymetry from multispectral satellite imagery: a comparison of empirical and optimisation methods through geographical error analysis,” Remote Sensing, vol. 7, no. 12, pp. 16257–16273, 2015. View at: Publisher Site | Google Scholar

Copyright © 2021 Zhongping Lee et al. Exclusive Licensee Aerospace Information Research Institute, Chinese Academy of Sciences. Distributed under a Creative Commons Attribution License (CC BY 4.0).