Research Article | Open Access
Tianyu Yu, Wenjian Ni, Zhiyu Zhang, Qinhuo Liu, Guoqing Sun, "Regional Sampling of Forest Canopy Covers Using UAV Visible Stereoscopic Imagery for Assessment of Satellite-Based Products in Northeast China", Journal of Remote Sensing, vol. 2022, Article ID 9806802, 14 pages, 2022. https://doi.org/10.34133/2022/9806802
Regional Sampling of Forest Canopy Covers Using UAV Visible Stereoscopic Imagery for Assessment of Satellite-Based Products in Northeast China
Canopy cover is an important parameter affecting forest succession, carbon fluxes, and wildlife habitats. Several global maps with different spatial resolutions have been produced based on satellite images, but reliable references for their accuracy assessment are still lacking. The rapid development of unmanned aerial vehicles (UAVs) equipped with consumer-grade cameras enables the acquisition of high-resolution images at low cost, which provides the research community with a promising tool to collect reference data. However, it is still a challenge to distinguish tree crowns from understory green vegetation in UAV-based true color (RGB) images due to their limited spectral information. In addition, the canopy height model (CHM) derived from photogrammetric point clouds has also been used to identify tree crowns but is limited by the unavailability of understory terrain elevations. This study proposed a simple method to distinguish tree crowns and understories based on UAV visible images, referred to as BAMOS for convenience. The central idea of BAMOS is the synergy of spectral information from the digital orthophoto map (DOM) and structural information from the digital surface model (DSM). Samples of canopy cover were produced by applying the BAMOS method to UAV images collected at 77 sites, each covering about 1.0 km², across the Daxing’anling forested area in northeast China. Results showed that canopy cover extracted by the BAMOS method was highly correlated with visually interpreted references, with a correlation coefficient (R) of 0.96 and a root mean square error (RMSE) of 5.7%. Then, the UAV-based canopy covers served as references for the assessment of satellite-based maps, including the MOD44B Version 6 Vegetation Continuous Fields (MODIS VCF) and maps developed by the Global Land Cover Facility (GLCF) and by the Global Land Analysis and Discovery laboratory (GLAD).
Results showed that both GLAD and GLCF canopy covers could capture the dominant spatial patterns, but the GLAD canopy cover tended to miss scattered trees in highly heterogeneous areas, and the GLCF failed to capture non-tree areas. Most importantly, obvious underestimation, with RMSEs of about 20%, was observed in all satellite-based maps, although temporal inconsistency with the references might have contributed.
Canopy cover is defined as the fraction of ground covered by the vertically projected tree crowns, and it plays an important role in understanding forest succession, evaluating carbon fluxes, and managing wildlife habitats [1–5]. Canopy cover maps have been widely used in assessing forest disturbance and restoration caused by climate change or anthropogenic activities [6–8]. Several global maps have been produced based on satellite images with low or moderate spatial resolutions, such as those acquired by the Advanced Very High Resolution Radiometer (AVHRR) from NOAA, the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites, and Landsat. The early maps at 1 km resolution were generated from AVHRR data by a linear mixture model or a regression tree method [9, 10]. Then, annual global canopy cover datasets (i.e., MOD44B Version 6 Vegetation Continuous Fields) at 250 m resolution from the year 2000 onward were produced using MODIS images (hereafter referred to as the MODIS VCF) [11, 12]. Two global products with finer resolution were derived from Landsat images. One was developed by the Global Land Cover Facility and is composed of four maps centered on the years 2000, 2005, 2010, and 2015 (hereafter referred to as the GLCF canopy cover). The other was distributed by the Global Land Analysis and Discovery laboratory and has two maps centered on the years 2000 and 2010 (hereafter referred to as the GLAD canopy cover).
Independent assessment was indispensable before the application of these satellite-based maps, which depended on accurate measurements of canopy cover as references. A straightforward assessment method was direct comparison with field measurements [13, 14]. However, the collection of field measurements was laborious and time-consuming when evaluating maps with coarse resolution (e.g., the 250 m resolution of MODIS VCF), especially over mountainous areas [13, 15, 16]. Alternatively, high-resolution satellite images, including QuickBird, WorldView, IKONOS, and GeoEye, were used to produce reference data [8, 17–20]. For example, Montesano et al. visually collected canopy covers from QuickBird images to evaluate MODIS VCF over the circumpolar taiga-tundra ecotone. In addition, airborne LiDAR could also provide reference data but was limited by its high cost [4, 21–24]. For example, Tang et al. estimated the accuracy of the GLAD and GLCF canopy covers based on airborne LiDAR data in the Teakettle Experimental Forest, CA, USA.
The reference canopy covers derived from high-resolution images or LiDAR have been used to evaluate satellite-based maps at different spatial scales and in different geographical regions. Results showed that each satellite-based map had different uncertainties. For example, the root mean square error (RMSE) of MODIS VCF varied from 5.2% to 31% [18, 19, 21]; the RMSE of the GLCF canopy cover varied from 13% to 31.5% [4, 22, 24]; and the RMSE of the GLAD canopy cover varied from 17.64% to 30.4% [8, 17, 20, 24]. In addition to factors such as definitional discrepancies of canopy cover between maps and references (e.g., whether within-crown gaps are considered) and deficiencies of the maps themselves (e.g., underestimation of dense forests and overestimation of sparse forests), the uncertainties of the references could not be ignored, especially those collected from high-resolution satellite images. Some studies reported that references acquired from meter-level satellite images were not always accurate because of the similar spectral features of tree crowns and understory green vegetation (e.g., shrubs and grasses). Montesano et al. found that the difference between two visual interpretations of the same QuickBird images was about 14.8%. Furthermore, the shadows caused by illumination occlusion also affected the interpretation of satellite images.
Most studies on the evaluation of global products concentrated on plots or small local areas because field or LiDAR-based measurements were difficult or costly to collect on a large scale. Many assessments were carried out in Eurasian high-latitude forests and American temperate and tropical forests [18, 22, 24], while assessments in other regions, such as northeast China, were rare. Therefore, more evaluations are still needed to understand the performance of global canopy cover maps.
Recently, the unmanned aerial vehicle (UAV), as a flexible platform, has been widely used in forest inventory [26–29]. The camera onboard a UAV can collect high-resolution images (i.e., centimeter resolution), which makes it a promising alternative to field measurements of canopy cover [30, 31]. In addition, the UAV can collect images at the landscape scale (e.g., 1.0 km²), which makes it practical to collect references at low cost [8, 32].
UAV-based LiDAR has been used to acquire reference data of canopy cover by virtue of its strong penetration into forests [1, 5, 33–35]. For example, Cai et al. used a canopy height model (CHM) derived from UAV-based LiDAR to estimate canopy covers over 18 samples with a size of 25 m × 25 m in temperate forest, and the result indicated that the estimation was reliable, with an RMSE of 1.49%. Wallace et al. used all UAV-based LiDAR returns greater than 1.3 m to reconstruct the shape of tree crowns and evaluated the canopy cover in a 30 m × 50 m patch of native dry forest; that estimation was also accurate, differing by only 4% from the field measurement.
Compared with UAV-based LiDAR, a UAV equipped with an RGB camera is more applicable to forest inventory thanks to advances in image processing (e.g., the Structure from Motion (SfM) algorithm) [36–38]. Photogrammetric point clouds can be generated from UAV-based RGB images by SfM and further used to produce the digital orthophoto map (DOM) and digital surface model (DSM) of forested areas. In addition, in sparsely forested areas, many ground points can be determined by point cloud classification algorithms and used to produce the digital terrain model (DTM) and canopy height model (CHM). Therefore, some studies used UAV-based RGB images to evaluate canopy covers in sparse forests. For example, Cunliffe et al. identified the spatial pattern of woody plants by applying a height threshold to the CHM in a dryland ecosystem; Li et al. evaluated the canopy cover in a seminatural forest. However, in dense forests, there are few ground points due to the poor penetration of optical images, which prevents the production of reliable DTMs and CHMs. In this case, auxiliary terrain information (e.g., a LiDAR-derived DTM) is needed for the generation of the CHM, but such terrain information often does not exist. Therefore, it is important to develop a new method for accurately estimating canopy cover over complex forests without auxiliary understory terrain data.
In this study, a new method was proposed to accurately identify canopy cover based on the UAV-based DSM and DOM. Then, sampling data of UAV-based canopy covers collected at 77 sites across the Daxing’anling forested area were produced and further used as references to evaluate three published satellite-based maps of canopy cover in the study area. The aims of this study are (1) to develop a new method to produce high-accuracy canopy covers from RGB images without prior understory terrain information in natural forests and (2) to evaluate the performance of satellite-based maps in the Daxing’anling forested area.
2. Study Area and Data
2.1. Study Area
The Daxing’anling forested area (119°36′E–125°19′E, 47°3′N–53°20′N) is located in the northeast of China, including the northeastern part of the Inner Mongolia Autonomous Region and the northwestern part of Heilongjiang province, as shown in Figure 1. This region is about 770 km in length (from north to south) and 350 km in width (from east to west). Its elevation ranges from 330 to 1750 m above sea level. The climate is cold temperate continental monsoon, and the mean annual temperature is -2.8°C, with historical extremes from -52.3°C in January to 39°C in July. The annual average precipitation is 450~550 mm, most of which falls from July to August. The period of annual snow accumulation is about 5 months, and the depth of snow is up to 30~50 cm. Dahurian larch (Larix gmelinii Kuzen.) is the dominant tree species, and other tree species include Scots pine (Pinus sylvestris L.), white birch (Betula platyphylla Suk.), and aspen (Populus davidiana Dode).
2.2. Collection and Processing of UAV Visible Imagery
The high-resolution red-green-blue (RGB) images were collected by a UAV system at 77 sampling sites from June 27 to July 18, 2018. Forty-five sites were flown under sunny conditions and the rest under cloudy conditions. The spatial distribution of the sampling sites is shown by green dots in Figure 1.
The UAV platform and sensor used in this study were a DJI S900 and a Sony NEX-5T digital camera, respectively. The UAV was a six-rotor platform with a take-off weight of 6.8 kg. The flying height was 350 m above the ground elevation of the take-off point, and the flying speed was 10 m/s. The forward and side overlaps were 90% and 60%, respectively. The flight at each sampling site lasted about 18~20 minutes including take-off and landing, and the coverage of each sampling site was close to 1.0 km². The NEX-5T camera used an Exmor APS HD CMOS detector array. With a focal length of 16 mm and an exposure of 1/60 s, the camera acquired 16.03-megapixel images with fields of view of 72.58° and 51.98° across and along the flying direction, respectively. Positions of images were determined by the UAV-embedded GPS/IMU instrument.
Images were processed by SfM in Agisoft PhotoScan (Agisoft LLC, St. Petersburg, Russia). They were first aligned to estimate positions and orientations using common points automatically detected from image textures. Then, a dense point cloud was generated by image matching, which was further used to generate the DOM and DSM with a resolution of approximately 8.0 cm. Please refer to Ni et al. for details of the data processing.
2.3. Satellite-Based Canopy Cover Maps
Three satellite-based maps were evaluated in this study, including one MODIS-derived and two Landsat-derived datasets. The first dataset, developed by DiMiceli et al., is the annual MODIS VCF available from the year 2000. The inputs are annual surface reflectance composites of MODIS bands, from which a linear regression tree algorithm generates the MODIS VCF. The original dataset adopts the sinusoidal projection and has a nominal 250 m resolution. In this study, the MODIS VCF of 2018 was assessed.
The second dataset is the GLCF canopy cover developed by Sexton et al., which has four maps centered on 2000, 2005, 2010, and 2015. This dataset was developed by rescaling the MODIS VCF using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images archived in the Global Land Survey (GLS). The GLCF adopts the Universal Transverse Mercator coordinate system and has 30 m resolution. The GLCF canopy cover of the year 2015 was evaluated in this study.
The last dataset is the GLAD canopy cover developed by Hansen et al., including the 2000 and 2010 global maps, an integral part of the Global Forest Change dataset. The GLAD canopy cover of 2010 was evaluated in this study. This map was calculated by a regression tree model based on top-of-atmosphere reflectance derived from Landsat 7 ETM+ data. The GLAD adopts the WGS 1984 coordinate system and has a nominal 30 m resolution.
3. Methods
3.1. Canopy Cover Extraction
A new method, referred to as the Background Analysis Method based on Object Segmentation (BAMOS), is proposed in this study to identify tree crowns over sampling areas from the UAV-based DSM and DOM. The BAMOS method consists of two steps. First, two types of backgrounds, i.e., shaded gaps and sunlit gaps, are detected using the spectral information of the DOM and structural analysis of the DSM, respectively. Then, the DSM is inverted and segmented by the watershed method, and the tree crowns are the mosaic of segmentations excluding the two identified types of backgrounds and other gap pixels located within segmentations. The canopy cover is the ratio of the tree crown area to the total area of a forest stand.
3.1.1. Extraction of Backgrounds
(1) Shaded Gaps. In dense forest, tree crowns are usually brighter than shaded gaps, and even shaded crowns are slightly brighter than surrounding gaps on the DOM. Based on these differences, methods such as between-class variance, corner detection, and the minimum-distance-to-means algorithm have achieved high-accuracy separation of tree crowns and shaded gaps on close-range photographs. In this study, the between-class variance algorithm (i.e., Otsu's algorithm) is used to detect shaded gaps by segmenting the gray-scale DOM into a binary image with an automatically selected threshold [45, 46]. Otsu's algorithm determines the threshold by treating the histogram as two pixel distributions and maximizing the separation (between-class variance) between them. Pixels below the threshold are labeled as shaded gaps, which form the first type of background.
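The between-class-variance step can be sketched in a few lines of numpy (a minimal illustration on a synthetic gray-scale DOM, not the paper's implementation; the image values and sizes are invented for the example):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # probability of the "dark" class
    mu = np.cumsum(prob * np.arange(256))  # cumulative mean gray level
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Synthetic gray-scale DOM: ~30% dark shaded gaps among brighter crowns.
rng = np.random.default_rng(0)
dark = rng.integers(10, 60, (200, 200))
bright = rng.integers(140, 220, (200, 200))
dom = np.where(rng.random((200, 200)) < 0.3, dark, bright).astype(np.uint8)

t = otsu_threshold(dom)
shaded_gaps = dom <= t   # first type of background
```

Because the two gray-level populations are well separated here, the recovered threshold falls between them and the shaded-gap mask matches the dark fraction of the scene.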
(2) Sunlit Gaps. It can be anticipated that the detection of understory background by Otsu's algorithm works well over dense forests but tends to fail over sparse forests, because in addition to shaded gaps, there are sunlit backgrounds in sparse forests, such as grass and shrubs, which may have a brightness similar to or even higher than that of tree crowns on the DOM. In fact, it is hard to distinguish tree crowns from sunlit backgrounds based on the limited spectral information of the three visible bands of the DOM (i.e., red, green, and blue). Therefore, additional information is needed to deal with this issue. In this study, structural analysis of the DSM is proposed to detect sunlit backgrounds over sparse forests, as described in the following procedure.
On the DSM, a drastic increase of elevation occurs at the transitional zones from sunlit backgrounds to tree crowns, whereas elevation changes are small within tree crowns or within sunlit backgrounds. Therefore, it is possible to detect the interface between sunlit backgrounds and tree crowns using a common edge detection algorithm. In this study, the Sobel operator is used to quantify the elevation changes. Because the elevation change within sunlit backgrounds is smaller than that in transitional zones, a conservative threshold is used to detect potential sunlit background regions. However, the conservative threshold causes some tree crown regions to be misidentified as sunlit backgrounds, so further structural analysis is needed to remove these misidentified regions. For each potential region, inner and outer buffer zones with a width of 1.0 m are set up along the boundary. For a real sunlit background region, the average elevation of the inner buffer zone should be significantly lower than that of the outer buffer zone, whereas no such difference occurs for tree crown regions; real sunlit backgrounds can therefore be identified by the elevation difference between the inner and outer buffers. The sunlit backgrounds produced by this structural analysis of the DSM are the second type of understory background.
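A compact sketch of this two-stage structural analysis, using scipy's Sobel filter and binary morphology to build the inner/outer buffer rings (the synthetic DSM, the ring width in pixels, and the exact thresholds are illustrative stand-ins, not the paper's settings):

```python
import numpy as np
from scipy import ndimage

# Synthetic DSM (metres): a flat sunlit gap at 500 m surrounding a 10 m-tall
# crown block; real DSMs come from the SfM point cloud at ~8 cm resolution.
dsm = np.full((120, 120), 500.0)
dsm[40:80, 40:80] = 510.0

# Step 1: Sobel magnitude marks the transitional zones; a conservative
# threshold keeps only strong elevation changes.
grad = np.hypot(ndimage.sobel(dsm, axis=0), ndimage.sobel(dsm, axis=1))
edges = grad > 1.35

# Step 2: the connected flat regions between edges are candidate backgrounds;
# a buffer test keeps only regions that sit clearly below their surroundings.
labels, n = ndimage.label(~edges)

def is_sunlit_gap(mask, dsm, ring=3, drop=2.0):
    """True if the region's inner ring is `drop` metres below its outer ring."""
    inner = mask & ~ndimage.binary_erosion(mask, iterations=ring)
    outer = ndimage.binary_dilation(mask, iterations=ring) & ~mask
    if not inner.any() or not outer.any():
        return False
    return dsm[outer].mean() - dsm[inner].mean() > drop

sunlit = np.zeros(dsm.shape, bool)
for lab in range(1, n + 1):
    mask = labels == lab
    if is_sunlit_gap(mask, dsm):
        sunlit |= mask
```

In this toy scene, the ground plain passes the buffer test (its outer ring climbs onto the crown), while the crown top fails it, so only the sunlit gap survives.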
3.1.2. Extraction of Tree Crowns
Two background maps have been produced using the methods proposed in the previous section. Directly removing the background areas does not guarantee accurate identification of tree crowns, because additional background pixels within the transitional zones are not considered during the detection of sunlit backgrounds. A method for extracting tree crowns at the object level is therefore further proposed.
Object segmentation is performed on the inverted DSM by the watershed algorithm [47, 48]. It can be anticipated that the tree crowns form the lowlands that accumulate water, whereas the understory backgrounds become drainage areas. The segmented objects are then categorized as sparse or dense according to whether or not they contain pixels of sunlit backgrounds. For each sparse object, pixels located within the intersection with the sunlit backgrounds are regarded as understory pixels, which provide the elevation of the ground surface. A height threshold is then used to remove the background pixels within transitional zones. To diminish the effect of terrain, the mean elevation of the inner buffer, rather than that of all sunlit backgrounds within the object, is taken as the understory elevation; pixels whose elevations are lower than a height threshold relative to the understory elevation are then classified as additional sunlit backgrounds. After the sunlit backgrounds are completely removed, the segmented objects still contain shaded gaps. These gaps are further excluded from the sparse and dense objects using the first type of background distribution. The mosaic of all processed objects forms the tree crown identification of the sampling area. Canopy cover maps at different resolutions can then be calculated as the percentage of tree crown pixels within each new pixel.
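The object-level step can be illustrated with scipy's image-foresting-transform watershed on an inverted synthetic DSM (the two Gaussian crowns, the marker positions, and the 2 m height threshold are invented for the sketch; for simplicity the object minimum stands in for the paper's inner-buffer mean as the understory elevation):

```python
import numpy as np
from scipy import ndimage

# Two synthetic crowns (8 m and 6 m tall) on gently sloping ground.
y, x = np.mgrid[0:100, 0:100]
ground = 500.0 + 0.01 * x
crowns = (8.0 * np.exp(-((x - 30) ** 2 + (y - 50) ** 2) / 60.0)
          + 6.0 * np.exp(-((x - 70) ** 2 + (y - 50) ** 2) / 60.0))
dsm = ground + crowns

# Watershed on the inverted DSM: crown tops become basin minima that
# "accumulate water"; the understory becomes the drainage divide.
inv = dsm.max() - dsm
inv8 = (255 * (inv - inv.min()) / np.ptp(inv)).astype(np.uint8)
markers = np.zeros(dsm.shape, np.int16)
markers[50, 30], markers[50, 70] = 1, 2        # one seed per crown apex
objects = ndimage.watershed_ift(inv8, markers)

# Per-object cleanup: pixels within 2 m of the object's understory elevation
# are treated as background; the rest form the crown mask.
canopy = np.zeros(dsm.shape, bool)
for lab in (1, 2):
    obj = objects == lab
    understory = dsm[obj].min()   # simplified stand-in for the inner-buffer mean
    canopy |= obj & (dsm - understory > 2.0)
```

Each watershed object here spans both a crown and its surrounding ground, and the height threshold strips the ground pixels away, mirroring the cleanup described above.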
3.2. Assessment of UAV-Based Canopy Covers
The UAV-based canopy covers of the 77 sampling sites are produced by BAMOS. The central region of approximately 700 m × 700 m at each sampling site is considered effective in order to avoid marginal effects. Their accuracy needs further evaluation based on more reliable reference data. The reference data used here consist of 231 plots with a size of 30 m × 30 m, which are randomly selected from the 77 sampling areas. For each plot, the tree crowns are delineated by hand-editing vector polygons in ArcGIS (Esri, Redlands, CA, USA), and the reference canopy cover is the ratio of the cumulative area of the vector polygons to the plot area. Compared with line or point sampling [5, 19, 49], this approach avoids errors caused by the distribution of sampling points or lines, although hand-editing crown polygons is more time-consuming and laborious.
3.3. Assessment of Satellite-Based Canopy Cover Maps
UAV-based canopy covers are further used as references to evaluate the aforementioned satellite-based maps in the Daxing’anling forested area. Given the different coordinate systems of the satellite-based maps and the UAV-based canopy covers, the three satellite-based maps are first converted to the Universal Transverse Mercator coordinate system, which is also the projection of the UAV-based canopy covers; then, the satellite-based maps are evaluated by comparison with the UAV-based canopy covers over the same sampling areas. Two evaluation plans are designed:
(1) Evaluation at the sampling area level. For each sampling area, the canopy cover of a satellite-based map is the average of all its pixels located within the sampling area, and the reference value is calculated directly on the 8.0 cm UAV-based canopy cover map by equation (1).
(2) Evaluation at the pixel level of the satellite-based maps. For each pixel of a satellite-based map located in a sampling area, the reference value is calculated from the covered UAV pixels by equation (1):

CC = (Σᵢ fᵢ / N) × 100%  (1)

where CC is the canopy cover, N is the number of pixels of the UAV-based canopy cover map within the evaluated range (i.e., a pixel of a satellite-based map or a sampling area), and fᵢ is 1 or 0 according to whether pixel i is a tree crown pixel or not.
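Equation (1) amounts to block-averaging the binary UAV map over each evaluated range; a small numpy sketch (the map, cover fraction, and aggregation factor are invented for the example):

```python
import numpy as np

# Binary UAV canopy map: 1 = tree crown pixel, 0 = background.
rng = np.random.default_rng(1)
fine = (rng.random((300, 300)) < 0.6).astype(float)   # ~60% true cover

# Aggregate 100 x 100 fine pixels into each coarse pixel: CC = sum(f_i)/N * 100%.
factor = 100
coarse = (fine.reshape(fine.shape[0] // factor, factor,
                       fine.shape[1] // factor, factor)
              .mean(axis=(1, 3)) * 100.0)
```

Each coarse cell recovers roughly the 60% cover of the synthetic map, and the mean of the coarse map equals the fine map's overall cover exactly.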
4. Results
4.1. Extraction of Tree Crowns by BAMOS
Figures 2(a)–2(c) show the detection of shaded gaps. The Otsu algorithm is applied to the gray-scale DOM to identify the shaded gaps. According to Figure 2(c), the distribution of detected shaded backgrounds is consistent with that on the DOM. Figures 2(d)–2(i) show the detection of sunlit backgrounds. The Sobel operator is first applied to the DSM to quantify elevation changes; in Figure 2(e), the elevation change of the transition zone is very high and forms the interface between regions with different surface elevations. A constant threshold of 1.35 m is used to produce the color-coded regions of potential sunlit backgrounds. As expected, besides real sunlit backgrounds, there are some false ones caused by misidentification of tree crowns in Figure 2(f). The buffer analysis is needed to discriminate false from real sunlit backgrounds, and a threshold of 2 m is adopted in this study. In Figure 2(g), the red and green regions are the inner and outer buffer zones, respectively. The average elevation of the inner buffer is 790.03 m and that of the outer buffer is 790.05 m; the inner average is only 0.02 m lower than the outer, so this region is a false sunlit background caused by misidentification of tree crowns. In Figure 2(h), the average elevations of the inner and outer buffers are 776.18 m and 778.76 m, respectively; the inner average is 2.58 m lower than the outer, so this region is a real sunlit background. The final detected sunlit backgrounds are consistent with the visual interpretation in Figure 2(i).
Figure 3 shows the identification of tree crowns in three typical forest stands with different densities. Figures 3(a)–3(f) show the sparsest forest with a large area of sunlit backgrounds. Figures 3(g)–3(l) show a denser forest that still has an obvious sunlit background. Figures 3(m)–3(r) show the densest forest without sunlit background. Based on the segmentations (i.e., red polygons) overlapped on the DSM in Figures 3(a), 3(g), and 3(m), it can be seen that each segmented object contains both tree crowns and understory backgrounds. The distribution of sunlit backgrounds (Figures 3(b), 3(h), and 3(n)) is first used to classify segmentations as sparse or dense objects according to whether they contain pixels of sunlit backgrounds. For sparse objects, the height threshold of 2 m is used to remove additional sunlit backgrounds. The results after completely removing the sunlit backgrounds are shown in Figures 3(c), 3(i), and 3(o). Then, the shaded backgrounds (Figures 3(d), 3(j), and 3(p)) are further excluded to produce the final distribution of tree crowns (Figures 3(e), 3(k), and 3(q)). The tree crowns overlapped on the DOM (Figures 3(f), 3(l), and 3(r)) are consistent with the visual interpretation, indicating that BAMOS is robust in forests with different densities.
In this study, the UAV images were collected at 77 sampling sites in the Daxing’anling forested area. Given limited time and expense, it was impossible to collect data only under ideal weather conditions, so some sampling sites were acquired under unfavorable light conditions. Figure 4(a) shows a DOM with scattered cloud shadows, which cause an uneven distribution of light over the sampling area, as is clearer in the enlarged subimage. Figure 4(b) is a DOM of a mountainous site acquired at a low sun elevation angle, where the light conditions are complicated, as shown in the enlarged subimage. Figures 4(c) and 4(d) are the identifications of tree crowns corresponding to Figures 4(a) and 4(b), respectively. It can be seen that the unfavorable conditions have no obvious effect on the performance of BAMOS in extracting tree crowns, which demonstrates its robustness.
4.2. Accuracy Assessment of UAV-Based Canopy Covers
Figure 5 shows the accuracy assessment of the UAV-based canopy covers based on the 231 manually interpreted plots described in Section 3.2. The horizontal and vertical coordinates of each scatter point represent the reference canopy cover and the UAV-based estimation of a 30 m × 30 m plot, respectively. Most scatter points are distributed along the 1 : 1 diagonal line. Quantitatively, the correlation coefficient between the UAV-based canopy covers and the manually interpreted references is 0.96, with a root mean square error (RMSE) of 5.7% and a relative root mean square error (rRMSE) of 8.9%, indicating that the UAV-based canopy covers are reliable.
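The three statistics reported above can be reproduced with a few lines of numpy (the reference/estimate pairs below are invented toy values, not the paper's 231 plots; rRMSE is assumed here to be the RMSE normalized by the mean reference cover):

```python
import numpy as np

def assess(ref, est):
    """Correlation coefficient, RMSE, and rRMSE (RMSE over the reference mean)."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    r = np.corrcoef(ref, est)[0, 1]
    rmse = float(np.sqrt(np.mean((est - ref) ** 2)))
    return r, rmse, 100.0 * rmse / ref.mean()

ref = np.array([20.0, 40.0, 55.0, 70.0, 85.0])   # interpreted covers (%)
est = np.array([22.0, 38.0, 57.0, 68.0, 88.0])   # hypothetical BAMOS estimates (%)
r, rmse, rrmse = assess(ref, est)
```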
4.3. Assessment of Satellite-Based Canopy Cover Maps
Figure 6 shows the geographical matching of the satellite-based maps and the UAV-based canopy cover map at a sampling site. The pixel grids marked in red in Figures 6(a)–6(c) correspond to the GLAD, GLCF, and MODIS VCF, respectively. Due to different reprojection defaults in ArcGIS, their grid sizes are 25.6 m, 30.0 m, and 237.1 m, respectively. The background (binary) image in Figure 6 is the UAV-based canopy cover map, which provides references for evaluating the satellite-based maps as described in Section 3.3. In order to avoid marginal effects, the effective range, highlighted in green, is determined by the 9 central pixels of the MODIS VCF in each sampling area.
Figure 7 shows the spatial patterns of the UAV-based canopy covers and the GLAD and GLCF satellite-based maps at three typical sampling sites. Because there are only nine MODIS VCF pixels at each sampling site, its spatial pattern is not presented. Figures 7(a)–7(c) are true color DOMs. Figure 7(a) has large areas of grass, with trees concentrated on the southeast side; Figure 7(b) is a forested area with many scattered sunlit non-tree areas (e.g., grasslands); Figure 7(c) has denser forest and fewer non-tree areas. Figures 7(d)–7(f) are UAV-based canopy covers whose spatial resolution is identical to that of the GLCF (i.e., 30 m). The UAV-based canopy covers accurately capture the forest distribution patterns on the DOMs in all sampling areas, even in highly heterogeneous areas, e.g., the scattered tree clusters along the road in Figure 7(a) and the forest roads in Figures 7(b) and 7(c).
Figures 7(g)–7(i) and 7(j)–7(l) are the GLAD and GLCF maps corresponding to Figures 7(a)–7(c), respectively. Although both satellite-based maps capture the dominant spatial patterns of forest distribution, they obviously underestimate canopy cover in forest areas compared to the UAV maps. In addition, while the GLCF maps tend to overestimate canopy cover in non-forest areas (e.g., grassland regions are mistakenly assigned low, rather than zero, canopy cover), the GLAD maps tend to neglect tree crowns close to non-tree regions (e.g., grassland), so that these regions have high UAV-based canopy cover but very low GLAD values.
Figure 8 shows the relative frequency distributions of pixel values of the three canopy cover maps corresponding to Figure 7. Obvious underestimation is the basic feature of the GLAD and GLCF maps when taking the UAV maps as references. Figure 8(a) shows the statistics corresponding to Figure 7(a). For pixels with high canopy cover, the peak position is about 75% in the UAV map but only 45% in the GLAD map and 35% in the GLCF map. The situation is similar at the other two sampling areas: in Figure 8(b), the three peaks are 75%, 55%, and 45% for UAV, GLAD, and GLCF, respectively, and they are 75%, 35%, and 45% in Figure 8(c). For pixels with high canopy cover at all three sites, the dominant frequencies of the UAV maps lie between 60% and 90%, whereas those of the GLAD and GLCF maps are concentrated between 30% and 60%.
Figure 9 shows the accuracy assessment of the satellite-based maps over the 77 sampling sites. Figures 9(a)–9(c) are intercomparisons among the three satellite-based maps at the sampling area scale. Figures 9(d)–9(f) compare the satellite-based maps against the UAV canopy covers at the sampling area scale; Figures 9(g)–9(i) are comparisons at the pixel scale. The consistency among the global products is better than their consistency with the UAV canopy covers. The correlation coefficients (R) are 0.80, 0.77, and 0.58 for GLCF against GLAD, GLCF against MODIS VCF, and MODIS VCF against GLAD, respectively, with corresponding RMSEs of 10.6%, 7.4%, and 14.9%, while R is only 0.58, 0.53, and 0.47 for GLAD, GLCF, and MODIS VCF against the UAV canopy covers, with corresponding RMSEs of 28.2%, 21.8%, and 19.6%. At the pixel scale, the assessment is similar, with correlation coefficients of 0.58, 0.61, and 0.52 and RMSEs of 33.9%, 27.8%, and 22.8% for GLAD, GLCF, and MODIS VCF, respectively. Most scatter points lie under the 1 : 1 diagonal line, indicating obvious underestimation by the global canopy cover products, consistent with Figures 7 and 8.
5. Discussion
5.1. Canopy Cover Extraction Using BAMOS
5.1.1. Innovation of BAMOS
Most studies extract canopy covers based on the canopy height model (CHM) and are therefore limited by the availability of understory terrain. This study develops an innovative method (i.e., BAMOS) that is independent of prior understory terrain information and produces high-accuracy sampling data of canopy cover across the Daxing’anling forested area. BAMOS consists of two steps: first, extracting two types of backgrounds (i.e., shaded gaps and sunlit gaps) through the automatic Otsu algorithm and structural analysis of the DSM, and then identifying the remaining gap pixels (i.e., those located in the transition from sunlit gaps to tree crowns) by object-based analysis of the segmentations.
Although several well-known image processing algorithms are employed, including Otsu's algorithm, edge detection by the Sobel operator, and DSM segmentation by the watershed algorithm, BAMOS is not just a simple combination of them. Otsu's algorithm is effective at detecting small shaded gaps but fails to detect large sunlit backgrounds, as shown in Figures 2(b) and 2(c). Applying a threshold to the Sobel results readily yields a detection of potential sunlit backgrounds, but the detected regions can be either true sunlit backgrounds or false ones caused by forest crowns with small elevation changes, as shown in Figure 2(f), and it is challenging to keep the true sunlit backgrounds while excluding the false ones. The watershed algorithm is widely used in image segmentation, but each segmented object contains both tree crowns and backgrounds, as shown in Figures 3(a), 3(g), and 3(m). The innovation of BAMOS lies in combining these imperfect intermediate results to accurately identify tree crowns.
5.1.2. Settings of Parameters in BAMOS
Three key parameters are used in the structural analysis of the DSM: the threshold applied to the results of the Sobel operator to find candidate sunlit backgrounds, the width of the buffer zone in the buffer analysis, and the threshold of the elevation difference between the inner and outer buffers. The final results are not sensitive to any of them, but extreme settings should be avoided. In this study, the threshold for the Sobel operator is 1.35 m, which corresponds to a slope of 45°. A higher threshold moves the detected interface between sunlit backgrounds and forest crowns outward toward the tree crowns; a lower threshold moves it inward toward the sunlit backgrounds. The buffer zone width used in this study is 1.0 m, and a smaller width is not suggested. This parameter is coupled with the third parameter: a wider buffer zone tends to produce larger elevation differences between the inner and outer buffers for identifying true sunlit backgrounds.
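The structural analysis above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the buffer width (1.0 m) follows the paper, while `pixel_size`, `dz_thresh_m`, and the use of the raw SciPy Sobel magnitude as the gradient measure (whose units differ from the paper's 1.35 m criterion) are assumptions.

```python
import numpy as np
from scipy import ndimage

def sunlit_background_mask(dsm, pixel_size=0.15, grad_thresh=1.35,
                           buffer_m=1.0, dz_thresh_m=2.0):
    """Flag flat DSM regions that sit below their surroundings as
    true sunlit backgrounds (gaps), rejecting flat crown tops."""
    # Candidate regions: low Sobel gradient magnitude ("flat" areas).
    grad = np.hypot(ndimage.sobel(dsm, axis=1), ndimage.sobel(dsm, axis=0))
    flat = grad < grad_thresh
    labels, n = ndimage.label(flat)
    keep = np.zeros_like(flat)
    buf_px = max(1, int(round(buffer_m / pixel_size)))
    for i in range(1, n + 1):
        region = labels == i
        # Inner buffer: ring just inside the region boundary.
        inner = region & ~ndimage.binary_erosion(region, iterations=buf_px)
        # Outer buffer: ring just outside the region boundary.
        outer = ndimage.binary_dilation(region, iterations=buf_px) & ~region
        # A true gap is lower than its surroundings (tree crowns).
        if outer.any() and dsm[outer].mean() - dsm[inner].mean() > dz_thresh_m:
            keep |= region
    return keep
```

The buffer test is what removes the false positives mentioned in the text: a flat crown top also passes the gradient threshold, but its outer buffer is not higher than its inner buffer, so it is discarded.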
Otsu's algorithm is used to detect small shaded gaps, with its threshold automatically determined from the histogram of pixel values. Image preprocessing may therefore be needed when the contrast between sunlit and shaded parts of tree crowns is too strong; otherwise, shaded parts of tree crowns may be confused with shaded gaps. For this reason, contrast stretching is applied during image preprocessing.
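A minimal sketch of this step, assuming a grayscale DOM with values in [0, 1] (an illustrative reconstruction; the authors' preprocessing may differ). Otsu's threshold is the gray level maximizing the between-class variance of the histogram:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Minimal Otsu: pick the bin edge maximizing between-class
    variance sigma_b^2(k) = (mu_T*w0 - mu)^2 / (w0*(1 - w0))."""
    hist, edges = np.histogram(gray, bins=nbins)
    p = hist.astype(float) / hist.sum()
    w0 = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))   # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return edges[np.nanargmax(sigma_b) + 1]

def shaded_gap_mask(gray):
    # Contrast stretching first, so shaded crowns stay brighter than
    # shaded gaps and are not merged into the dark class.
    p2, p98 = np.percentile(gray, (2, 98))
    stretched = np.clip((gray - p2) / max(p98 - p2, 1e-9), 0, 1)
    return stretched < otsu_threshold(stretched)
```

The 2nd/98th-percentile stretch is one common convention, assumed here for illustration.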
The watershed algorithm is used to split the inverted DSM into a mosaic of objects. Given the ultra-high resolution of the DSM, trivial structural features within a tree crown may be segmented out if the watershed algorithm is applied directly. Although such overfragmentation has little effect on the final results, smoothing before image segmentation is recommended to reduce the computational load of the subsequent object-based analysis; the window size of the smoothing filter should be set accordingly.
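The smoothing-then-watershed idea can be sketched with SciPy's `watershed_ift` (an assumption; the paper does not name its implementation), using local maxima of the smoothed DSM as tree-top markers. The `smooth_size` value is illustrative:

```python
import numpy as np
from scipy import ndimage

def segment_crowns(dsm, smooth_size=5):
    """Illustrative watershed segmentation of a DSM into objects."""
    # Smoothing suppresses trivial within-crown structure that would
    # otherwise be segmented out of the ultra-high-resolution DSM.
    smoothed = ndimage.uniform_filter(dsm, size=smooth_size)
    # Markers: pixels equal to the local maximum of their neighborhood.
    tops = smoothed == ndimage.maximum_filter(smoothed, size=smooth_size)
    markers, _ = ndimage.label(tops)
    # Inverting the DSM turns crowns into basins that the watershed
    # floods outward from each marker.
    cost = (255 * (smoothed.max() - smoothed)
            / max(np.ptp(smoothed), 1e-9)).astype(np.uint8)
    return ndimage.watershed_ift(cost, markers.astype(np.int16))
```

As noted in the text, each resulting object generally mixes crown and background pixels, which is why BAMOS follows segmentation with a per-object analysis.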
5.1.3. Impact of Terrain Effects and Forest Vertical Structure
Terrain generally affects the extraction of tree crowns because it changes spectral metrics. For the BAMOS method, the spectral Otsu algorithm used in dense forests is more affected by terrain than the structural analysis used in sparse forests. In this case, the key is to discriminate shaded crowns from shaded gaps. Although terrain effects change spectral features to some degree, the Otsu method can still differentiate shaded crowns from shaded gaps because shaded crowns are brighter than shaded gaps under different terrain conditions, which is also validated in this study: although Figure 4(b) was acquired in a complex mountainous region with obvious terrain effects, the UAV-based canopy cover map (Figure 4(d)) is still reliable, indicating that terrain effects have no significant impact on the BAMOS method.
The forest vertical structure may affect the parameter settings. In this study, the vertical structure of the Daxing’anling forested areas usually has two layers (i.e., understory vegetation and overstory tree crowns), which ensures that regions with a significantly lower inner buffer are sunlit backgrounds. The applicability of the structural analysis of the DSM in forests with three or more layers should be further examined.
5.2. Assessment of Global Canopy Cover Maps
5.2.1. Impact of Temporal Inconsistency
The three global products have different temporal offsets from the UAV-based reference data: the time intervals relative to the UAV data are 8 years, 3 years, and less than 1 year for GLAD, GLCF, and MODIS VCF, respectively. Temporal inconsistency could explain the differences between satellite-based and UAV-based canopy covers to some extent; e.g., Figure 7(i) shows a quite different spatial pattern from Figure 7(f) or Figure 7(l), especially in forested regions approaching non-forest areas, which may be related to the growth of young forests during the 8-year interval. According to the results in Figure 9, the RMSE shows a decreasing trend at both the sampling-area scale (from 28.17% to 19.59%) and the pixel scale (from 33.94% to 22.84%) as the temporal interval decreases, which also indicates that the temporal interval may partly explain the different performance of the global products. Nevertheless, MODIS VCF for the same year of 2018 still shows obvious underestimation with an RMSE of about 20%, indicating that temporal inconsistency cannot be the main cause of the obvious underestimation by the global products in the Daxing’anling forested area.
5.2.2. Impact of Geolocation
The geolocation of the UAV-based canopy covers is determined by the position/attitude measurement system, as also used in Chianucci et al. The geolocation error of the UAV-based digital orthophoto maps (DOMs) is less than 3 m, which is far smaller than a satellite-based pixel (e.g., 30 m or 250 m). Figure 7 also intuitively shows the geographical matching between the UAV-based canopy covers and the global products (i.e., GLAD and GLCF). Therefore, although we admit that geolocation error may be an error source, it does not have a significant impact on our assessments.
6. Conclusions
In this study, a new method (i.e., BAMOS) is proposed to distinguish tree crowns and understories by synergizing the UAV-based DOM and DSM; its strength is that it does not depend on understory terrain and is robust to terrain and weather effects. Highly accurate UAV-based canopy cover maps over 77 sampling areas across the Daxing’anling forested area are produced by BAMOS, with an RMSE of 5.7%. These samples of UAV-based canopy covers provide reference data for the assessment of coarse satellite-based canopy cover maps. Results show that both GLAD and GLCF canopy covers can capture the dominant spatial patterns, but GLAD tends to miss scattered trees in highly heterogeneous areas, and GLCF fails to capture non-tree areas. Most importantly, obvious underestimation with an RMSE of about 20% is observed in all satellite-based maps, although the temporal inconsistency with the references may contribute to this.
Data Availability
The UAV-based canopy covers for the 77 sampling sites, with a spatial resolution of 30 m, are freely accessible at https://zenodo.org/record/5702373.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of the article.
Authors' Contributions
T.Y. and W.N. jointly proposed the method and wrote the paper. Z.Z., Q.L., and G.S. helped to revise the paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (grant numbers 42090013 and 42022009) and the National Key Research and Development Program of China (grant numbers 2017YFA0603002 and 2020YFE0200800). The authors want to acknowledge Qiang Wang, Yao Wang, Dafeng Zhang, Jiachen Dong, and Yuan Yao for collecting and processing UAV RGB images.
References
- S. S. Cai, W. M. Zhang, S. N. Jin et al., “Improving the estimation of canopy cover from UAV-LiDAR data using a pit-free CHM-based method,” International Journal of Digital Earth, vol. 14, no. 10, pp. 1477–1492, 2021.
- M. J. Falkowski, J. S. Evans, D. E. Naugle et al., “Mapping tree canopy cover in support of proactive prairie grouse conservation in Western North America,” Rangeland Ecology & Management, vol. 70, no. 1, pp. 15–24, 2017.
- M. Gonzalez-Roglich and J. J. Swenson, “Tree cover and carbon mapping of Argentine savannas: scaling from field to region,” Remote Sensing of Environment, vol. 172, pp. 139–147, 2016.
- J. O. Sexton, X. P. Song, M. Feng et al., “Global, 30-m resolution continuous fields of tree cover: Landsat-based rescaling of MODIS vegetation continuous fields with lidar-based estimates of error,” International Journal of Digital Earth, vol. 6, no. 5, pp. 427–448, 2013.
- X. Q. Wu, X. Shen, L. Cao, G. B. Wang, and F. L. Cao, “Assessment of individual tree detection and canopy cover estimation using unmanned aerial vehicle based light detection and ranging (UAV-LiDAR) data in planted forests,” Remote Sensing, vol. 11, no. 8, pp. 908–929, 2019.
- R. S. Defries, M. C. Hansen, J. R. G. Townshend, A. C. Janetos, and T. R. Loveland, “A new global 1-km dataset of percentage tree cover derived from remote sensing,” Global Change Biology, vol. 6, no. 2, pp. 247–254, 2000.
- M. C. Hansen, P. V. Potapov, R. Moore et al., “High-resolution global maps of 21st-century forest cover change,” Science, vol. 342, no. 6160, pp. 850–853, 2013.
- C. D. Mendenhall and A. M. Wrona, “Improving tree cover estimates for fine-scale landscape ecology,” Landscape Ecology, vol. 33, no. 10, pp. 1691–1696, 2018.
- R. S. DeFries, J. R. G. Townshend, and M. C. Hansen, “Continuous fields of vegetation characteristics at the global scale at 1-km resolution,” Journal of Geophysical Research-Atmospheres, vol. 104, no. D14, pp. 16911–16923, 1999.
- M. C. Hansen, R. S. DeFries, J. R. G. Townshend, R. Sohlberg, C. Dimiceli, and M. Carroll, “Towards an operational MODIS continuous field of percent tree cover algorithm: examples using AVHRR and MODIS data,” Remote Sensing of Environment, vol. 83, no. 1-2, pp. 303–319, 2002.
- C. Dimiceli, M. Carroll, R. Sohlberg, C. Q. Huang, M. Hansen, and J. M. Townshend, Annual Global Automated MODIS Vegetation Continuous Fields (MOD44B) at 250 M Spatial Resolution for Data Years Beginning Day 65, 2000-2014, Collection 5 Percent Canopy Cover, Version 6, University of Maryland, College Park, MD, USA, 2017.
- M. C. Hansen, J. R. G. Townshend, R. S. Defries, and M. Carroll, “Estimation of tree cover using MODIS data at global, continental and regional/local scales,” International Journal of Remote Sensing, vol. 26, no. 19, pp. 4359–4380, 2005.
- K. Hadi, L. Korhonen, A. Hovi, P. Rönnholm, and M. Rautiainen, “The accuracy of large-area forest canopy cover estimation using Landsat in boreal region,” International Journal of Applied Earth Observation and Geoinformation, vol. 53, pp. 118–127, 2016.
- M. A. White, J. D. Shaw, and R. D. Ramsey, “Accuracy assessment of the vegetation continuous field tree cover product using 3954 ground plots in the South-Western USA,” International Journal of Remote Sensing, vol. 26, no. 12, pp. 2699–2704, 2005.
- J. Heiskanen, “Evaluation of global land cover data sets over the tundra-taiga transition zone in northernmost Finland,” International Journal of Remote Sensing, vol. 29, no. 13, pp. 3727–3751, 2008.
- K. Jia, S. L. Liang, S. H. Liu et al., “Global land surface fractional vegetation cover estimation using general regression neural networks from MODIS surface reflectance,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 9, pp. 4787–4796, 2015.
- D. Cunningham, P. Cunningham, and M. E. Fagan, “Identifying biases in global tree cover products: a case study in Costa Rica,” Forests, vol. 10, no. 10, pp. 853–884, 2019.
- M. C. Hansen, R. S. DeFries, J. R. G. Townshend, L. Marufu, and R. Sohlberg, “Development of a MODIS tree cover validation data set for Western Province, Zambia,” Remote Sensing of Environment, vol. 83, no. 1-2, pp. 320–335, 2002.
- P. M. Montesano, R. Nelson, G. Sun, H. Margolis, A. Kerber, and K. J. Ranson, “MODIS tree cover validation for the circumpolar taiga-tundra transition zone,” Remote Sensing of Environment, vol. 113, no. 10, pp. 2130–2141, 2009.
- B. Pengra, J. Long, D. Dahal, S. V. Stehman, and T. R. Loveland, “A global reference database from very high resolution commercial satellite data and methodology for application to Landsat derived 30 m continuous field tree cover data,” Remote Sensing of Environment, vol. 165, pp. 234–248, 2015.
- C. Alexander, P. K. Bocher, L. Arge, and J. C. Svenning, “Regional-scale mapping of tree cover, height and main phenological tree types using airborne laser scanning data,” Remote Sensing of Environment, vol. 147, pp. 156–172, 2014.
- P. M. Montesano, C. S. R. Neigh, J. Sexton et al., “Calibration and validation of Landsat tree cover in the taiga-tundra ecotone,” Remote Sensing, vol. 8, no. 7, pp. 551–567, 2016.
- X. P. Song and H. Tang, “Accuracy assessment of Landsat-derived continuous fields of tree cover products using airborne LiDAR data in the eastern United States,” Remote Sensing and Spatial Information Sciences, vol. XL-7/W4, no. W4, pp. 241–246, 2015.
- H. Tang, X.-P. Song, F. A. Zhao et al., “Definition and measurement of tree cover: a comparative analysis of field-, LiDAR- and Landsat-based tree cover estimations in the Sierra national forests, USA,” Agricultural and Forest Meteorology, vol. 268, pp. 258–268, 2019.
- A. Strahler, L. Boschetti, G. Foody et al., Global Land Cover Validation: Recommendations for Evaluation and Accuracy Assessment of Global Land Cover Maps, European Commission, Ispra, Italy, 2006.
- J. P. Dandois and E. C. Ellis, “High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision,” Remote Sensing of Environment, vol. 136, pp. 259–276, 2013.
- S. Jayathunga, T. Owari, S. Tsuyuki, and Y. Hirata, “Potential of UAV photogrammetry for characterization of forest canopy structure in uneven-aged mixed conifer-broadleaf forests,” International Journal of Remote Sensing, vol. 41, no. 1, pp. 53–73, 2020.
- M. Mariana de Jesus, A. Gonzalez-Sanchez, S. Ivan Jimenez-Jimenez, R. Ernesto Ontiveros-Capurata, and W. Ojeda-Bustamante, “Estimation of vegetation fraction using RGB and multispectral images from UAV,” International Journal of Remote Sensing, vol. 40, no. 2, pp. 420–438, 2019.
- G. J. Yan, L. Y. Li, A. Coy et al., “Improving the estimation of fractional vegetation cover from UAV RGB imagery by colour unmixing,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 158, pp. 23–34, 2019.
- F. Chianucci, L. Disperati, D. Guzzi et al., “Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV,” International Journal of Applied Earth Observation and Geoinformation, vol. 47, pp. 60–68, 2016.
- T. Kattenborn, J. Lopatin, M. Forster, A. C. Braun, and F. E. Fassnacht, “UAV data as alternative to field sampling to map woody invasive species based on combined Sentinel-1 and Sentinel-2 data,” Remote Sensing of Environment, vol. 227, pp. 61–73, 2019.
- B. Melville, A. Fisher, and A. Lucieer, “Ultra-high spatial resolution fractional vegetation cover from unmanned aerial multispectral imagery,” International Journal of Applied Earth Observation and Geoinformation, vol. 78, pp. 14–24, 2019.
- L. Cao, K. Liu, X. Shen, X. Q. Wu, and H. Liu, “Estimation of forest structural parameters using UAV-LiDAR data and a process-based model in ginkgo planted forests,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 11, pp. 4175–4190, 2019.
- T. Y. Hu, X. L. Sun, Y. J. Su et al., “Development and performance evaluation of a very low-cost UAV-LiDAR system for forestry applications,” Remote Sensing, vol. 13, no. 1, pp. 77–98, 2021.
- L. Wallace, A. Lucieer, Z. Malenovsky, D. Turner, and P. Vopenka, “Assessment of forest structure using two UAV techniques: a comparison of airborne laser scanning and structure from motion (SfM) point clouds,” Forests, vol. 7, no. 12, pp. 62–88, 2016.
- F. Giannetti, G. Chirici, T. Gobakken, E. Naesset, D. Travaglini, and S. Puliti, “A new approach with DTM-independent metrics for forest growing stock prediction using UAV photogrammetric data,” Remote Sensing of Environment, vol. 213, pp. 195–205, 2018.
- W. J. Ni, J. C. Dong, G. Q. Sun et al., “Synthesis of leaf-on and leaf-off unmanned aerial vehicle (UAV) stereo imagery for the inventory of aboveground biomass of deciduous forests,” Remote Sensing, vol. 11, no. 7, pp. 889–905, 2019.
- D. F. Zhang, J. L. Liu, W. J. Ni et al., “Estimation of forest leaf area index using height and canopy cover information extracted from unmanned aerial vehicle stereo imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 2, pp. 471–481, 2019.
- A. M. Cunliffe, R. E. Brazier, and K. Anderson, “Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry,” Remote Sensing of Environment, vol. 183, pp. 129–143, 2016.
- L. Y. Li, J. Chen, X. H. Mu et al., “Quantifying understory and overstory vegetation cover using UAV-based RGB imagery in forest plantation,” Remote Sensing, vol. 12, no. 2, pp. 298–316, 2020.
- Y. Liu, R. Trancoso, Q. Ma, C. F. Yue, X. H. Wei, and J. A. Blanco, “Incorporating climate effects in Larix gmelinii improves stem taper models in the Greater Khingan Mountains of Inner Mongolia, northeast China,” Forest Ecology and Management, vol. 464, pp. 118065–118077, 2020.
- Y. Liu, C. F. Yue, X. H. Wei, J. A. Blanco, and R. Trancoso, “Tree profile equations are significantly improved when adding tree age and stocking degree: an example for Larix gmelinii in the greater Khingan Mountains of Inner Mongolia, Northeast China,” European Journal of Forest Research, vol. 139, no. 3, pp. 443–458, 2020.
- W. J. Ni, G. Q. Sun, Y. Pang et al., “Mapping three-dimensional structures of forest canopy using UAV stereo imagery: evaluating impacts of forward overlaps and image resolutions with LiDAR data as reference,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3578–3589, 2018.
- C. Macfarlane and G. N. Ogden, “Automated estimation of foliage cover in forest understorey from digital nadir images,” Methods in Ecology and Evolution, vol. 3, no. 2, pp. 405–415, 2012.
- M. Dalponte, H. O. Orka, L. T. Ene, T. Gobakken, and E. Naesset, “Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data,” Remote Sensing of Environment, vol. 140, pp. 306–317, 2014.
- N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
- T. Liu, J. H. Im, and L. J. Quackenbush, “A novel transferable individual tree crown delineation model based on Fishing Net Dragging and boundary classification,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 110, pp. 34–47, 2015.
- F. Meyer, “Topographic distance and watershed lines,” Signal Processing, vol. 38, no. 1, pp. 113–125, 1994.
- D. Zhang, H. Wang, X. Wang, and Z. Lu, “Accuracy assessment of the global forest watch tree cover 2000 in China,” International Journal of Applied Earth Observation and Geoinformation, vol. 87, pp. 102033–102043, 2020.
Copyright © 2022 Tianyu Yu et al. Exclusive Licensee Aerospace Information Research Institute, Chinese Academy of Sciences. Distributed under a Creative Commons Attribution License (CC BY 4.0).