Journal of Remote Sensing / 2021 / Article / Tab 3

Review Article

Mapping Tree Species Using Advanced Remote Sensing Technologies: A State-of-the-Art Review and Perspective

Table 3

Summary of data fusion methods frequently used for tree species mapping.

Table columns: Approach | Characteristic and description | Advantage and limitation | Major factor | Example

Spatial-sharpening methods with single or multisensor data
Approach: Pansharpening with a single sensor's data (PCS, GSS)
Characteristic and description: A single sensor provides one high-resolution panchromatic (pan) band and several low-resolution MS bands; a pansharpening algorithm (e.g., PCS or GSS sharpening in the ENVI system) is then applied.
Advantage and limitation: The sharpened image has the nominal high resolution of the pan band, but its MS properties may differ slightly from the original MS data.
Major factor: One high-resolution pan band and several low-resolution MS bands.
Example: [59, 112]

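
The PCS approach named above can be illustrated with a minimal principal-component substitution sketch: project the MS bands onto their principal components, replace the first component with a histogram-matched pan band, and invert the transform. The array shapes and synthetic data here are illustrative assumptions, not from the cited studies.

```python
import numpy as np

def pcs_sharpen(ms, pan):
    """Sharpen low-resolution MS bands (already resampled to the pan grid)
    with a high-resolution pan band via principal-component substitution.

    ms  : (H, W, B) multispectral cube resampled to pan resolution
    pan : (H, W)    panchromatic band
    """
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components of the MS bands, sorted by variance
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    pcs = Xc @ eigvecs
    # Histogram-match the pan band to the first PC, then substitute it
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Invert the PCA transform to get the sharpened MS cube
    return (pcs @ eigvecs.T + mean).reshape(h, w, b)

# Tiny synthetic demo: 8x8 scene, 4 MS bands
rng = np.random.default_rng(0)
ms = rng.random((8, 8, 4))
pan = ms.mean(axis=2) + 0.01 * rng.random((8, 8))
out = pcs_sharpen(ms, pan)
print(out.shape)  # (8, 8, 4)
```

Substituting the first component is why the sharpened image can drift slightly from the original MS properties, as the row notes.
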
Approach: Sharpening with two different sensors' data
Characteristic and description: Two sensors' images cover the same spatial area at high and low resolutions. The low-resolution image must be resampled to the higher resolution so that the two images have the same size before a sharpening algorithm (e.g., PCS or GSS) is run.
Advantage and limitation: The sharpened image has a nominal high resolution while retaining its multispectral properties. The two images must be spatially registered, and the low-resolution image must be resampled to the higher resolution.
Major factor: Registering the two sensors' data together and making the two images the same size.
Example: [120]

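
The resampling prerequisite above can be sketched with a nearest-neighbor upsampling step in NumPy; the integer scale factor is an assumption for illustration, and the spatial registration step a real workflow also needs is not shown.

```python
import numpy as np

def upsample_nearest(low, factor):
    """Replicate each pixel factor x factor times so the low-resolution
    image matches the grid of the high-resolution image."""
    return low.repeat(factor, axis=0).repeat(factor, axis=1)

low = np.arange(4.0).reshape(2, 2)    # toy 2x2 low-resolution band
high_grid = upsample_nearest(low, 3)  # now 6x6, matching a 6x6 pan band
print(high_grid.shape)  # (6, 6)
```

Once both images share the same grid, a sharpening algorithm such as PCS or GSS can be applied as in the single-sensor case.
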
Fusion methods with different sensors or multisource data
Approach: Optical sensor and optical sensor's data
Characteristic and description: Different optical sensors provide images at different spatial resolutions and cover different spectral regions, but both images cover the same spatial area.
Advantage and limitation: Complementary data sets from different spatial resolutions and spectral regions improve classification. A good spatial registration between the two sensors' data is needed.
Major factor: Resampling the low-resolution image to the high resolution and a good spatial registration between the two.
Example: [10, 55]

Approach: Optical sensor and LiDAR data
Characteristic and description: The optical sensor's data offer spectral and spatial/textural information, while LiDAR data provide vertical profile/structural information.
Advantage and limitation: Complementary data sets improve classification. A good spatial registration between the two sensors' data is needed.
Major factor: A good spatial registration between the two sensors' data.
Example: [19, 47, 49]

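
A common way to realize this kind of fusion is feature stacking: append a LiDAR-derived layer (e.g., canopy height) to the optical bands and classify the combined feature set. The synthetic data and the random-forest classifier below are assumptions for illustration; the cited studies use their own features and classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels = 200
optical = rng.random((n_pixels, 4))       # 4 spectral bands per pixel
lidar_height = rng.random((n_pixels, 1))  # canopy height from LiDAR
labels = (lidar_height[:, 0] > 0.5).astype(int)  # toy 2-class labels

# Fusion by feature stacking: both sources must be co-registered so that
# each row describes the same ground pixel.
fused = np.hstack([optical, lidar_height])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(fused, labels)
print(clf.score(fused, labels))
```

The stacking itself is trivial; the hard requirement flagged in the row is that the optical and LiDAR layers be accurately registered to the same ground pixels.
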
Approach: Optical sensor, LiDAR, and ancillary data
Characteristic and description: Optical or LiDAR data offer spectral and spatial/textural or vertical profile/structural information, while ancillary data provide information related to the spatial distribution of tree species (TS).
Advantage and limitation: Complementary data sets improve classification. The ancillary data must be digitized, and a good spatial registration between the sources is needed.
Major factor: A good spatial registration between the sources' data.
Example: [56, 57, 59]

Spatiotemporal data fusion methods with different resolutions (spatial and temporal) in sensor data
Approach: STARFM-based methods
Characteristic and description: Based on the assumption that the ratio of coarse-pixel reflectance to that of neighboring similar pixels does not change over time.
Advantage and limitation: A widely used type of spatiotemporal data fusion (STDF) model that preserves spatial detail; fails to handle heterogeneous landscapes and to predict short-term change events.
Major factor: Determining whether the area is dominated by temporal variance or by spatial variance.
Example: [121–123]

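
The core STARFM assumption above can be illustrated with a drastically simplified single-pair sketch: the fine-resolution change between two dates is approximated by the upsampled coarse-resolution change. Real STARFM additionally weights spectrally similar neighboring pixels; all data and the scale factor here are synthetic assumptions.

```python
import numpy as np

def starfm_like(fine_t1, coarse_t1, coarse_t2, factor):
    """Predict a fine-resolution image at t2 from a fine image at t1 and
    coarse images at t1 and t2 (single-band, single-pair simplification)."""
    # Upsample the coarse temporal change to the fine grid and add it
    delta = (coarse_t2 - coarse_t1).repeat(factor, axis=0).repeat(factor, axis=1)
    return fine_t1 + delta

fine_t1 = np.full((6, 6), 0.25)    # fine-resolution reflectance at t1
coarse_t1 = np.full((2, 2), 0.25)  # each coarse pixel covers 3x3 fine pixels
coarse_t2 = np.full((2, 2), 0.50)  # reflectance rose by 0.25 at t2
pred = starfm_like(fine_t1, coarse_t1, coarse_t2, factor=3)
print(pred[0, 0])  # 0.5
```

Because the coarse change is simply spread over the fine grid, this simplification (like STARFM itself) struggles where a coarse pixel covers a heterogeneous mix of land covers.
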
Approach: Spectral unmixing-based methods
Characteristic and description: Based on the assumption that the reflectance of each coarse spatial-resolution pixel is a linear combination of the responses of all endmembers within the coarse pixel.
Advantage and limitation: Endmembers are easy to obtain by grouping similar pixels; may fail to capture short-temporal-interval events because only one or two high-resolution images are used to cluster similar pixels.
Major factor: Clustering high-quality similar pixels from the high-resolution images to extract endmembers.
Example: [124, 125]

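
The linear-mixture assumption above can be sketched with a least-squares unmixing of one coarse pixel. The endmember spectra here are synthetic assumptions (real work derives them by clustering similar pixels in high-resolution imagery), and constraints such as non-negativity and sum-to-one are omitted.

```python
import numpy as np

# Endmember matrix E: 4 spectral bands x 2 endmembers (e.g., two tree species)
E = np.array([[0.10, 0.40],
              [0.20, 0.50],
              [0.30, 0.30],
              [0.60, 0.10]])

true_abundance = np.array([0.7, 0.3])  # fractions inside one coarse pixel
pixel = E @ true_abundance             # observed mixed reflectance

# Recover the abundances by (unconstrained) least squares
est, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(est, 3))  # [0.7 0.3]
```

Solving this per coarse pixel yields endmember abundance maps, which the cited studies combine with high-resolution cluster maps to synthesize fine-resolution predictions.
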