
Review Article | Open Access

Volume 2022 | Article ID 9802793

Beichao Wang, Shuang Li, Jinzhen Mu, Xiaolong Hao, Wenshan Zhu, Jiaqian Hu, "Research Advancements in Key Technologies for Space-Based Situational Awareness", Space: Science & Technology, vol. 2022, Article ID 9802793, 31 pages, 2022.

Research Advancements in Key Technologies for Space-Based Situational Awareness

Received: 02 Apr 2022
Accepted: 31 May 2022
Published: 18 Jun 2022


The space environment has become highly congested due to increasing space debris, seriously threatening the safety of orbiting spacecraft. Space-based situational awareness, a comprehensive capability of threat knowledge, analysis, and decision-making, is of great importance for ensuring space security and maintaining normal order. Various space situational awareness systems have been designed and launched. Data acquisition, target recognition, and monitoring constitute the key technologies and make major contributions, with various advanced algorithms explored as technical supports. However, comprehensive reviews of these technologies and their specific algorithms rarely emerge, which hinders the future development of space situational awareness. Therefore, this paper reviews and analyzes research advancements in key technologies for space situational awareness, emphasizing target recognition and monitoring. Many mature and emerging methods for these technologies are presented, and their application advantages and limitations are discussed. In particular, multiagent and synergetic constellation technologies are identified as promising research prospects for future situational awareness. This paper indicates the future directions of the key technologies, aiming to provide references for space-based situational awareness toward space sustainability.

1. Introduction

Developing space situational awareness (SSA) is indispensable for space security and normal order. Since the launch of the first man-made Earth satellite, the number of space objects has been rapidly increasing [1]. According to authoritative statistics from NASA, more than 6,400 spacecraft remained in orbit as of early 2021. Furthermore, the total number of rocket debris objects larger than 10 cm has exceeded 16,000; such debris occupies orbital resources for long periods and congests the space environment [2]. Hence, collision risks have increased to unprecedented levels, posing severe threats to space security.

Space debris is broadly distributed and moves in unknown directions with significantly faint optical characteristics [3], so it can disable orbiting spacecraft at any time. In recent years, the frequency of dangerous rendezvous between space debris and spacecraft has risen to over 80 times per week, and many incidents have occurred. For instance, a space shuttle in the STS-135 mission brought back a 34.8 cm handrail. It had served well for 8.7 years, yet six impact holes were found on its surface. Space debris also struck the Canadarm-2 robotic arm outside the International Space Station, punching a 14-inch hole [4]. More seriously, debris tumbles freely and frequently collides and disintegrates, highly threatening the safety of orbiting spacecraft. The constant production of debris will eventually cause a chain reaction that exponentially increases collisions. In addition, numerous satellites are being deployed into low orbits with the construction of giant Internet constellations such as Starlink. This rapidly increases the density of low-orbit satellites and makes collisions between debris and satellites more frequent. In 2019, the European Aeolus satellite executed an emergency collision avoidance maneuver against the Starlink-44 satellite [5]. Afterward, the OneWeb constellation rendezvoused with Starlink, nearly causing a collision in which the minimum distance was only 58 m [6]. Nearby orbits had to be closed while the spacecraft avoided collisions in irregular manners. Overall, the threats of space debris to orbiting spacecraft and constellations are tremendous.

Moreover, the risk posed by asteroids should be fully recognized. Asteroids generally have diameters of over 10 meters, and the sizes of comet nuclei are less well known. More than 1,000 large asteroids exist in space, threatening spacecraft, the Earth, and other planets at hypervelocity. Comet Shoemaker-Levy 9 approached Jupiter and disintegrated, and its fragments caused a continuous series of impacts on the planet [7], even threatening near-earth objects. Afterward, a large meteorite exploded 29.7 kilometers above Chelyabinsk, Russia, injuring around 1,500 people [8], the first time people were harmed by such an extraterrestrial object. Hence, timely and accurate situational awareness is essential for space security and order; in particular, autonomous collision avoidance of space threats is urgently required.

Space situational awareness has attracted much attention as the fundamental solution to these problems. In a broad sense, SSA is defined as knowledge of natural and artificial objects passing through and orbiting within near-earth space, including their past, present, and future states [9]. Ground-based SSA was first developed by some countries for near-earth awareness. However, given the increasingly severe situation in near-earth space, ground-based awareness cannot meet the higher requirements for security. Therefore, space-based situational awareness was further explored, relying on significant advantages over the ground-based scheme in terms of accuracy, efficiency, and stability [10]. Equipped with spaceborne sensors, space-based SSA is hardly affected by the atmosphere, weather, or geography and can implement accurate and timely awareness of space. Performing space-based SSA is thus more appropriate for severe situations.

However, the boundary of near-earth space remains relatively fuzzy at this stage. It lies definitely within cis-lunar space but extends to at least 100,000 km from the Earth to include nearly all artificial targets currently in orbit [11], while farther natural objects pose more potential risks with greater uncertainty. Therefore, the space threats to be considered further increase, meaning that the knowledge scope of space-based situational awareness must be reasonably extended on the current basis. In this process, a comprehensive capability of knowledge, analysis, and decision-making should be strengthened for the SSA. First, knowledge involves data acquisition technology realized by optical sensors and data processing algorithms. After the target data are acquired, characteristics, parameters, and intentions are further recognized to analyze the targets for more specific information. Given that space targets tend to maneuver in irregular manners, continuous monitoring is indispensable to confirm their status and location. Orbital prediction is made based on the previous information, and orbital tracking and maneuver detection can then be performed according to the prediction results. Finally, in the decision-making stage, early warning and collision avoidance technologies are adopted for space threat removal. As mentioned above, accurate analysis and timely decision-making are the primary requirements of the SSA in farther space, directly determining the final results. Therefore, target recognition and monitoring technologies in the SSA deserve emphatic study.

To sum up, reviewing and discussing research advancements in the key technologies is valuable for space-based situational awareness. In Figure 1, the reviewed components are arranged in a logical sequence. In particular, future research directions on the key technologies are discussed, along with multiagent and synergetic constellation awareness, which can benefit the future development of the SSA. Moreover, it is advocated that emerging algorithms, including data fusion, cloud computing, and artificial intelligence, be appropriately integrated with mature technologies such as laser-radar echo and photometry for the SSA. This contributes to prospective research for future space sustainability.

The rest of this paper is organized as follows. Section 2 reviews typical SSA systems developed by different countries. In Section 3, space-based sensors and data processing technologies are summarized. Section 4 reviews and analyzes various target recognition algorithms, and target monitoring algorithms are addressed in the following section. Section 6 discusses research prospects, while challenges and outlooks are presented in the final section.

2. Typical SSA Systems

The capability of space-based situational awareness can be directly reflected in advanced awareness systems as the carriers. The United States, the European Union, and Russia are always leading in constructing global SSA systems. Given the mentioned requirements in farther space, typical systems available for long-distance awareness are reviewed, whose advantages and functions are also analyzed.

The United States has contributed the most to developing SSA systems. The existing systems in farther space mainly include the Geosynchronous Space Situational Awareness Program (GSSAP) [12], the Space-Based Surveillance System (SBSS) [13], the Space-Based Infrared System (SBIRS) [14], the Space Tracking and Surveillance System (STSS) [15], and the James Webb Space Telescope (JWST) [16]. They not only directly observe space objects but also actively provide protection information for collision avoidance.

First, the GSSAP aims to strengthen geosynchronous situational awareness capability. This system can identify concrete features to distinguish and characterize various targets. It has detected over 23,000 objects larger than 10 cm in multiple directions [17]. The SBSS has higher capabilities of data acquisition, identification, and tracking for space debris. It identifies geosynchronous targets over 0.3 m and low-earth-orbit targets of 0.05 m. The SBSS can enhance tracking efficiency by 50% and shorten the updating period of the target inventory to two days.

SSA systems then tend to be integrated and multicomponent. The SBIRS constellation contains four satellites with infrared payloads in high orbits. The target areas are scanned once every 8–12 s relying on higher flexibility and sensitivity, over ten times faster than conventional systems [18]. Target monitoring components in this system can realize farther orbital tracking, maneuver detection, and early warning. Furthermore, 24 satellites are distributed in the STSS, further extending the coverage of the SBIRS. Infrared sensors are the primary payloads, equipped with multispectral wide-field scanning and narrow-field staring detectors for target monitoring [19]. The STSS has stronger capabilities of orbital tracking and maneuver detection in complicated situations. Besides, the JWST integrates a telescope with near- and midinfrared cameras for ultrafar image acquisition and target monitoring. It can operate at near-infrared wavelengths and near absolute zero [20]. Low weight and precise, broad observation are the significant advantages of the JWST.

Following the United States, the European Union emphatically strengthens knowledge and early warning capabilities in the SSA. Typically, a dual-mode detection system [21] aims at high-orbit awareness of noncooperative targets. It assembles laser radars for 3D imaging and infrared sensors for maneuver detection. Multiple data on the targets, including distances, velocities, and attitudes, are acquired for extensive detection in farther awareness. Russia, for its part, has advanced in debris tracking, early warning, and environmental monitoring, executing over 50,000 observations and 8,500 spatial catalogs with an extensive detection range of 200–40,000 km. The Tree Canopy system [22], activated as a key component of the Russian SSA systems, integrates visible-light telescopes, radio frequency, and laser radars. Its continuous tracking and monitoring capability for space debris is highly superior to that of other components. Moreover, Japan, Australia, and Italy have carried out programs such as the SM-3 Block, the Divert and Attitude Control System [23], and the Medium Extended Air Defense System [24]. However, their performance still needs improvement, and their effectiveness will take time to prove.

Overall, advanced space-based situational awareness systems constantly emerge in the United States and other countries. The various systems have unique advantages, and their primary functions involve the aforementioned key technologies. Nevertheless, given the large power consumption of space-based devices and uncoordinated data processing methods, current SSA systems are restricted by the number of detectors, detection capabilities, and location distribution, and thus each possesses only certain functions. In this case, the systems cannot realize accurate awareness of all space targets in real time but only meet specific task requirements. Therefore, a comprehensive situational awareness capability should be pursued by developing the key technologies concurrently.

3. Data Acquisition

The first stage of space-based SSA is to accurately acquire data on space targets and process them rapidly. In this process, high-performance optical sensors can effectively reduce measuring errors, enabling accurate data processing in real time. Therefore, the characteristics of optical sensors and processing technologies are reviewed and discussed for data acquisition.

3.1. Optical Sensors

With the advantages of high sensitivity, rapid transmission, and strong anti-interference, optical sensors serve space-based situational awareness as the collectors of object data. They acquire the locations and states of targets by optical means such as imaging and scanning [25]. Based on different data acquisition mechanisms, the sensors primarily consist of binocular vision sensors, laser radars, infrared sensors, visible-light telescopes, and multisource data fusion sensors. According to the relevant references, their functions, advantages, and limitations in the SSA are indicated in Table 1.

Sensors | Functions | Advantages and limitations

Binocular vision sensors [26–30]
Feature recognition
3D object measurement
(1) Simple configuration, low power consumption and cost
(2) Low reliability and sparse imaging due to overlong baseline distances

Laser radars [31–35]
Target recognition
3D object measurement
Orbital tracking
(1) Rapid data acquisition with reliable structures
(2) Incomplete shape recognition due to limited angular resolution
(3) Deficient platform stability and high power consumption due to continuous rotation and movement

Infrared sensors [36–39]
Target recognition
Orbital tracking
Temperature awareness
(1) High sensitivity, low consumption, and strong anti-interference
(2) Applicable to space targets in low temperatures

Visible-light telescopes [40–43]
Orbital determination
Orbital tracking
Maneuver detection
(1) High-precision farther observation, low power consumption with reliable structures
(2) Deficient platform stability and lifetime due to large weight and oscillation angles

Multisource data fusion sensors [44–48]
Data processing
3D object measurement
Object identification
Orbital tracking
Maneuver detection
(1) Complete and consistent evaluation
(2) Rapid and accurate decision-making, planning, and response
(3) Data redundancy avoidance
(4) Time and space coverage expansion
(5) Various intelligent approaches

Most single-form sensors have their own limitations in practical SSA applications. Therefore, multisource data fusion has been developed for multirange 3D measurement, target recognition, orbital tracking, and maneuver detection. Data interconnection filters emerged as the prototype of data fusion [45]. Given different perceived situations, the utility value of each sensor can be fully exploited. Multisource fusion integrates local data provided by multiple homogeneous or heterogeneous sensors to eliminate possible redundancy and contradiction among the sensors [46]. It realizes a complete and consistent evaluation of space situations. As for the fusion architecture, distributed fusion has better reliability and feasibility than a centralized one [48], reducing the high requirements of communication bandwidth and computational cost. Multisource data fusion can effectively integrate the advantages of single-form sensors in terms of accuracy, timeliness, and reliability. Such sensors can also avoid data redundancy and extend time and space coverage. Therefore, multisource data fusion will be the mainstream sensing technology for future SSA.
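As a minimal illustration of the payoff of fusing heterogeneous sensors, independent estimates of the same quantity can be combined by inverse-variance weighting, a common building block of distributed fusion. The sensor names and numbers below are hypothetical, not taken from any cited system.

```python
def fuse_estimates(estimates):
    """Fuse independent (value, variance) pairs by inverse-variance weighting.

    The fused variance is never larger than that of the best single sensor,
    which is the basic benefit of multisource fusion.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * x for w, (x, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical range estimates (km) of one target from three sensors
readings = [(102.0, 4.0),   # laser radar: accurate, low variance
            (110.0, 25.0),  # infrared sensor: noisier
            (105.0, 9.0)]   # visible-light telescope

value, var = fuse_estimates(readings)
assert var < min(v for _, v in readings)  # fused estimate beats any single sensor
```

The fused value is pulled toward the most reliable sensor, while the fused variance drops below that of any individual sensor; a full distributed architecture would run such local fusions at each node before exchanging results.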

However, the SSA is required to handle multiple space targets in complicated situations. The sensors must timely acquire the detailed characteristic data of the targets and the dynamic states of other objects. This increases the volume and variety of the data, raising the timeliness requirements. Furthermore, environmental disturbances may cause distortion and loss of data, and conventional fusion methods are unavailable when data are insufficient. Therefore, imperfect data fusion should be overcome by exploring advanced fusion algorithms.

3.2. Data Processing

Data processing refers to the technology of processing and analyzing large volumes of spatial data and converting them into key information about the targets [49]. The increasing number of risky targets raises the requirements for processing massive data, which affects the accuracy and timeliness of situational awareness. In this case, exploring advanced processing methods is essential. Data storage, filtering, and fusion are reviewed and discussed in order.

3.2.1. Storage and Filtering Technologies

Data storage and filtering constitute the first stage of data processing. The numerous acquired data are stored in databases, and needless data should be filtered out. As an advanced distributed processing method on the cutting edge, cloud computing [50] has been widely developed to overcome massive computation and data-intensive challenges. It processes big data for the SSA relying on flexible storage, rapid computation, and effective filtering [51]. Furthermore, the on-demand scalability of available and reliable pooled computing resources can facilitate secure and rapid access to metered services from anywhere.

As for typical research, Amazon Web Services provided cloud services through a typical space-based cloud [52], making almost unlimited storage and filtering available for big data. Then, application programming interfaces (APIs) [53] have promoted SSA cloud services in various situations [54]. The system framework and the cloud are almost identical, reducing system design costs. Moreover, a space situational data storage platform based on Google Cloud can integrate different databases according to storage requirements [55]. The Orbit Outlook Data Archive (OODA), as an extension of the cloud, stores situational data acquired by different organizations [56]. Its distributed world model has higher ingesting and querying rates, supporting multidimensional data from different storage nodes. The OODA will be integrated with most data models for system component interactions. Cloud technologies reduce infrastructure maintenance and avoid overlapping investments. Accessible big data storage and filtering are realized by flexible computation and rapid deployment.

Furthermore, a generic data processing architecture [57] is applicable to different data types and attributes. This system provides an internal ID, and each record contains requests, answers, risky targets, environments, satellites, and early warnings. It offers new guidance for data storage and filtering across multiple types and attributes. Likewise, relational dataset and file systems [58] established shared situational databases of target trajectories and sensor reports for multiple data types. The characteristic data of the targets were acquired after irrelevant data were filtered out. However, data storage and filtering are time-consuming in these studies. Data compression before storage is suggested to shorten the process.

In contrast, data compression has rarely been performed in the SSA. Only the Hubble space telescope observation [59] and a different and independent stochastic population (DISP) filter [60] involved single- and multiobject data compression. The effect is not evident for time series data, and lossless compression is essential. In conclusion, cloud computing excels at storage and filtering, but its capability of processing multiple data types still needs validation. Data compression remains immature, limiting its applications in this field. The existing inadequacies should be further overcome.

3.2.2. Data Fusion Algorithms

Data fusion is the core part of data processing in the SSA. It aims to acquire a consistent interpretation and description of the measured objects, so that SSA data systems perform better than their contained subsets alone [61]. Data fusion technologies ensure close connectivity and timely communication between each unit of these systems and the acquisition center, contributing greatly to the SSA. Based on the related research on data fusion [62–64], the process primarily consists of data collection, signal conversion, preprocessing, feature extraction, and fusion algorithms [65, 66]. The detailed process of data fusion is shown in Figure 2.

Data fusion algorithms are the most important part of the process. Based on the level of the processed data source, data fusion is divided into data-level, feature-level, and decision-level fusion. Voting [67] and clustering methods [68] are used in data-level fusion. They feature simple computation and rapid convergence but only apply to simple systems and situations where the pattern classes are accurately known. Furthermore, poor timeliness, instability, and sensor data uncertainty demand higher fault tolerance. Data-level fusion is only available for the original data of homogeneous sensors.
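A data-level voting scheme can be sketched in a few lines; the class labels below are hypothetical and stand in for raw classifications from homogeneous sensors.

```python
from collections import Counter

def vote(labels):
    """Fuse class labels from homogeneous sensors by majority vote."""
    winner, count = Counter(labels).most_common(1)[0]
    # Require a strict majority; otherwise the fusion is inconclusive.
    return winner if count > len(labels) / 2 else None

# Hypothetical classifications of one object by five identical sensors
assert vote(["debris", "debris", "satellite", "debris", "debris"]) == "debris"
assert vote(["debris", "satellite"]) is None  # tie: no decision
```

The simplicity is the point: voting needs no model of the sensors, which is why it converges fast yet tolerates neither heterogeneous sources nor uncertain pattern classes.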

Feature-level fusion, in turn, aims to extract the data source's characteristics for analysis and to support subsequent decisions. Fuzzy theory [69], Dempster–Shafer (DS) evidence theory [70], and neural networks [71] are representative of feature-level fusion. The practical value of fuzzy theory is reflected in its extension to fuzzy logic [72], but descriptions of data by logical inference are subjective and lack objective presentation. Next, the DS theory has flexible rule combination [73], but contradictory evidence is not well handled since computations increase exponentially with the inference chains [74]. Neural networks can implement intelligent learning, memory, computation, and recognition. They perform large-scale parallel computing applicable to multiple application scenarios, such as object identification and orbital prediction. Typically, the Lambda architecture [75] and artificial neural networks (ANN) [76] implemented intelligent feature extraction and data fusion combined with infrared and sonar sensors, reducing data uncertainty and improving accuracy. The amount of processed data decreases significantly in feature-level fusion, and timeliness is improved after the original features are acquired.
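The DS combination mentioned above can be illustrated with Dempster's rule over a two-hypothesis frame; the frame {debris, satellite} and the mass values are hypothetical examples, not data from the cited studies.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments with Dempster's rule.

    Keys are frozensets of hypotheses; mass falling on empty (conflicting)
    intersections is discarded and the remainder renormalized.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence over the frame {debris, satellite}
D, S = frozenset({"debris"}), frozenset({"satellite"})
m1 = {D: 0.6, S: 0.1, D | S: 0.3}   # sensor 1
m2 = {D: 0.5, S: 0.2, D | S: 0.3}   # sensor 2
m = dempster_combine(m1, m2)
assert m[D] > m[S]  # combined belief favors "debris"
```

The renormalization by 1 - K (the conflict mass) is exactly where highly contradictory evidence causes trouble: as K approaches 1, small residual agreements get amplified into strong but unreliable conclusions.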

In decision-level fusion, local decisions are combined under a certain rule to acquire the final associated results. The available algorithms include Bayesian inference [77], the aforementioned DS evidence theory, and the ANN. Defining the a priori likelihood is complicated in the Bayesian method, which requires mutually exclusive hypotheses and lacks the capability of allocating total uncertainty. As a generalized extension of Bayesian inference, the DS approach can overcome this inadequacy. Decision-level fusion has high flexibility, strong anti-interference, good fault tolerance, and low communication requirements [78], but it requires compressing the measured data, which is costly and loses some details.
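Bayesian decision-level fusion can be sketched under the usual conditional-independence assumption between sensors; the hypotheses, prior, and per-sensor likelihoods below are hypothetical.

```python
def bayes_fuse(prior, likelihoods):
    """Fuse per-sensor likelihoods P(observation | hypothesis) with a prior.

    Assumes sensor observations are conditionally independent given the
    hypothesis; returns the normalized posterior over hypotheses.
    """
    posterior = dict(prior)
    for lik in likelihoods:
        for h in posterior:
            posterior[h] *= lik[h]
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Hypothetical question: is the observed object maneuvering?
prior = {"maneuver": 0.1, "ballistic": 0.9}
likelihoods = [
    {"maneuver": 0.8, "ballistic": 0.3},   # infrared sensor decision
    {"maneuver": 0.7, "ballistic": 0.4},   # visible-light telescope decision
]
post = bayes_fuse(prior, likelihoods)
assert abs(post["maneuver"] + post["ballistic"] - 1.0) < 1e-9
```

Note that the hypotheses must be mutually exclusive and exhaust the frame; unlike DS theory, there is no way to assign mass to "maneuver or ballistic, undecided", which is the shortcoming the text attributes to the Bayesian method.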

For the fusion above, if the sensor data are matched, the measured data are fused directly at the data level. Otherwise, feature- or decision-level fusion is adopted according to the specific situation. The fusion level for a specific task is also determined by the combined effect of environments, computing resources, and data source characteristics. Data fusion research highly emphasizes feature- and decision-level fusion as promising directions.

Furthermore, fusion capabilities can be strengthened from two aspects: low-level and high-level (feature and decision) fusion. Low-level fusion fully utilizes various optical sensor technologies to extend the scope of data acquisition, while high-level fusion explores the potential of sensor networks and data resources to improve the accuracy, efficiency, and intelligence of data fusion. In addition, optical sensors, data filtering, and fusion technologies are closely connected. A fusion scheme using laser radars, binocular vision, and infrared sensors has emerged, combining bilateral filter and mixed particle swarm algorithms (Figure 3). These sensors generate distinct images, while depth filtering and multidimensional fusion imaging are implemented by intelligent algorithms. The particle swarm also optimizes the ANN in feature-level fusion. The fusion generates complete 3D images, greatly improving the accuracy and reliability of data acquisition.

To sum up, data acquisition technologies are worth exploring in space-based SSA, and data fusion in particular will become mainstream research in the future. Optical sensors (hardware) together with databases and processing methods (software) are the essential components of SSA systems. Further improvement of data acquisition capability is predictably realized as these components advance. Relying on the advanced technologies mentioned above, subsequent target recognition and monitoring tasks can be performed smoothly.

4. Target Recognition

Given that the increase of space objects makes the environment more complicated, target recognition can obtain the target structures, payloads, and other key information, determining threat degrees and the corresponding countermeasures. Specifically, target recognition technologies primarily consist of object identification, parameter estimation, and intention recognition. In view of their sequential execution order, this section presents and analyzes these technologies, aiming to provide references for the analysis stage of the SSA.

4.1. Object Identification

Object identification is the core section of target recognition in space-based situational awareness. This technology aims to identify potential space targets from optical observations of the target regions and to determine key characteristics such as shape, mass, and materials [81]. It utilizes radars and other related devices to obtain the echoes of space objects, analyzing the echo signals and extracting features related to the target category. The extracted features are then input into classifiers, and the target types are determined. In recent years, laser radars have been the dominant sensors in object identification, while machine vision and the ANN are highly explored as advanced identification algorithms. Thus, separate reviews and discussions are presented for these technologies.

4.1.1. Laser-Radar Echo Technologies

Relying on an excellent echo signal mechanism, laser radars are the sensors most applied to object identification. Based on the resolution of laser-radar systems, echo technology consists of narrowband and wideband radar echo technologies.

Narrowband radars are utilized to acquire low-resolution images of space objects. Features extracted from low-resolution images include orbital, radar cross-section (RCS), and range-dimensional features [82]. Orbital parameters are necessary to demonstrate the operation of the targets. They enable identification by initial orbital computation, matching, improvement, and analysis of the results [83]. This method has high timeliness, but the targets can only be observed for a short period within a cycle, limiting practical application scenarios. Next, the RCS represents the scattering capability of space objects for the electromagnetic waves irradiated by laser radars, reflecting their structural information [84]. Especially for objects with relatively stable motion, the RCS of the same target shows certain regularity in continuous observations. However, the effect is less evident for small targets [85, 86]. Besides, for targets with simple structures, the attitude changes relative to the laser radars are small; the echoes extending along the range dimension enable identification by extracting structural features. Typically, central moment feature extraction is applicable to object identification [87].

In contrast, the scattering point echoes of space objects acquired by broadband radars occupy more range units and produce abundant target information. High-resolution range profile (HRRP) echoes reflect the distribution of a target's scattering points along the radar line of sight [88]. The HRRP can extract multiple separable features of space objects [89], combined with principal component analysis [90], for accurate identification. Moreover, the HRRP has also been used to construct a hidden Markov model (HMM) [91] whose structural parameters were identified as features. High-order moment and transformation coefficient features, such as the bispectral transformation, are further extracted; computing bispectral circumferential and local integrals can identify the targets after dimensionality reduction. The HRRP also has higher separability, reducing translation sensitivity and computational complexity. Given the high resolution of inverse synthetic aperture radar (ISAR) in both the range and cross-range dimensions, ISAR images of space objects reflect the fine structural information of the main bodies and other components, contributing to accurate identification. In related research, water area segmentation [92], smallest univalue segment assimilating nucleus (SUSAN) [93], and gradient vector manifold algorithms [94] facilitated the extraction of targets' shapes and contours from ISAR images. The Fourier transform and classification contributed to the desired identification results. ISAR is much superior to the HRRP in capturing fine architectural information about the targets.
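The PCA step cited above can be sketched on synthetic stand-in data; the profile dimensions and the two-pattern generative model below are hypothetical, chosen only to mimic a batch of range profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for HRRP data: 100 profiles of 64 range cells each,
# generated as noisy copies of two underlying scatterer patterns.
base = rng.random((2, 64))
profiles = base[rng.integers(0, 2, 100)] + 0.05 * rng.normal(size=(100, 64))

# PCA via SVD: center the profiles and project onto the top components.
mean = profiles.mean(axis=0)
centered = profiles - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:5].T   # 64-dimensional echoes -> 5 separable features

assert features.shape == (100, 5)
```

Projecting onto the leading components keeps the directions of highest variance, which for range profiles tend to separate the underlying scatterer patterns while discarding per-cell noise; a classifier is then trained on the reduced features.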

In summary, the above identification algorithms based on feature extraction all utilize a single form of echo features. Nevertheless, describing target characteristics with a single signal or feature is limited and one-sided, leading to relatively low performance in complicated scenes. For identification technologies based on feature extraction, more sufficient information contributes to analyzing and extracting features of space objects with higher separability. Based on the fusion idea, fusion identification technologies can achieve more accurate and reliable object identification by comprehensively exploiting the complementarity of different information. Hence, complementary utilization of HRRP and ISAR features is beneficial for improving identification effects. Fusion identification can be supported by deep learning techniques such as machine vision and the ANN. These technologies are worth studying for accurate feature extraction and analysis.

4.1.2. Machine Vision Algorithms

Machine vision algorithms aim to identify and locate space objects of interest within the visual range. They are widely applied to feature-level fusion identification, obtaining fusion features of space objects with higher separability. Machine vision imposes no restriction on data sources in fusion, offering higher flexibility in identification.

Many vision technologies have been developed for space object identification. In Ref. [95], the complementary information of infrared and visible echoes of space objects was fused; integrating the moment and affine moment invariants (AMI) of infrared and visible imaging as fusion features improved identification accuracy and reliability. Template matching [96] enables accurate identification under well-controlled lighting, but its effectiveness depends on image quality and is limited by the illumination conditions at acquisition. In such situations, feature point identification [97] can apply similarity-measuring criteria to judge the correspondence between feature points in standard images and candidate feature points, and it is far less affected by external illumination than template matching. Feature points are primarily described by gradient histograms or binary codes of gray values, with gradient-based descriptors dominant at this stage. Gradient-based features include the scale-invariant feature transform (SIFT) [98, 99] and speeded-up robust features (SURF) [100, 101]. The SIFT algorithm extracts target features based on contours and performs similarity measurements for identification; it is highly robust to scale transformation, illumination, and noise. Coarse and optimized matching can be integrated with SIFT feature points but requires time-consuming computation. Random sample consensus (RANSAC) subsequently improves the timeliness of feature point matching. SURF, in turn, offers scale invariance, while oriented FAST and rotated BRIEF (ORB) [102] provides real-time capability, so feature points can be extracted with high efficiency. Nevertheless, the accuracy and reliability of feature point extraction still need improvement, and identification accuracy and efficiency are not yet well balanced. Furthermore, suppressing false point matches is essential to enhance matching accuracy in practical applications.
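False matches can be suppressed with RANSAC, as mentioned above. The following pure-NumPy sketch is illustrative only (a real pipeline would generate the putative matches with SIFT/SURF descriptors, and the 2D-translation motion model is an assumption for the example): it estimates the translation between matched keypoints while rejecting outlier correspondences.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=None):
    """Estimate a 2D translation between putatively matched keypoints,
    rejecting false matches (outliers) by random sample consensus."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: 1 correspondence
        t = dst[i] - src[i]                   # candidate translation
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)  # refit on the consensus set
    return t, best

# Usage sketch: 40 true matches shifted by (5, -3) plus 10 false matches.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
dst = src + np.array([5.0, -3.0])
dst[:10] = rng.uniform(0, 100, (10, 2))       # corrupt 10 correspondences
t_hat, inliers = ransac_translation(src, dst, seed=1)
```

The consensus set recovers the true translation despite 20% false matches, which is exactly the robustness property the matching stage needs.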

To further improve identification accuracy and efficiency, more advanced machine vision algorithms have been developed. Multiple kernel learning (MKL) [100, 103] and the support vector machine (SVM) are representative in this field. Moreover, singular value decomposition, discrete wavelet transformation [104], and compressive sensing algorithms [105] are also available; they can reduce feature dimensionality based on the training samples and kernel principal component analysis (KPCA). Overall, the SVM improves identification accuracy, while compressive algorithms improve efficiency.
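The dimension-reduction step can be illustrated with a minimal NumPy sketch of RBF-kernel KPCA; the kernel width `gamma` and the toy two-cluster data are assumptions for demonstration, and a production system would feed the projected features into an SVM classifier.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Project features onto the top principal components in an
    RBF-kernel feature space (kernel principal component analysis)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    N = len(X)
    one = np.full((N, N), 1.0 / N)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # projected coordinates of the training points
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Usage sketch: two feature clusters become separable along component 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
Z = kernel_pca(X)
```

After projection, the two clusters are linearly separable along the first kernel principal component, which is what makes a simple downstream classifier sufficient.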

In conclusion, machine vision is a mature field with diversified algorithms applicable to space object identification. Based on relevant references, their advantages and limitations in the SSA are presented in Table 2. However, the performance of these algorithms relies primarily on handcrafted feature extraction. The semantic gap between the extracted low-level visual features and the high-level semantic representations demanded by actual tasks degrades identification performance. These problems need further study within machine vision algorithms.

Table 2: Technologies, methods, and their principal advantages and limitations.

Laser-radar echoes
  RCS — Advantages: high timeliness and easy feature access. Limitations: insufficient acquired information and accuracy.
  HRRP — Advantages: high accuracy, easy access, and large information content. Limitations: high sensitivity to attitudes and orbits.
  ISAR — Advantages: high resolution for fine-architecture identification. Limitations: low imaging quality for rotating and micromotion targets.

Machine vision
  SIFT — Advantages: high robustness to scale transformation, light, and noise. Limitations: time-consuming computation and frequent mismatches.
  SURF — Advantages: high efficiency and timeliness, and low complexity. Limitations: existence of false matching points.
  AMI — Advantages: invariance to scale, motion, and affine transformation. Limitations: unable to acquire target depth information.
  MKL — Advantages: accurate mapping and powerful classification. Limitations: large space occupancy and time-consuming computation.
  SVM — Advantages: accurate nonlinear mapping, small-sample learning, high robustness to sample sets, and simple computation. Limitations: difficulty with large training samples and multiclass object classification.

Artificial neural networks
  CNN — Advantages: parameter sharing to simplify computation and multiple convolution kernels for high-dimensional identification. Limitations: requirement for massive data, translation invariance, information loss, and inadequate local correlation.
  DNN — Advantages: excellent feature extraction and nonlinear fitting. Limitations: inadequate reliability, since the differing importance of features is not considered.
  U-Net — Advantages: close local correlation and flexible network structures. Limitations: very few supported training models.
  Res-Net — Advantages: rapid adjustment and optimization, and low complexity. Limitations: time-consuming training process.
  MSA — Advantages: multiscale autonomous identification and a flexible balance of computation and modeling.
  FSTNN — Advantages: high accuracy, rapid computation, and strength in high-dimensional object identification.

4.1.3. ANN-Based Algorithms

Artificial neural networks have been highly developed for object fusion identification, overcoming false matching and eliminating the semantic gap for better results. In ANN-based algorithms, multiperspective and multiscale feature identification has been the mainstream direction [106]. Convolutional neural networks (CNN) are the most common in fusion identification; they can extract fusion features from images by integrating middle-level attributes, high-level semantics, and deep-level visual features. Notably, the CNN shares convolution kernels and rapidly processes high-dimensional information. Moreover, the multilevel and multimodal features of deep neural networks (DNN) [107] have been integrated, advancing feature complementarity across network levels and modalities; the DNN has stronger feature extraction and nonlinear fitting capabilities. However, these algorithms rarely consider the differing importance of features in fusion, leading to inadequate reliability of object identification.
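The parameter-sharing idea is easy to see in code: one small kernel, reused at every image location, produces an entire feature map. A minimal NumPy sketch follows; the vertical-edge kernel and toy step-edge image are assumptions chosen purely for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: the same shared kernel slides over the
    image, so the whole feature map costs only kernel.size parameters."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Usage sketch: a vertical-edge kernel responds only where the edge is.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                        # step edge between columns 3 and 4
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
fmap = conv2d(image, kernel)
```

The nine kernel weights are the only parameters, yet they classify every location in the image — this is the "parameter sharing to simplify computation" advantage listed for the CNN in Table 2.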

Next, to obtain more accurate and reliable identification results, decision-level fusion identification analyzes and combines the decisions of multiple classifiers, overcoming classifier and feature mismatches [108]. ANN-based algorithms are also available for object identification at the decision level. In particular, a CNN accepts input images and assigns importance (learnable weights and biases) to different targets, allowing it to distinguish one target from the others [109]. Data-augmentation-based deep learning [110] achieved accurate identification with dynamic CNNs, but the performance weakens in deep space. A hybrid CNN later combined partial semantic information and global features [111] for deep-space identification of target locations, component segmentation, and multisource input. However, as network depth grows, parameters near the input level change slowly during training, retarding adjustment and optimization. Moreover, the correlation between each stage and the whole system should be considered in identification [112]. Other deficiencies remain in CNN applications, including limited dataset scale and translation invariance.

Afterward, other novel ANN architectures have been applied to decision-level fusion identification and classification, such as the U-Net [113] and Res-Net [114]. The U-Net can tighten relations among different stages but supports very few training models. The Res-Net accelerates adjustment and optimization thanks to its low complexity, but its longer training time raises resource consumption. Moreover, the AMI with the multiscale autoconvolution (MSA) transform [115] realized a flexible balance of computation and modeling, and K-means clustering boosted identification and classification with higher efficiency and scalability. In addition, a feature space trajectory neural network (FSTNN) [116] with the HMM can autonomously identify the material of space debris; it is superior in solving structural parameters with larger hidden-state dimensions [117]. ANN-based algorithms in decision-level fusion identification can thus be integrated with advanced models, such as the HMM, promoting further advancements in intelligent technologies. Overall, the data sources involved in decision-level fusion are heterogeneous, and asynchronous information can be processed simultaneously thanks to lower dependence on data acquisition devices. Decision-level fusion is therefore advantageous for tasks requiring higher reliability and timeliness.
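The Res-Net property mentioned above (rapid optimization despite depth) comes from its identity shortcut. A toy NumPy comparison illustrates it; the layer count, dimensionality, and weight scale are arbitrary choices for the demonstration, not values from the cited studies.

```python
import numpy as np

def plain_forward(x, weights):
    """A plain deep stack: each layer replaces its input."""
    for W in weights:
        x = np.tanh(W @ x)
    return x

def residual_forward(x, weights):
    """A Res-Net-style stack: each layer only adds a correction, so the
    identity path carries the signal through at any depth."""
    for W in weights:
        x = x + np.tanh(W @ x)
    return x

# Usage sketch: 30 small-weight layers on an 8-dimensional feature vector.
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.05, (8, 8)) for _ in range(30)]
x0 = rng.normal(0.0, 1.0, 8)
plain_norm = np.linalg.norm(plain_forward(x0, weights))
resid_norm = np.linalg.norm(residual_forward(x0, weights))
```

The plain stack attenuates the signal to nearly zero after 30 layers, whereas the residual stack preserves it; the same mechanism keeps gradients alive during training, which is why deep Res-Nets adjust and optimize quickly.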

In conclusion, laser-radar echo, machine vision, and ANN-based technologies are very promising in space object identification, and integrating them is of great significance for better identification. Moreover, feature-level fusion identification is the more mature at this stage, while decision-level fusion can obtain better results. Machine vision technologies are primarily applied at the feature level, while ANN-based technologies are involved at both levels. With the rapid development of artificial intelligence, advanced models and algorithms for object identification are being developed. The ANN-based technologies have progressed with higher maturity and diversity and will be a research hotspot in this field. Finally, the principal advantages and limitations of the proposed technologies and methods are presented and compared in Table 2.

4.2. Parameter Estimation

As an essential step in acquiring accurate information on space objects, parameter estimation is performed in the SSA after object identification, aiming to obtain key parameters of the targets, including size, position, attitude, and orbital parameters. Various parameter estimation technologies for space objects have been exploited so far. Photometric technologies are the more mature, while optimal estimation technologies benefit from advanced artificial intelligence algorithms; both are reviewed and discussed separately as representative estimation technologies.

4.2.1. Photometric Technologies

As mature technologies, photometric methods contribute to the parameter estimation of distant space objects [118, 119]. Photometric signatures are the time trajectories of visible light reflected by space objects in orbit and acquired by space-based optical sensors [120]. Essentially, photometric signatures are geometric functions of the target sizes, positions, material composition, and attitude dynamics relative to the observer.

First, photometric and astrometric information can be utilized to estimate scale inertial parameters of resident space objects (RSO) [121]. The inertia matrices are parameterized with relative scale inertia and the direction of principal components. Moreover, Subbarao and Henderson [122] analyzed the observability and sensitivity of photometric measurements to estimate the shapes and attitudes of the RSOs. Based on these photometric algorithms, Han et al. [123] developed topological scanning in short-sequence imaging to estimate multiple satellites. Geometric duality can obtain the approximate linear trajectory of the targets for extraction from significant clutter and noise. Do et al. [124] established photometric simulation scenarios containing the observation models of geosynchronous orbiting debris, and the influence factors consisting of the sizes, attitudes, and reflection characteristics of the targets were further analyzed. It contributes to orbital prediction and tracking. Afterward, photometric and astrometric fusion [125] implemented the shape estimation of space debris, obtaining sufficiently accurate characteristics and trajectories. Furthermore, this fusion also applies to estimating the structural, attitude, and orbital parameters with bidirectional reflectance distribution functions (BRDF) [126]. Photometric and astrometric fusion is advantageous in parameter estimation with extensive applications [127, 128]. Predictably, more fusion estimation technologies will be explored based on fusion thoughts.

Nevertheless, the radiation and reflection characteristics of space objects have not been fully studied. Early studies analyzed the optical characteristics of targets from structural characteristics alone, with surface materials simplified as diffuse reflectors; material properties must therefore be considered when computing the reflected signals of satellites. Spectral BRDF measurement and modeling [129] of materials have great application value for target scattering and spectral characteristics [130]. Engineering models include the Cook–Torrance [131], Wolf [132], and Torrance–Sparrow models [133]. Material-specific models improved the simulation of target characteristics for accurate estimation [134], although estimation errors remained when the body was simplified into reflective surfaces. The BRDF over multispectral characteristics also describes the motion states of spacecraft [135], yielding accurate reflectance and photometric signal changes of the targets. Besides, multimodel adaptive estimation achieved surface material inversion of spacecraft and debris [136], but the numerical computations are highly complex and require processing much information. Furthermore, multispectral signals can invert the surface materials of tumbling targets [137], although the demanding imaging requirements in deep space delay the parameter estimation process.
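As a concrete (and deliberately simplified) example of a photometric signature model, the sketch below uses the classical phase function of a purely Lambertian diffuse sphere; real BRDF models such as Cook–Torrance add specular terms, and the constant flux factors are omitted here, so only relative brightness is meaningful.

```python
import numpy as np

def diffuse_sphere_phase(phi):
    """Normalized phase function of a Lambertian sphere:
    1 at zero phase angle (fully lit), 0 at phi = pi (backlit)."""
    return ((np.pi - phi) * np.cos(phi) + np.sin(phi)) / np.pi

def relative_flux(albedo, radius_m, range_m, phi):
    """Reflected flux relative to incident sunlight (constant factors
    omitted): proportional to albedo * R^2 * phase / range^2."""
    return albedo * radius_m**2 * diffuse_sphere_phase(phi) / range_m**2

# Usage sketch: a 1 m debris object fades as the sun-target-observer
# phase angle opens from 0 to pi during a pass.
phis = np.linspace(0.0, np.pi, 7)
curve = relative_flux(0.2, 0.5, 800e3, phis)
```

Because flux scales with albedo times area over range squared, a single photometric time series entangles size, reflectance, range, and attitude — which is exactly why the fusion and inversion methods above are needed to separate them.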

Given that many space objects are very small and far from the space-based platform, accurate structural parameters cannot always be estimated; such targets appear only as point sources, from which only time-sequence gray-level changes can be obtained. Moreover, traditional estimation methods compute the trajectories of space targets from sequential images, but these images contain extraneous information. Photometric signatures and BRDFs support accurate estimation of motion states as well as surface materials, but factors such as platform vibration and noise should be suppressed for higher accuracy and stability.

4.2.2. Optimal Estimation Algorithms

Optimal estimation aims to minimize optimality indexes of the estimation results. As a typical algorithm, the Kalman filter performs optimal estimation of object states using linear state models and input and output observation data, accounting for noise, disturbances, and uncertainty in the system. Optimal estimation can also be regarded as a filtering process.

In recent years, Kalman filters have been applied to the optimal parameter estimation of space objects, and various improved algorithms have been rapidly developed. In Ref. [138], an unscented Kalman filter (UKF) estimated the positions, rotation and motion speeds, mass, and moment of inertia of RSOs by processing photometric and attitude information; the UKF is superior to the original Kalman filter in parameter estimation. Moreover, an extended Kalman filter (EKF) [139] estimated the rotational and translational states and areas of space debris, and the mass was also determined by fusing photometric and astrometric data. Afterward, a noise-adaptive Kalman filter [140] with laser-vision data realized continuously accurate motion estimation of noncooperative targets. To sum up, the EKF is widely adopted for its fast response and ease of use, but its capability for high-order estimation needs verification; the UKF possesses high accuracy and simple computations, whereas its active anti-interference should be strengthened. Furthermore, the above algorithms are highly sensitive to linearization errors in nonlinear discrete filtering estimation: under high nonlinearity, the estimation errors caused by local linearization with the Jacobian matrix grow and can cause filtering divergence. An exogenous Kalman filter (XKF) can overcome these difficulties and offers higher reliability in nonlinear discrete systems [141, 142]. It uses the state estimates of a global observer to approximately linearize the highly nonlinear system, so errors from local linearization models are not introduced into the filtering estimation. The XKF inherits the advantages of the EKF and UKF, leading to more accurate and stable estimation.
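For readers unfamiliar with the filtering step, the linear Kalman filter underlying all of these variants can be sketched in a few lines. The 1D constant-velocity target, noise levels, and synthetic measurements below are assumptions for the example; the EKF, UKF, and XKF generalize the same predict-update loop to nonlinear models.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Linear Kalman filter for a 1D constant-velocity target.
    zs: noisy position measurements; returns filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    estimates = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)              # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Usage sketch: truth moves at 0.1 units/step; measurements add sigma=0.5 noise.
rng = np.random.default_rng(0)
truth = 0.1 * np.arange(200)
zs = truth + rng.normal(0.0, 0.5, truth.size)
est = kalman_track(zs)
```

After the filter converges, its position error is well below the raw measurement noise, because the constant-velocity model lets it average information over time.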

Meanwhile, other advanced optimal algorithms emerge constantly, with the multiview reconstruction of machine vision as their basis. The generalized iterative closest point (G-ICP) [143] with foreground and background segmentation achieved target registration and estimation, but complex computations hinder real-time parameter estimation. In Ref. [144], the capability of the DISP filter for multiobject estimation in different orbits was explored, but estimation quality decreased once the targets left the field of view. Afterward, stereo vision achieved motion estimation of free-floating space targets over a wider field of view [145]. Cooperative 3D vision [146] can estimate the model parameters and dynamic states of space objects and applies to congested scenarios through rapid computation. On this basis, a modified genetic algorithm [147] with visual cameras adaptively estimated the structural parameters of space debris, reducing errors thanks to high robustness and noise resistance to outliers. Furthermore, range imaging information accelerated the concurrent estimation of satellites and debris [148] and remains available under harsh sensing and lighting. A separable pseudolikelihood algorithm [149] then achieved fusion estimation of multisensor targets in complicated working conditions; Bayesian inference can facilitate separable likelihoods by belief propagation in the associated Markov random field. Besides, the Hough transform [150], keystone transform [151], Radon–Fourier transform [152], and time-frequency analysis [153] have also been applied to the parameter estimation of space objects. Time-frequency analysis, in particular, suits targets with micromotion characteristics [154], contributing to more accurate estimation of structural characteristics and motion forms.

In conclusion, photometric and optimal estimation technologies have contributed greatly to the parameter estimation of space objects as advanced algorithms, including the BRDF, XKF, and G-ICP, constantly emerge. These technologies will be research hotspots in future parameter estimation. Nevertheless, several problems still need better solutions: (1) space multiobjective estimation, (2) irregular target estimation in deeper space, and (3) micromotion parameter estimation.

4.3. Intention Recognition

Intention recognition is the process of inferring the intentions and behaviors of space objects from observed actions and their effects on the situation. Besides passive objects such as space debris and asteroids, some active targets execute orbital maneuvers that threaten security [155, 156]. Hence, intention recognition technologies are essential to improve the quality of early warning information and reduce the number of warnings, thus guaranteeing security. Accurate and rapid intention recognition from the acquired information is the main research purpose at this stage.

Early studies on intention recognition focused on recognition models. Various methods based on the transition of probabilistic state-space models (SSM) have been proposed for recognition systems from different perspectives [157–159]. Computational state-space models (CSSM) enabled the knowledge-based construction of Bayesian filtering for intention recognition, and their feasibility was demonstrated in a trial scenario. A Bayesian network [160] with logical programs can implement intention-based decision-making. Moreover, hidden Markov models [161] classified various behaviors, such as aligning space objects and avoiding obstacles. In Ref. [162], the HMM in Bayesian networks recognized changing intentions via speech recognition models, with naturalistic data used to train and validate the algorithm on both recognition time and recognition ratio. Based on these studies, recognition models have become noteworthy research. However, constructing a prior knowledge base and selecting large training samples are costly, and subsequent optimization is urgently needed for better intention recognition.
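The HMM-based behavior classification idea can be illustrated with a NumPy Viterbi decoder that recovers a hidden behavior sequence from discretized observations. The two behaviors ("station-keeping" vs. "maneuvering"), their transition and emission probabilities, and the observation coding are invented for the example, not taken from the cited studies.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence of a discrete HMM (log-domain
    Viterbi): pi = start probs, A = transitions, B = emissions."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)        # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)
        logd = scores[back[t], np.arange(N)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Usage sketch: state 0 = "station-keeping", state 1 = "maneuvering";
# symbol 0 = small tracking residual, symbol 1 = large residual.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 0, 1, 1, 1, 0, 0]
behaviors = viterbi(obs, pi, A, B)
```

The sticky transition matrix makes the decoder prefer sustained behavior segments over isolated flips, which is the smoothing property that makes HMMs attractive for labeling maneuver episodes.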

Afterward, artificial intelligence has facilitated wider research on intention recognition, with the information and behaviors of space objects integrated into intelligent recognition systems. Early automated planning [163] recognized the motion intention of targets. Among the representative algorithms, wide-area motion imagery (WAMI) [164] considered operational semantics for decisions, rapidly evaluating images to determine intention, and strengthened the recognition capability for space debris. Moreover, machine learning and ontology-based Bayesian networks [165] characterized the behaviors of the RSOs, where the physical (ontic) characteristics trained the networks. The CNN classified unknown behaviors, and adaptive Markov inference game optimization (AMIGO) [166] generated the training data. A generative adversarial network (GAN) [167] rapidly recognized satellite avoidance actions, and game theory can make training more accessible: the game learning between generative and discriminative models produced good outputs, improving training robustness. Game inference emphasizes stochastic modeling, propagation, detection, and tracking for the future SSA. Intention recognition is expected to incorporate intelligent decisions and game inference to meet accuracy and speed requirements in complicated situations.

As an emerging recognition technology, human-machine interaction can imitate human interaction with intelligent machines to minimize human-machine communication demands. In Ref. [168], a dynamic Bayesian network constructed intention-action-state scenarios for probabilistic intention inference. A leveled HMM [169] then modeled behaviors and recorded target data during trajectory tracking. Furthermore, mildly context-sensitive grammars apply to sensitive recognition given behavior constraints on the spatiotemporal trajectory. On the whole, human-machine interaction has only just begun in the SSA. Autonomous and intelligent interaction is promising as awareness capabilities increase, and multimode interactive space-based platforms can be established on intelligent interactive technologies.

In practical applications, intention recognition will be multilevel, multielement pattern recognition. Expert knowledge in the domain is expressed in a specific mode, with the datasets acquired from situations taken as feature sets; a mapping between these sets and expert models is established to compute similarity [170]. On another front, reducing information uncertainty is of great significance for intention recognition. Specific rules and semantics can be designed to express expert knowledge accurately and clearly. Matching acquired information against the reserved knowledge base yields results that draw on existing experience, thereby providing auxiliary information for final decisions in the SSA.

As mentioned above, accurate and rapid object identification, parameter estimation, and intention recognition of space targets constitute advanced target recognition technologies, which are essential for the space-based SSA. Their overall advancements and trends are summarized in Table 3. In contrast, object identification technologies are the most mature, whereas intention recognition needs deeper research.

Table 3: Advancements and trends of the target recognition technologies.

Object identification
(1) Comprehensive exploration
(2) Great diversity
(3) Close combination with the space-based SSA
(4) Capable of further optimization

Parameter estimation
(1) More emerging algorithms
(2) Emphasis on optical methods
(3) Close combination with the space-based SSA
(4) Lacking robust extraction of complex targets

Intention recognition
(1) Emphasis on recognition models
(2) Full utilization of AI technologies
(3) Few applications in the space-based SSA
(4) Few studies on active confidential targets

5. Target Monitoring

Target monitoring, as the decision-making stage, is the core technology of the space-based SSA; timely and accurate warning and collision avoidance are its ultimate purposes [171]. First, the tasks to be implemented in different space situations are indicated in Table 4. In the steady period, target monitoring emphasizes orbital prediction, tracking, and maneuver detection, while early warning and collision avoidance dominate the risky period. It is therefore essential to review and analyze these key technologies in the SSA.

Table 4: Primary tasks in different space situations.

Steady period
(1) Monitor space debris
(2) Monitor newly launched targets
(3) Determine orbital parameters and characteristics
(4) Track various satellites within their orbital cycles

Risky period
(1) Detect and characterize active threats
(2) Communicate target information
(3) Make defense decisions

5.1. Orbital Prediction

Orbital prediction of space targets, as the foundation of collision warning mechanisms and satellite measurement and control technology, has become a research hotspot in the SSA field. Efficient and accurate orbital prediction of resident space objects is a major challenge for space-based situational awareness: current orbital prediction based on physics models fails to reach the accuracy required for collision avoidance, and incidents have occurred, such as the 2009 collision between the U.S. Iridium-33 and Russian Cosmos-2251 satellites [172]. Predictably, the number of space objects and the conflicts among them will rapidly increase, demanding a greater ability to perform accurate and timely prediction of the RSOs. The limitations of current orbital prediction methods lie in the low accuracy of target dynamic models, sensor measurements, and orbital determination; for instance, atmospheric drag models introduce large uncertainty into orbital prediction in low Earth orbits [173]. Such events have prompted scholars to explore efficient methods for orbital prediction.

5.1.1. Analytical Prediction Models

Earlier orbital prediction research emphasized analytical and numerical methods. Analytic approaches are based on analytic solutions to the Lagrangian planetary perturbation equations, whereas numerical methods integrate the perturbation differential equations of space targets. Theoretically, numerical methods achieve higher accuracy than analytical ones but at lower computational efficiency, making it difficult to meet the demanding timeliness standards of orbital prediction. Therefore, prediction models are generally developed from analytical solutions.

In previous studies on prediction models, the von Zeipel canonical transformation was first utilized to develop analytical solutions for near-earth satellite motion under the harmonic terms J2, J3, J4, and J5 [174]. Meanwhile, Kozai [175, 176] proposed a mean-orbital-element method and obtained another solution to this problem. Subsequently, Lane [177, 178] developed an atmospheric density model and expressed atmospheric density as power functions with integer exponents, allowing the Brouwer model [174] to incorporate the atmospheric damping terms completely. In Ref. [179], the Delaunay-variable solutions were reconstructed with Poincare variables; positions were expressed as functions of time combined with the complete solutions of the Brouwer model, and this method was applied to the Navy space surveillance system. Afterward, Message [180] and Blitzer [181] separately performed the above studies on computers using the King–Hele atmospheric damping model. Miura [182] designed a simplified general perturbation (SGP) model for orbital prediction that avoids small divisors in eccentricity and the sine of orbital inclination. On this basis, the SGP4 was created as a fourth-generation model accounting for the long-term effect of atmospheric damping. The SGP4 combined with deep space perturbation (SGP4-DSP) [183] was introduced for orbital prediction from two-line element sets; first-order solutions of the main perturbation forces were obtained with a quasi-mean element method, providing a valuable reference for prediction models in complicated situations.
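The full SGP4 model is far too long to reproduce here, but its analytic backbone — advancing the mean anomaly and inverting Kepler's equation — can be sketched in a few lines. This is a pure two-body sketch: the J2 through J5 harmonics and atmospheric damping terms that the models above add are ignored.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def propagate_kepler(a_km, e, M0_rad, dt_s, tol=1e-12):
    """Analytic two-body propagation: advance the mean anomaly, solve
    Kepler's equation M = E - e*sin(E) by Newton iteration, and return
    the orbital radius (km) and true anomaly (rad)."""
    n = np.sqrt(MU / a_km**3)                  # mean motion, rad/s
    M = (M0_rad + n * dt_s) % (2.0 * np.pi)
    E = M if e < 0.8 else np.pi                # standard initial guess
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    r = a_km * (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return r, nu

# Usage sketch: one full period of a 7000 km circular orbit returns
# the target to its initial anomaly.
period = 2.0 * np.pi * np.sqrt(7000.0**3 / MU)
r, nu = propagate_kepler(7000.0, 0.0, 1.0, period)
```

The analytical models surveyed above refine exactly this closed-form step with secular and periodic perturbation corrections, which is what keeps them so much faster than numerical integration.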

The prediction models above run on single-machine systems. However, given the heavy computation and high complexity of orbital prediction for multitudinous space targets, single-machine systems struggle to meet timeliness requirements. To analyze and predict orbits in real time, simplified methods were adopted to reduce the number of iterative steps, but they failed to reach the required accuracy and reliability of the prediction results. Parallel computing was therefore developed, with multiple machines working in parallel for high-performance computation [184], and has been applied successfully in prediction. For instance, NASA utilized nearly 1,000 processors for the daily prediction of multiple spacecraft [185], initiating the most comprehensive measurements for orbital tracking, collision warning, and avoidance. Afterward, ESA designed a space cataloging system for the regional prediction of space debris [186] to achieve target tracking and collision warning; the system ran on 128 processors and can be expanded in scale. Moreover, NORAD constructed a cluster processing system with 3,000 processor nodes [187], used to track tens of thousands of orbiting targets and update them in real time. Furthermore, Geng et al. [188] first studied the effect of the fitting arc length of observed orbits and solar radiation pressure on the orbital prediction of GPS, GLONASS, Galileo, and BeiDou satellites, further deepening research on orbital prediction. On this basis, other approaches, including solar radio proxies [189] and chaotic orbital trackers [190], were explored to further improve the orbital prediction of multiple targets, reaching the desired accuracy in simulations.

Overall, analytical orbital prediction models are maturely developed, providing a good foundation for realizing the orbital prediction of space targets. It is predicted that further integration with advanced algorithms will be the focus of future research.

5.1.2. Machine Learning Algorithms

The maturation of machine learning has produced various advanced prediction algorithms. As mentioned above, the primary cause of the failure of physics-based prediction is the lack of required information on space environments and RSO characteristics. Therefore, the underlying pattern of orbital prediction errors can be learned from historical data with artificial intelligence to improve prediction accuracy. In Ref. [191], a machine learning approach to accurate orbital prediction was proposed in which an SVM model was established to reduce prediction errors; its generalization capability was validated in a simulated space environment [192]. Owing to its universal approximation capability and flexible network structure, an artificial neural network model was designed for the same purpose [193] and trained with historical prediction data of RSOs in similar environments; the simulations indicated that machine learning combined with the ANN could significantly improve orbital prediction. Besides, Peng and Bai [194] used Gaussian processes with machine learning to generate point estimates and uncertainties, and orbital prediction errors were reduced by the trained Gaussian model with reliable uncertainties. A comparison among the proposed SVM, ANN, and Gaussian processes [195] indicated that the ANN has the best approximation capability but tends to overfit the data; the SVM is far less prone to overfitting, whereas its performance is inferior to the ANN and Gaussian methods.

Besides the mentioned algorithms, available two-line elements and International Laser Ranging Service (ILRS) catalogs were utilized to validate machine learning algorithms [196], demonstrating the potential of machine learning to improve the accuracy of these catalogs. Moreover, a fusion of machine learning and the extended Kalman filter was proposed in Ref. [197], and a fused analytical solution was developed; this fusion greatly improved the accuracy of orbital prediction and can be generalized to systems with prediction algorithms other than the EKF. Afterward, Hartikainen et al. [198] designed a nonlinear latent force model (LFM) for the long-term orbital prediction of GPS satellites, with an integrated nonlinear Kalman filter and smoothing-based algorithm for approximate state and parameter inference; real-time computation was shown to be feasible in practical applications. Furthermore, machine learning algorithms were also explored to model the underlying pattern of orbital prediction errors of space debris from historical observations, aiming to improve future prediction performance [199]. An ensemble learning algorithm based on boosted trees was developed for error modeling and prediction, and the simulations indicated that the trained model could capture over 80% of the underlying pattern of the historical errors, achieving at least a 50% accuracy improvement.

Based on the research above, orbital prediction technologies have undergone long development, with analytical and numerical solutions initiated as elementary methods. Advances in artificial intelligence contribute to higher orbital prediction accuracy in the space-based situational awareness thanks to their autonomy and diversity. So far, machine learning methods, including the SVM and ANN, have been the mainstream intelligent algorithms for accurate orbital prediction, and more advanced technologies can be further explored and applied in the SSA.

5.2. Orbital Tracking

In the previous subsection, the orbital prediction of space objects was reviewed; this subsection emphasizes orbital tracking. First, determining the orbits of targets from the time history of measurements is the prerequisite of orbital tracking and cataloging, which consists of initial orbital determination (IOD) and continuous orbital tracking. The purpose of the IOD, especially with angles-only data, is to obtain an initial estimate close enough to the actual orbit for subsequent least-squares and Kalman filter processing to succeed [200].

Nevertheless, only the line-of-sight direction from the optical sensors to the targets is available, without range information. Even when the usual assumptions are all satisfied, including linear dynamics, coasting flight, a single sensor, and a sensor fixed at the center of mass, the well-known angles-only orbital determination problem requires solutions to the lack of range observability.

5.2.1. Angles-Only Determination Algorithms

Conceptually, if the orbital states were uniquely determined by the line-of-sight measurements, angles-only orbital determination would be an observable problem. Since more than one set of states can share the same line-of-sight time history, the problem is unobservable, and solutions need to be studied; they currently include dynamic-model, orbital-maneuver, multisensor-measurement, and sensor-offset algorithms.
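The scale ambiguity at the heart of the problem can be shown in a few lines: under force-free linear dynamics with the sensor at the center of mass, a trajectory and its scaled copy produce identical line-of-sight histories, so angles alone cannot recover the range. The numbers below are arbitrary illustrative values.

```python
import numpy as np

# Two coasting relative trajectories: the second is a 3x scaled copy of the first.
r0 = np.array([10.0, 5.0, 2.0])      # relative position (km), invented
v0 = np.array([-0.1, 0.02, 0.0])     # relative velocity (km/s), invented
times = np.linspace(0.0, 60.0, 7)

def los(r0, v0, t):
    r = r0 + v0 * t                  # coasting (maneuver-free) propagation
    return r / np.linalg.norm(r)     # unit line-of-sight vector

los_near = np.array([los(r0, v0, t) for t in times])
los_far = np.array([los(3.0 * r0, 3.0 * v0, t) for t in times])
# los_near and los_far are identical: range is unobservable from angles alone
```

The same ambiguity persists under any linear homogeneous relative dynamics (e.g., the Clohessy-Wiltshire equations), which is why maneuvers, extra sensors, or a sensor offset are needed to restore observability.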

The first class of algorithms relies on more complex dynamics modeling and analysis. The classical angles-only methods are the Gauss and Laplace algorithms. The Laplace method [201] used a set of three observed angles to obtain possible solutions, and a fourth dataset was added to eliminate the singularity in orbital determination [202], weakening it to yield results equivalent to the nonsingular-geometry case. Angles-only determination was then applied in a cylindrical coordinate system [203], and relative orbital elements [204] were further used. On this basis, a second-order relative motion model solved this problem [205]. An improved J2-perturbed dynamic model and a nonlinear measurement model validated maneuver-free angles-only navigation [206]. Polynomial chaos [207], which does not assume perfect measurements, can remove constraints and assumptions in the IOD to yield a more robust framework in which all available measurements are used without Keplerian dynamics. Overall, these algorithms provide the desired results for three sets of angular data with small time differences in actual observations. However, singularities appear for coplanar orbits or small separations.

The second class of algorithms relies on orbital maneuvers. Executing known maneuvers in suitable directions makes the problem observable, providing enough information to fully determine relative positions. For instance, an optimal orbital maneuver algorithm [208] applies to any maneuver type and a priori unknown trajectories. The degrees of observability (DOO) and the latent range information [209] of an orbital maneuver enabled high-precision relative position and velocity determination; the DOO describes observability levels, and the latent information improves the DOO determination. Observable and unobservable maneuver sets were then provided in Ref. [210]; unobservable constant-thrust maneuvers are reasonable during an approach trajectory. Luo et al. [211] designed a closed-loop multipulse sliding guidance strategy for orbital determination and discussed the sensitivities of the navigation and guidance to line-of-sight angle accuracy. Due to the complexity of the resulting expressions, explicit results are found only for simple trajectories known a priori; further optimization is required for more general trajectories.

The third class of algorithms is based on multisensor measurement. Chen and Xu [212] presented a double line-of-sight measuring scheme to obtain observability using visual cameras, and a distributed angles-only navigation method using multiple lines of sight was also proposed [213]. The line-of-sight measurements were coupled with gyro measurements and dynamical models in an extended Kalman filter to determine the relative attitude, position, and gyro biases. Based on these studies, the relative position and velocity of space targets can be estimated accurately enough using double line-of-sight measurement, whose observability is much improved compared with single line-of-sight angles-only determination. Moreover, the separation angle between the two line-of-sight vectors affects the estimation errors and observability. Furthermore, Zhu et al. [214] utilized prior information on space targets via multisensor measurement and dictionary-learning algorithms, realizing vision-based orbital determination in tests.

The final class of algorithms exploits the lever-arm effect of a sensor offset from the center of mass. A solution to the observability problem was developed and validated in Ref. [215]; it required only a camera offset from the chaser's center of mass and small rotations, without orbital maneuvers, high-order models, prior geometric knowledge of the targets, or additional cameras. Range becomes observable when the camera is offset from the chaser's center of mass. Ref. [216] developed a near-analytical orbit solution when a camera offset is available, with three or more LOS observations performed on either the center of mass of an object or on known object features. Gong et al. [217] derived the analytical covariance for the solution of angles-only relative orbits. Afterward, a more compact and improved solution was created by state-augmentation least squares [218]. In these studies, the sensor offset enhanced observability and excluded mirror solutions; moreover, relative state observability was explored, and observable conditions on the sensor offset were obtained. As mentioned above, many studies on angles-only orbital determination have been performed, while limitations remain in congested situations. Therefore, more advanced algorithms based on artificial intelligence should be explored to better solve angles-only problems.
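The lever-arm effect can be demonstrated with the same scaled-trajectory pair used for the coasting case: once the camera sits at a known offset from the chaser's center of mass, the offset enters the measured direction, so a scaled copy of the orbit no longer reproduces the same line of sight and the range ambiguity is broken. All values below are invented.

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])        # camera offset from the c.o.m. (km, known)
r0 = np.array([10.0, 5.0, 2.0])      # relative position (km), invented
v0 = np.array([-0.1, 0.02, 0.0])     # relative velocity (km/s), invented

def los_from_camera(scale, t):
    # target direction seen by the offset camera for a scaled trajectory
    r = scale * (r0 + v0 * t) - d
    return r / np.linalg.norm(r)

u_true = los_from_camera(1.0, 30.0)
u_scaled = los_from_camera(3.0, 30.0)
# u_true != u_scaled: the offset makes the range observable
```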

5.2.2. Improved Filter Tracking Algorithms

Next, the orbital tracking of resident space objects is analyzed, in which state estimation is essential. The preliminary scheme is to track a single target with one sensor. Jones and Vo [219] proposed a new Bernoulli filter for the orbital tracking of a single RSO, employing a birth model based on the non-Gaussian propagation of probability density functions for the admissible areas and target states. The advantages of this algorithm were demonstrated by tracking known and newly discovered objects in near-geosynchronous orbits. Nevertheless, single-target orbital tracking cannot meet the requirements of target monitoring, so more studies have emphasized the joint tracking of multiple targets using cooperative sensors. Hussein et al. [220] proposed an algorithm based on finite set statistics (FISST), a Bayesian approach allowing joint estimation of existence, type, and track, aiming at comprehensive characterization and data association in situational awareness. In particular, to reduce the computational burden entailed by the FISST, a Gaussian mixture approximation was employed, applied not to the first moment of the full FISST update equations (as in the GM-PHD) but directly to the full FISST equations. Moreover, Jia et al. [221] developed a separation and extended information filter (SEIF) algorithm to track space targets, in which communication delay was considered in multisensor cooperative sensing. It was verified in a tracking scenario supported by the NASA mission analysis tool; the SEIF performs better when communication loss exists among the sensors.
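The filters above all share a predict-correct core. As a minimal, hedged sketch of that core (a plain linear Kalman filter on a 1-D constant-velocity target, far simpler than the cited Bernoulli, FISST, or SEIF filters), with every noise level and value invented:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 1e-4 * np.eye(2)                    # process noise (invented)
R = np.array([[0.25]])                  # measurement noise (invented)

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q       # predict
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # correct
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
truth = np.array([0.0, 0.5])            # true position and velocity
x, P = np.zeros(2), np.eye(2)
for _ in range(100):
    truth = F @ truth
    z = H @ truth + rng.normal(0.0, 0.5, 1)
    x, P = kf_step(x, P, z)
# after 100 noisy position fixes, the unmeasured velocity is recovered
```

Real RSO trackers replace F with orbital dynamics (and the EKF linearization), and the multitarget filters cited above additionally manage target existence and data association on top of this core.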

Furthermore, fuzzy uncertainty overlap and false associations are common due to noise in short-arc trajectories. To solve this problem, Stauch et al. [222] proposed a robust multitarget tracking algorithm using a constrained admissible region-multiple hypothesis filter (CAR-MHF). Applying the CAR-MHF to a dense population of synthetically created uncorrelated tracks was highly challenging, and the accuracy of data association was improved by integrating joint probabilistic association with reverse smoothing. Besides, Jones et al. [223] compared traditional multitarget filtering models with the corresponding target tracking algorithm and analyzed orbital tracking performance based on a labeled multi-Bernoulli filter. Given that observations may be corrupted by undesirable telescope drift due to mount jitter and the uncompensated diurnal motion of stars, Hagen et al. [224] developed a drift compensation algorithm based on the joint estimation of sensor drift, telescope observation objects, and star states. Single-cluster probability hypothesis density filters were then designed for group tracking; the sensor drift can be obtained by estimating the collective motion of the stars, thereby correcting the estimates of the moving objects. This study contributes to the orbital tracking of large clusters of objects.

On another front, to solve the unstable tracking of bright low-orbit space targets, Xu and Wang [225] proposed a snake algorithm, an improved gradient vector flow method using active contour models, to search object contours on CCD images in real time. Valid results were obtained when the initial contour enclosed the target. This algorithm overcomes the tracking error caused by a fixed window and improves tracking robustness. In Ref. [226], a new orbital tracking algorithm based on a composite weighted average consensus filter (CWACF) was developed by integrating nonuniformly distributed nonlinear filters. To accommodate different sensor accuracies and onboard computing power, the EKF and a sparse-grid quadrature filter (SGQF) were combined as local filters on different sensors, and neighborhood estimation was performed within the consensus framework to reach better performance. The CWACF thus balances estimation accuracy and computational cost.
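The inverse-covariance (information-form) weighting underlying such weighted-average fusion schemes can be sketched for two local estimates; this is a generic illustration, not the CWACF of Ref. [226], and the states and covariances are invented.

```python
import numpy as np

# Local estimates from two sensors of different quality (invented values):
x1, P1 = np.array([100.2, 1.9]), np.diag([0.5, 0.05])   # accurate sensor
x2, P2 = np.array([101.5, 2.3]), np.diag([2.0, 0.20])   # coarse sensor

W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_f = np.linalg.inv(W1 + W2)            # fused covariance (information form)
x_f = P_f @ (W1 @ x1 + W2 @ x2)         # fused state, weighted toward sensor 1
```

The fused covariance is tighter than either local one, and the fused state lies between the local estimates, pulled toward the more accurate sensor; a consensus filter iterates this kind of weighted averaging across a sensor network.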

Overall, orbital determination and tracking are important parts of target monitoring. They are closely connected: orbital determination is the premise, and orbital tracking is the executing purpose. The studies above indicate the substantial advancement of the two technologies, with various algorithms emerging in the process, such as the SEIF, FISST, and CAR-MHF. Nevertheless, more advanced algorithms for data association and constrained nonlinear estimation are required to implement robust orbital tracking, and their applications in more complicated scenarios need further exploration.

5.3. Maneuver Detection

Detecting the maneuvers of space objects from retrievable historical data has become an essential mission in the SSA, especially for active objects without available operational information. Real-time detection is required to react adequately to spacecraft anomalies and possible threats to nearby space assets. Maneuvers of active objects are detected while recording the patterns and trends in maneuver types and magnitudes. Figure 4 shows the process of maneuver detection operations [227].

Maneuver detection technologies in the SSA involve two kinds of algorithms: (1) selecting parameters sensitive to maneuvers as detection characteristics and (2) improving measuring capabilities with advanced algorithms so that the requirements on these parameters are reduced and the influence of maneuvers on continuous tracking can be ignored. This section reviews and discusses maneuver detection technologies based on these two kinds of algorithms.

5.3.1. Sensitive Parameter Characterization Algorithms

In the first kind of algorithm, the emphasis is on selecting sensitive parameters, without specific requirements on the measuring means. Commonly selected parameters include the longitude and height of the geosynchronous orbit as well as the semimajor axis and eccentricity. Huang et al. [228] selected the semimajor axis and eccentricity as the parameters to detect target maneuvers. However, this method only applies to a single-pulse maneuver along the tangential velocity direction at perigee or apogee in the orbital plane, and strong prior constraints are also required. Similarly, the semimajor axis and eccentricity were selected as characteristics to detect in-plane maneuvers of targets in low-earth orbit [229]; the maneuver scheme was determined by comparing these characteristics with their respective boundary values, and the influence of orbital perturbations was also analyzed. Roberts and Linares [230] selected the longitude as the maneuver detection characteristic for space targets in geosynchronous orbit, and available two-line element data were used to train convolutional neural networks to predict and detect orbital maneuvers. However, the applicability to other orbits remains to be proven.
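A toy version of semimajor-axis-based detection, not the method of Ref. [228], flags a maneuver when the jump between consecutive catalog values exceeds a robust, MAD-based threshold; the noise level, jump size, and gate are invented.

```python
import numpy as np

# Synthetic catalog time series: noisy semimajor axis with an injected raise.
rng = np.random.default_rng(2)
sma = 7000.0 + rng.normal(0.0, 0.02, 60)   # km, invented catalog noise
sma[40:] += 1.5                            # injected 1.5 km maneuver at epoch 40

def detect_jumps(series, k=6.0):
    # flag epochs whose first difference exceeds k robust standard deviations
    d = np.diff(series)
    mad = np.median(np.abs(d - np.median(d)))
    return np.where(np.abs(d) > k * 1.4826 * mad)[0] + 1

epochs = detect_jumps(sma)
# epochs contains exactly the injected maneuver epoch, 40
```

The MAD is used instead of the sample standard deviation so that the maneuver itself does not inflate the threshold; real implementations must also handle the perturbation-driven secular drift that the cited works analyze.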

Moreover, Liu et al. [231] designed a weighted fusion of multihypothesis tests (WFMHT) using space-based angles-only measurements for maneuver detection, modeling the residual between the sensor measurement and the corresponding predicted measurement as the characteristic. According to linear optimal filtering theory, this residual behaves as Gaussian white noise, which is monitored for continuous detection when the targets maneuver. Nevertheless, the sensor measurements can be contaminated by non-Gaussian errors, so the applicability of this algorithm in practice is limited. Kelecy and Jah [232] presented a similar detection method, in which filter data residuals were selected as the main test parameters, and consistency tests of position and velocity estimates between the filtering and smoothing updates were performed to detect an orbital maneuver. Furthermore, the position and velocity of the overlapping trajectories obtained by orbital determination fitting before and after the maneuver were utilized to determine the maneuver period accurately, but this required more accurate prior information.
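A minimal residual-based test in the spirit of these methods: the normalized innovation squared of a well-modeled filter is chi-square distributed, so samples beyond a gate flag a maneuver. The residuals, bias, and gate below are synthetic and invented.

```python
import numpy as np

# Filter innovations should be zero-mean with known variance S; an unmodeled
# maneuver biases them.
rng = np.random.default_rng(3)
S = 0.04                                   # predicted innovation variance (invented)
resid = rng.normal(0.0, np.sqrt(S), 80)
resid[60:] += 1.0                          # unmodeled maneuver from epoch 60 on

nis = resid**2 / S                         # normalized innovation squared
alarm = np.where(nis > 16.0)[0]            # 4-sigma gate (chi-square, 1 dof)
# all alarms fall at or after the maneuver epoch
```

The gate trades false warnings against detection delay, which is exactly the trade-off the non-Gaussian error problem noted above makes hard in practice.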

Similarly, two methods were developed for detecting target maneuvers based on the consistency of two-line element time series [233]. Due to the requirement for a high update frequency, this approach is limited to impulsive maneuvers of targets in low-earth orbit, with application only to the regular monitoring of specific regions. Overall, selecting sensitive parameters as characteristics for maneuver detection is increasingly practiced, but more practical applications should be developed to overcome the limitations under certain circumstances.

5.3.2. Joint Measurement and Processing Algorithms

The second kind of algorithm emphasizes the measuring means and processing algorithms: improved estimation algorithms are designed to realize precise orbital determination with redundant measurements, and maneuvers are detected from the residual between the actual measurements and the corresponding predictions. Many scholars have performed studies on such detection technologies, where joint measurement and processing based on improved algorithms has become the mainstream direction.

For instance, Jia et al. [234] designed an improved Kalman filter using angles-only measurements from multiple cooperative satellites based on orbital determination algorithms. The designed joint filter continued to determine the orbits accurately without any adjustment during target maneuvers; however, the required satellites are not cost-efficient, making the algorithm impractical. Goff et al. [235] determined the starting and ending maneuver epochs from fixed tracking residuals; still, it takes a long time to recover accurate observation of the targets after orbital maneuvers are detected, and this algorithm proved inapplicable to detecting noncooperative targets with malicious behaviors. In Ref. [236], historical two-line element data of geosynchronous satellites were used to characterize orbital maneuvers. The key of this algorithm was to enhance the ability to detect small changes amid the noise in the raw data using basic signal processing techniques, and its feasibility for maneuvers of electric and chemical propulsion was demonstrated. In Ref. [237], observation data processing was performed by joint filtering, where the estimate of the secondary filter constrained the leading filter in real time. It was found that the detection accuracy depends on the state model and threshold, whereas the optimization of these parameters was ignored. Singh et al. [238] detected orbital maneuvers by joint optimal control and multiple hypothesis tracking; an optimal framework used as postprocessing resolved uncorrelated trajectories caused by maneuvers, and the total velocity increments served as cost functions to determine the feasibility of maneuvers. This algorithm is more applicable to fuel-optimal maneuvers of space objects.

Meanwhile, other advanced joint detection technologies have gradually developed. Wang et al. [239] presented a maneuver detection algorithm based on a probabilistic decision model, where the range rates among satellites measured by laser range finders were selected as the characteristic parameters. According to the Neyman-Pearson criterion, the decision thresholds for maneuvers can be generated adaptively from the constraints of false-warning probability and the fluctuation characteristics of the input data. Nevertheless, the means and accuracy of range measurement limit this algorithm. Li et al. [240] designed an improved detection algorithm based on empirical mode decomposition (EMD), which processes the input data to obtain low-frequency components that are then fitted so that orbital maneuvers can be detected. In Ref. [241], multiscale detection of the orbital maneuvers of geosynchronous objects was performed; the EMD also adaptively estimated the filter thresholds based on the median absolute deviation. Furthermore, the previously mentioned GAN was applied to space target detection [242], recovering the complicated, high-dimensional distribution of expected in-orbit behaviors; several unexpected orbital evolutions were detected. Afterward, data-driven Gaussian binary classification (GBC) realized the maneuver detection of resident objects in different trajectories and reached higher detection accuracy [243]; moreover, the GBC enabled more rapid decisions in maneuver detection [244].
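The Neyman-Pearson threshold selection mentioned above reduces, for a Gaussian residual, to inverting the normal CDF at the allowed false-warning rate; the sketch below is a generic illustration with invented numbers, not the decision model of Ref. [239].

```python
from statistics import NormalDist

def np_threshold(sigma, p_false_alarm):
    # two-sided test on a zero-mean Gaussian residual with std sigma:
    # the smallest gate whose false-warning probability is p_false_alarm
    return sigma * NormalDist().inv_cdf(1.0 - p_false_alarm / 2.0)

# e.g., range-rate residual noise of 0.05 (units invented) and a 1% budget
thr = np_threshold(sigma=0.05, p_false_alarm=0.01)
# thr is about 2.576 * sigma, i.e. roughly 0.129
```

Raising the allowed false-alarm probability lowers the gate and speeds detection, which is the adaptivity the cited probabilistic decision model exploits.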

As mentioned above, maneuver detection technologies are developing rapidly for space-based situational awareness, and various approaches have been successfully applied to the detection of space objects. Nevertheless, existing studies have limitations in their applications: more sensitive characteristic parameters are required to achieve more accurate and rapid detection of orbital maneuvers, while more advanced processing technologies such as artificial intelligence are being improved. Finally, the reviewed algorithms are summarized in Table 5, with their principal characteristics listed as references for further research.

Table 5. Reviewed maneuver detection algorithms and their principal characteristics.

Sensitive parameter characterization:
  Semimajor axis and eccentricity: in single-pulse maneuver along a fixed direction
  TLE: only applicable in the geosynchronous orbit
  WFMHT: interfering with errors and lacking applicability
  Filter data residual characterization: demand for accurate prior information

Joint measurement and processing:
  Kalman filters with angles-only: continuous accuracy but high cost
  Fixed tracking residual: inapplicable to noncooperative active objects
  Joint filtering: accuracy relying on state models and thresholds
  Multiple hypothesis tracking: more applicable to fuel-optimal maneuvers
  Probabilistic decision models: adaptive but limited by measurement means and accuracy
  EMD: geosynchronous multiscale adaptive detection
  GAN: powerful capabilities of recovery and detection
  GBC: high accuracy with rapid decision-making

5.4. Monitoring Early Warning

In-orbit collisions between spacecraft and space objects, including debris, asteroids, and some active targets, are catastrophic. Monitoring early warning, security defense, and resource utilization are major technical challenges for the international space community and an inevitable choice for protecting survival and development. Therefore, after the task of maneuver detection is executed, monitoring early warning becomes the key technology for rapidly making correct judgments and decisions; it must be performed to mitigate risks before collision avoidance.

To better understand the early warning mechanism, a specific warning process containing multiple important steps, such as parallel computing and intersection result analysis, is shown in Figure 5 [245], where SSR means space-state representation. In this process, data collection covers orbits, space states, characteristics, and errors. The data are imported into databases, and the computations are initiated. Next, the early warning criterion is determined based on the surrounding environment, and the warning tasks are triggered with parallel computations. Dangerous intersection results are then filtered and analyzed in detail after the computations finish, and precise analysis is performed with updated data and improved approaches. After risk confirmation, the information is reported, and warning evaluation is conducted to further improve the warning tasks. This process provides a solid theoretical foundation for further research.

Meanwhile, monitoring early warning consists of various technologies. According to their technical principles, space-based early warning methods can be divided into visible and infrared spectral observations. Furthermore, some close-space observations, including flyby, companion, attachment, and sample-return detections, are also essential supplements to early warning [246]. Among them, visible observation is the primary method of asteroid monitoring; it relies on sunlight reflected from the asteroid surface to determine positions by repeated photographic observation of the same region of space at different times. Space-based visible telescopes can be flexibly placed in low-earth orbit, at the earth-sun Lagrange points, in Venus orbit, and at other positions, which overcomes most interference and eliminates the dead observation angles in solar illumination regions, significantly improving warning efficiency [247]. As a particular optical observation method, infrared observation enjoys a low space-light background, giving it significant advantages in daytime observation and spectral analysis. It can be utilized to identify target surface materials and estimate temperatures, albedo, and other critical parameters, providing precise and timely information for early warning.

Compared with the orbital prediction and tracking discussed above, early warning strongly emphasizes practical applications. For instance, in the NASA MIDEX program, the wide-field infrared survey explorer (WISE) was designed [248]. It can survey the entire sky in four bands from 3.3 to 23 microns with a sensitivity around 1,000 times higher than that of midinfrared astronomical satellites. In 2011, WISE discovered the first earth Trojan asteroid (2010 TK7), and over 34,000 new asteroids have been detected over the mission; 135 of them are near the earth, including 19 high-risk ones. WISE has contributed significantly to space early warning.

After WISE, the near-earth object surveyor (NEO Surveyor) was designed by NASA to discover and characterize asteroids and comets posing potential risks to the earth. Infrared observation, focal-plane modules, and electronic technologies are the core of the NEO Surveyor, which comprises the spacecraft bus, cryogenic control systems, aperture covers, and observatory assembly, integration, and test (AI&T) [249]. The spacecraft bus is well suited to the requirements of the space survey mission, and the cryogenic control systems provide the bulk temperatures, thermal stability, and sensitivity required by the infrared devices. The aperture covers are deployed in orbit to prevent contamination, while objectives and technologies similar to those of previous observatories are adopted for the AI&T. The NEO Surveyor is planned for launch in 2026, aiming to complete a 90% census of 140 m class near-earth asteroids.

Following NASA, Canada launched the near-earth object surveillance satellite (NEOSSat), the first dual-mode space telescope for monitoring near-earth asteroids, comets, and resident targets [250, 251]. Track-rate imaging and metric observation of fast-moving debris in low-earth orbit are performed at rates up to 215 arcseconds per second on the platform. Moreover, fine-point imaging of near-earth objects can monitor fields and known asteroids/comets at low solar elongations (under 15 degrees during the eclipse period). NEOSSat is the first application of multimission microsatellites to space-based early warning.

At present, space-based observations of small objects primarily come from occasional observations with high-resolution telescopes such as Spitzer [252] and Hubble [253] (NASA). Moreover, the Astro-F satellite (JAXA) and the global astrometric interferometer for astrophysics (GAIA) probe (ESA) have also contributed to observing near-earth small objects [254]. The Sentinel, NEOCam, and the constellation of heterogeneous wide-field near-earth object surveyors (CROWN) are proposed for future early warning [255].

In conclusion, monitoring early warning technologies possess the significant advantages of wide monitoring ranges, diverse tracking means, and high warning accuracy, making early warning a promising mainstream direction. Based on these typical schemes, the advancements and trends of monitoring early warning are summarized as follows:
(1) Space-based asteroid warning projects emerge constantly in different countries, and some remarkable results have been realized.
(2) Visible telescopes and additional means implement the space-based observations of small objects. The launched early warning satellites for near-earth objects are NEOSSat and WISE, whereas other schemes have not been formally initiated.
(3) Improving the timeliness, accuracy, and confidence of warning technologies will be the research emphasis to further perfect the space-based systems.

5.5. Collision Avoidance

After receiving an early warning on space debris and asteroids, a vital part of the SSA is to predict and avoid satellite collisions to protect space assets. Research on collision avoidance technologies focuses on collision prediction and maneuver strategies: the core of collision prediction is probability computation algorithms, while avoidance algorithms are the essence of strategy design. In this section, these technologies are reviewed and discussed separately.

5.5.1. Collision Probability Computation Algorithms

First, the theory of collision probability computation builds on the establishment and improvement of satellite relative motion models, supporting the design of evasive maneuvering strategies. As relative motion theory advances, changes in the relative state between a satellite and targets are predicted accurately, enabling prediction of the evolution of the orbit error covariance and deduction of the satellite's orbital state under high-precision orbital dynamics models. Collision risks are determined by exploiting this information and the maximal probability of satellite collision. The set collision-risk judgment criteria then determine whether to issue a collision warning and take avoidance measures.

In the early stage of collision risk assessment, Box region algorithms were broadly applied to space shuttle collision prediction. The Box region works primarily according to distance to assess the collision risk between the spacecraft and targets. However, the regions demarcated by this algorithm are significantly conservative; although it has found wide application in the field, considering only the position relation in collision prediction inevitably leads to a higher false warning rate and unnecessary orbital maneuvers. Given the time differences between satellites and orbiting targets, Bredvik and Strub [256] designed a feasible launch window based on the minimal distance, aiming to avoid collisions of launching satellites. Besides, Chan [257] analyzed the collision problems of orbiting objects, proposing to express the collision probability of satellites with a Gaussian distribution when investigating the encounter plane between satellites undergoing short-term relative motion; a theoretical algorithm was also designed for dimension-reduced computation of collision probability. Furthermore, Foster and Estes [258] simplified the computational formulas of collision probability through polar coordinate transformation for numerical integration, solving the integral with a fixed step size. Based on these studies, Patera [259] and Alfano [260] both proposed improved solutions for computing collision probability and achieved relatively ideal results in reducing dimensions, simplifying computation, and increasing computational efficiency.
Furthermore, to simplify the 2D integral over the encounter plane, Alfano approximated the integral area functions by Gaussian error functions to achieve a dimension-reduced integral, while Patera [261] proposed transforming the 2D area integral into a 1D curve integral, facilitating the solution of collision probability for irregular spacecraft. On this basis, the 1D curve integral further simplified collision probability expressions into infinite series, leading to more precise computation with minor errors.
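To make the encounter-plane formulation concrete, the brute-force sketch below (deliberately not one of the cited dimension-reduction methods) integrates the bivariate Gaussian position-error density over the combined hard-body disk and checks it against the closed form for the isotropic, zero-miss case; all radii and covariances are illustrative.

```python
import numpy as np

def collision_probability(R, mu, sx, sy, n=400):
    # midpoint-rule integral of the 2-D Gaussian over the disk of radius R,
    # in polar coordinates on the encounter plane
    r = (np.arange(n) + 0.5) * (R / n)
    th = (np.arange(n) + 0.5) * (2.0 * np.pi / n)
    rr, tt = np.meshgrid(r, th)
    x, y = rr * np.cos(tt), rr * np.sin(tt)
    pdf = np.exp(-0.5 * ((x - mu[0]) ** 2 / sx**2 + (y - mu[1]) ** 2 / sy**2))
    pdf /= 2.0 * np.pi * sx * sy
    return float(np.sum(pdf * rr) * (R / n) * (2.0 * np.pi / n))

# isotropic, centered encounter: closed form is 1 - exp(-R^2 / (2 sigma^2))
p_num = collision_probability(R=10.0, mu=(0.0, 0.0), sx=50.0, sy=50.0)
p_ref = 1.0 - np.exp(-(10.0**2) / (2.0 * 50.0**2))
# p_num matches p_ref to high accuracy
```

The cited work replaces this 2D quadrature with error-function, 1D curve-integral, or series forms precisely because such grids become expensive and ill-conditioned for very small probabilities and elongated covariances.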

From another aspect, collision risks between space debris and satellites can be evaluated and analyzed via collision probability computation. This approach simplifies the integration of the 3D Gaussian distribution based on the relative state information of the satellite and targets, reducing the dimensionality of the collision probability computation by projecting onto the encounter plane. The integral is further simplified using the 1D curve integral and infinite series, yielding a simple expression for collision probability. Nevertheless, in practical engineering, since the state information of satellites and targets is observed in orbital or inertial systems, the coordinate relationship between the orbital and encounter systems should be clarified. Furthermore, given the uncertainty of error information, the maximal collision probability between satellites and targets should be explored in depth, which can also be used to optimize the early warning strategies in satellite collision risk assessment.

5.5.2. Maneuvering Avoidance Algorithms and Strategies

The design of maneuvering avoidance strategies aims to take timely, effective measures under constraints to avoid space collisions once a warning is issued, and the choice of collision probability safety thresholds is of great significance to the whole maneuvering strategy. After the need for avoidance is determined, the maneuverability of the satellite becomes a vital reference index. The direction and magnitude of the avoidance maneuver are the key calculation objects, ultimately determining the thrust's size and direction and the maneuver's execution time. The crux of the problem is to apply the optimal strategy to the satellite's evasion maneuver while keeping the evasion cost insignificant. In essence, if orbital prediction were perfect without orbital errors, satellite collision risk would be a zero-or-one problem: the satellites collide when the distance between satellite and target is less than the sum of the two envelope radii. With the introduction of orbital errors, collision prediction becomes a probability problem. When the collision probability between satellites and targets exceeds the specified threshold, an instantaneous correction velocity is applied to the satellite for maneuvering avoidance, where the magnitude and direction of the maneuver velocity should be considered.

For the relevant computations and the design of maneuver strategies, spherical collision bodies were proposed to replace cylindrical ones, eliminating the instability caused by deviations in target size data and being easier to solve than a cylindrical envelope. Considering thrust combined with the spacecraft's orbit control and station keeping, Chan [262] derived expressions for maneuver speeds based on collision probability. Alfano [263] and Mueller [264] both addressed the difficulties of collision avoidance and developed appropriate avoidance strategies: Alfano primarily analyzed the instantaneous maneuvering speed, while Mueller expressed the avoidance constraints and orbital positions as control functions. The optimal avoidance strategy problem was then cast in the standard form of an optimal control problem and discretized, transforming the original control problem into a nonlinear program for numerical solution. In-orbit avoidance control [264] applies to pairs of space targets at both large and small relative velocities, removing collision risks after maneuvering with high robustness. Furthermore, nonlinear optimization [265] was used to investigate optimal avoidance strategies, and nonlinear programming yielded minimal maneuver speeds under the constraint of a minimal encounter distance. Overall, collision avoidance strategies have evolved toward fast, robust, and highly tunable solutions.
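The "optimal control → nonlinear programming" reduction can be illustrated for a single impulsive maneuver. In the sketch below, the 2×3 sensitivity matrix J, mapping the impulse to the change in the encounter-plane miss vector, is a hypothetical stand-in for the linearized relative dynamics over the lead time (in practice it comes from the state transition matrix); the program finds the smallest delta-v that pushes the miss distance beyond a keep-out radius.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linearized sensitivity of the encounter-plane miss vector
# to an impulsive delta-v applied at the maneuver epoch [m per m/s].
J = np.array([[3.0e3,   0.0,   0.0],
              [  0.0, 1.5e3, 5.0e2]])
miss0 = np.array([120.0, -80.0])   # predicted miss vector [m]
keep_out = 500.0                   # required separation [m]

def dv_magnitude(dv):
    """Objective: magnitude of the avoidance impulse."""
    return np.linalg.norm(dv)

def separation_margin(dv):
    """Inequality constraint: >= 0 when the post-burn miss is safe."""
    return np.linalg.norm(miss0 + J @ dv) - keep_out

# Discretized optimal-control problem as a nonlinear program (SLSQP).
res = minimize(dv_magnitude, x0=np.full(3, 0.1), method="SLSQP",
               constraints=[{"type": "ineq", "fun": separation_margin}])
dv_opt = res.x                     # minimal-magnitude avoidance impulse [m/s]
```

With these illustrative numbers the cheapest impulse acts mainly along the most sensitive axis; the same pattern extends to multi-impulse problems by stacking decision variables and constraints.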

As for collision avoidance of formation-flying satellites, maneuvering along the gradient of the collision probability density [266] realized collision avoidance for formation satellites in elliptical orbits; the navigation accuracy of formation satellites obtained from GPS information was also exploited for collision avoidance. Small velocity pulse corrections [267] apply to formation-flying satellites when the collision probability exceeds the safety threshold. On this basis, boundary gate theory established Hamiltonian functions and obtained optimal control laws [268]. Boundary gate trajectories separate the state space into collision and noncollision zones, realizing anticollision at the sun-earth libration points [269]. In summary, maneuvering avoidance strategies for formation satellites are being actively developed.

It is necessary to develop dedicated simulation software for satellite collision avoidance. Building on the above theoretical studies, many countries have worked to solve the collision problems posed by defunct payloads, space debris, and other objects to operational satellites in practical engineering, and various complete and applicable collision avoidance systems have been established. NASA developed a collision avoidance system for the safe operation of spacecraft adopting the aforementioned regional algorithms: based on satellite risk assessment, red and yellow limits [270] are set for maneuvering avoidance through computation and assessment, providing corresponding strategies for specific situations. Moreover, ESA established collision avoidance analysis software to predict and evaluate collision risks within seven days [271]; by this means, multiple avoidance maneuvers have been successfully performed for orbiting spacecraft, greatly reducing collision risks with debris and asteroids. JAXA subsequently developed a spacecraft collision risk assessment system for observation satellites, utilizing TLE catalog data and radar measurements to conduct approach analysis and collision probability computation for risky targets [272].

Overall, various solutions to the existing problems of maneuvering avoidance strategies have been presented. Nevertheless, for higher efficiency, the computational formula for the maneuver distance should be established based on the expected threshold for satellites and space targets: the maneuver should reduce the collision probability below the expected threshold with the minimal maneuver distance, under the premise of minimizing energy consumption. These problems should be addressed in subsequent research.

6. Research Prospects

In the previous sections, the key technologies of space situational awareness are reviewed and discussed. On this basis, future research directions for these key technologies are outlined. Furthermore, emerging technologies are being applied to the SSA as research deepens. Typical prospects, including multiagent and synergetic constellation awareness technologies, are worth analyzing for future research.

6.1. Future Directions of Key Technologies

The future directions of specific technologies are indicated based on the previous reviews. For data acquisition in the SSA, restricted by current sensing technologies and costs, the contradiction between the quantity and quality of sensors and the growing requirements will persist. Therefore, coordinating and allocating sensor resources to optimize the system's overall awareness capability is promising. Moreover, multisource data fusion is an essential approach to data acquisition under uncertain situations. Nevertheless, multimodal overlaps make the distributions of multiple data domains differ, so ideal results are hard to acquire with traditional "single source domain → single target domain" methods. Furthermore, the open set of heterogeneous labels still exists in multidomain overlapping transfers; it can confuse irrelevant information in the source domain with the recognition task, weakening cross-domain fusion recognition capability. In practice, data processing and acquisition depend mainly on space-based sensors, monitoring stations, and cooperation networks, which should also be developed collaboratively.

Then, mathematical recognition models with generalization capability should be explored in depth to transfer and fuse information distributions across different modes. The models and knowledge are expected to guide deep learning and training with regard to the positions, forms, structures, attributes, and functions of space targets, thereby reducing the demand for input data. At this stage, the adaptive mechanisms of recognition models for multitask scenarios and small-sample data remain imperfect. In addition, various intelligent algorithms are applicable to object identification and parameter estimation in the SSA, but significant inadequacies, including large training samples, multiobjective classification, and time-consuming training, should be overcome in future work. Besides, classifier design for different categories in homologous samples is worth exploring, while learning transfer for heterogeneous data should be further studied in small-sample settings. Methods of reducing deep transfer costs and improving efficiency will also be mainstream research directions alongside small-sample transfer. As for parameter estimation, space multiobject estimation is expected to be developed based on improved optimal estimation algorithms; irregular target estimation, especially involving micromotion characteristics, will also become feasible by integrating BRDF with the XKF. The development of space intention recognition tends toward a multilevel, multielement pattern in which intelligent decisions and game inference will be commonly used for accurate and rapid recognition. Meanwhile, information uncertainty can be addressed by improving hidden Markov models and human-machine interaction.

Finally, the coverage of space target monitoring systems is still insufficient, with long monitoring intervals. Responses to satellite maneuvers and debris breakups need to be accelerated, especially for small faint targets. Therefore, strengthening the capabilities of continuous orbital prediction, tracking, and monitoring in complicated situations will be mainstream research. Existing monitoring systems observe specific targets over a certain period via target position computation and fixed-point monitoring, prioritizing major targets solely according to task requirements, so the allocation process of monitoring systems is inflexible. Future research on early warning technologies should emphasize enhancing the timeliness, accuracy, and confidence of these systems. For collision avoidance, consumption reduction should be emphasized in the future, and maneuvering avoidance strategies and the safety thresholds of collision probability also require study.

6.2. Multiagent Awareness Technologies

In recent years, utilizing decentralized approaches to solve complicated real-world problems has attracted increasing attention. Such approaches belong to the field of distributed systems, where several entities work together to solve problems cooperatively. As a typical construction based on the idea of cooperation, multiagent systems (MAS) emphasize the joint behaviors of agents with some degree of autonomy and the complexity arising from their interactions [273]. Owing to better robustness, flexibility, and expansibility, the MAS has been widely developed in theoretical research and practical engineering.

Research on multiagent awareness emphasizes data processing via cloud technology. Typically, a comprehensive cloud robot data fusion framework offers high scalability and flexibility [274], and a networked robotic system can execute more computationally intensive tasks. A cloud-enabled robotic system (CERS) [275] provided powerful functions while maintaining the simplicity of distributed robots. On this basis, cloud combined with blockchain technology met the requirements for big data and strong computing power: a communication framework adopting smart contracts and blockchain [276] overcame challenges in the communication strategy of SSA applications. Afterward, an MAS based on two-line elements accelerated blockchain data processing, improving computational efficiency by a factor of eight and state propagation by four orders of magnitude [277]. Furthermore, cloud computing can support upper-layer big data processing as the underlying computing resource, while multiagent technologies establish distributed computational environments and various client-server applications to realize efficient real-time interactive query and analysis [278]. Overall, these studies have integrated the MAS with cloud computing and robotics for deep exploration.

Multiagent consistency should then be studied in these applications. It aims to establish a control protocol based on the local information of each agent and its neighbors so that all agents reach agreement on a quantity of interest. Reinforcement learning (RL) enables agents to interact with the environment and use the response to learn the optimal strategy, finding optimal behaviors in unknown situations without requiring a known system model. Off-policy RL algorithms generate system data through a behavior policy and update the target policy to find the optimum, enriching data mining. Off-policy RL overcomes two shortcomings of on-policy methods: data are generated only by a specific policy, limiting data mining ability, and probing noise added to the target policy to excite the system causes deviations in the optimal solution [279]. Off-policy RL is thus applicable to consistency problems, and various structures, such as leader-follower and leaderless, are also effective. More coordinated multiagent systems extend coverage, reduce costs, and provide redundancy, strengthening the SSA capability in large-scale congested situations.
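A minimal numerical sketch of such a consensus protocol (a first-order discrete-time update on an assumed ring communication graph, without the RL layer) shows all agents converging to agreement using only neighbor information:

```python
import numpy as np

def consensus_step(x, adjacency, eps=0.1):
    """One consensus update: x_i <- x_i + eps * sum_j a_ij (x_j - x_i),
    equivalently x <- x - eps * L @ x with graph Laplacian L."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return x - eps * laplacian @ x

n = 6
A = np.zeros((n, n))
for i in range(n):                  # undirected ring topology (assumed)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0, 9.0])  # initial agent states
target = x.mean()                   # leaderless consensus -> initial average
for _ in range(300):
    x = consensus_step(x, A)
# all agents are now (numerically) in agreement at `target`
```

For this undirected graph the protocol converges to the average of the initial states; leader-follower variants replace the average with the leader's state, and the RL formulations in [279] learn the coupling gains rather than fixing eps.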

Multiagent awareness has promising applications, especially cloud technology in data processing. Although multiagent systems are endowed with predesigned behaviors, online learning through interaction with the space environment is required in reinforcement learning to enhance awareness capabilities, aiming to maximize the expected reward during the interaction. Hence, multiagent awareness is worth perfecting as follows:
(1) Given that the reward functions of agents are coupled and hard to maximize independently, the means of determining multiagent learning goals can be perfected rather than merely converging to an equilibrium
(2) Typical dimension reduction methods such as multidimensional scaling are essential to avoid the curse of dimensionality in multiagent reinforcement learning
(3) For multiagent systems, the stability of agents' dynamic learning and their adaptability to the behavioral changes of other agents should be improved
(4) To handle dissimilarity data in multiagent processing, digital twin technology can be tentatively developed for multiagent awareness

6.3. Synergetic Constellation Awareness Technologies

Generally, a constellation of sensor satellites is required to support the space-based SSA if the target area is the whole near-earth space. Notably, more satellites mean longer coverage time and broader coverage, significantly improving the SSA capability, but more observation platforms cost more. Hence, adopting satellite constellations for the SSA is essential to achieve better awareness at the lowest possible cost.

Given the differences in space situations, constellation design falls into two categories: observation of targets in geosynchronous orbit (GEO) and of non-GEO targets. Most space-based SSA constellations to date are GEO observation constellations, and most of them employ a dawn-dusk sun-synchronous orbit (ascending node at 6 a.m. local time). Since the targets of these missions orbit in the GEO region, the sensing satellites achieve optimal observation lighting conditions in such orbits.

In contrast, constellation design for non-GEO target observation is more sophisticated than for GEO targets. It refers to above-the-horizon (ATH) coverage and aims at coverage against the space background, whereas conventional below-the-horizon (BTH) coverage generally considers continuous coverage of the earth's surface. In the conventional BTH coverage problem, the horizon is tangent to the surface of the earth, while the concept of a tangential height shell was proposed for the ATH problem. The tangential height shell and the tangent lines form a tangent height cone, whose outer area constitutes the above-the-horizon region; below-the-horizon coverage includes ground coverage and part of the spatial background coverage. Consequently, objects may be visible to the satellite's sensor yet lie within the tangent height cone, outside the area considered for ATH coverage. A dual-altitude band shell (DABS) is therefore introduced into the ATH coverage problem, consisting of lower and upper target altitude shells; the target area is outside the tangent height cone and within the DABS. Figure 6 shows some of the relevant parameters for a single-satellite case [280], where THS refers to the tangential height shell, LTAS and UTAS refer to the lower and upper target altitude shells, respectively, and TL represents the line originating at the satellite and tangent to the THS. The sensor area lies outside the tangent height cone and within both the sensor range shell and the DABS, and the purpose of constellation design is to maximize this area.
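The tangent-height-cone geometry can be made concrete with a short visibility check. In the sketch below (with illustrative, not mission, altitudes), the cone half-angle measured from nadir follows from sin α = (R_E + h_THS) / r_sat; a target counts toward ATH coverage when its line of sight clears the cone and its geocentric radius falls inside the DABS.

```python
import numpy as np

R_E = 6378.0  # mean Earth radius [km]

def in_ath_region(sat_pos, tgt_pos, h_ths, h_low, h_up):
    """True if the target is outside the satellite's tangent height cone
    (above-the-horizon) AND inside the dual-altitude band shell (DABS).
    Positions are geocentric vectors [km]; h_* are shell heights above
    the surface [km] (illustrative values)."""
    r_sat = np.linalg.norm(sat_pos)
    r_tgt = np.linalg.norm(tgt_pos)
    if not (R_E + h_low <= r_tgt <= R_E + h_up):
        return False                      # outside the DABS
    # Tangent-height-cone half-angle, measured from the nadir direction
    alpha = np.arcsin((R_E + h_ths) / r_sat)
    los = tgt_pos - sat_pos               # line of sight to the target
    nadir = -sat_pos
    cos_ang = np.dot(los, nadir) / (np.linalg.norm(los) *
                                    np.linalg.norm(nadir))
    angle = np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return angle > alpha                  # outside the cone -> ATH region
```

For a satellite at 1000 km altitude with a 100 km THS, α ≈ 61°, so a co-altitude target 90° away in the orbit plane is still inside the cone (not ATH-visible), while a target directly overhead clears it; the constellation designs discussed next trade satellite count against the size of this admissible region.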

The optimal constellation for space-based observation of a specific target uses the minimal number of satellites to provide the coverage level required over the DABS. First, the constellation is assumed to consist of multiple polar orbital planes [281], with satellite altitudes within or above the dual-altitude band [282]. Spherical geometry and streets of coverage are used to derive formulas for the coverage multiplicity, i.e., the number of satellites sharing an observation area. The coverage multiplicity is then related to the number of required orbital planes, the number of satellites per plane, and geocentric latitude coverage constraints. Accordingly, the number of sensors required for global coverage of the DABS is fully determined by setting the required coverage multiplicity and the minimal latitude. Furthermore, the coverage provided by constellations can be analyzed quantitatively [283]: the final constellation meeting the coverage requirements is designed by numerical optimization instead of specialized equations, and it is found that constellation parameters and coverage constraints affect the coverage evolution. Such studies enable complex constellation design and synergetic situational awareness.

Overall, synergetic constellation technologies have been explored in depth for the SSA. This part reviews and discusses the mission requirements and coverage characteristics of synergetic constellations. Given the increasing requirements for farther awareness, non-GEO target observation constellations will dominate this field. However, non-GEO constellations are complicated to design, weakening flexibility and reliability in practical applications. The expected research orientations are as follows:
(1) Appropriate perturbation compensation in synergetic constellation awareness should be developed to maintain awareness capability in complicated situations
(2) Satellite maintenance and reconfiguration control should be performed for in-orbit stabilization and changing mission requirements
(3) Spare satellite strategies should be formulated to improve the reliability and ensure the service quality of SSA constellations
(4) Bionic cluster technologies are expected for synergetic constellation control, achieving optimal characteristics including autonomy, adaptability, and robustness

7. Conclusion

Space-based situational awareness is indispensable for space security and order and is devoted to overcoming multiple threats. As key technologies, data acquisition, target recognition, and monitoring have been developed vigorously for the SSA. Various mature applications have been realized, such as cloud computing in data storage and filtering, radar echoes in object identification, and photometry in parameter estimation. Emerging algorithms, represented by machine learning and artificial neural networks, are becoming more intelligent and diversified. Hence, integrating mature and emerging methods is promising for the SSA. Future research on the key technologies is then indicated: multiagent and synergetic constellation awareness are expected for future SSA, and directions for subsequent studies are identified. Finally, the key conclusions and insights of this paper are as follows:
(1) For the overall advancement of the space-based SSA, full-dimensional and multilevel domain awareness and surveillance systems are being activated. Space surveillance systems are expected to have larger coverage, higher accuracy, and shorter data update intervals. For system devices, the working frequency will shift from low to high bands, fixed structures will become flexible, and lightweight designs will be implemented. Furthermore, the working mechanism is evolving toward distributed, fully digital arrays; distributed space-based networks [284] are promising for full-dimensional and panoramic awareness
(2) As an essential part of the SSA, comprehensive target feature databases must be established to provide more prior information for accurate and rapid situational awareness. Relying on artificial intelligence and cloud computing, development strategies for space big data should be formulated to promote new-generation information technologies. Furthermore, efficient space traffic management [285] and commercial services are expected to improve the sustainability and self-protection capability of space assets
(3) Current intelligent algorithms for target recognition and monitoring mainly adopt small-sample learning. Most models infer slowly after deployment and cannot meet real-time requirements; the models are complex, and limited spaceborne resources can restrict applications. Moreover, current algorithms generalize insufficiently: they handle unknown categories, performing better on categories similar to the training set, but performance declines significantly as the number of small-sample categories increases, and recognition accuracy drops sharply when new tasks differ greatly from the dataset samples. Therefore, designing classifiers for different categories in a homologous sample space is necessary, and learning transfer for heterogeneous data should be studied to improve model adaptability to changes in targets' intrinsic features in small samples
(4) Current space-based monitoring systems involve complicated in-orbit maintenance and are significantly affected by cosmic radiation, and payload allocation within a single satellite is simplex. Multiagent and synergetic constellation awareness overcome these limitations. Moreover, embodied intelligence [286] and deep, general, and evolutionary learning can be applied to multiagent systems and constellations for realistic multimodal interaction, contributing to the intelligent evolution of situational awareness systems

In conclusion, the space-based SSA has advanced considerably, and more optimistic prospects emerge as the key technologies progress. This work comprehensively reviews and discusses these technologies for the SSA; we hope it will propel the future development of this field.

Data Availability

The data of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest regarding this work.

Authors’ Contributions

S. Li, B. Wang, and J. Mu contributed to conceptualization, literature review, writing–original draft, writing–review, and editing; X. Hao, W. Zhu, and J. Hu contributed to conceptualization.


Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11972182) and funded by the Science and Technology on Space Intelligent Control Laboratory (Grant Nos. HTKJ2020KL502019, 6142208200203, and 2021–JCJQ–LB–010–04). The authors gratefully acknowledge this financial support.


  1. Q. Wang, D. Jin, and X. Rui, “Dynamic simulation of space debris cloud capture using the tethered net,” Space: Science & Technology, vol. 2021, article 9810375, 11 pages, 2021. View at: Publisher Site | Google Scholar
  2. M. Kanazaki, Y. Yamada, and M. Nakamiya, “Trajectory optimization of a satellite for multiple active space debris removal based on a method for the traveling serviceman problem,” in 2017 21st Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES), pp. 61–66, Hanoi, Vietnam, 2017. View at: Publisher Site | Google Scholar
  3. A. Boccaletti, E. Sezestre, A. M. Lagrange, P. Thébault, R. Gratton, and M. Langlois, “Observations of fast–moving features in the debris disk of AU Mic on a three–year timescale: confirmation and new discoveries,” Astronomy & Astrophysics, vol. 614, p. A52, 2018. View at: Publisher Site | Google Scholar
  4. P. Maskell and L. Oram, “Sapphire: Canada's answer to space–based surveillance of orbital objects,” in Advanced Maui Optical and Space Surveillance Technologies Conference, pp. 1–8, Wailea, Maui, Hawaii, 2008. View at: Google Scholar
  5. J. L. Gonzalo and C. Colombo, “On–board collision avoidance applications based on machine learning and analytical methods,” in 8th European Conference on Space Debris, pp. 20–23, Darmstadt, Germany, 2021. View at: Google Scholar
  6. N. Reiland, A. J. Rosengren, R. Malhotra, and C. Bombardelli, “Assessing and minimizing collisions in satellite mega-constellations,” Advances in Space Research, vol. 67, no. 11, pp. 3755–3774, 2021. View at: Publisher Site | Google Scholar
  7. R. P. Di Sisto, X. S. Ramos, and T. Gallardo, “The dynamical evolution of escaped Jupiter Trojan asteroids, link to other minor body populations,” Icarus, vol. 319, pp. 828–839, 2019. View at: Publisher Site | Google Scholar
  8. O. P. Popova, P. Jenniskens, V. Emel’yanenko et al., “Chelyabinsk airburst, damage assessment, meteorite recovery, and characterization,” Science, vol. 342, no. 6162, pp. 1069–1073, 2013. View at: Publisher Site | Google Scholar
  9. Y. Hu, K. Li, Y. Liang, and L. Chen, “Review on strategies of space–based optical space situational awareness,” Journal of Systems Engineering and Electronics, vol. 32, no. 5, pp. 1152–1166, 2021. View at: Publisher Site | Google Scholar
  10. D. L. Oltrogge and S. Alfano, “The technical challenges of better Space Situational Awareness and Space Traffic Management,” Journal of Space Safety Engineering, vol. 6, no. 2, pp. 72–79, 2019. View at: Publisher Site | Google Scholar
  11. J. A. Kennewell and B. N. Vo, “An overview of space situational awareness,” in Proceedings of the 16th International Conference on Information Fusion, pp. 1029–1036, Istanbul, Turkey, 2013. View at: Google Scholar
  12. H. Zhang, Z. Li, W. Wang, H. Wang, and Y. Zhang, “Trajectory planning for optical satellite’s continuous surveillance of geostationary spacecraft,” IEEE Access, vol. 9, pp. 114282–114293, 2021. View at: Publisher Site | Google Scholar
  13. J. Du, J. Chen, B. Li, and J. Sang, “Tentative design of SBSS constellations for LEO debris catalog maintenance,” Acta Astronautica, vol. 155, pp. 379–388, 2019. View at: Publisher Site | Google Scholar
  14. W. Li, S. Yang, C. Wang, and Y. Ouyang, “SBIRS: missions, challenges and opportunities,” in 2019 IEEE 4th International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), pp. 363–367, Chengdu, China, 2019. View at: Publisher Site | Google Scholar
  15. J. N. Pelton, “A path forward to better space security: finding new solutions to space debris, space situational awareness and space traffic management,” Journal of Space Safety Engineering, vol. 6, no. 2, pp. 92–100, 2019. View at: Publisher Site | Google Scholar
  16. J. Krissansen-Totton, R. Garland, P. Irwin, and D. C. Catling, “Detectability of Biosignatures in Anoxic Atmospheres with theJames Webb Space Telescope: A TRAPPIST-1e Case Study,” Astronomical Journal, vol. 156, no. 3, p. 114, 2018. View at: Publisher Site | Google Scholar
  17. S. Lambakis, “Foreign space capabilities: implications for U.S. national security,” Comparative Strategy, vol. 37, no. 2, pp. 87–154, 2018. View at: Publisher Site | Google Scholar
  18. J. T. Richelson, America's space sentinels: the history of the DSP and SBIRS satellite systems, University Press of Kansas, 2018. View at: Publisher Site
  19. M. Duncan, R. Fero, T. Smith, J. Southworth, and J. Wysack, “Real–time utilization of STSS for improved collision risk management,” in Advanced Maui Optical and Space Surveillance Technologies Conference, p. 33, Maui, Hawaii, 2012. View at: Google Scholar
  20. C. J. Willott, R. Doyon, L. Albert et al., “The near–infrared imager and slitless spectrograph for the James Webb Space Telescope. II. Wide field slitless spectroscopy,” Publications of the Astronomical Society of the Pacific, vol. 134, no. 1032, article 025002, 2022. View at: Publisher Site | Google Scholar
  21. S. Song, W. Xu, and R. Shu, “Design and implementation of infrared/laser dual–mode compound detection system,” Aerospace Systems, vol. 3, no. 3, pp. 157–166, 2020. View at: Publisher Site | Google Scholar
  22. R. M. Marino and W. R. Davis, “Jigsaw: a foliage–penetrating 3D imaging laser radar system,” Lincoln Laboratory Journal, vol. 15, pp. 23–36, 2005. View at: Google Scholar
  23. J. Xie and W. Chen, “Switching logic design for divert and attitude control system of exoatmospheric kill vehicle,” in 2017 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), pp. 194–200, Ningbo, China, 2017. View at: Publisher Site | Google Scholar
  24. W. A. Kuhn, W. Sieprath, L. Timmoneri, and A. Farina, “Phased array radar systems in support of the medium extended air defense system (MEADS),” in IEEE International Symposium on Phased Array Systems and Technology, pp. 94–100, Boston, MA, 2003. View at: Publisher Site | Google Scholar
  25. B. Wei and B. D. Nener, “Multi–sensor space debris tracking for space situational awareness with labeled random finite sets,” IEEE Access, vol. 7, pp. 36991–37003, 2019. View at: Publisher Site | Google Scholar
  26. Y. Chen, G. Tian, J. Guo, and J. Huang, “Task planning for multiple–satellite space–situational–awareness systems,,” Aerospace, vol. 8, no. 3, p. 73, 2021. View at: Publisher Site | Google Scholar
  27. S. Sutherland, “The vision of David Marr,” Nature, vol. 298, no. 5875, pp. 691-692, 1982. View at: Publisher Site | Google Scholar
  28. S. T. Barnard and M. A. Fischler, “Computational stereo,” ACM Computing Surveys, vol. 14, no. 4, pp. 553–572, 1982. View at: Publisher Site | Google Scholar
  29. J. Wang, H. Ma, and B. Li, “3–D dimension measurement of workpiece based on binocular vision,” in Intelligent Robotics and Applications, ICIRA 2019, vol. 11741, pp. 392–404, Springer, Shenyang, China, 2019. View at: Google Scholar
  30. N. Sebe, M. S. Lew, N. Sebe, and M. S. Lew, “Robust stereo matching and motion tracking,” Robust Computer Vision, vol. 26, pp. 135–162, 2003. View at: Publisher Site | Google Scholar
  31. G. A. Ball, W. W. Morey, and P. K. Cheo, “Single– and multipoint fiber–laser sensors,” IEEE Photonics Technology Letters, vol. 5, no. 2, pp. 267–270, 1993. View at: Publisher Site | Google Scholar
  32. R. Yan, J. Wu, J. Lee, and C. Han, “3D point cloud map construction based on line segments with two mutually perpendicular laser sensors,” in 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013), pp. 1114–1116, Gwangju, Korea (South), 2013. View at: Publisher Site | Google Scholar
  33. O. Wulf and B. Wagner, “Fast 3D scanning methods for laser measurement systems,” in Proceedings of the International Conference on Control Systems and Computer Science, pp. 312–317, Bucharest, Romania, 2003. View at: Google Scholar
  34. A. B. Gschwendtner and W. Keicher, “Development of coherent laser radar at Lincoln Laboratory,” Lincoln Laboratory Journal, vol. 12, no. 2, pp. 383–396, 2000. View at: Google Scholar
  35. P. Colarusso and K. R. Spring, “Imaging at low light levels with cooled and intensified charge-coupled device cameras,” Methods in Enzymology, vol. 360, pp. 383–394, 2003.
  36. T. Mohammad, “Using ultrasonic and infrared sensors for distance measurement,” World Academy of Science, Engineering and Technology, vol. 51, pp. 293–299, 2009.
  37. A. Belbachir, R. Pflugfelder, and R. Gmeiner, “A neuromorphic smart camera for real-time 360° distortion-free panoramas,” in 2010 Fourth ACM/IEEE International Conference on Distributed Smart Cameras, pp. 221–226, Atlanta, GA, 2010.
  38. J. H. Fuller, H. Maldonado, and J. Schlag, “Vestibular-oculomotor interaction in cat eye-head movements,” Brain Research, vol. 271, no. 2, pp. 241–250, 1983.
  39. C. Grollet, Y. Klein, and V. Megaides, “ARTEMIS: staring IRST for the FREMM frigate,” in Proceedings of SPIE: Infrared Technology and Applications XXXIII, vol. 6542, p. 654233, Bellingham, WA, 2007.
  40. W. Guan, Y. Wu, C. Xie, L. Fang, X. Liu, and Y. Chen, “Performance analysis and enhancement for visible light communication using CMOS sensors,” Optics Communications, vol. 410, pp. 531–551, 2018.
  41. G. H. Stokes, C. V. Braun, R. Sridharan, D. Harrison, and J. Sharma, “The space-based visible,” Lincoln Laboratory Journal, vol. 11, pp. 205–229, 1998.
  42. M. Gruntman, “Passive optical detection of submillimeter and millimeter size space debris in low Earth orbit,” Acta Astronautica, vol. 105, no. 1, pp. 156–170, 2014.
  43. J. Silha, T. Schildknecht, A. Hinze et al., “Capability of a space-based space surveillance system to detect and track objects in GEO, MEO and LEO orbits,” in Proceedings of the 65th International Astronautical Congress, pp. 1160–1168, Toronto, Canada, 2014.
  44. F. E. White, “Data fusion lexicon,” Technical Panel for C3, Joint Directors of Laboratories, Naval Ocean Systems Center, pp. 1–16, San Diego, CA, 1991.
  45. K. C. Chang and Y. Bar-Shalom, “Distributed adaptive estimation with probabilistic data association,” Automatica, vol. 25, no. 3, pp. 359–369, 1989.
  46. S. Trent, E. Patterson, and D. Woods, “Challenges for cognition in intelligence analysis,” Journal of Cognitive Engineering and Decision Making, vol. 1, pp. 75–97, 2007.
  47. Y. Ashibani and Q. H. Mahmoud, “Cyber physical systems security: analysis, challenges and solutions,” Computers & Security, vol. 68, pp. 81–97, 2017.
  48. Z. Yang, Y. Cheng, and H. Wu, “Observation capability for distributed multi-sensor information fusion,” in 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), pp. 1–5, Chongqing, China, 2019.
  49. D. Keim, G. Andrienko, J. D. Fekete, C. Görg, J. Kohlhammer, and G. Melançon, “Visual analytics: definition, process, and challenges,” in Lecture Notes in Computer Science, vol. 4950, pp. 154–175, Springer, Berlin, Heidelberg, 2008.
  50. B. Liu, Y. Chen, D. Shen et al., “Cloud-based space situational awareness: initial design and evaluation,” in Proceedings of SPIE: Sensors and Systems for Space Applications VI, vol. 8739, p. 87390M, 2013.
  51. S. J. Johnston, N. S. O’Brien, H. G. Lewis, E. E. Hart, A. White, and S. J. Cox, “Clouds in space: scientific computing using Windows Azure,” Journal of Cloud Computing: Advances, Systems and Applications, vol. 2, no. 1, pp. 2–10, 2013.
  52. B. Liu, Y. Chen, D. Shen et al., “An adaptive process-based cloud infrastructure for space situational awareness applications,” in Proceedings of SPIE: Sensors and Systems for Space Applications VII, vol. 9085, p. 90850M, Baltimore, MD, 2014.
  53. J. Lindman, J. Horkoff, I. Hammouda, and E. Knauss, “Emerging perspectives of application programming interface strategy: a framework to respond to business concerns,” IEEE Software, vol. 37, no. 2, pp. 52–59, 2020.
  54. A. A. Fröhlich, “SmartData: an IoT-ready API for sensor networks,” International Journal of Sensor Networks, vol. 28, no. 3, pp. 202–210, 2018.
  55. D. Greenly, M. Duncan, J. Wysack, and F. Flores, “Space situational awareness data processing scalability utilizing Google Cloud services,” in Advanced Maui Optical and Space Surveillance Technologies Conference, pp. 1–8, Maui, HI, 2015.
  56. M. Czajkowski, A. Shilliday, N. LoFaso, A. Dipon, and D. V. Brackle, “The Orbit Outlook data archive,” in 2016 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), pp. 1–5, Maui, HI, 2016.
  57. E. Groeneveld, “An adaptable platform independent information system in animal production: framework and generic database structure,” Livestock Production Science, vol. 87, no. 1, pp. 1–12, 2004.
  58. Q. Zhang, “Integration techniques and implementation of common operational picture,” Computer Engineering and Design, vol. 32, no. 7, pp. 2557–2561, 2011.
  59. A. M. Koekemoer, H. Aussel, D. Calzetti et al., “The COSMOS survey: Hubble Space Telescope Advanced Camera for Surveys observations and data processing,” Astrophysical Journal Supplement Series, vol. 172, no. 1, pp. 196–202, 2007.
  60. E. Delande, C. Frueh, J. Franco, J. Houssineau, and D. Clark, “Novel multi-object filtering approach for space situational awareness,” Journal of Guidance, Control, and Dynamics, vol. 41, no. 1, pp. 59–73, 2018.
  61. P. Luokkala and K. Virrantaus, “Developing information systems to support situational awareness and interaction in time-pressuring crisis situations,” Safety Science, vol. 63, pp. 191–203, 2014.
  62. P. Chu, Z. Dong, Y. Chen, C. Yu, and Y. Huang, “Research on multi-source data fusion and mining based on big data,” in 2020 International Conference on Virtual Reality and Intelligent Systems (ICVRIS), pp. 606–609, Zhangjiajie, China, 2020.
  63. F. Jiang, Y. Li, S. Yuan, X. Zhong, W. Chen, and T. Xie, “Meteor tail: OctoMap-based multi-sensor data fusion method,” in 2021 International Conference on Artificial Intelligence, Big Data and Algorithms (CAIBDA), pp. 118–121, Xi'an, China, 2021.
  64. X. Li, Q. Yu, B. Alzahrani et al., “Data fusion for intelligent crowd monitoring and management systems: a survey,” IEEE Access, vol. 9, pp. 47069–47083, 2021.
  65. H. Liu, K. Teng, L. Ran, Y. Zhang, and S. Wang, “A two-step abnormal data analysis and processing method for millimetre-wave radar in traffic flow detection applications,” IET Intelligent Transport Systems, vol. 15, no. 5, pp. 671–682, 2021.
  66. S. Wang, C. Gao, Q. Zhang et al., “Research and experiment of radar signal support vector clustering sorting based on feature extraction and feature selection,” IEEE Access, vol. 8, pp. 93322–93334, 2020.
  67. Y. Guo, B. Wu, C. Luo, and B. Wang, “Correlation voting fusion strategy for part of speech tagging,” in International Conference on Signal Processing, p. 1835, 2006.
  68. S. Zhang, Y. Wang, P. Wan, J. Zhuang, Y. Zhang, and Y. Li, “Clustering algorithm-based data fusion scheme for robust cooperative spectrum sensing,” IEEE Access, vol. 8, pp. 5777–5786, 2020.
  69. Y. Huang, T. Li, C. Luo, H. Fujita, and S. Horng, “Dynamic fusion of multisource interval-valued data by fuzzy granulation,” IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3403–3417, 2018.
  70. X. Zeng, Z. Luo, and X. Xiong, “A new improved D-S evidence theory based on BJS divergence in multi-source information fusion,” in 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP), pp. 799–803, Nanjing, China, 2020.
  71. A. Liu, Y. Yang, Q. Sun, and Q. Xu, “A deep fully convolution neural network for semantic segmentation based on adaptive feature fusion,” in 2018 5th International Conference on Information Science and Control Engineering (ICISCE), pp. 16–20, Zhengzhou, China, 2018.
  72. A. E. Abdalla, B. Shetar, and M. S. Abdelwahab, “Data fusion algorithm based on fuzzy similarity weighted least square for positioning with the global positioning system,” in 2020 12th International Conference on Electrical Engineering (ICEENG), pp. 467–470, Cairo, Egypt, 2020.
  73. Z. Zhang, “Data fusion optimization analysis of wireless sensor networks based on joint DS evidence theory and matrix analysis,” in 2019 4th International Conference on Mechanical, Control and Computer Engineering (ICMCCE), pp. 689–694, Hohhot, China, 2019.
  74. Z. Lei, P. Cui, and Y. Huang, “Multi-platform and multi-sensor data fusion based on D-S evidence theory,” in 2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology (CCET), pp. 6–9, Beijing, China, 2020.
  75. M. Kiran, P. Murphy, I. Monga, J. Dugan, and S. S. Baveja, “Lambda architecture for cost-effective batch and speed big data processing,” in 2015 IEEE International Conference on Big Data (Big Data), pp. 2785–2792, Santa Clara, CA, 2015.
  76. H. M. Barbera, A. G. Skarmeta, M. Z. Izquierdo, and J. B. Blaya, “Neural networks for sonar and infrared sensors fusion,” in Proceedings of the Third International Conference on Information Fusion, pp. 18–25, Paris, France, 2000.
  77. O. Dagan and N. R. Ahmed, “Factor graphs for heterogeneous Bayesian decentralized data fusion,” in 2021 IEEE 24th International Conference on Information Fusion (FUSION), pp. 1–8, Sun City, South Africa, 2021.
  78. H. Leung, “Information fusion and decision support for autonomous systems,” in 2021 IEEE International Conference on Autonomous Systems (ICAS), p. 1, Montreal, QC, Canada, 2021.
  79. Y. T. Lin, X. W. Song, T. Y. Ji, and M. S. Li, “Feature extraction of Fourier infrared signals from pyrolysis products based on ZCA and PSO,” in 2020 IEEE Congress on Evolutionary Computation (CEC), pp. 1–7, Glasgow, UK, 2020.
  80. Y. Li, H. Lu, L. Zhang, and S. Serikawa, “Cross depth image filter-based natural image matting,” in 2013 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2013), pp. 601–604, Honolulu, HI, 2013.
  81. P. Makarychev, “Structural and parametric identification of nonlinear dynamic objects,” in 2020 Moscow Workshop on Electronic and Networking Technologies (MWENT), pp. 1–4, Moscow, Russia, 2020.
  82. H. Li and S. Yang, “Using range profiles as feature vectors to identify aerospace objects,” IEEE Transactions on Antennas and Propagation, vol. 41, no. 3, pp. 261–268, 1993.
  83. F. Eugenio and F. Marqués, “Automatic satellite image georeferencing using a contour-matching approach,” IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 12, pp. 2869–2880, 2003.
  84. X. Huang, Z. Qiu, C. Chen, and Z. Zhang, “The fractal feature of space object RCS,” Chinese Space Science & Technology, vol. 25, no. 1, pp. 33–36, 2005.
  85. X. Xiang and X. Xu, “Feature extraction for radar target recognition using time sequences of radar cross section measurements,” in 2013 6th International Congress on Image and Signal Processing (CISP), pp. 1583–1587, Hangzhou, China, 2013.
  86. D. T. Arik and A. B. Şahın, “Target classification with FMCW radar using features extracted from Fourier transform of radar cross section,” in 2019 27th Signal Processing and Communications Applications Conference (SIU), pp. 1–4, Sivas, Turkey, 2019.
  87. X. Lei, Z. Li, J. Du, J. Chen, J. Sang, and C. Liu, “Identification of uncatalogued LEO space objects by a ground-based EO array,” Advances in Space Research, vol. 67, no. 1, pp. 350–359, 2021.
  88. L. Du, H. Liu, and P. Wang, “Noise robust radar HRRP target recognition based on multitask factor analysis with small training data size,” IEEE Transactions on Signal Processing, vol. 60, no. 7, pp. 3546–3559, 2012.
  89. S. P. Jacobs, Automatic target recognition using high resolution radar range profiles, Washington University, 1997.
  90. L. Du, H. Liu, and Z. Bao, “Radar HRRP statistical recognition based on hypersphere model,” Signal Processing, vol. 88, no. 5, pp. 1176–1190, 2008.
  91. L. Du, P. Wang, H. Liu, M. Pan, and Z. Bao, “Radar HRRP target recognition based on dynamic multi-task hidden Markov model,” in 2011 IEEE RadarCon (RADAR), pp. 253–255, Kansas City, MO, 2011.
  92. A. Toumi, B. Hoeltzener, and A. Khenchaf, “Using watersheds segmentation on ISAR image for automatic target recognition,” in 2007 2nd IEEE International Conference on Digital Information Management, pp. 285–290, Lyon, France, 2007.
  93. M. N. Saidi, K. Daoudi, A. Khenchaf, B. Hoeltzener, and D. Aboutajdine, “Automatic target recognition of aircraft models based on ISAR images,” in 2009 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2009), pp. 685–688, Cape Town, South Africa, 2009.
  94. Y. Wang, J. Rong, and T. Han, “Novel approach for high resolution ISAR/InISAR sensors imaging of maneuvering target based on peak extraction technique,” IEEE Sensors Journal, vol. 19, no. 14, pp. 5541–5558, 2019.
  95. X. Xiao, G. Xiao, and Z. Jing, “Study on space target recognition algorithm and its experiment,” Computer Engineering and Applications, vol. 47, no. 8, pp. 154–156, 2011.
  96. J. Yoo, S. S. Hwang, S. D. Kim, M. S. Ki, and J. Cha, “Scale-invariant template matching using histogram of dominant gradients,” Pattern Recognition, vol. 47, no. 9, pp. 3006–3018, 2014.
  97. Y. Yeh, T. Lin, Y. Chung, and Y. F. Wang, “A novel multiple kernel learning framework for heterogeneous feature fusion and variable selection,” IEEE Transactions on Multimedia, vol. 14, no. 3, pp. 563–574, 2012.
  98. I. H. Jhuo and D. T. Lee, “Boosted multiple kernel learning for scene category recognition,” in 2010 20th International Conference on Pattern Recognition (ICPR 2010), pp. 3504–3507, Istanbul, Turkey, 2010.
  99. J. Dong, S. Chen, K. Xu, and F. Jie, “Improvement of real-time performance of image matching based on SIFT,” Electronics Optics & Control, vol. 27, no. 3, pp. 80–83, 2020.
  100. P. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” in IEEE International Conference on Computer Vision, pp. 221–228, 2009.
  101. F. Wu, J. Xiong, X. Xu, and Q. Zhang, “Research on method of space target recognition in digital image,” in 2012 5th International Congress on Image and Signal Processing, pp. 1303–1306, Chongqing, China, 2012.
  102. H. Yao, An improved local invariant feature matching algorithm and its application, Xidian University, 2019.
  103. X. Sun, L. Zhang, Z. Wang et al., “Scene categorization using deeply learned gaze shifting kernel,” IEEE Transactions on Cybernetics, vol. 49, no. 6, pp. 2156–2167, 2019.
  104. S. Ma, Q. Gong, and J. Zhang, “Space target recognition based on 2-D wavelet transformation and KPCA,” in 2011 IEEE 3rd International Conference on Communication Software and Networks (ICCSN 2011), pp. 516–520, Xi'an, China, 2011.
  105. Y. Ren, Y. Zhang, Y. Li, J. Huang, and J. Hui, “A space target recognition method based on compressive sensing,” in 2011 Sixth International Conference on Image and Graphics, pp. 582–586, Hefei, China, 2011.
  106. S. Jiang, W. Min, L. Liu, and Z. Luo, “Multi-scale multi-view deep feature aggregation for food recognition,” IEEE Transactions on Image Processing, vol. 29, pp. 265–276, 2020.
  107. I. Mcquaid, L. D. Merkle, B. Borghetti, R. Cobb, and J. Fletcher, “Space object classification using deep neural networks,” in 2018 IEEE Aerospace Conference, pp. 1–8, Big Sky, MT, 2018.
  108. D. Duarte, F. Nex, N. Kerle, and G. Vosselman, “Multi-resolution feature fusion for image classification of building damages with convolutional neural networks,” Remote Sensing, vol. 10, no. 10, p. 1636, 2018.
  109. S. Albawi, T. A. Mohammed, and S. Alzawi, “Understanding of a convolutional neural network,” in 2017 International Conference on Engineering and Technology (ICET), pp. 1–6, Antalya, Turkey, 2017.
  110. H. Zeng and Y. Xia, “Space target recognition based on deep learning,” in 2017 20th International Conference on Information Fusion (Fusion), pp. 1188–1192, Xi'an, China, 2017.
  111. X. Yang, T. Wu, N. Wang, Y. Huang, B. Song, and X. Gao, “HCNN-PSI: a hybrid CNN with partial semantic information for space target recognition,” Pattern Recognition, vol. 108, article 107531, 2020.
  112. J. H. Seldin and R. G. Paxman, “Phase-diverse speckle reconstruction of solar data,” in Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), vol. 2302, pp. 268–280, San Diego, CA, 1994.
  113. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Lecture Notes in Computer Science, vol. 9351, pp. 234–241, 2015.
  114. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pp. 4278–4284, San Francisco, CA, 2017.
  115. H. Ding, X. Li, and H. Zhao, “An approach for autonomous space object identification based on normalized AMI and illumination invariant MSA,” Acta Astronautica, vol. 84, pp. 173–181, 2013.
  116. N. W. Bruegger, “Space object identification using feature space trajectory neural networks,” in Proceedings of Applications and Science of Artificial Neural Networks II, Aerospace/Defense Sensing and Controls, Orlando, FL, 1997.
  117. Q. Zhang, H. Wang, R. J. Plemmons, and V. P. Pauca, “Tensor methods for hyperspectral data analysis: a space object material identification study,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 25, no. 12, pp. 3001–3012, 2008.
  118. S. Kaasalainen, J. Piironen, M. Kaasalainen, A. W. Harris, K. Muinonen, and A. Cellino, “Asteroid photometric and polarimetric phase curves: empirical interpretation,” Icarus, vol. 161, no. 1, pp. 34–46, 2003.
  119. D. A. Oszkiewicz, K. Muinonen, E. Bowell et al., “Online multi-parameter phase-curve fitting and application to a large corpus of asteroid photometric data,” Journal of Quantitative Spectroscopy and Radiative Transfer, vol. 112, no. 11, pp. 1919–1929, 2011.
  120. L. Sbordone, M. Salaris, A. Weiss, and S. Cassisi, “Photometric signatures of multiple stellar populations in Galactic globular clusters,” Astronomy & Astrophysics, vol. 534, p. A9, 2011.
  121. I. Hussein, T. Kelecy, K. Miller, M. P. Wilkins, C. Roscoe, and M. Bolden, “Assessment of information content contained in observed photometric signatures of non-resolved space debris objects,” in 7th European Conference on Space Debris, ESA Space Debris Office, pp. 18–21, Darmstadt, Germany, 2017.
  122. K. Subbarao and L. Henderson, “Observability and sensitivity analysis of lightcurve measurement models for use in space situational awareness,” Inverse Problems in Science and Engineering, vol. 27, no. 10, pp. 1399–1424, 2019.
  123. Y. Han, L. Lin, H. Sun, J. Jiang, and X. He, “Modeling the space-based optical imaging of complex space target based on the pixel method,” Optik, vol. 126, no. 15–16, pp. 1474–1478, 2015.
  124. H. N. Do, T. Chin, N. Moretti, M. K. Jah, and M. Tetlow, “Robust foreground segmentation and image registration for optical detection of GEO objects,” Advances in Space Research, vol. 64, no. 3, pp. 733–746, 2019.
  125. R. Linares, M. K. Jah, and J. L. Crassidis, “Inactive space object shape estimation via astrometric and photometric data fusion,” Advances in the Astronautical Sciences, vol. 143, pp. 217–232, 2012.
  126. C. J. Wetterer, C. C. Chow, J. L. Crassidis, R. Linares, and M. K. Jah, “Simultaneous position, velocity, attitude, angular rates, and surface parameter estimation using astrometric and photometric observations,” in 2013 16th International Conference on Information Fusion (FUSION), pp. 997–1004, Istanbul, Turkey, 2013.
  127. S. Kavitha and K. K. Thyagharajan, “Efficient DWT-based fusion techniques using genetic algorithm for optimal parameter estimation,” Soft Computing, vol. 21, no. 12, pp. 3307–3316, 2017.
  128. B. Tian, Z. Chen, and S. Xu, “Sparse subband fusion imaging based on parameter estimation of geometrical theory of diffraction model,” IET Radar, Sonar & Navigation, vol. 8, no. 4, pp. 318–326, 2014.
  129. C. K. Gatebe and M. D. King, “Airborne spectral BRDF of various surface types (ocean, vegetation, snow, desert, wetlands, cloud decks, smoke layers) for remote sensing applications,” Remote Sensing of Environment, vol. 179, pp. 131–148, 2016.
  130. Y. Mou, X. Sheng, Y. Gao, J. Wu, Z. Wu, and T. Wu, “Bidirectional reflection distribution function modeling (BRDF) for terahertz diffuse scattering analysis of dielectric rough targets,” Infrared Physics & Technology, vol. 101, pp. 171–179, 2019.
  131. J. Bieron and P. Peers, “An adaptive BRDF fitting metric,” Computer Graphics Forum, vol. 39, no. 4, pp. 59–74, 2020.
  132. Y. Cao, Y. Cao, W. Li, L. Bai, Z. Wu, and Z. Wang, “Optimization of ray tracing algorithm for laser radar cross section calculation based on material bidirectional reflection distribution function,” Optics Communications, vol. 500, p. 127207, 2021.
  133. X. Yang and M. Gao, “Study on properties of influence factors of polarization-based TS BRDF model,” Optik, vol. 172, pp. 628–635, 2018.
  134. Y. Liu, J. Dai, S. Zhao et al., “Optimization of five-parameter BRDF model based on hybrid GA-PSO algorithm,” Optik, vol. 219, p. 164978, 2020.
  135. A. Willison and D. Bédard, “A novel approach to modeling spacecraft spectral reflectance,” Advances in Space Research, vol. 58, no. 7, pp. 1318–1330, 2016.
  136. A. D. Dianetti and J. L. Crassidis, “Space object material determination from polarized light curves,” in AIAA Scitech 2019 Forum, p. 0377, 2019.
  137. A. D. Dianetti and J. L. Crassidis, “Space object attitude determination from multispectral light curves,” in AIAA Scitech 2020 Forum, p. 1098, 2020.
  138. M. C. Vandyke, J. L. Schwartz, and C. D. Hall, “Unscented Kalman filtering for spacecraft attitude state and parameter estimation,” Advances in the Astronautical Sciences, vol. 118, pp. 217–228, 2004.
  139. R. Linares, M. K. Jah, J. L. Crassidis, F. A. Leve, and T. Kelecy, “Astrometric and photometric data fusion for inactive space object mass and area estimation,” Acta Astronautica, vol. 99, pp. 1–15, 2014.
  140. F. Aghili and K. Parsa, “Motion and parameter estimation of space objects using laser-vision data,” Journal of Guidance, Control, and Dynamics, vol. 32, no. 2, pp. 538–550, 2009.
  141. A. Hasan, “Adaptive eXogenous Kalman filter for actuator fault diagnosis in robotics and autonomous systems,” in 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), pp. 162–167, Delft, Netherlands, 2019.
  142. A. Hasan, “EXogenous Kalman filter for state estimation in autonomous ball balancing robots,” in IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1522–1527, 2020.
  143. S. Du, J. Liu, C. Zhang, J. Zhu, and K. Li, “Probability iterative closest point algorithm for m-D point set registration with noise,” Neurocomputing, vol. 157, pp. 187–198, 2015.
  144. E. Delande, C. Frueh, J. Houssineau, and D. E. Clark, “Multi-object filtering for space situational awareness,” Advances in the Astronautical Sciences, vol. 155, pp. 2779–2798, 2015.
  145. U. Hillenbrand and R. Lampariello, “Motion and parameter estimation of a free-floating space object from range data for motion prediction,” in 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space, pp. 1–10, DLR, 2005.
  146. M. D. Lichter, Shape, motion, and inertial parameter estimation of space objects using teams of cooperative vision sensors, Massachusetts Institute of Technology, Cambridge, MA, 2005.
  147. S. Hati and S. Sengupta, “Robust camera parameter estimation using genetic algorithm,” Pattern Recognition Letters, vol. 22, no. 3–4, pp. 289–298, 2001.
  148. M. D. Lichter and S. Dubowsky, “State, shape, and parameter estimation of space objects from range images,” in 2004 IEEE International Conference on Robotics and Automation, pp. 2974–2979, New Orleans, LA, 2004.
  149. M. Uney, B. Mulgrew, and D. E. Clark, “Latent parameter estimation in fusion networks using separable likelihoods,” IEEE Transactions on Signal and Information Processing over Networks, vol. 4, no. 4, pp. 752–768, 2018.
  150. P. Mukhopadhyay and B. B. Chaudhuri, “A survey of Hough transform,” Pattern Recognition, vol. 48, no. 3, pp. 993–1010, 2015.
  151. J. Zheng, J. Zhang, S. Xu, H. Liu, and Q. Liu, “Radar detection and motion parameters estimation of maneuvering target based on the extended Keystone transform,” IEEE Access, vol. 6, pp. 76060–76074, 2018.
  152. J. Xu, X. Xia, S. Peng, J. Yu, Y. Peng, and L. Qian, “Radar maneuvering target motion estimation based on generalized Radon-Fourier transform,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6190–6201, 2012.
  153. Y. Yang, Z. Peng, W. Zhang, and G. Meng, “Parameterised time-frequency analysis methods and their engineering applications: a review of recent advances,” Mechanical Systems and Signal Processing, vol. 119, pp. 182–221, 2019.
  154. Y. Li, L. Du, and H. Liu, “Hierarchical classification of moving vehicles based on empirical mode decomposition of micro-Doppler signatures,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 5, pp. 3001–3013, 2013.
  155. F. Branz, L. Savioli, A. Francesconi, F. Sansone, and C. Menon, “Soft-docking system for capture of irregularly shaped, uncontrolled space objects,” in Sixth European Conference on Space Debris, ESA/ESOC, pp. 1–8, Darmstadt, Germany, 2013.
  156. S. Yu, X. Wang, and T. Zhu, “Maneuver detection methods for space objects based on dynamical model,” Advances in Space Research, vol. 68, no. 1, pp. 71–84, 2021.
  157. F. Kruger, M. Nyolt, K. Yordanova, A. Hein, and T. Kirste, “Computational state space models for activity and intention recognition: a feasibility study,” PLoS ONE, vol. 9, no. 11, p. e109381, 2014.
  158. O. C. Schrempf, D. Albrecht, and U. D. Hanebeck, “Tractable probabilistic models for intention recognition based on expert knowledge,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1435–1440, San Diego, CA, 2007.
  159. L. Paninski, Y. Ahmadian, D. G. Ferreira et al., “A new look at state-space models for neural data,” Journal of Computational Neuroscience, vol. 29, no. 1–2, pp. 107–126, 2010.
  160. T. A. Han and L. M. Pereira, “State-of-the-art of intention recognition and its use in decision making,” AI Communications, vol. 26, no. 2, pp. 237–246, 2013.
  161. W. Yu, R. Alqasemi, R. Dubey, and N. Pernalete, “Telemanipulation assistance based on motion intention recognition,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), pp. 1121–1126, Barcelona, Spain, 2005.
  162. K. Li, X. Wang, Y. Xu, and J. Wang, “Lane changing intention recognition based on speech recognition models,” Transportation Research Part C: Emerging Technologies, vol. 69, pp. 497–514, 2016.
  163. H. Geffner and B. Bonet, “A concise introduction to models and methods for automated planning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 7, no. 2, pp. 1–141, 2013.
  164. E. Blasch, G. Seetharaman, K. Palaniappan, H. Ling, and G. Chen, “Wide-area motion imagery (WAMI) exploitation tools for enhanced situation awareness,” in 2012 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp. 1–8, Washington, DC, 2012.
  165. R. Furfaro, R. Linares, D. Gaylor, M. Jah, and R. Walls, “Resident space object characterization and behavior understanding via machine learning and ontology-based Bayesian networks,” in 2016 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), pp. 1–14, Maui, HI, 2016.
  166. D. Shen, J. Lu, G. Chen et al., “Methods of machine learning for space object pattern classification,” in IEEE National Aerospace and Electronics Conference, pp. 565–572, 2019.
  167. D. Shen, C. Sheaff, M. Guo, E. Blasch, K. Pham, and G. Chen, “Enhanced GANs for satellite behavior discovery,” in Sensors and Systems for Space Applications XIII, p. 114220F, International Society for Optics and Photonics, 2020.
  168. K. A. Tahboub, “Intelligent human-machine interaction based on dynamic Bayesian networks probabilistic intention recognition,” Journal of Intelligent & Robotic Systems, vol. 45, no. 1, pp. 31–52, 2006.
  169. D. Aarno and D. Kragic, “Motion intention recognition in robot assisted applications,” Robotics and Autonomous Systems, vol. 56, no. 8, pp. 692–705, 2008.
  170. E. Davoodi, K. Kianmehr, and M. Afsharchi, “A semantic social network–based expert recommender system,” Applied intelligence, vol. 39, no. 1, pp. 1–13, 2013. View at: Google Scholar
  171. C. Chi, G. Liu, J. Zhang, Z. Pang, and B. Hou, “Design and implementation of a control and monitoring scheme for spacecraft obstacle avoidance,” in 2021 40th Chinese Control Conference (CCC), pp. 3810–3815, Shanghai, China, 2021. View at: Google Scholar
  172. T. S. Kelso, “Analysis of the Iridium 33–Cosmos 2251 collision,” Advances in the Astronautical Sciences, vol. 135, pp. 1099–1112, 2010. View at: Google Scholar
  173. F. A. Marcos, “Accuracy of atmospheric drag models at low satellite altitudes,” Advances in Space Research, vol. 10, no. 3–4, pp. 417–422, 1990. View at: Google Scholar
  174. D. Brouwer, “Solution of the problem of artificial satellite theory without drag,” Astronomical Journal, vol. 64, no. 9, pp. 378–396, 1959. View at: Publisher Site | Google Scholar
  175. Y. Kozai, “The motion of a close earth satellite,” Astronomical Journal, vol. 64, no. 8, pp. 367–377, 1959.
  176. Y. Kozai, “Note on the motion of a close earth satellite with a small eccentricity,” Astronomical Journal, vol. 66, pp. 132–133, 1961.
  177. M. H. Lane, “The development of an artificial satellite theory using a power-law atmospheric density representation,” in 2nd Aerospace Sciences Meeting, AIAA, pp. 1–29, New York, NY, 1965.
  178. M. H. Lane and F. R. Hoots, General perturbations theories derived from the 1965 Lane drag theory, Aerospace Defense Command, Peterson AFB, CO, Office of Astrodynamics, 1979.
  179. F. R. Hoots, P. W. Schumacher, and R. A. Glover, “History of analytical orbit modeling in the U.S. space surveillance system,” Journal of Guidance, Control, and Dynamics, vol. 27, no. 2, pp. 174–185, 2004.
  180. P. J. Message, “On Mr King-Hele's theory of the effect of the Earth's oblateness on the orbit of a close satellite,” Geophysical Journal International, vol. 3, p. 479, 1960.
  181. L. Blitzer, “Secular and periodic motions of the node of an artificial Earth-satellite,” Nature, vol. 186, pp. 874–875, 1960.
  182. N. Z. Miura, Comparison and design of simplified general perturbation models (SGP4) and code for NASA Johnson Space Center, Orbital Debris Program Office, 2009.
  183. D. Wei and C. Zhang, “An accuracy analysis of the SGP4/SDP4 model,” Chinese Astronomy and Astrophysics, vol. 34, no. 1, pp. 69–76, 2010.
  184. S. W. Keckler, W. J. Dally, B. Khailany, M. Garland, and D. Glasco, “GPUs and the future of parallel computing,” IEEE Micro, vol. 31, no. 5, pp. 7–17, 2011.
  185. R. Doyle, R. Some, W. Powell et al., “High performance spaceflight computing (HPSC) next-generation space processor (NGSP): a joint investment of NASA and AFRL,” in Proceedings of the Workshop on Spacecraft Flight Software, pp. 1–19, 2013.
  186. H. Klinkrad, P. Beltrami, S. Hauptmann et al., “The ESA space debris mitigation handbook 2002,” Advances in Space Research, vol. 34, no. 5, pp. 1251–1259, 2004.
  187. B. C. Weeden and P. J. Cefola, “Computer systems and algorithms for space situational awareness: history and future development,” Advances in the Astronautical Sciences, vol. 138, pp. 205–226, 2010.
  188. T. Geng, P. Zhang, W. Wang, and X. Xie, “Comparison of ultra-rapid orbit prediction strategies for GPS, GLONASS, Galileo and BeiDou,” Sensors, vol. 18, no. 2, pp. 477–489, 2018.
  189. P. Yaya, L. Hecker, T. D. Wit, C. L. Fèvre, and S. Bruinsma, “Solar radio proxies for improved satellite orbit prediction,” Journal of Space Weather and Space Climate, vol. 7, p. A35, 2017.
  190. S. Guo, L. S. Shieh, G. Chen, and C. Lin, “Effective chaotic orbit tracker: a prediction-based digital redesign approach,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 11, pp. 1557–1570, 2000.
  191. H. Peng and X. Bai, “Exploring capability of support vector machine for improving satellite orbit prediction accuracy,” Journal of Aerospace Information Systems, vol. 15, no. 6, pp. 366–381, 2018.
  192. H. Peng and X. Bai, “Improving orbit prediction accuracy through supervised machine learning,” Advances in Space Research, vol. 61, no. 10, pp. 2628–2646, 2018.
  193. H. Peng and X. Bai, “Artificial neural network-based machine learning approach to improve orbit prediction accuracy,” Journal of Spacecraft and Rockets, vol. 55, no. 5, pp. 1248–1260, 2018.
  194. H. Peng and X. Bai, “Gaussian processes for improving orbit prediction accuracy,” Acta Astronautica, vol. 161, pp. 44–56, 2019.
  195. H. Peng and X. Bai, “Relative evaluation of three machine learning algorithms on improving orbit prediction accuracy,” Astrodynamics, vol. 3, no. 4, pp. 325–343, 2019.
  196. C. Levit and W. Marshall, “Improved orbit predictions using two-line elements,” Advances in Space Research, vol. 47, no. 7, pp. 1107–1115, 2011.
  197. H. Peng and X. Bai, “Fusion of a machine learning approach and classical orbit predictions,” Acta Astronautica, vol. 184, pp. 222–240, 2021.
  198. J. Hartikainen, M. Seppanen, and S. Sarkka, “State-space inference for non-linear latent force models with application to satellite orbit prediction,” in International Conference on Machine Learning (ICML 2012), pp. 1–8, 2012.
  199. B. Li, J. Huang, Y. Feng, F. Wang, and J. Sang, “A machine learning-based approach for improved orbit predictions of LEO space debris with sparse tracking data from a single station,” IEEE Transactions on Aerospace and Electronic Systems, vol. 56, no. 6, pp. 4253–4268, 2020.
  200. D. A. Vallado, Evaluating Gooding angles-only orbit determination of space-based space surveillance measurements, American Astronomical Society George H. Born Astrodynamics Symposium, 2010.
  201. R. H. Gooding, A new procedure for orbit determination based on three lines of sight (angles only), Technical Report, Defence Research Agency, Farnborough, England, 1993.
  202. F. M. Fadrique, A. Á. Maté, J. J. Grau, J. F. Sánchez, and L. A. García, “Comparison of angles-only initial orbit determination algorithms for space debris cataloguing,” Journal of Aerospace Engineering, Sciences and Applications, vol. 4, no. 1, pp. 39–51, 2012.
  203. D. K. Geller and T. A. Lovell, “Angles-only initial relative orbit determination performance analysis using cylindrical coordinates,” Journal of the Astronautical Sciences, vol. 64, no. 1, pp. 72–96, 2017.
  204. G. Gaias, S. D'Amico, and J.-S. Ardaens, “Angles-only navigation to a non-cooperative satellite using relative orbital elements,” Journal of Guidance, Control, and Dynamics, vol. 37, no. 2, pp. 439–451, 2014.
  205. C. D. Karlgaard and F. H. Lutze, “Second-order relative motion equations,” Journal of Guidance, Control, and Dynamics, vol. 26, no. 1, pp. 41–49, 2003.
  206. J. Sullivan, A. Koenig, and S. D'Amico, “Improved maneuver-free approach to angles-only navigation for space rendezvous,” Advances in the Astronautical Sciences, vol. 158, pp. 1161–1184, 2016.
  207. D. Lubey and H. Patel, “Optical initial orbit determination using polynomial chaos surrogate functions,” in 2017 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), pp. 1–16, Maui, Hawaii, 2017.
  208. J. Grzymisch and W. Fichter, “Analytic optimal observability maneuvers for in-orbit bearings-only rendezvous,” Journal of Guidance, Control, and Dynamics, vol. 37, no. 5, pp. 1658–1664, 2014.
  209. J. Li, H. Li, G. Tang, and Y. Luo, “Research on the strategy of angles-only relative navigation for autonomous rendezvous,” Science China-Technological Sciences, vol. 54, no. 7, pp. 1865–1872, 2011.
  210. J. Grzymisch and W. Fichter, “Observability criteria and unobservable maneuvers for in-orbit bearings-only navigation,” Journal of Guidance, Control, and Dynamics, vol. 37, no. 4, pp. 1250–1259, 2014.
  211. J. Luo, B. Gong, J. Yuan, and Z. Zhang, “Angles-only relative navigation and closed-loop guidance for spacecraft proximity operations,” Acta Astronautica, vol. 128, pp. 91–106, 2016.
  212. T. Chen and S. Xu, “Double line-of-sight measuring relative navigation for spacecraft autonomous rendezvous,” Acta Astronautica, vol. 67, no. 1–2, pp. 122–134, 2010.
  213. S.-G. Kim, J. L. Crassidis, Y. Cheng, A. M. Fosbury, and J. L. Junkins, “Kalman filtering for relative spacecraft attitude and position estimation,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 1, pp. 133–143, 2007.
  214. L. Zhu, S. Wang, and J. Zhu, “Adaptive beamforming design for millimeter-wave line-of-sight MIMO channel,” IEEE Communications Letters, vol. 23, no. 11, pp. 2095–2098, 2019.
  215. D. K. Geller and I. Klein, “Angles-only navigation state observability during orbital proximity operations,” Journal of Guidance, Control, and Dynamics, vol. 37, no. 6, pp. 1976–1983, 2014.
  216. D. K. Geller and A. Perez, “Initial relative orbit determination for close-in proximity operations,” Journal of Guidance, Control, and Dynamics, vol. 38, no. 9, pp. 1833–1842, 2015.
  217. B. Gong, W. Li, S. Li, W. Ma, and L. Zheng, “Angles-only initial relative orbit determination algorithm for non-cooperative spacecraft proximity operations,” Astrodynamics, vol. 2, no. 3, pp. 217–231, 2018.
  218. B. Gong, S. Li, Y. Yang, J. Shi, and W. Li, “Maneuver-free approach to range-only initial relative orbit determination for spacecraft proximity operations,” Acta Astronautica, vol. 163, pp. 87–95, 2019.
  219. B. A. Jones and B.-N. Vo, “A labeled multi-Bernoulli filter for space object tracking,” Advances in the Astronautical Sciences, vol. 155, pp. 1069–1088, 2015.
  220. I. Hussein, K. DeMars, C. Fruh, M. Jah, and R. Erwin, “An AEGIS-FISST algorithm for multiple object tracking in space situational awareness,” in AIAA/AAS Astrodynamics Specialist Conference, pp. 1–20, AIAA, Minneapolis, Minnesota, 2012.
  221. B. Jia, E. Blasch, K. D. Pham, D. Shen, Z. Wang, and G. Chen, “Cooperative space object tracking via multiple space-based visible sensors with communication loss,” in 2014 IEEE Aerospace Conference, pp. 1–8, Big Sky, MT, 2014.
  222. J. Stauch, M. Jah, J. Baldwin, T. Kelecy, and K. A. Hill, “Mutual application of joint probabilistic data association, filtering, and smoothing techniques for robust multiple space object tracking,” in AIAA/AAS Astrodynamics Specialist Conference, pp. 1–21, AIAA, San Diego, CA, 2014.
  223. B. A. Jones, D. S. Bryant, B.-T. Vo, and B.-N. Vo, “Challenges of multi-target tracking for space situational awareness,” in 2015 18th International Conference on Information Fusion (Fusion), pp. 1278–1285, Washington, DC, 2015.
  224. O. Hagen, J. Houssineau, I. Schlangen, E. D. Delande, J. Franco, and D. E. Clark, “Joint estimation of telescope drift and space object tracking,” in 2016 IEEE Aerospace Conference, pp. 1–10, Big Sky, MT, 2016.
  225. Z. Xu and X. Wang, “Space object tracking method based on a snake model,” Chinese Astronomy and Astrophysics, vol. 40, no. 2, pp. 266–276, 2016.
  226. H. Chen, J. Wang, C. Wang, J. Shan, and M. Xin, “Composite weighted average consensus filtering for space object tracking,” Acta Astronautica, vol. 168, pp. 69–79, 2020.
  227. T. Kelecy, D. Hall, K. Hamada, and M. D. Stocker, “Satellite maneuver detection using two-line element data,” in Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, pp. 1–16, 2007.
  228. J. Huang, W. Hu, and L. Zhang, “Maneuver detection of space object for space surveillance,” in Proceedings of the 6th European Conference on Space Debris, pp. 1–8, 2013.
  229. V. Schaus, E. M. Alessi, G. Schettino, A. Rossi, and E. Stoll, “On the practical exploitation of perturbative effects in low Earth orbit for space debris mitigation,” Advances in Space Research, vol. 63, no. 7, pp. 1979–1991, 2019.
  230. T. G. Roberts and R. Linares, “Satellite repositioning maneuver detection in geosynchronous orbit using two-line element (TLE) data,” in 71st International Astronautical Congress (IAC), pp. 1–9, Dubai, United Arab Emirates, 2020.
  231. L. Liu, J. Cao, and Y. Liu, “WFMHT method of orbit maneuver detection based on space-based bearing-only measurement,” Journal of Northwestern Polytechnical University, vol. 36, no. 6, pp. 1185–1192, 2018.
  232. T. Kelecy and M. Jah, “Detection and orbit determination of a satellite executing low thrust maneuvers,” Acta Astronautica, vol. 66, no. 5–6, pp. 798–809, 2010.
  233. S. Lemmens and H. Krag, “Two-line-elements-based maneuver detection methods for satellites in low earth orbit,” Journal of Guidance, Control, and Dynamics, vol. 37, no. 3, pp. 860–868, 2014.
  234. B. Jia, E. Blasch, K. D. Pham et al., “Space object tracking and maneuver detection via interacting multiple model cubature Kalman filters,” in 2015 IEEE Aerospace Conference, pp. 1–8, Big Sky, MT, 2015.
  235. G. M. Goff, J. T. Black, and J. A. Beck, “Tracking maneuvering spacecraft with filter-through approaches using interacting multiple models,” Acta Astronautica, vol. 114, pp. 152–163, 2015.
  236. D. A. Vallado, B. B. Virgili, and T. Flohrer, “Improved SSA through orbit determination of two-line element sets,” in Proceedings of the Sixth European Conference on Space Debris, pp. 345–351, Darmstadt, Germany, 2013.
  237. Y. Liu, H. Zhao, C. Liu, J. Cao, and J. Wang, “Maneuver detection and tracking of a space target based on a joint filter model,” Journal of Guidance, Control, and Dynamics, vol. 23, no. 3, pp. 1441–1453, 2021.
  238. N. Singh, J. T. Horwood, and A. B. Poore, “Space object maneuver detection via a joint optimal control and multiple hypothesis tracking approach,” Advances in the Astronautical Sciences, vol. 143, pp. 843–862, 2012.
  239. Y. Wang, X. Bai, H. Peng et al., “Gaussian-Binary classification for resident space object maneuver detection,” Acta Astronautica, vol. 187, pp. 438–446, 2021.
  240. M. Li, X. Wu, and X. Liu, “An improved EMD method for time-frequency feature extraction of telemetry vibration signal based on multi-scale median filtering,” Circuits, Systems, and Signal Processing, vol. 34, no. 3, pp. 815–830, 2015.
  241. W. Dai, X. Ding, J. Zhu, Y. Chen, and Z. Li, “EMD filter method and its application in GPS multipath,” Acta Geodaetica et Cartographica Sinica, vol. 35, no. 4, pp. 321–327, 2006.
  242. R. Abay, S. Gehly, S. Balage, M. Brown, and R. Boyce, “Maneuver detection of space objects using generative adversarial networks,” in 2018 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), pp. 1–8, Maui, Hawaii, 2018.
  243. K. Wang and C. Thrampoulidis, “Benign overfitting in binary classification of Gaussian mixtures,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4030–4034, IEEE, Toronto, ON, Canada, 2021.
  244. G. Guel and M. Bassler, “Fast multilevel quantization for distributed detection based on Gaussian approximation,” in European Signal Processing Conference, pp. 2433–2437, 2021.
  245. R. Wang, W. Liu, R. Yan, L. Shi, and S. Liu, “Refined study of space debris collision warning techniques for LEO satellites,” Journal of Space Safety Engineering, vol. 7, no. 3, pp. 262–267, 2020.
  246. D. W. Hughes, “Rosetta: the remarkable story of Europe's comet explorer,” Observatory, vol. 141, no. 1285, pp. 308–309, 2021.
  247. I. D. Kovalenko, N. A. Eismont, S. S. Limaye, L. V. Zasova, D. A. Gorinov, and A. V. Simonov, “Micro-spacecraft in Sun–Venus Lagrange point orbit for the Venera-D mission,” Advances in Space Research, vol. 66, no. 1, pp. 21–28, 2020.
  248. A. K. Mainzer, P. Eisenhardt, E. L. Wright et al., “Preliminary design of the wide-field infrared survey explorer (WISE),” in Proceedings of SPIE, the International Society for Optical Engineering, vol. 5899, San Diego, CA, 2005.
  249. M. Veto, S. Antoniak, M. Dean et al., “Ball aerospace spacecraft and systems for the near-Earth object surveyor mission,” in 2021 IEEE Aerospace Conference (50100), pp. 1–14, IEEE, Big Sky, MT, 2021.
  250. D. Laurin, A. Hildebrand, R. Cardinal, W. Harvey, and S. Tafazoli, “NEOSSat: a Canadian small space telescope for near Earth asteroid detection,” in Space Telescopes and Instrumentation 2008: Optical, Infrared, and Millimeter, International Society for Optics and Photonics, vol. 7010, p. 701013, Marseille, France, 2008.
  251. V. Abbasi, S. Thorsteinson, D. Balam, J. Rowe, D. Laurin, and L. Scott, “The NEOSSat experience: 5 years in the life of Canada’s space surveillance telescope,” in 1st NEO and Debris Detection Conference, pp. 1–16, Darmstadt, Germany, 2019.
  252. M. W. Werner, T. Roellig, F. Low, G. H. Rieke, M. Rieke, and W. Hoffmann, “The Spitzer space telescope mission,” Advances in Space Research, vol. 36, no. 6, pp. 1048–1049, 2005.
  253. N. Scoville, R. G. Abraham, H. Aussel et al., “COSMOS: Hubble Space Telescope observations,” Astrophysical Journal Supplement Series, vol. 172, no. 1, pp. 38–45, 2007.
  254. O. Vaduvescu, M. Birlan, A. Tudorica et al., “EURONEAR: recovery, follow-up and discovery of NEAs and MBAs using large field 1-2 m telescopes,” Planetary and Space Science, vol. 59, no. 13, pp. 1632–1646, 2011.
  255. N. Myhrvold, “Comparing NEO search telescopes,” Publications of the Astronomical Society of the Pacific, vol. 128, no. 962, p. 045004, 2016.
  256. G. D. Bredvik and J. E. Strub, “Determination of acceptable launch windows for satellite collision avoidance,” Advances in the Astronautical Sciences, vol. 76, pp. 345–356, 1992.
  257. K. Chan, “Collision probability analysis for Earth orbiting satellites,” Advances in the Astronautical Sciences, vol. 96, pp. 1033–1048, 1997.
  258. J. L. Foster and H. S. Estes, A parametric analysis of orbital debris collision probability and maneuver rate for space vehicles, NASA Johnson Space Center, 1992.
  259. R. P. Patera, “Conventional form of the collision probability integral for arbitrary space vehicle shape,” in AIAA/AAS Astrodynamics Specialist Conference and Exhibit, AIAA Paper 2004-5218, Providence, Rhode Island, 2004.
  260. S. Alfano, “A numerical implementation of spherical object collision probability,” Journal of the Astronautical Sciences, vol. 53, no. 1, pp. 103–109, 2005.
  261. R. P. Patera, “Space vehicle conflict-avoidance analysis,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 2, pp. 492–498, 2007.
  262. K. Chan, “Spacecraft maneuvers to mitigate potential collision threats,” in AIAA/AAS Astrodynamics Specialist Conference and Exhibit, pp. 1–11, AIAA, Monterey, California, 2002.
  263. J. B. Mueller, P. R. Griesemer, and S. J. Thomas, “Avoidance maneuver planning incorporating station-keeping constraints and automatic relaxation,” Journal of Aerospace Information Systems, vol. 10, no. 6, pp. 306–322, 2013.
  264. J. B. Mueller, “Onboard planning of collision avoidance maneuvers using robust optimization,” in AIAA Infotech@Aerospace Conference, pp. 1–17, AIAA, Seattle, Washington, 2009.
  265. B. Kelly and S. De Picciotto, “Probability based optimal collision avoidance maneuvers,” in Space 2005, AIAA, pp. 1–13, Long Beach, California, 2005.
  266. Y. Wang, Y. Bai, J. Xing, G. Radice, Q. Ni, and X. Chen, “Equal-collision-probability-curve method for safe spacecraft close-range proximity maneuvers,” Advances in Space Research, vol. 62, no. 9, pp. 2599–2619, 2018.
  267. A. Krasuski and M. Meina, “Correcting inertial dead reckoning location using collision avoidance velocity-based map matching,” Applied Sciences, vol. 8, no. 10, p. 1830, 2018.
  268. K. Lee, C. Park, and S. Y. Park, “Near-optimal continuous control for spacecraft collision avoidance maneuvers via generating functions,” Aerospace Science and Technology, vol. 62, pp. 65–74, 2017.
  269. J. Su, Z. Sheng, L. Xie, G. Li, and A. X. Liu, “Fast splitting-based tag identification algorithm for anti-collision in UHF RFID system,” IEEE Transactions on Communications, vol. 67, no. 3, pp. 2527–2538, 2018.
  270. D. D. Murakami, S. Nag, M. Lifson, and P. H. Kopardekar, “Space traffic management with a NASA UAS traffic management (UTM) inspired architecture,” in AIAA Scitech 2019 Forum, 2019.
  271. J. L. Gonzalo, C. Colombo, and P. Di Lizia, “Analytical framework for space debris collision avoidance maneuver design,” Journal of Guidance, Control, and Dynamics, vol. 44, no. 3, pp. 469–487, 2021.
  272. Y. Matsushita, Y. Yoshimura, T. Hanada, Y. Itaya, and T. Fukushima, “Risk assessment of a large constellation of satellites in low-Earth orbit,” Transactions of the Japan Society for Aeronautical and Space Sciences, Aerospace Technology Japan, vol. 20, pp. 10–15, 2022.
  273. P. J. Hoen, K. Tuyls, L. Panait, S. Luke, and J. A. La Poutre, “An overview of cooperative and competitive multiagent learning,” in Learning and Adaption in Multi-Agent Systems: First International Workshop, LAMAS 2005, vol. 3898, pp. 1–46, Springer, Berlin, Heidelberg, 2005.
  274. B. Liu, Y. Chen, E. Blasch, K. Pham, D. Shen, and G. Chen, “A holistic cloud-enabled robotics system for real-time video tracking application,” Future Information Technology, Lecture Notes in Electrical Engineering, vol. 276, pp. 455–468, 2014.
  275. V. Dawarka and G. Bekaroo, “Building and evaluating cloud robotic systems: a systematic review,” Robotics and Computer-Integrated Manufacturing, vol. 73, p. 102240, 2022.
  276. R. Xu, Y. Chen, E. Blasch, and G. Chen, “Exploration of blockchain-enabled decentralized capability-based access control strategy for space situation awareness,” Optical Engineering, vol. 58, no. 4, p. 041609, 2019.
  277. R. Clark and R. Lee, “Parallel processing for orbital maneuver detection,” Advances in Space Research, vol. 66, no. 2, pp. 444–449, 2020.
  278. E. Blasch, M. Pugh, C. Sheaff, J. Raquepas, and P. Rocci, “Big data for space situation awareness,” Proceedings of SPIE, vol. 10196, p. 1019607, 2017.
  279. W. Suttle, Z. Yang, K. Zhang, Z. Wang, T. Basar, and J. Liu, “A multi-agent off-policy actor-critic algorithm for distributed reinforcement learning,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 1549–1554, 2020.
  280. A. D. Biria and B. G. Marchand, “Constellation design for space-based space situational awareness applications: an analytical approach,” Journal of Spacecraft and Rockets, vol. 51, no. 2, pp. 545–562, 2014.
  281. L. Rider, “Design of low to medium altitude surveillance systems providing continuous multiple above-the-horizon viewing,” Optical Engineering, vol. 28, no. 1, pp. 25–29, 1989.
  282. F. Vatalaro, G. E. Corazza, C. Caini, and C. Ferrarelli, “Analysis of LEO, MEO, and GEO global mobile satellite systems in the presence of interference and fading,” IEEE Journal on Selected Areas in Communications, vol. 13, no. 2, pp. 291–300, 1995.
  283. B. G. Marchand and C. J. Kobel, “Above the horizon satellite coverage with dual-altitude band constraints,” Journal of Spacecraft and Rockets, vol. 46, no. 4, pp. 845–857, 2009.
  284. Z. Ye and Q. Zhou, “Performance evaluation indicators of space dynamic networks under broadcast mechanism,” Space: Science & Technology, vol. 2021, article 9826517, 11 pages, 2021.
  285. P. J. Blount, “Space traffic coordination: developing a framework for safety and security in satellite operations,” Space: Science & Technology, vol. 2021, article 9830379, 10 pages, 2021.
  286. S. DiPaola and Ö. N. Yalçin, “A multi-layer artificial intelligence and sensing based affective conversational embodied agent,” in 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 91–92, Cambridge, UK, 2019.

Copyright © 2022 Beichao Wang et al. Exclusive Licensee Beijing Institute of Technology Press. Distributed under a Creative Commons Attribution License (CC BY 4.0).
