
Research Article | Open Access

Volume 2022 |Article ID 9763198 | https://doi.org/10.34133/2022/9763198

Wenshan Zhu, Jinzhen Mu, Changbao Shao, Jiaqian Hu, Beichao Wang, Zhongkai Wen, Fei Han, Shuang Li, "System Design for Pose Determination of Spacecraft Using Time-of-Flight Sensors", Space: Science & Technology, vol. 2022, Article ID 9763198, 18 pages, 2022. https://doi.org/10.34133/2022/9763198

System Design for Pose Determination of Spacecraft Using Time-of-Flight Sensors

Received 12 May 2022
Accepted 02 Aug 2022
Published 05 Sep 2022

Abstract

The pose determination between nanosatellites and a cooperative spacecraft is essential for swarm in-orbit services. Time-of-flight (ToF) sensors are among the most promising sensors for achieving such tasks. This paper presents an end-to-end assessment of how these sensors are used for pose estimation. First, an embedded system was designed based on a ToF camera with lasers as the driving light source. Gray and depth images were collected to detect and match the cooperative spacecraft in real time, obtaining the pose information. A threshold-based segmentation was proposed to find a small set of pixels belonging to the reflector markers; operating only on this defined active pixel set reduces the computational load. Then, morphological detection combined with edge-following-based ellipse detection extracted the centroid coordinates of the circular markers, with the center-of-heart rate calculated as the recognition condition. Next, marker matching was completed using a deterministic annealing algorithm, producing two sets of 3-D coordinates. A singular value decomposition (SVD) algorithm estimated the relative pose between the nanosatellite and the spacecraft. In the experiments, the pose calculated by the ToF camera reached an accuracy of 0.13 degrees and 2 mm. The system accurately identified the markers and determined the pose, verifying the feasibility of the ToF camera for rendezvous and docking.

1. Introduction

Spacecraft pose determination is aimed at acquiring the relative pose of an active spacecraft, such as a nanosatellite, with respect to a target [1]. The target pose is identified by six parameters representing the translation and rotation of the relative motion [2]. It is fundamental information in the filtering scheme of relative navigation, determining the relative state between the chaser and target. Spacecraft pose determination is essential in many mission scenarios, such as nanosatellite formation flying, active debris removal, and swarm services [3–5]. It contributes to executing further relative navigation maneuvers autonomously in these scenarios.

At present, different sensors and markers are explored for spacecraft pose estimation [4, 6]. Electro-optical (EO) sensors are the best option for implementing pose determination in close proximity. Some of these sensors are active, such as light detection and ranging (LIDAR) systems, while others are passive, like monocular and stereo cameras [7, 8]. They can estimate the 6-DOF (degrees of freedom) pose of a chaser orbiting in close proximity. When using these sensors for cooperative pose determination, the space target is equipped with artificial markers that are easily detected and recognized in the acquired datasets. These markers can also be active or passive, such as light-emitting diodes (LEDs) and regular geometry reflector markers (RGRMs). They are located on the target surfaces according to specific known patterns. Although many different sensors and markers can be used for spacecraft pose estimation, this paper investigates the specific problem of using ToF sensors for pose estimation with RGRMs.

ToF cameras, with their anti-interference capability, automatic illumination, and low power consumption, are increasingly applicable to cooperative target pose estimation [3, 9, 10]. The typical ToF camera consists of two primary components: (1) a laser with accompanying transmit optics and (2) the receive optics and detector [11, 12]. In this type of sensor, the laser generates a pulse covering the entire field of view (FOV). This laser pulse is then reflected by objects within the FOV of the transmit optics. Some of the reflected light returns to the detectors, which are sensitive only to light at the laser's wavelength. Since the detectors in a ToF camera form an array of many small detectors (the pixels on the focal plane), two pieces of information, laser return intensity and laser time of flight, are obtained at each pixel. Therefore, a ToF camera generates two images per frame in one shot [3, 13–15]. The laser return intensity image resembles a photograph of the same scene taken by a black-and-white camera with a flashbulb in a dark space. The second image contains the range at each pixel, creating 3-D point clouds of the observed target. Such capability can reduce the distortion of sensor data caused by fast relative motion between the target and camera. Moreover, the ToF camera has the advantages of a high frame rate, easy calibration, a small and simple structure, and high integration [16]. These qualities are conducive to its practical application in space missions.

Therefore, this paper developed a pose determination approach for cooperative spacecraft using a ToF camera. Cooperative pose determination means that a nanosatellite carrying a ToF camera as the chaser performs relative navigation to a target spacecraft equipped with cooperative reflector markers. The reflectors are placed at known positions on the target [17, 18]. Therefore, in a ToF pose determination system, the reflectors must be found in the ToF imagery and then matched with a catalog of reflectors stored onboard the nanosatellite; this is the reflector identification problem. Once each reflector is identified, pose estimation between the sensor and reflector catalog frames becomes available. The ToF camera simultaneously implements position and attitude measurements using multiple reflectors on the known target in cooperative pose determination.

Furthermore, this paper focused only on the close-range portion (<15 m) of cooperative pose determination, where multiple reflectors are simultaneously visible. For executing an instantaneous pose measurement from the ToF data, a three-step process was presented: (1) extract candidate reflectors from ToF imagery, (2) match the candidate reflectors to a known catalog, and (3) estimate the transformation best mapping the catalog reflector positions to the observed reflector positions. The main contributions of this work are as follows:
(1) An embedded measurement system was developed for cooperative spacecraft pose determination. It consists of a ToF camera, a pose determination algorithm, a target with eight reflectors, a turntable, and software.
(2) Using the characteristic that the reflecting intensity of the reflectors is higher than that of the background, a threshold-based segmentation was proposed to find a small set of pixels that are likely to belong to a reflector. The computational load is greatly reduced by operating only on this subset of the raw image.
(3) Circular reflectors were used in this paper, and morphological and ellipse detections were combined to extract the centroid coordinate of each reflector. Then, center-of-heart rates were computed as the reflector recognition condition.

2. Measurement System Composition

2.1. Hardware System

The hardware system is composed of an embedded ToF module, a power supply system, a cooperative target, and an upper computer, as shown in Figure 1(a). The background of the target is designed as black, and the reflectors are designed as white (Figure 1(c)). The reflectors of the target are divided into two groups according to their sizes, and each group is composed of four circular markers arranged in an "L" configuration. Specifically, the diameter of the markers numbered 1 to 4 is 35 mm, and that of the markers numbered 5 to 8 is 80 mm. The height of the markers numbered 1, 3, 4, and 7 relative to the background is 100 mm, and that of the markers numbered 2, 5, 6, and 8 is 150 mm. This information is shown in Figure 1(c).

The designed ToF camera uses eight 850 nm lasers as the driving light source (driving light source model: SFH 4715S, made by OSRAM) to detect the target at a longer distance. The ToF camera provides grayscale and depth modes. In gray mode, the ToF works the same way as an ordinary camera, receiving passive light and forming a gray map with a resolution of 320 × 240. In depth mode, the ToF uses the active light source generated by the laser to obtain the depth data of the target directly and arranges the data according to the image matrix to form a depth map with a resolution of 320 × 240. The hardware composition of the designed ToF camera is shown in Figure 1(b). A 24 V regulated power supply powers the ToF module, and an ARM processor running the Linux operating system is embedded in the module. Relying on its capability of ambient light suppression under high-light-intensity environments (the high-light environment is defined as 10000 lux and the normal-light environment as 1000 lux), the EPC660 is chosen as the ToF chip (chip parameters: 320 × 240 pixels, a pixel size of 20 μm × 20 μm, a chip size of 6.4 mm × 4.8 mm, and a diagonal length of 8 mm). The lighting source is a VCSEL (vertical-cavity surface-emitting laser), with high laser power and high stability. The lens focal length is designed as 6.5 mm, the horizontal FOV as 52.4 degrees, and the vertical FOV as 40.6 degrees (the detailed design process is given later). After receiving instructions from the upper computer, images are transmitted to the computer through the network cable.

The overall weight of the ToF camera is 1 kg. The workflow of the ToF is as follows: the 850 nm active light source is emitted at the set frequency; the 850 nm light reflected by the target is received and collected; the received reflected light energy is analog-to-digital converted; and the time difference between the emitted and reflected light is derived from the phase difference to obtain the distance to the target.
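For reference, the range in the last step follows the standard phase-shift relation for continuous-wave ToF sensors. A worked form, with $f_{\mathrm{mod}}$ denoting the modulation frequency (a symbol introduced here, not stated above), is

$$d = \frac{c}{2}\cdot\frac{\Delta\varphi}{2\pi f_{\mathrm{mod}}} = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}},$$

where $c$ is the speed of light and $\Delta\varphi$ is the measured phase difference between the emitted and received light.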

Since the operating range of the ToF camera in this paper is set within 15 m, the optical aperture size and FOV of the ToF optical system must be designed according to the distance of the relative moving target, and the FOV must be sized according to the area of the target identification points. The relative movement range of the platform is determined to ensure that the identification points of the cooperative target always remain within the FOV over the measuring range. The design indicators of the optical system are as follows:
(1) The wavelength is 850 nm.
(2) The aperture is 20 mm.
(3) The light-source divergence angle is 90°.

The transmitting optical system of the ToF projects the laser pulse, and the receiving (imaging) optical system forms the image. Let $L$ denote the distance from the measured target to the camera; $A_t$ and $A_r$ the areas covered by the transmitting and receiving systems, respectively; $d$ the chip size; $v$ the distance from the chip to the lens (image distance); and $f$ the focal length of the lens. According to lens imaging theory, the image distance can be calculated as

$$\frac{1}{f} = \frac{1}{L} + \frac{1}{v} \quad\Longrightarrow\quad v = \frac{fL}{L - f}.$$

As for the measuring ranges, the near field is 0.27 m~15 m and the far field is 12 m~100 m; imaging lenses are designed separately on this basis. When determining the focal length of the lens, the change of the image size $d_i$ of a target feature of physical size $D$ with the distance $L$ is calculated to obtain the number of target pixels on the imaging chip, as follows:

$$d_i = \frac{f D}{L}.$$

During the calculation, the image size is split into components $d_u$ and $d_v$ corresponding to the long and short sides of the chip, respectively, with $d_u = f D_u / L$ and $d_v = f D_v / L$. Then, for a pixel pitch $p$, the total number of pixels in a circular marker area of image diameter $d_i$ is

$$N_c = \pi\left(\frac{d_i}{2p}\right)^2.$$

The total number of pixels in the entire rectangular target is

$$N_r = \frac{d_u}{p}\cdot\frac{d_v}{p}.$$

The ToF requires that the characteristic area of the target be clearly imaged within the range of 0.27 m~15 m, in order to obtain the distance and angle information of the target, with a certain margin allowed. The resolution of the sensor chip is 320 × 240, i.e., 76,800 pixels in total. The size of a single pixel is 20 μm × 20 μm, giving a chip size of 6.4 mm × 4.8 mm. Then, the cases of the target covering the long and the short side of the imaging chip are considered:
(i) covering the long side, the focal length of the lens is 7 mm;
(ii) covering the short side, the focal length of the lens is 5.3 mm.

Thus, the ToF receiving optical system uses a lens with a focal length of 6.5 mm, between the two cases. At this focal length, the horizontal FOV is 52.4 degrees and the vertical FOV is 40.6 degrees.
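As a quick cross-check of these numbers, the following minimal Python sketch computes the full FOV of a pinhole lens from the focal length and the chip dimensions inferred above (6.4 mm × 4.8 mm, from the 320 × 240 resolution and the 20 μm pixel pitch); the constants are drawn from that inference, not quoted specifications.

```python
import math

# Chip dimensions inferred from 320x240 pixels at a 20 um pitch
# (consistent with the stated 8 mm diagonal); focal length as chosen above.
CHIP_W_MM, CHIP_H_MM = 320 * 0.020, 240 * 0.020   # 6.4 mm x 4.8 mm
FOCAL_MM = 6.5

def fov_deg(chip_dim_mm: float, focal_mm: float) -> float:
    """Full field of view (degrees) across one chip axis of a pinhole lens."""
    return 2.0 * math.degrees(math.atan(chip_dim_mm / (2.0 * focal_mm)))

print(f"horizontal FOV: {fov_deg(CHIP_W_MM, FOCAL_MM):.1f} deg")  # ~52.4
print(f"vertical FOV:   {fov_deg(CHIP_H_MM, FOCAL_MM):.1f} deg")  # ~40.5 (paper quotes 40.6)
```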

2.2. Software System

The ToF system software consists of embedded server software and upper-computer client application software. The embedded server software runs on the Linux operating system of the embedded ToF module. It completes hardware initialization, processes the raw depth and grayscale data, and receives and responds to client requests, providing hardware working-parameter settings as well as depth and grayscale image data transfer and output functions. The embedded server software runs automatically when the device is powered on. The client application software is implemented in C++ with Qt 5.9.4 and includes the drivers for gray and depth image acquisition, processing, target detection, and pose solutions, as shown in Figure 2. To debug the pose measurement algorithm, the depth map, grayscale map, and pose results are displayed directly on the interface of the upper computer. Meanwhile, the software visualization window can display the depth value corresponding to each pixel.

In the working mode, the ToF camera transmits the grayscale and depth maps to the upper computer alternately. After obtaining the image data, the pose determination algorithm outputs the pose in three steps (a top-level sketch follows this list):
(1) Reflector detection: segment the reflectors from the gray image and extract the centroid coordinates of the target. Meanwhile, the depth information of each centroid is extracted from the depth image.
(2) Reflector matching: assign the observed candidate reflectors to catalog reflectors through deterministic annealing in a Markov random field (MRF).
(3) Pose calculation: input the 3-D coordinates of the matched reflectors, and then use SVD to calculate the rotation matrix $\mathbf{R}$ and position vector $\mathbf{t}$.
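A hypothetical top-level sketch of this loop is given below; every helper name is a placeholder for the corresponding stage described in the following sections, not an actual API of the system.

```python
# Hypothetical orchestration of the three-step pipeline; the stage functions
# (detect, backproject, match, solve_pose) are injected by the caller and
# stand in for the algorithms detailed in Sections 3-5.
def pose_pipeline(gray_img, depth_img, catalog_points,
                  detect, backproject, match, solve_pose):
    centroids, depths = detect(gray_img, depth_img)       # step 1: reflector detection
    observed = backproject(centroids, depths)             # pixels + depth -> 3-D points
    matches = match(observed, catalog_points)             # step 2: deterministic annealing
    return solve_pose(observed, catalog_points, matches)  # step 3: SVD pose (R, t)
```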

2.3. ToF Measurement Model

A ToF camera with frame $\mathcal{F}_T$ observes a reflector with a known position in the target catalog with frame $\mathcal{F}_C$, as shown in Figure 3. Such a measurement model is simplified as [3]

$$\tilde{\mathbf{x}}_i = \mathbf{R}\,\mathbf{p}_i + \mathbf{t} + \boldsymbol{\epsilon}_i,$$

where $\tilde{\mathbf{x}}_i$ is a reflector position in the ToF frame $\mathcal{F}_T$, $\mathbf{p}_i$ is the position of the corresponding reflector in the target catalog frame $\mathcal{F}_C$ (i.e., as found in the reflector catalog), $\mathbf{R}$ describes the rotation from the target catalog frame $\mathcal{F}_C$ to the ToF frame $\mathcal{F}_T$, $\mathbf{t}$ is the position of the origin of the target frame with respect to the ToF frame $\mathcal{F}_T$, and $\boldsymbol{\epsilon}_i$ is the measurement error. Therefore, the pose can be described by $\mathbf{R}$ and $\mathbf{t}$.

The laser pulse of the ToF reflected off a reflector follows a line-of-sight direction $\mathbf{e}$, projects onto the ToF focal plane, and illuminates some of the pixels. The pixel coordinates of the point illuminated by this laser return on the image plane are defined as $(u, v)$. The image plane is an imaginary plane parallel to the ToF focal plane, placed in front of the optical center of the ToF receive optics. It creates an image similar to that seen by an observer along the ToF boresight direction at the lens (Figure 3). Pixel columns are defined by integer values of $u$, and pixel rows are defined by integer values of $v$. Therefore, the point $(u, v)$ is related to the measured line-of-sight direction by a pinhole camera model. Any point $(u, v)$ is converted to an equivalent horizontal angle $\alpha$ and vertical angle $\beta$ by

$$\tan\alpha = \frac{u - u_0}{k_u f}, \qquad \tan\beta = \frac{v - v_0}{k_v f},$$

where $(u_0, v_0)$ are the coordinates of the principal point, $k_u$ and $k_v$ are the $u$- and $v$-axis scale factors in units of pixel/length, and $f$ is the focal length of the receive optics. Observing the ToF images at the pixel coordinates $(u, v)$ yields the corresponding measured depth $\rho$. Therefore, a measured 3-D point is expressed in the ToF frame $\mathcal{F}_T$ as

$$\tilde{\mathbf{x}} = \frac{\rho}{\sqrt{\tan^2\alpha + \tan^2\beta + 1}} \begin{bmatrix} \tan\alpha \\ \tan\beta \\ 1 \end{bmatrix}.$$
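A minimal back-projection sketch of these relations, assuming the intrinsic parameters $(u_0, v_0, f, k_u, k_v)$ are available from calibration:

```python
import numpy as np

def backproject(u, v, depth, u0, v0, f, ku, kv):
    """Convert a pixel (u, v) and its measured depth into a 3-D point in the
    ToF frame, using the pinhole relations above (boresight along +z)."""
    tan_a = (u - u0) / (f * ku)          # tangent of the horizontal angle
    tan_b = (v - v0) / (f * kv)          # tangent of the vertical angle
    e = np.array([tan_a, tan_b, 1.0])
    e /= np.linalg.norm(e)               # unit line-of-sight vector
    return depth * e                     # depth measured along the line of sight
```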

The results of Equation (8) can be used to construct an analytic expression for the measurement covariance in terms of readily available ToF measurement statistics. Define the bearing by the unit vector $\mathbf{e}$:

$$\mathbf{e} = \frac{1}{\sqrt{\tan^2\alpha + \tan^2\beta + 1}} \begin{bmatrix} \tan\alpha \\ \tan\beta \\ 1 \end{bmatrix}.$$

Therefore, the measurement is given by

$$\tilde{\mathbf{x}} = \left(\rho + \epsilon_\rho\right)\left(\mathbf{e} + \boldsymbol{\epsilon}_e\right),$$

where $\epsilon_\rho$ and $\boldsymbol{\epsilon}_e$ are the depth and line-of-sight measurement errors, respectively. How to eliminate these errors will be introduced in Section 5.2.

3. Cooperative Reflector Extraction

3.1. Image Preprocessing

Reflector markers on the cooperative targets are made of special reflective materials, so their reflecting intensity is higher than that of the background [3, 17–19]. The gray threshold and the filtered intensity threshold are used to segment the gray image twice, and the region where the reflector markers are located is defined as the active pixel set.

3.1.1. Raw Gray Threshold

The first component is to find a set of pixels that are likely to belong to a reflector marker. The computational resources required of the nanosatellite can be greatly reduced by operating on only a small subset of the original image, so this useful set is defined as the active pixel set $\mathbb{A}$. It follows that the member pixels of $\mathbb{A}$ must be selected such that no useful pixels are missed. The following process is designed to find $\mathbb{A}$, using the characteristic that the reflectors are made of special reflective materials so that their reflecting intensity is higher than that of the background.

First, apply a threshold operation to the raw gray image, keeping all pixel positions whose gray value exceeds the raw gray threshold $T_g$, as shown in Figure 4(a). In our experiments, $T_g$ is set as a fixed fraction of the maximum grayscale value. Note that the choice of the threshold is based on empirical values according to the light intensity environment and the reflecting intensity of the reflector markers. Each retained pixel is stored in a data structure containing the image plane coordinates as well as the range and intensity value of that pixel. The size of $\mathbb{A}$ is limited to a maximum number of pixels. If the thresholding operation yields more pixels than $\mathbb{A}$ allows, a sort is performed, and pixels are discarded from the dimmest to the brightest until the number of pixels is reduced to the maximum allowable in $\mathbb{A}$, as shown in Figure 4(b).
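A sketch of this two-part operation (threshold, then keep at most the N brightest pixels) is given below; `t_gray` and `n_max` stand in for the empirically chosen threshold and set-size limit, whose values are not given in the text.

```python
import numpy as np

def active_pixel_set(gray, t_gray, n_max):
    """Keep pixels brighter than the raw gray threshold; if too many survive,
    discard from the dimmest to the brightest until at most n_max remain."""
    rows, cols = np.nonzero(gray > t_gray)
    vals = gray[rows, cols]
    if vals.size > n_max:
        keep = np.argsort(vals)[::-1][:n_max]   # indices of the brightest pixels
        rows, cols, vals = rows[keep], cols[keep], vals[keep]
    return rows, cols, vals
```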

3.1.2. Filtered Intensity Threshold

The Laplacian operator and the two-dimensional Gaussian function with standard deviation $\sigma$ are defined, respectively, as

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}, \qquad G_\sigma(x, y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right).$$

Substituting Equation (12) into Equation (11) obtains the following expression:

$$\mathrm{LoG}(x, y) = \nabla^2 G_\sigma(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right),$$

where $\mathrm{LoG}$ refers to the Laplacian-of-Gaussian filter. Then, a negative Laplacian-of-Gaussian (nLoG) filter is applied to the gray image, as

$$F(u, v) = -\left(\mathrm{LoG} * I\right)(u, v),$$

where $I$ is the gray image. The convolution is executed only for the pixels in $\mathbb{A}$, which constitutes a significant computational saving. The active pixel set contains bright blobs, which are well suited to segmentation with the nLoG operator: the nLoG response of a bright blob is large because of the center-surround characteristic of the operator. The pixels are sorted in decreasing order, and a second threshold is applied based on the filtered intensity (keeping all pixel locations whose filtered intensity exceeds the filtered intensity threshold $T_f$, chosen empirically in our experiments), as shown in Figure 4(c).
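A minimal sketch of the nLoG response using SciPy follows; in the flight code the convolution would be restricted to the active pixel set rather than run over the full frame, and the threshold `t_filt` is an assumed placeholder.

```python
import numpy as np
from scipy import ndimage

def nlog_response(gray, sigma):
    """Negative Laplacian-of-Gaussian response: bright, roughly circular blobs
    produce strong positive peaks under the center-surround kernel."""
    return -ndimage.gaussian_laplace(gray.astype(float), sigma)

# Second threshold on the filtered intensity (t_filt chosen empirically):
# keep = nlog_response(gray, sigma=3.0) > t_filt
```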

For the depth image, the range is corrected by a recursive time-domain median method, taking the median of the ranges at pixel coordinates $(u, v)$ over the current and preceding depth frames as the corrected range $\bar{\rho}_k(u, v)$, where $\rho_k(u, v)$ represents the range of the $k$-th depth image at pixel coordinates $(u, v)$. After image preprocessing, the active pixel set contains the image plane coordinates, raw intensity, filtered intensity, and depth of each active pixel. In a final quicksort step, the pixels in the set are ordered by decreasing filtered intensity, from the brightest to the dimmest. By construction, the intensity of all pixels not contained in $\mathbb{A}$ is assumed to be zero.

3.2. Finding Candidate Reflector

The second component is to find the candidate reflector blobs in a single preprocessed ToF image. The contiguous blobs of the preprocessed image are searched in $\mathbb{A}$ using the following three criteria:
(1) The maximum grayscale in the blob is higher than an upper gray threshold, and the minimum is greater than a lower gray threshold (both thresholds chosen empirically in our experiments).
(2) The maximum filtered intensity in the blob is higher than an upper filtered-intensity threshold, and the minimum is greater than a lower filtered-intensity threshold (both thresholds chosen empirically in our experiments).
(3) The number of member pixels lies between the target-dependent values $N_{\min}$ and $N_{\max}$.

The output of this step is a list of pixels belonging to the brightest candidate reflector blobs [3]. A few attractive observations relate these criteria to concepts from mathematical morphology: the first two criteria are equivalent to morphological reconstruction, while the third criterion is assessed by connected component analysis. Accordingly, four binary images are created by thresholding the raw and filtered intensity images at their respective upper and lower thresholds; for each intensity image, the upper-threshold image serves as the marker and the lower-threshold image as the mask in morphological reconstruction.

Since the reflectors adopted in this paper are circular and their sizes are known, the contiguous blobs can be labeled to obtain their centers. The pixels of the blobs sharing the same label are counted, and the eccentricity and centroid of each blob are calculated. Finally, the reflectors are screened by finding the contiguous blobs whose number of pixels lies between $N_{\min}$ and $N_{\max}$ and whose eccentricity meets the conditions.
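A sketch of this screening using scikit-image's connected-component tools follows; the bounds `n_min`, `n_max`, and `ecc_max` are placeholders for the experiment-dependent values.

```python
from skimage import measure

def candidate_blobs(binary_mask, n_min, n_max, ecc_max):
    """Label contiguous blobs, then keep those whose pixel count lies in
    [n_min, n_max] and whose eccentricity is low enough to be near-circular."""
    labels = measure.label(binary_mask, connectivity=2)
    centers = []
    for region in measure.regionprops(labels):
        if n_min <= region.area <= n_max and region.eccentricity < ecc_max:
            centers.append(region.centroid)   # (row, col) centroid of the blob
    return centers
```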

3.3. Ellipse Detection for Reflector Confirmation

The third component is confirming the detected reflector blobs. Although the morphology-based detection method is simple, it risks missed detections. When the number of detected marks does not match the designed number, the ellipse detection method is adopted for confirmation and supplementation. Generally speaking, ellipse detection comprises three steps: edge segment detection, line segment detection, and ellipse fitting.

An adaptive edge detector [20] is first applied to generate edge maps in the edge segment detection procedure, without requiring specified low and high thresholds; it can thus achieve better accuracy across different light conditions. Then, a curvature-predictive edge-linking detector is used to extract segments from the edge maps; it uses the curvature of the last 12 points to predict the trend of the current segment. After that, an edge-thinning method [21] is used to obtain one-pixel-wide segments.

In the line segment detection procedure [22], we use a length-based line segment detector. By combining the length error and the proportion of directions, this detector translates the edge segments into a series of line segments. Moreover, three elliptical conditions are applied to obtain potential elliptical arcs. Suppose that line segments are given from the previous step, and let $n$ denote the number of line segments. In the $i$-th line segment, let $m_i$ denote the number of pixels; the $j$-th point of the $i$-th line segment is denoted by $\mathbf{s}_{i,j}$, where $1 \le i \le n$ and $1 \le j \le m_i$. For the $i$-th line segment, we split it into individual elliptical arcs at the connection points according to the following conditions [23]:
(i) Point condition (PC): at least five points are needed to fit an ellipse, so we delete any line segment that contains no more than five pixels.
(ii) Angle condition (AC): let $\theta_j$ be the clockwise angle between the two aligned direction vectors at point $j$. We break the $i$-th line segment at points where $\theta_j$ exceeds an angle threshold chosen empirically in our experiments.
(iii) Curvature condition (CC): an ideal elliptical arc traces an ellipse with a sequence of points whose curvatures share the same sign. We break the $i$-th line segment at points where the sign of the curvature changes.

Next, a neighborhood segment grouping [21] is used to merge nearby segments, extracting more complete and informative elliptical arcs for ellipse detection. A distance threshold determines whether two line segments are in close range; its value is set empirically in our experiments. In the partially enlarged view there are four line segments, and they all meet this distance threshold. The AC and CC then determine whether two line segments can be connected. Finally, the parameters of these ellipses are estimated by ElliFit [24].

3.4. Computing Candidate Reflector Centroids

The final component is to compute candidate reflector centroids [3]. A counterimage counts the number of frames in which each pixel was found to belong to a candidate reflector. A simple persistence check can then be performed by keeping only pixels with a counter value above a persistence threshold. For the pixels passing this check, the processed intensity image and simplified depth image are divided by the counterimage.

Then, the orientation and range to the candidate reflectors are computed. The orientation can be derived from the centroids of the remaining contiguous blobs within the mean raw intensity image $\bar{I}$. Given a retained blob $B$, the centroid is calculated using a weighted average over $B$:

$$\bar{u} = \frac{\sum_{(u,v)\in B} \bar{I}(u,v)\, u}{\sum_{(u,v)\in B} \bar{I}(u,v)}, \qquad \bar{v} = \frac{\sum_{(u,v)\in B} \bar{I}(u,v)\, v}{\sum_{(u,v)\in B} \bar{I}(u,v)}.$$

The depth to the possible reflector described by blob $B$ is computed by averaging the depths of the blob's member pixels:

$$\bar{\rho} = \frac{1}{N_B} \sum_{(u,v)\in B} \bar{\rho}(u,v),$$

where $N_B$ is the number of pixels in $B$. A penalty score for each centroid is computed based on the size, shape, and intensity of the blob. Each centroid is then checked against a maximum penalty score and against bounds on the width and height of $B$. When all the checks are passed, the blob is considered a candidate reflector, as shown in Figure 5.
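A compact sketch of the centroid and depth computations over one blob, under the assumption that the blob is given as arrays of member-pixel row and column indices:

```python
import numpy as np

def blob_centroid_and_depth(rows, cols, intensity_img, depth_img):
    """Intensity-weighted centroid and mean depth of a blob, following the
    weighted-average and averaging equations above."""
    w = intensity_img[rows, cols].astype(float)
    u_bar = np.sum(w * cols) / np.sum(w)              # weighted column coordinate
    v_bar = np.sum(w * rows) / np.sum(w)              # weighted row coordinate
    rho_bar = float(np.mean(depth_img[rows, cols]))   # mean depth over the blob
    return u_bar, v_bar, rho_bar
```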

When calculating the above target centers, the eccentricity and centroid of the connected domain are used to screen out the target center. As mentioned above, this method is simple but prone to errors under the interference of image background noise. To improve the robustness of the computation, the results of ellipse detection are used to further confirm the center of the reflector. The ellipse equation can be described as

$$a u^2 + b u v + c v^2 + d u + e v + g = 0,$$

where $(u, v)$ is the image coordinate and $a, b, c, d, e, g$ are fitted parameters derived from the ellipse. The ellipse can then be rewritten in quadratic form as

$$\begin{bmatrix} u & v & 1 \end{bmatrix} Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = 0, \qquad Q = \begin{bmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & g \end{bmatrix}.$$

Next, the eigenvalues and eigenvectors of the matrix $Q$ are calculated. Denote the eigenvalues and normalized eigenvectors of $Q$ as $\lambda_1, \lambda_2, \lambda_3$ and $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$, respectively.

The values and signs of $\lambda_i$ and $\mathbf{e}_i$ are determined as follows: (1) two of the eigenvalues share the same sign (plus or minus) while the third takes the opposite sign, as required for an ellipse cone; (2) the eigenvalues are ordered by magnitude; and (3) the eigenvector signs are chosen consistently with this ordering.

The ellipse cone in the standard space is thereby obtained, and the range to the centroid can be calculated from the eigendecomposition together with the known radius $r$ of the mark points. Finally, the mean values of the morphologically detected centroids and the detected ellipse centroids are taken as the centroids of the final markers, and the corresponding range is solved analogously.
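For the conic-based confirmation, the ellipse center itself follows directly from the fitted coefficients by zeroing the gradient of the quadratic form; a minimal sketch, using the coefficient names from the conic above, is:

```python
import numpy as np

def ellipse_center(a, b, c, d, e):
    """Center of the conic a*u^2 + b*u*v + c*v^2 + d*u + e*v + g = 0,
    found by setting the gradient of the quadratic form to zero."""
    A = np.array([[2.0 * a, b],
                  [b, 2.0 * c]])
    rhs = -np.array([d, e])
    return np.linalg.solve(A, rhs)   # (u_c, v_c) in image coordinates
```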

4. Feature Matching

4.1. Definition of Matching Sets

Suppose that the catalog reflectors and observed reflectors are as introduced in Equations (5) and (6). Define $\mathcal{P} = \{\mathbf{p}_1, \dots, \mathbf{p}_m\}$ as the set of catalog reflectors [3, 25]. Then, the Euclidean distance between two catalog reflectors is given by

$$d^{C}_{jk} = \left\| \mathbf{p}_j - \mathbf{p}_k \right\|,$$

where $\mathbf{p}_j$ and $\mathbf{p}_k$ denote the $j$-th and $k$-th catalog reflectors.

Second, $\mathcal{X} = \{\tilde{\mathbf{x}}_1, \dots, \tilde{\mathbf{x}}_n\}$ represents the set of observed reflectors. Similarly, the Euclidean distance can be expressed as

$$d^{O}_{ij} = \left\| \tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j \right\|,$$

where $\tilde{\mathbf{x}}$ is defined in Equation (6) and $\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_j$ refer to the $i$-th and $j$-th observed reflectors. Therefore, the pose determination problem is to find the mapping $\omega$ from $\mathcal{X}$ to $\mathcal{P}$, where $\omega$ is a candidate configuration of the Markov random field [26–28].

Since the set of reflector assignments forms an MRF, the MRF is used to find the probability that the $i$-th observed reflector corresponds to the $j$-th catalog reflector, given the assignments of the neighboring observed reflectors. For more compact notation, a candidate realization is given by $\omega = \{\omega_1, \dots, \omega_n\}$. Therefore, the MRF computes the probabilities

$$P\left(\omega_i = j \mid \omega_{N_i}\right),$$

where $N_i$ represents the indices of the neighbors of the $i$-th observed reflector. This probability can be expressed by the Gibbs distribution [29, 30]:

$$P\left(\omega_i = j \mid \omega_{N_i}\right) = \frac{1}{Z}\exp\left(-\frac{1}{T}\sum_{c \in C_i} V_c\left(\omega_i = j\right)\right),$$

where $T$ is the system temperature, $C_i$ is the set of cliques involving the $i$-th observed reflector, $V_c$ is the clique potential of $c$ given that the value of $\omega_i$ is set to $j$, and $Z$ is the normalizing partition function. Given the cliques and potentials, the Gibbs distribution is the unique distribution that maximizes the system entropy.

4.2. Deterministic Annealing

The Gibbs distribution can also be derived from an energy minimization perspective, where one seeks to minimize the Helmholtz free energy [3]. Equation (26) can be derived from solving the following optimization problem:

$$\min_{P}\; F = U - TS,$$

where $F$ is the Helmholtz free energy, $U = \sum_{\omega} P(\omega)\,E(\omega)$ is the internal energy, $T$ is the system temperature, and $S = -\sum_{\omega} P(\omega)\ln P(\omega)$ is the Shannon entropy. Thus, the distribution from Equation (26) is the unique distribution that minimizes $F$. Based on Equation (26), as $T$ becomes very large, the probability of being in any of the $M$ different states becomes equal:

$$\lim_{T \to \infty} P(\omega) = \frac{1}{M}.$$

Therefore, the annealing process can be initiated at a high temperature, with an a priori assumption of a uniform probability distribution across all possible reflector assignments. Through constant iteration, the temperature is reduced based on a cooling schedule [31] given by

$$T_k = \frac{T_0}{\ln(1 + k)},$$

where $k$ is the iteration number and $T_0$ is the cooling parameter. Although such a logarithmic cooling schedule has been shown in theory to guarantee convergence to the global minimum, its rate of convergence is unacceptably slow for real-time applications.

Finally, from the form of the Gibbs distribution, it is clear that the probability of the lowest-energy state tends toward one (and the probability of all other states tends toward zero) as $T$ becomes very small. It can be mathematically expressed as

$$\lim_{T \to 0} P\left(\omega^{*}\right) = 1,$$

where $\omega^{*}$ is the minimum-energy configuration.

As the temperature iteratively decreases, the ideal configuration is guaranteed to be reached eventually.

4.3. Algorithm Implementation

In this section, deterministic annealing is implemented as element assignment in a matrix. First, suppose that the conditional probabilities are arranged into an $n \times m$ matrix $P$:

$$P_{ij} = P\left(\omega_i = j \mid \omega_{N_i}\right).$$

Therefore, the sum of each row is one based on Equation (31). Define the parameter $\beta$ as the inverse temperature $\beta = 1/T$. Following the form of Equation (29),

$$\beta_k = \frac{\ln(1 + k)}{T_0}.$$

This expression is used to reduce the temperature as the algorithm iterates. Then, at any particular temperature iteration, the energy $E_{ij}$ of assigning the $i$-th observed reflector to the $j$-th catalog reflector is defined from the clique potentials together with a constant penalty parameter that discourages inconsistent assignments. Equation (26) at the same iteration can then be rewritten as

$$M_{ij} = \exp\left(-\beta_k E_{ij}\right),$$

where the $M_{ij}$ form the matrix $M$.

Based on Equation (36), the matrix $P$ is computed by normalizing each element of $M$ by its row sum:

$$P_{ij} = \frac{M_{ij}}{\sum_{j'} M_{ij'}}.$$

Furthermore, the independent row normalizations are easily parallelized, and this method can be greatly accelerated on a field-programmable gate array. As mentioned above, if $\beta$ is chosen small enough, the initial values in $P$ are given by

$$P_{ij} \approx \frac{1}{m}.$$

Further, if the temperature slowly decreases over the iterations according to Equation (33), Equation (30) shows that all values in $P$ tend toward zero or one [3].
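A simplified sketch of this matrix-based annealing loop follows. The assignment energies `E` are held fixed here for brevity, whereas in the actual MRF the clique potentials depend on the neighboring assignments and are recomputed each iteration; the schedule constants are placeholders.

```python
import numpy as np

def annealing_step(E, beta):
    """One temperature iteration: Gibbs weights at inverse temperature beta,
    row-normalized so each observed reflector's probabilities sum to one."""
    M = np.exp(-beta * E)                 # E[i, j]: energy of assigning obs i -> cat j
    return M / M.sum(axis=1, keepdims=True)

def anneal(E, beta0=1e-3, growth=1.1, iters=200):
    """Start near-uniform at small beta; rows of P harden toward 0/1 as beta grows."""
    beta = beta0
    P = annealing_step(E, beta)
    for _ in range(iters):
        beta *= growth                    # increase inverse temperature each step
        P = annealing_step(E, beta)
    return P
```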

5. Pose Estimation Based on Singular Value Decomposition

5.1. Pose Estimation

First, the objective function is selected as follows [3, 32, 33]:

$$J(\mathbf{R}, \mathbf{t}) = \sum_{i=1}^{n} w_i \left\| \tilde{\mathbf{x}}_i - \left(\mathbf{R}\,\mathbf{p}_i + \mathbf{t}\right) \right\|^2,$$

where $n$ is the number of matching pairs and $w_i$ is a weight. This function seeks the $\mathbf{R}$ and $\mathbf{t}$ minimizing the weighted measurement residuals. Taking the variation of $J$ with respect to $\mathbf{t}$ and setting the result to zero yields

$$\mathbf{t} = \bar{\mathbf{x}} - \mathbf{R}\,\bar{\mathbf{p}}, \qquad \bar{\mathbf{x}} = \frac{\sum_i w_i \tilde{\mathbf{x}}_i}{\sum_i w_i}, \qquad \bar{\mathbf{p}} = \frac{\sum_i w_i \mathbf{p}_i}{\sum_i w_i}.$$

Then, the centered coordinates are defined as

$$\mathbf{x}'_i = \tilde{\mathbf{x}}_i - \bar{\mathbf{x}}, \qquad \mathbf{p}'_i = \mathbf{p}_i - \bar{\mathbf{p}}.$$

Substituting these into $J$ and expanding, because $\mathbf{R}$ is an orthogonal matrix, the terms $\sum_i w_i \|\mathbf{x}'_i\|^2$ and $\sum_i w_i \|\mathbf{p}'_i\|^2$ are constants independent of $\mathbf{R}$. Hence, minimizing $J$ is equivalent to maximizing the cross term, and the expression is written as

$$J = \text{const} - 2\sum_{i=1}^{n} w_i\, \mathbf{x}'^{\top}_i \mathbf{R}\, \mathbf{p}'_i.$$

Define $H = \sum_{i=1}^{n} w_i\, \mathbf{p}'_i\, \mathbf{x}'^{\top}_i$. The singular value decomposition of $H$ is given as

$$H = U \Sigma V^{\top},$$

where $U$ and $V$ contain the singular vectors of $H$ and $\Sigma$ contains the singular values. The eigenvectors of $H H^{\top}$ and $H^{\top} H$ constitute $U$ and $V$, and the eigenvalues constitute $\Sigma^2$. The following expression is obtained:

$$\sum_{i=1}^{n} w_i\, \mathbf{x}'^{\top}_i \mathbf{R}\,\mathbf{p}'_i = \operatorname{tr}\left(\mathbf{R} H\right) = \operatorname{tr}\left(V^{\top}\mathbf{R}\,U\,\Sigma\right).$$

Since $U$, $V$, and $\mathbf{R}$ are orthogonal matrices, $Z = V^{\top}\mathbf{R}\,U$ is also an orthogonal matrix. Then, the trace is bounded as

$$\operatorname{tr}\left(Z\Sigma\right) = \sum_{i=1}^{3} z_{ii}\,\sigma_i \le \sum_{i=1}^{3}\sigma_i.$$

Maximizing this trace minimizes $J$. The maximum is attained when $Z = V^{\top}\mathbf{R}\,U = I$, and the $\mathbf{R}$ corresponding to this situation can then be calculated.

The optimal estimate of $\mathbf{R}$ is obtained as

$$\hat{\mathbf{R}} = V\,\operatorname{diag}\left(1, 1, \det\left(V U^{\top}\right)\right) U^{\top},$$

where the diagonal correction guarantees a proper rotation with $\det(\hat{\mathbf{R}}) = +1$.

Therefore, the optimal estimate of $\mathbf{t}$ follows from Equation (39) as

$$\hat{\mathbf{t}} = \bar{\mathbf{x}} - \hat{\mathbf{R}}\,\bar{\mathbf{p}}.$$
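A self-contained NumPy sketch of this SVD construction (the standard weighted Kabsch/Procrustes solution) is given below; the row-per-point layout of the inputs is an assumption.

```python
import numpy as np

def pose_svd(obs, cat, w=None):
    """Weighted least-squares pose (R, t) with obs_i ~ R @ cat_i + t,
    following the SVD construction above; rows of obs/cat are 3-D points."""
    obs, cat = np.asarray(obs, float), np.asarray(cat, float)
    w = np.ones(len(obs)) if w is None else np.asarray(w, float)
    x_bar = (w[:, None] * obs).sum(axis=0) / w.sum()   # weighted centroid (observed)
    p_bar = (w[:, None] * cat).sum(axis=0) / w.sum()   # weighted centroid (catalog)
    X, P = obs - x_bar, cat - p_bar                    # centered coordinates
    H = (w[:, None] * P).T @ X                         # 3x3 correlation matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                 # enforce det(R) = +1
    t = x_bar - R @ p_bar
    return R, t
```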

5.2. Optimization of Error

The ToF measurement system obtains an initial pose estimate $\hat{\mathbf{R}}$ and $\hat{\mathbf{t}}$ by the SVD. According to Equation (10), the pose errors should be considered for optimization. Suppose the measured line-of-sight direction to the $i$-th observed reflector is given by $\mathbf{e}_i$. This section aims to iteratively reproject the expected measurements onto the observed line-of-sight directions to improve pose estimation performance. As mentioned above, the initial guess for the iterations is the set of measured reflector positions obtained by the centroid algorithm.

Projecting the expected measurement onto the observed line-of-sight direction creates a new measurement for the next iteration:

$$\tilde{\mathbf{x}}_i^{(k+1)} = \left(\mathbf{e}_i^{\top}\left(\mathbf{R}^{(k)}\mathbf{p}_i + \mathbf{t}^{(k)}\right)\right)\mathbf{e}_i,$$

where $\mathbf{R}^{(k)}$ is the rotation matrix from the target frame to the ToF frame at the $k$-th iteration, $\mathbf{t}^{(k)}$ is the vector from the ToF to the target at the $k$-th iteration, and $\tilde{\mathbf{x}}_i^{(k)}$ is the measurement of the $i$-th reflector at the $k$-th iteration. Therefore, the residuals are defined as

$$r^{(k)} = \sum_{i=1}^{n}\left\| \tilde{\mathbf{x}}_i^{(k)} - \left(\mathbf{R}^{(k)}\mathbf{p}_i + \mathbf{t}^{(k)}\right)\right\|^2.$$

This iteration continues until the change of the residual drops below a threshold $\varepsilon$, that is, $\left| r^{(k)} - r^{(k-1)} \right| < \varepsilon$.
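A sketch of this refinement loop, reusing the `pose_svd` sketch above; `los` holds the measured unit line-of-sight vectors, one row per reflector, and the stopping constants are placeholders.

```python
import numpy as np

def refine_pose(obs, cat, los, eps=1e-6, max_iter=20):
    """Iteratively reproject the expected reflector positions onto the measured
    line-of-sight directions and re-solve the SVD pose until the residual
    change drops below eps."""
    R, t = pose_svd(obs, cat)                       # initial guess from centroids
    prev_r = np.inf
    for _ in range(max_iter):
        expected = cat @ R.T + t                    # R @ p_i + t for each reflector
        # new measurement: expected point projected onto its measured LOS
        meas = np.sum(expected * los, axis=1, keepdims=True) * los
        R, t = pose_svd(meas, cat)
        r = float(np.sum((meas - (cat @ R.T + t)) ** 2))   # residual after re-solve
        if abs(prev_r - r) < eps:                   # stop when the change is small
            return R, t
        prev_r = r
    return R, t
```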

6. Results

6.1. Experimental System Composition

The experimental system in Figure 6 consists of a three-axis turntable, a ToF camera, a target, and software. The ToF camera is installed on the docking surface of the turntable, and the installation matrix is known [34]. The target is fixed, and the center distance from the camera is known. During initialization, the turntable is set to zero, and the target center is aligned parallel to the ToF boresight so that the attitudes are considered zero. Then, the ToF rotates with the turntable, which outputs the true values. The camera is connected to the upper computer, and the relative pose between the camera and target is measured in real time using the measurement algorithm.

6.2. Target Detection

First, the effectiveness of the detection method is verified on two groups of reflector markers: the first group consists of 5 reflectors, and the second of 6. Under various pose transformations, the proposed identification algorithm effectively segments the reflectors and locates their centers, as shown in Figure 7. The experimental results show that the maximum pixel error of reflector segmentation is 0.2 pixels and that of center point extraction is only 0.06 pixels.

6.3. System Stability

During in-orbit applications, the stability of the measurement system is crucial. The stability of the system is therefore tested at two typical distance settings. The experiments collected data for approximately 6000 seconds at 697 mm. Table 1 lists the average, maximum, and minimum values of the pose at 697 mm. Figures 8 and 9 show that the random errors of the measuring system are universally small, validating its high stability. Similarly, Table 2 lists the corresponding values at 5712 mm. The ToF camera also achieves high stability while measuring the cooperative target pose at a relatively long distance (Figure 10). As shown in Table 1, the fluctuation range of the position is at the millimeter level, while the fluctuation range of the attitude exceeds 1°. Figure 8 shows that the pose has singularities at 1079 s and 4396 s due to disturbances. After removing the singular data, the analysis shows that the random error is small and the system is highly stable, as shown in Table 3 and Figure 9.


Table 1: Pose statistics at 697 mm.

                    x (mm)     y (mm)     z (mm)    Pitch (°)   Yaw (°)   Roll (°)
Average              2.193    −10.457    697.604     −0.382     −2.653    −2.528
Maximum              2.399    −10.315    699.269     −0.056     −1.406    −2.087
Minimum              1.515    −11.323    693.924     −1.143     −2.851    −2.613
Fluctuation range    0.884      1.008      5.345      1.087      1.455     0.526


Table 2: Pose statistics at 5712 mm.

                    x (mm)     y (mm)      z (mm)    Pitch (°)   Yaw (°)   Roll (°)
Average              5.097     −2.381    5712.362      3.823      1.324    −2.053
Maximum              5.192     −2.231    5713.362      4.101      1.473    −1.998
Minimum              5.007     −2.552    5711.246      3.583      1.179    −2.114
Fluctuation range    0.185      0.321       2.116      0.518      0.294     0.116


Table 3: Pose statistics at 697 mm after removing singular data.

                    x (mm)     y (mm)     z (mm)    Pitch (°)   Yaw (°)   Roll (°)
Average              2.193    −10.457    697.606     −0.383     −2.654    −2.528
Maximum              2.399    −10.315    699.269     −0.056     −2.456    −2.454
Minimum              2.021    −10.638    695.158     −0.646     −2.851    −2.613
Fluctuation range    0.378      0.323      4.111      0.590      0.395     0.159

6.4. Pose Optimization

Initial analysis of the pose shows high precision in the roll angle measurements, but the pitch and yaw angles are highly coupled because the ToF imaging axis is not coaxial with the turntable. Furthermore, the parallel calibration between the ToF and the target plane affects the measurement results. Modifying the attitude calculation algorithm eliminates this coupling and greatly improves the accuracy and stability of the results. The system with the modified algorithm adopts the standard Eigen library; arbitrary attitude changes are given as the input, generating an attitude transformation matrix.

Figure 11(a) shows the roll angle curves at a true value of 20.3°. The roll angle ranges from −20.22° to −20.3° with a mean value of −20.251°. The turntable readings have an initial value of 0.1° and a rolled value of 20.4°. The difference between the measured and true values is 0.05°, which is less than 0.15° and meets the accuracy requirements. Figure 11(b) shows the case with a true roll angle of 36°. The roll angle ranges from −35.94° to −36.04° with a mean value of −35.986°. The initial and rolled values of the turntable readings are 0.1° and 36.1°, respectively. The difference between the measured and true values is as small as 0.014°.

The changes of the pitch angle at 3.9° are shown in Figure 12(a). The angle ranges from −3.9° to −4.01°, with an average of −3.959°. The initial measurement is −0.5°, and the pitched value is 3.4°. The measured and true values differ by 0.057°, which is less than 0.15° and meets the requirements. Figure 12(b) shows the pitch angle changes at 3.0°. The angle ranges from −2.9° to −3.1° with a mean value of −2.999°. The initial value is −0.5°, and the pitch change is 2.5°. The difference is 0.001°, fully meeting the requirements.

Figure 13(a) shows the yaw angle changes at 2.67°. The yawing range is 2.62°~2.68°. The computed truth value is 2.65°, and the difference between the mean measured value and the truth is 0.02°, less than 0.15°, meeting the accuracy requirements. The yaw angle changes at −0.954° are shown in Figure 13(b). The yaw ranges from −1.1° to −0.9° with an average of −0.981°. The truth value is −0.954°, so the measured and true values differ by 0.027°, also meeting the accuracy requirements.

6.5. Comparison with Ground Truth

As shown in Figure 14(a), the differences in the ToF's measured values are consistent with those of the turntable, validating the high accuracy of the results in the range (z-axis) direction. When the distance is over 5 m, the measuring error is 2.5 mm, meeting the accuracy requirement of 28.3 mm. The z-value error increases when the distance is less than 5 m, exceeding the requirement. The error increase can be attributed to the following: (1) the calibration error of the target itself: the number of pixels occupied by the target markers in the image plane increases as the distance closes, introducing centroid pixel deviation; (2) the target installation error: since the target and the 6-DOF platform may not be strictly calibrated as parallel planes, the measured z-value error grows as the range decreases; (3) the installation position error of the ToF: the farther the ToF is from the platform center, the larger the measuring errors.

In Figure 14(b), the differences between the measured values of the ToF and those of the platform are likewise consistent, indicating that the results in this lateral axis direction are of high accuracy. The measurement errors at points 8 and 9 are relatively large, while the other points meet the accuracy requirements. At points 8 and 9, the measured errors in this axis are −11 mm and −7 mm, greater than the accuracy requirement of 6 mm. However, the error at point 7, corresponding to point 8, is −1 mm, and that at point 10, corresponding to point 9, is −1 mm, both reaching the desired accuracy. Moreover, the mean error of points 7 and 8 is −5 mm, and that of points 9 and 10 is −4 mm, also meeting the accuracy requirements.

Figure 14(c) shows similar results in the other lateral axis direction. The measurement error at point 8 is relatively large, while the errors at the other points all meet the requirements. The error at point 8 is −17.8 mm, greater than the accuracy requirement of 6 mm. In contrast, the errors at points 7 and 9 are reduced to 0 mm, meeting the accuracy requirement.

Figure 14(d) shows the consistent, high accuracy of the ToF in roll angle measurements. Because the target and ToF were not calibrated as parallel, and the ToF and 6-DOF platform were not calibrated as coaxial, the short-range measuring accuracy is strongly affected. Three points, at 3, 8, and 10, fail to meet the requirements, while the corresponding errors at points 4, 7, and 9 are kept within the desired accuracy.

Affected by the relative position calibration of the target, the ToF, and the 6-DOF platform, the yaw and pitch angles are significantly coupled as the platform's yaw and pitch positions change. The resulting errors are constrained by the conditions of the 6-DOF platform test; in this case, the yaw and pitch accuracies are not analyzed. Based on the experimental analyses, the error of the original data collected by the ToF is small, the target detection algorithm is accurate and effective, and the computed pose is accurate.

7. Conclusions

This work provided an overview of the challenges of cooperative pose determination with a ToF camera at close range, followed by a detailed end-to-end description of the design of a ToF measuring system. The designed measurement system, consisting of a three-axis turntable, a ToF camera, a target, and software, was aimed at estimating the pose of the spacecraft. The results indicated that the system is stable and achieves high pose estimation accuracy. A few limitations exist in the proposed scheme. For instance, the self-designed ToF is applicable only at close range, and its measuring range should be extended for more applications. Besides, the algorithm considers only a simple structure of reflectors on the target, so extending the proposed scheme to targets with more complex reflector structures and configurations can be explored. Future work will concentrate on these aspects.

Data Availability

The data of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest regarding this work.

Authors’ Contributions

W. Zhu, J. Mu, C. Shao, J. Hu, B. Wang, and S. Li contributed to the conceptualization, literature review, article writing, and improvement; Z. Wen and F. Han contributed to the conceptualization.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11972182 and No. U20B2056), sponsored by the Science and Technology Innovation Action Plan of Shanghai (Grant No. 19511120900), funded by the Science and Technology on Space Intelligent Control Laboratory (Grant Nos. HTKJ2020KL502019 and 2021-JCJQ-LB-010-04), and sponsored by the National Basic Research Program of China (Grant No. JCKY2018203B036 and No. JCKY2021606B002).

References

  1. Q. Wang, D. Jin, and X. Rui, “Dynamic simulation of space debris cloud capture using the tethered net,” Space: Science & Technology, vol. 2021, article 9810375, 11 pages, 2021. View at: Publisher Site | Google Scholar
  2. R. Wang, C. Liang, D. Pan, X. Zhang, P. Xin, and X. Du, “Research on a visual servo method of a manipulator based on velocity feedforward,” Space: Science & Technology, vol. 2021, article 9763179, 8 pages, 2021. View at: Publisher Site | Google Scholar
  3. J. A. Christian, S. B. Robinson, C. N. D’Souza, and J. P. Ruiz, “Cooperative relative navigation of spacecraft using flash light detection and ranging sensors,” Journal of Guidance, Control and Dynamics, vol. 37, no. 2, pp. 452–465, 2014. View at: Publisher Site | Google Scholar
  4. R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “Pose estimation for spacecraft relative navigation using model-based algorithms,” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 1, pp. 431–447, 2017. View at: Publisher Site | Google Scholar
  5. R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “Uncooperative pose estimation with a LIDAR-based system,” Acta Astronautica, vol. 110, pp. 287–297, 2015. View at: Publisher Site | Google Scholar
  6. R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations,” Progress in Aerospace Sciences, vol. 93, pp. 53–72, 2017. View at: Publisher Site | Google Scholar
  7. E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and C. Rother, “Learning 6D object pose estimation using 3D object coordinates,” in Computer Vision–ECCV 2014, vol. 8690, pp. 536–551, Zurich, Switzerland, 2014. View at: Publisher Site | Google Scholar
  8. T. Tzschichholz, L. Ma, and K. Schilling, “Model-based spacecraft pose estimation and motion prediction using a photonic mixer device camera,” Acta Astronautica, vol. 68, no. 7–8, pp. 1156–1167, 2011. View at: Publisher Site | Google Scholar
  9. B. Liang, Y. He, and Y. Zou, "Application of time-of-flight camera for relative measurement of noncooperative target in close range," Journal of Astronautics, vol. 37, no. 9, pp. 1080–1088, 2016. View at: Google Scholar
  10. B. Jiang and X. Jin, “Improved correction algorithm for harmonic- and intensity-related errors in time-of-flight cameras,” Acta Optica Sinica, vol. 40, no. 1, p. 0111024, 2020. View at: Publisher Site | Google Scholar
  11. H. G. Martínez, G. Giorgi, and B. Eissfeller, “Pose estimation and tracking of non-cooperative rocket bodies using time-of-flight cameras,” Acta Astronautica, vol. 139, pp. 165–175, 2017. View at: Publisher Site | Google Scholar
  12. B. Wilde, B. Blackwell, and B. Kish, "Experimental evaluation of a COTS time-of-flight camera as rendezvous sensor for small satellites," in 2020 IEEE Aerospace Conference, pp. 1–16, Big Sky, MT, USA, 2020. View at: Publisher Site | Google Scholar
  13. X. Yan, L. Ao, and X. Yang, "Three-dimensional pose measurement method of non-cooperative rectangular target based on time-of-flight camera," Application Research of Computers, vol. 35, no. 9, pp. 2856–2860, 2018. View at: Google Scholar
  14. Q. Wang, T. Lei, X. Liu et al., “Pose estimation of non-cooperative target coated with MLI,” IEEE Access, vol. 7, pp. 153958–153968, 2019. View at: Publisher Site | Google Scholar
  15. J. Ventura, A. Fleischner, and U. Walter, "Pose tracking of a noncooperative spacecraft during docking maneuvers using a time-of-flight sensor," in AIAA Guidance, Navigation, and Control Conference, pp. 1–16, San Diego, CA, USA, 2016. View at: Google Scholar
  16. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sensors Journal, vol. 11, no. 9, pp. 1917–1926, 2011. View at: Publisher Site | Google Scholar
  17. M. A. Sutton and F. Hild, “Recent advances and perspectives in digital image correlation,” Experimental Mechanics, vol. 55, no. 1, pp. 1–8, 2015. View at: Publisher Site | Google Scholar
  18. Z. Wen, Accurate and Fast Identification and Localization of Cooperative Targets in Complex Background, Changchun Institute of Optics, Fine Mechanics and Physics, University of the Chinese Academy of Sciences, Changchun, China, 2017.
  19. B. Wang, G. Li, J. Chen, and Z. Yu, “Two methods of coded targets used in rendezvous and docking,” Journal of Astronautics, vol. 29, no. 1, pp. 162–166, 2008. View at: Google Scholar
  20. L. Zhang, X. Huang, W. Feng, S. Liang, and T. Hu, “Consistency-based ellipse detection method for complicated images,” Optical Engineering, vol. 55, no. 5, pp. 053105–053115, 2016. View at: Publisher Site | Google Scholar
  21. Y. Liu, Z. Xie, and H. Liu, “Fast and robust ellipse detector based on edge following method,” IET Image Processing, vol. 13, no. 13, pp. 2409–2419, 2019. View at: Publisher Site | Google Scholar
  22. Q. Jia, X. Fan, Z. Luo, L. Song, and T. Qiu, “A fast ellipse detector using projective invariant pruning,” IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3665–3679, 2017. View at: Publisher Site | Google Scholar
  23. D. K. Prasad, M. K. H. Leung, and S. Y. Cho, “Edge curvature and convexity based ellipse detection method,” Pattern Recognition, vol. 45, no. 9, pp. 3204–3221, 2012. View at: Publisher Site | Google Scholar
  24. D. K. Prasad, M. K. H. Leung, and C. Quek, “ElliFit: an unconstrained, non-iterative, least squares based geometric ellipse fitting method,” Pattern Recognition, vol. 46, no. 5, pp. 1449–1465, 2013. View at: Publisher Site | Google Scholar
  25. A. Meijster and M. Wilkinson, “A comparison of algorithms for connected set openings and closings,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 484–494, 2002. View at: Publisher Site | Google Scholar
  26. K. Thon, H. Rue, S. O. Skrøvseth, and F. Godtliebsen, “Bayesian multiscale analysis of images modeled as Gaussian Markov random fields,” Computational Statistics & Data Analysis, vol. 56, no. 1, pp. 49–61, 2012. View at: Publisher Site | Google Scholar
  27. R. Maronna, D. Martin, and V. Yohai, Robust Statistics: Theory and Methods, Wiley, New York, NY, USA, 2006. View at: Publisher Site
  28. M. J. Black and A. Rangarajan, “On the unification of line processes, outlier rejection, and robust statistics with applications in early vision,” International Journal of Computer Vision, vol. 19, no. 1, pp. 57–91, 1996. View at: Publisher Site | Google Scholar
  29. K. Rose, “Deterministic annealing for clustering, compression, classification, regression, and related optimization problems,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2210–2239, 1998. View at: Publisher Site | Google Scholar
  30. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721–741, 1984. View at: Publisher Site | Google Scholar
  31. B. Hajek, “Cooling schedules for optimal annealing,” Mathematics of Operations Research, vol. 13, no. 2, pp. 311–329, 1988. View at: Publisher Site | Google Scholar
  32. Z. Liu, W. Liu, X. Gong, and J. Wu, “Optimal attitude determination from vector sensors using fast analytical singular value decomposition,” Journal of Sensors, vol. 2018, Article ID 6308530, 10 pages, 2018. View at: Publisher Site | Google Scholar
  33. G. H. Golub, A. Hoffman, and G. W. Stewart, “A generalization of the Eckart-Young-Mirsky matrix approximation theorem,” Linear Algebra and its Applications, vol. 88-89, pp. 317–327, 1987. View at: Publisher Site | Google Scholar
  34. G. Casonato and G. B. Palmerini, "Visual techniques applied to the ATV/ISS rendezvous monitoring," in 2004 IEEE Aerospace Conference Proceedings, vol. 1, pp. 613–625, Big Sky, MT, USA, 2004. View at: Publisher Site | Google Scholar

Copyright © 2022 Wenshan Zhu et al. Exclusive Licensee Beijing Institute of Technology Press. Distributed under a Creative Commons Attribution License (CC BY 4.0).
