
Research Article | Open Access

Volume 2021 |Article ID 9763179 | https://doi.org/10.34133/2021/9763179

Rui Wang, Changchun Liang, Dong Pan, Xiaodong Zhang, Pengfei Xin, Xiaodong Du, "Research on a Visual Servo Method of a Manipulator Based on Velocity Feedforward", Space: Science & Technology, vol. 2021, Article ID 9763179, 8 pages, 2021. https://doi.org/10.34133/2021/9763179

Research on a Visual Servo Method of a Manipulator Based on Velocity Feedforward

Received: 28 Jun 2021
Accepted: 19 Aug 2021
Published: 10 Sep 2021

Abstract

In this paper, a method is proposed for predicting the motion state of a moving target in the base coordinate system from hand-eye vision data and the position and attitude of the end of the manipulator. The predicted value is used as a velocity feedforward, and a position-based visual servo method is used to plan the velocity of the end of the manipulator. The approach removes the influence of the motion of the end coordinate system on target prediction in a discrete system, and an integral control term is introduced to compensate the predicted velocity, eliminating the end tracking error caused by the target velocity prediction error. The effectiveness of the method is verified by simulation and experiment.

1. Introduction

In recent years, with the growing number of spacecraft launched by various countries, the number of spacecraft suffering on-orbit faults and failures has also increased, which degrades the space environment and wastes orbital resources. Many of these faults and failures could be repaired at limited cost, which would maximize the economic benefit of the affected spacecraft. On the other hand, the construction of large-scale space facilities needs to escape the constraints of launch capacity, and on-orbit assembly is being explored as a way to meet this need [1]. Therefore, on-orbit servicing research such as spacecraft maintenance and assembly has become a focus of attention worldwide, and the use of space manipulators to perform on-orbit servicing tasks is one of the important development trends.

On-orbit target capture is a prerequisite for a space manipulator to perform on-orbit servicing tasks, and visual servoing is one of the common methods for on-orbit target capture. Successful capture missions performed by space manipulators include Japan's ETS-VII [2], the capture of Japan's HTV by the International Space Station [3], and the capture of the United States' Dragon spacecraft. These missions mainly targeted cooperative, state-controlled objects: the target was either nearly static relative to the base of the manipulator or moving with a known and controllable relative velocity. The servo control of the manipulator for such tasks has been widely studied and the technology is relatively mature. On-orbit servicing, however, is often aimed at noncooperative, uncontrolled moving targets, and visual servo control has its own particularities for objects whose state is uncontrolled, for example, objects with translational velocity or rotational angular velocity in an arbitrary direction, or both.

Because the position and attitude of a moving target change continuously, the traditional visual servo algorithm designed for static targets always lags behind a moving target owing to the time consumed by visual information processing, pose calculation, and control output. The works in [4-6] study moving-target capture based on visual servoing. The works in [7-12] build on this research and propose autonomous capture path planning for a spinning target based on predictive compensation, which solves the problem of end-path planning for spinning-target capture. The key common point of these methods is the prediction of the state (velocity and angular velocity) of the moving target, realized by differentiating the position and attitude of the target in the end coordinate system. However, for a space manipulator, because of limited computing resources, the update period of the hand-eye camera data and the control period of the manipulator are often as long as tens or even hundreds of milliseconds. In this case, computing the velocity and angular velocity in the rotating end coordinate system introduces large errors. In addition, because of error sources such as the measurement error of the visual sensor, end positioning error, system computation delay, and control period error, the predicted target velocity always contains errors, so velocity compensation alone cannot meet the requirements.

In this paper, a visual servo method of the manipulator based on velocity feedforward is proposed. The characteristics of this method are as follows. (1) In the fixed base coordinate system of the manipulator, the motion state of the target in the base coordinate system is predicted by hand-eye vision data and the position and attitude of the end, which eliminates the influence of the rotation of the end coordinate system on the prediction of the target when the operation period is limited. (2) On the basis of target velocity feedforward, integral control is introduced to compensate for the predicted velocity, eliminating the tracking error caused by target velocity prediction error.

The contents of this paper are as follows: Section 2 introduces the general concept and process of the visual servo of the manipulator, Section 3 derives the visual servo method of the manipulator based on velocity feedforward, Section 4 introduces the simulation and experiment of the proposed method, and Section 5 summarizes the method.

2. Introduction of Visual Servo for the Manipulator

The visual servo of a manipulator is a process of recognizing the position and attitude of the captured target according to the visual information and then guiding the end of the manipulator to track the target. The basic control structure is shown in Figure 1.

Take the eye-in-hand configuration, with the camera mounted at the end of the manipulator, as an example, as shown in Figure 2. Three coordinate systems are involved: the base coordinate system of the manipulator (assumed to be a fixed coordinate system), the end coordinate system of the manipulator, and a coordinate system fixed to the capture point. The position error of the capture point relative to the end of the manipulator is the translation vector of the capture-point coordinate system expressed in the end coordinate system. The attitude error of the capture point relative to the end is the corresponding Euler angle of the capture-point coordinate system in the end coordinate system (the rotations are performed about the three axes in turn, each rotation being about the axis obtained after the previous rotation). An allowable position and attitude error is defined for control. The basic process of visual servoing of the manipulator is generally divided into the following three steps, sketched in the short example after the list:

Step 1. The manipulator obtains visual information about the target through the camera, calculates the relative position and attitude error, and guides the end to move towards the capture point.

Step 2. The end of the manipulator moves to a point in front of the capture point, adjusts its attitude and position, and keeps relatively static with respect to the target. This point is called the intermediate point of the visual servo. The condition for completing this step is that the position and attitude error between the end of the manipulator and the intermediate point is less than the allowable error.

Step 3. The manipulator moves from the intermediate point of the second step to the capture point (also known as the target point of the visual servo) and keeps relatively static with respect to the target. The condition for completing this step is that the position and attitude errors between the end of the manipulator and the target are less than the allowable error.
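For illustration only, the three steps can be viewed as a small phase machine. The following minimal Python sketch uses assumed phase names and the tolerance values reported later in Section 4; it is not the authors' implementation.

import numpy as np

# Assumed tolerance values; the experiment in Section 4 uses 2 cm and 2 degrees.
POS_TOL = 0.02
ATT_TOL = np.deg2rad(2.0)


def within_tolerance(p_err, th_err):
    """True when both the position and attitude errors are inside the allowed band."""
    return np.linalg.norm(p_err) < POS_TOL and np.linalg.norm(th_err) < ATT_TOL


def next_phase(phase, midpoint_err, capture_err):
    """Step 1: approach the intermediate point in front of the capture point.
    Step 2: hold at the intermediate point until its pose error is within tolerance.
    Step 3: move to the capture point and hold until its pose error is within tolerance."""
    if phase == "APPROACH_MIDPOINT" and within_tolerance(*midpoint_err):
        return "MOVE_TO_CAPTURE_POINT"
    if phase == "MOVE_TO_CAPTURE_POINT" and within_tolerance(*capture_err):
        return "READY_TO_CAPTURE"
    return phase

Here midpoint_err and capture_err are (position error, attitude error) pairs expressed in the end coordinate system, as provided by the hand-eye camera and the pose calculation.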

3. Visual Servo Control Method Based on Velocity Feedforward

The visual servo control method based on velocity feedforward is shown in Figure 3.

Based on the end velocity planned from the pose error, the target velocity is introduced as a feedforward term in the planned end velocity to realize tracking of the moving target, as shown in formula (1).

In formula (1), the planned linear velocity and the planned angular velocity of the end of the manipulator, both expressed in the end coordinate system, are computed from the proportional coefficient of the visual servo, the relative position and attitude errors between the end of the manipulator and the capture point expressed in the end coordinate system, and the absolute linear velocity and absolute angular velocity of the target expressed in the end coordinate system.
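In an assumed notation (the symbol choices here are ours, not necessarily those of the original figures), with $k_{p}$ the proportional coefficient, ${}^{e}\boldsymbol{p}_{et}$ and ${}^{e}\boldsymbol{\theta}_{et}$ the position and attitude errors in the end frame, and ${}^{e}\boldsymbol{v}_{t}$, ${}^{e}\boldsymbol{\omega}_{t}$ the absolute target velocities expressed in the end frame, formula (1) takes the standard proportional-plus-feedforward form

\[
{}^{e}\boldsymbol{v}_{d} = k_{p}\,{}^{e}\boldsymbol{p}_{et} + {}^{e}\boldsymbol{v}_{t},
\qquad
{}^{e}\boldsymbol{\omega}_{d} = k_{p}\,{}^{e}\boldsymbol{\theta}_{et} + {}^{e}\boldsymbol{\omega}_{t}.
\qquad (1)
\]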

3.1. Velocity Feedforward Calculation

The relationship between the absolute velocity of the target expressed in the end coordinate system and that expressed in the base coordinate system is shown in formula (2).

In formula (2), the attitude transformation matrix of the end coordinate system with respect to the base coordinate system can be obtained from the forward kinematics of the manipulator, and the remaining quantity is the absolute linear velocity of the target in the base coordinate system. In the discrete system, this velocity is calculated as shown in formula (3).

In formula (3), the absolute linear velocity of the target in the base coordinate system at the current sampling instant is obtained from the positions of the target in the base coordinate system at the current and previous instants and the sampling period of the discrete system.
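Under the same assumed notation, with ${}^{b}\boldsymbol{R}_{e}$ the attitude matrix of the end frame in the base frame, ${}^{b}\boldsymbol{p}_{t}$ the target position in the base frame, $T$ the sampling period, and $k$ the sample index, a plausible reconstruction of formulas (2) and (3) is

\[
{}^{e}\boldsymbol{v}_{t} = \left({}^{b}\boldsymbol{R}_{e}\right)^{\mathrm T}\,{}^{b}\boldsymbol{v}_{t},
\qquad (2)
\]
\[
{}^{b}\boldsymbol{v}_{t}(k) = \frac{{}^{b}\boldsymbol{p}_{t}(k) - {}^{b}\boldsymbol{p}_{t}(k-1)}{T}.
\qquad (3)
\]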

By substituting formula (3) into formula (2), the expression of the absolute velocity of the target at the current sampling instant in the end coordinate system can be obtained as formula (4).

The position of the target in the base coordinate system can be calculated by formula (5).

In formula (5), the position of the end in the base coordinate system can be obtained from the forward kinematics of the manipulator.

By substituting formula (5) into formula (4), we can obtain formula (6).

In formula (6), all variables on the right-hand side can be obtained directly from the vision measurements or from the forward kinematics of the manipulator, so the absolute velocity of the target in the end coordinate system can be calculated at every sampling instant.
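Continuing with the assumed notation, with ${}^{b}\boldsymbol{p}_{e}$ the end position in the base frame (from forward kinematics), formulas (4)-(6) can be written as

\[
{}^{e}\boldsymbol{v}_{t}(k) = \left({}^{b}\boldsymbol{R}_{e}(k)\right)^{\mathrm T}\,\frac{{}^{b}\boldsymbol{p}_{t}(k) - {}^{b}\boldsymbol{p}_{t}(k-1)}{T},
\qquad (4)
\]
\[
{}^{b}\boldsymbol{p}_{t}(k) = {}^{b}\boldsymbol{p}_{e}(k) + {}^{b}\boldsymbol{R}_{e}(k)\,{}^{e}\boldsymbol{p}_{et}(k),
\qquad (5)
\]
\[
{}^{e}\boldsymbol{v}_{t}(k) = \left({}^{b}\boldsymbol{R}_{e}(k)\right)^{\mathrm T}\,
\frac{\left[{}^{b}\boldsymbol{p}_{e}(k) + {}^{b}\boldsymbol{R}_{e}(k)\,{}^{e}\boldsymbol{p}_{et}(k)\right]
    - \left[{}^{b}\boldsymbol{p}_{e}(k-1) + {}^{b}\boldsymbol{R}_{e}(k-1)\,{}^{e}\boldsymbol{p}_{et}(k-1)\right]}{T}.
\qquad (6)
\]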

3.2. Angular Velocity Feedforward Calculation

The relationship between the absolute angular velocity of the target expressed in the end coordinate system and that expressed in the base coordinate system is given by formula (7).

In formula (7), the remaining quantity is the absolute angular velocity of the target in the base coordinate system.

In the discrete system, this angular velocity at the current sampling instant is calculated by formula (8).

In formula (8), the attitude increment matrix of the target over one sampling period is calculated according to formula (9).

In formula (9), the attitude transformation matrix of the target in the base coordinate system is used.

This attitude transformation matrix can be calculated by formula (10).

In formula (10), the attitude transformation matrix of the target in the end coordinate system is obtained directly from the visual data.

The expression of the absolute angular velocity of the target in the end coordinate system is obtained by substituting formulas (10), (9), and (8) into formula (7) in turn.
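One consistent reconstruction of formulas (7)-(10), in the same assumed notation, with ${}^{b}\boldsymbol{R}_{t}$ and ${}^{e}\boldsymbol{R}_{t}$ the attitude matrices of the target in the base and end frames and $[\cdot]_{\times}$ the skew-symmetric (cross-product) matrix, is

\[
{}^{e}\boldsymbol{\omega}_{t} = \left({}^{b}\boldsymbol{R}_{e}\right)^{\mathrm T}\,{}^{b}\boldsymbol{\omega}_{t},
\qquad (7)
\]
\[
\left[{}^{b}\boldsymbol{\omega}_{t}(k)\right]_{\times} \approx \frac{1}{T}\,\log\!\left(\Delta\boldsymbol{R}_{t}(k)\right),
\qquad (8)
\]
\[
\Delta\boldsymbol{R}_{t}(k) = {}^{b}\boldsymbol{R}_{t}(k)\,\left({}^{b}\boldsymbol{R}_{t}(k-1)\right)^{\mathrm T},
\qquad (9)
\]
\[
{}^{b}\boldsymbol{R}_{t}(k) = {}^{b}\boldsymbol{R}_{e}(k)\,{}^{e}\boldsymbol{R}_{t}(k).
\qquad (10)
\]

For small per-sample rotations, the matrix logarithm in (8) may equivalently be replaced by the first-order approximation $\Delta\boldsymbol{R}_{t}(k) - \boldsymbol{I}$.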

3.3. Integral Compensation of Velocity Feedforward

Because of error factors such as the measurement error of the vision sensor, the computation delay of the system, and the control period error, the predicted target velocity and angular velocity always contain errors, as expressed in formula (11).

In formula (11), the predicted velocities are expressed as the sum of the real values of the absolute linear and angular velocities of the target in the end coordinate system and the errors between the predicted and real target velocities.

By substituting formula (11) into formula (1), we can obtain formula (12).

When the planned linear and angular velocities of the end are exactly equal to the real linear and angular velocities of the target, the end is in a stable tracking state. According to formula (12), formula (13) then holds.
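In the same assumed notation, writing the predicted feedforward velocities as the real values plus prediction errors $\Delta\boldsymbol{v}$ and $\Delta\boldsymbol{\omega}$, formulas (11)-(13) can be reconstructed as

\[
{}^{e}\hat{\boldsymbol{v}}_{t} = {}^{e}\boldsymbol{v}_{t}^{\,r} + \Delta\boldsymbol{v},
\qquad
{}^{e}\hat{\boldsymbol{\omega}}_{t} = {}^{e}\boldsymbol{\omega}_{t}^{\,r} + \Delta\boldsymbol{\omega},
\qquad (11)
\]
\[
{}^{e}\boldsymbol{v}_{d} = k_{p}\,{}^{e}\boldsymbol{p}_{et} + {}^{e}\boldsymbol{v}_{t}^{\,r} + \Delta\boldsymbol{v},
\qquad
{}^{e}\boldsymbol{\omega}_{d} = k_{p}\,{}^{e}\boldsymbol{\theta}_{et} + {}^{e}\boldsymbol{\omega}_{t}^{\,r} + \Delta\boldsymbol{\omega},
\qquad (12)
\]
\[
{}^{e}\boldsymbol{p}_{et} = -\frac{\Delta\boldsymbol{v}}{k_{p}},
\qquad
{}^{e}\boldsymbol{\theta}_{et} = -\frac{\Delta\boldsymbol{\omega}}{k_{p}},
\qquad (13)
\]

where (13) follows from setting ${}^{e}\boldsymbol{v}_{d} = {}^{e}\boldsymbol{v}_{t}^{\,r}$ and ${}^{e}\boldsymbol{\omega}_{d} = {}^{e}\boldsymbol{\omega}_{t}^{\,r}$ in (12).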

It can be seen that there are steady-state position and attitude errors between the end and the target, and these errors are proportional to the target prediction errors. In order to eliminate the steady-state error caused by the velocity feedforward error, integral control of the steady-state error is introduced to compensate for it, as given in formula (14).

In formula (14), the new coefficient is the integral gain factor of the visual servo.
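A plausible form of formula (14), with $k_{i}$ the integral gain factor and the integral taken as a discrete sum over the pose errors (again, our reconstruction under assumed notation), is

\[
{}^{e}\boldsymbol{v}_{d} = k_{p}\,{}^{e}\boldsymbol{p}_{et} + k_{i}\sum_{j\le k} {}^{e}\boldsymbol{p}_{et}(j)\,T + {}^{e}\hat{\boldsymbol{v}}_{t},
\qquad
{}^{e}\boldsymbol{\omega}_{d} = k_{p}\,{}^{e}\boldsymbol{\theta}_{et} + k_{i}\sum_{j\le k} {}^{e}\boldsymbol{\theta}_{et}(j)\,T + {}^{e}\hat{\boldsymbol{\omega}}_{t}.
\qquad (14)
\]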

Therefore, in the control method shown in Figure 3, the measured value of the velocity feedforward is still used; after integral compensation is introduced, the integral term eliminates the steady-state position and attitude error caused by the velocity error. In general, the integral term of formula (14) is simply added to the control law of formula (1), and the two are written as a single expression.
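As a concrete illustration, the complete planning law of this section might be sketched in Python as follows. The class name, argument names, and default saturation limit are our assumptions (the 0.01 m/s limit is taken from Section 4), scipy is used for the rotation-increment log map, and the sketch is not the authors' code.

import numpy as np
from scipy.spatial.transform import Rotation


class FeedforwardServo:
    """Position-based visual servo with target-velocity feedforward and
    integral compensation (illustrative sketch)."""

    def __init__(self, kp, ki, T, int_limit=0.01):
        self.kp = kp                  # proportional gain
        self.ki = ki                  # integral gain
        self.T = T                    # planning period [s]
        self.int_limit = int_limit    # saturation of the integral term
        self.prev_p_bt = None         # previous target position in the base frame
        self.prev_R_bt = None         # previous target attitude in the base frame
        self.int_p = np.zeros(3)      # integral of the position error
        self.int_th = np.zeros(3)     # integral of the attitude error

    def plan(self, R_be, p_be, R_et, p_et, theta_et):
        """R_be, p_be: end pose in the base frame (forward kinematics).
        R_et, p_et, theta_et: target pose and Euler-angle error measured by
        the hand-eye camera in the end frame."""
        # Target pose in the fixed base frame, cf. formulas (5) and (10).
        p_bt = p_be + R_be @ p_et
        R_bt = R_be @ R_et

        v_t_e = np.zeros(3)
        w_t_e = np.zeros(3)
        if self.prev_p_bt is not None:
            # Linear feedforward, cf. (2)-(6): differentiate in the base frame,
            # then map the velocity into the end frame.
            v_t_b = (p_bt - self.prev_p_bt) / self.T
            v_t_e = R_be.T @ v_t_b
            # Angular feedforward, cf. (7)-(9): rotation increment over one
            # period converted to a rotation vector, then mapped to the end frame.
            dR = R_bt @ self.prev_R_bt.T
            w_t_b = Rotation.from_matrix(dR).as_rotvec() / self.T
            w_t_e = R_be.T @ w_t_b
        self.prev_p_bt, self.prev_R_bt = p_bt, R_bt

        # Integral compensation of the steady-state error, cf. (14), with a
        # saturation limit on the compensation value.
        self.int_p = np.clip(self.int_p + self.ki * p_et * self.T,
                             -self.int_limit, self.int_limit)
        self.int_th = np.clip(self.int_th + self.ki * theta_et * self.T,
                              -self.int_limit, self.int_limit)

        v_cmd = self.kp * p_et + self.int_p + v_t_e        # planned end velocity
        w_cmd = self.kp * theta_et + self.int_th + w_t_e   # planned end angular velocity
        return v_cmd, w_cmd

The key design point reflected here is that the finite differences are taken on quantities expressed in the fixed base frame, so the long update period does not couple with the rotation of the end frame.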

4. Simulation and Experimental Verification

4.1. Simulation Verification

The simulation system shown in Figure 4 is designed to verify the proposed method.

A fixed coordinate system is defined, and the base coordinate system of the manipulator coincides with it. The three joints of the three-degree-of-freedom manipulator are revolute joints whose rotation axes are parallel to one another, so the arm moves in a plane. The end of the manipulator is equipped with a hand-eye camera, and one axis of the end coordinate system points opposite to the corresponding axis of the fixed coordinate system.

The geometric parameters of each rod are fixed in the simulation model. The maximum speed of the end is 0.1 m/s, and the maximum acceleration is 0.02 m/s².

The initial position and attitude of the end and the initial position and attitude of the target in the end coordinate system are specified in the simulation (the attitudes in this paper are described by Euler angles rotating about the three coordinate axes in turn). The intermediate point of the visual servo is 0.3 m in front of the target capture point and is fixed with respect to the target.

In the simulation, the calculation period of end planning is set to 50 ms, and the updating period of the position and attitude of the target in the end coordinate system (visual data provided by the camera) is 80 ms.

In the base coordinate system, the target translates with a constant linear velocity and rotates with a constant angular velocity.

Position and attitude error tolerances are set for the end of the manipulator reaching the intermediate point and the target point.

In the visual servo simulation, the calculated value of the linear velocity feedforward is shown in Figure 5. Because the planar manipulator can only translate in two directions and rotate about the axis normal to its plane of motion, and because the translational motion along one direction dominates under the simulation and experimental conditions while the motion along the other directions is very small, all simulation figures in this paper show only the dominant-axis position and velocity (VX) and their errors, to save space without affecting the description of the problem.

The comparison of the velocity prediction error along this direction is shown in Figure 5. From top to bottom, the panels show the prediction error in the end coordinate system with a 1 ms pose update period, the prediction error in the end coordinate system with an 80 ms period, and the prediction error in the base coordinate system with an 80 ms period. The comparison shows that prediction in the end coordinate system requires a pose update period much shorter than 80 ms to keep the error small, whereas prediction in the base coordinate system with an 80 ms period achieves an error comparable to end-frame prediction with a 1 ms period. For a space robot, a pose update period of 1 ms is very difficult to realize because of the limited computing resources, while an 80 ms update period is feasible in engineering. Therefore, the method of predicting the target velocity in the base coordinate system has more practical application value for space robots.

The target velocity and angular velocity predicted in the base coordinate system, as proposed in this paper, are used as the feedforward for end velocity planning (the calculation period of end planning is 50 ms, and the pose update period of the target in the end coordinate system is 80 ms), and the velocity prediction error is set to 0.008 m/s. The relative position between the end and the capture point along one translational axis is shown in Figure 6 (the final position deviation is the same along both translational axes, but the initial deviation along the dominant axis is large, which makes it difficult to read the final position error from the whole tracking curve, so the other axis is used for the analysis). Without integral compensation, the relative position curve of the end and the target shows a constant offset of about 0.05 m between the end and the target, which is caused by the uncompensated velocity prediction error. When the maximum value of the integral compensation is set to 0.01 m/s, the end of the manipulator tracks the target well and the final position error tends to zero.

4.2. Experimental Verification

The algorithm is validated on an air-bearing platform using a real manipulator with the same parameters as in the simulated working conditions. The experimental scenario is shown in Figure 7.

The absolute positioning accuracy of the end of the manipulator is 0.02 m and 0.5°, and the measurement accuracy of the hand-eye camera is 0.01 m and 0.5°. In the experiment, the target translates at about 35 mm/s and rotates at a maximum angular velocity of about 1°/s (limited by the experimental conditions, the linear and angular velocities on the air-bearing platform are not as stable as in the simulation). The calculated value of the linear velocity feedforward during the experiment is shown in Figure 8. The curve shows that, although the real target speed is not as stable as in the simulation, the measured target speed settles after the initial acceleration and adjustment and is consistent with the target speed curve measured by the ground system.

When the velocity feedforward is used to plan the end velocity but integral compensation is not introduced, the tracking of the target by the end is shown in Figure 9(a) (again, only the position along the analysed direction is plotted). Because of the velocity prediction error, the end never stays within the tolerance range (position tolerance 2 cm, attitude tolerance 2°) in front of the capture point (the intermediate point), and the tracking fails. After integral compensation is introduced, the tracking is shown in Figure 9(b). The relative position curve of the end and the target shows that the tracking is stable, and the position and attitude errors between the end and the target reach the capture tolerance range after about 35 seconds.

5. Conclusion

Aiming at the problem of a manipulator tracking a moving target, this paper proposes a method that predicts the motion state of the moving target in the base coordinate system using hand-eye vision and end pose information, and uses the predicted value as a velocity feedforward in a position-based visual servo scheme to plan the end velocity of the manipulator. At the same time, an integral term is introduced to compensate for the velocity prediction error. The method overcomes the influence of the motion of the end coordinate system on target prediction in a discrete system and converges throughout the motion of the manipulator, which is of practical value for space manipulators with limited computing resources. The simulation and experimental results show that the proposed method is effective for visual servo tracking of translating and spinning targets.

Data Availability

The authors agree that all data related to this paper can be deposited in an approved database and related accession numbers can be included in the published paper. All related data can be released at the time of publication, and all data necessary to understand and assess the conclusions of the manuscript can be available to any reader of this journal. All data included in this study are in the article and are available upon request by contact with the corresponding author.

Conflicts of Interest

The authors declare that there are no competing interests.

Acknowledgments

Thanks are due to Dr. Jiyang Yu for completing the code implementation of the proposed method, and to Dr. Zhihong Wu and Mr. Yonghui Zhou for their great help with the experimental verification and for carrying out all the experiments. This work was supported by the National Natural Science Foundation of China (61733001).

References

  1. Z. Guang, Q. Yue, L. Bin, and L. Cheng, “Development of on-orbit capture technology,” Robot, vol. 30, no. 5, pp. 467–480, 2008.
  2. K. Yoshida, “Engineering Test Satellite VII flight experiments for space robot dynamics and control: theories on laboratory test beds ten years ago, now in orbit,” The International Journal of Robotics Research, vol. 22, no. 5, pp. 321–335, 2003.
  3. U. Satoshi, K. Toru, and U. Hirohiko, “HTV rendezvous technique and GN&C design evaluation based on 1st flight on-orbit operation result,” in AIAA/AAS Astrodynamics Specialist Conference, p. 7664, Toronto, Ontario, Canada, 2010.
  4. W. Xu, B. Liang, C. Li, Y. Liu, and Y. Xu, “Autonomous target capturing of free-floating space robot: theory and experiments,” Robotica, vol. 27, pp. 425–445, 2008.
  5. F. Aghili and K. Parsa, “An adaptive vision system for guidance of a robotic manipulator to capture a tumbling satellite with unknown dynamics,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3064–3071, Nice, France, 2008.
  6. F. Aghili, “A prediction and motion-planning scheme for visually guided robotic capturing of free-floating tumbling objects with uncertain dynamics,” IEEE Transactions on Robotics, vol. 28, no. 3, pp. 634–649, 2012.
  7. L. Houde, Coordinated Motion Planning Study of Dual-Arm Space Robot for Capturing Spinning Target, Ph.D. dissertation, Harbin Institute of Technology, 2013.
  8. L. Houde, L. Bin, X. Wenfu, M. Qingtao, and Y. Jianghua, “Motion prediction and autonomous path planning for spinning target capturing,” Journal of Jilin University (Engineering and Technology Edition), vol. 44, no. 3, pp. 757–764, 2014.
  9. C. C. Liang, X. D. Zhang, D. Pan, and X. Liu, “Research of visual servo control system for space intelligent robot,” in 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 96–100, Chongqing, China, 2015.
  10. W. Xu, C. Li, B. Liang, Y. Liu, and W. Qiang, “Coordinated planning and control method of space robot for capturing moving target,” Acta Automatica Sinica, vol. 35, no. 9, pp. 1216–1225, 2009.
  11. W. Xinglong, Z. Zhicheng, and Q. Guangji, “Dynamic trajectory planning method of space manipulator for capturing a tumbling target,” Journal of Astronautics, vol. 38, no. 7, pp. 678–685, 2017.
  12. J. Bingxi, L. Shan, Z. Kaixiang, and C. Jian, “Survey on robot visual servo control: vision system and control strategies,” Acta Automatica Sinica, vol. 41, no. 5, pp. 861–873, 2015.

Copyright © 2021 Rui Wang et al. Exclusive Licensee Beijing Institute of Technology Press. Distributed under a Creative Commons Attribution License (CC BY 4.0).
