Technical Paper

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

Jeonghoon Hyun, Youngho Eun, Sang-Young Park (http://orcid.org/0000-0002-1962-4038)
Department of Astronomy, Yonsei University, Seoul 03722, Korea
Corresponding Author Tel: +82-2-2123-5687, E-mail: spark624@yonsei.ac.kr

© The Korean Space Science Society. All rights reserved. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Sep 20, 2018; Revised: Nov 12, 2018; Accepted: Nov 14, 2018

Abstract

This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together, considering actual experimental conditions. Two estimation techniques were utilized to estimate the relative pose: a nonlinear least squares method for the initial estimation, and an extended Kalman filter for the subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was then verified in an experimental environment using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was established by confirming how the estimation accuracy changes with the level of measurement error. In addition, hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.

Keywords: relative pose estimation; vision-based sensor; extended Kalman Filter; hardware experiment; laser distance meter

1 INTRODUCTION

Formation flying offers benefits in many aspects, such as development period, launch cost, and performance, by replacing a single large spacecraft with several small spacecraft. However, in order to improve the performance of formation flying missions, the spacecraft must be operated more precisely than in single-spacecraft operation. Therefore, various studies have been conducted to improve the accuracy of relative pose estimation between spacecraft.

The vision sensor provides relative pose information and is mainly used at short distances. It is generally inexpensive compared to sensors such as lidar and radar. In addition, a shorter relative distance means higher estimation accuracy. Therefore, the vision sensor is considered a powerful and promising sensor for proximity operations. As an example, the recent proximity operations and docking of Shenzhou-9 with Tiangong-1 were carried out using a vision sensor (Xie et al. 2013). NASA installed a CCD optical sensor on the SPHERES autonomous docking module and utilized it to verify the performance of the vision sensor in space (Saenz-Otero & Miller 2005) and to conduct various studies (Olivieri et al. 2017).

Vision-based relative pose estimation has generally been performed using line-of-sight (LOS) vector measurements (Junkins et al. 1999). The relative state between spacecraft can be estimated using the nonlinear least squares method (Junkins et al. 1999) or the extended Kalman filter (Kim et al. 2007). In addition, a study was conducted to estimate the states of both a chief and a deputy satellite based on chief coordinates rather than local vertical, local horizontal (LVLH) coordinates (Zhang et al. 2014). Previous studies applied the extended Kalman filter with zero-mean Gaussian white noise as the dynamic model error. Under this assumption, it is possible to estimate the orbital elements of a chief spacecraft, as well as the relative pose, if the initial state error is not too large. However, the vision sensor provides only relative information, so estimating absolute orbital elements under such an idealized error model is impractical. It has been confirmed that the relative position estimation accuracy decreases remarkably when nonlinear perturbations are considered (Lee & Pernicka 2011). In this study, a projected circular orbit (PCO) formation flying scenario was implemented by applying nonlinear perturbations and the degradation of sensor resolution possible in reality.

Vision-based relative pose estimation can be utilized for satellite formation flying together with relative navigation methods. The relative position of satellites can be determined using only GPS measurements (Sim et al. 2009) or laser measurements, which consist of range and direction data (Jung et al. 2012; Lee et al. 2018). Moreover, the relative position can be determined employing a laser in combination with GPS data (Lee et al. 2015; Oh et al. 2016). With relative position and relative pose estimation, precise formation flying technologies, such as proximity operations, rendezvous, or docking, become feasible. Navigation algorithms for vision-based relative pose estimation can be derived from the filtering algorithms for absolute satellite navigation (Kim et al. 2017; Lee et al. 2017a; Lee et al. 2017b).

Since it takes a great deal of effort to verify formation flying technology in space, validation through a ground test-bed is regarded as the most reasonable alternative. Much research has been conducted using test-beds to obtain high stability and environmental similarity to space. At Texas A&M University, a vision-based state estimation system, VisNav, was installed on a 2-degrees-of-freedom (DOF) air-bearing test bed (Du 2004). The MIT MVDS also performed ground validation through a 3-DOF air-bearing test bed (Rodgers 2006). At Yonsei University, the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) consists of an aluminum table, a main computer, and two spacecraft simulators (Eun et al. 2018). ASTERIX can be used to perform feasibility assessments of systems that will operate in space.

In this study, pose alignment was implemented by the vision system in numerical simulations to verify that the system can be applied to satellite formation flying. The influence of external environmental factors on formation flying performance was analyzed by checking the estimation and control accuracy with respect to the level of observation error. To improve the relative position estimation accuracy even in the presence of nonlinear perturbations, we proposed a performance improvement through combination with a laser sensor. This supplements the existing algorithm, whose estimation performance declines at larger relative distances, and suggests the possibility of expanding the relative distance range of formation flying that utilizes the vision sensor. By carrying out performance verification through hardware experiments, we confirmed that the constructed algorithm is applicable in a real environment in the presence of various disturbing factors. In addition, we investigated the reasons behind the differences between the numerical simulation and hardware experiment results, laying the basis for constructing algorithms that can be applied to real environments more pragmatically.

2 VISION-BASED MEASUREMENT SYSTEM

A measurement model of the estimation algorithm was constructed using a vision sensor, a laser distance meter, and a gyro sensor.

2.1 Vision System

The vision system consists of a CMOS video camera (Table 1), red 5 mm circular LEDs, and the estimation algorithm. The camera on the tracking spacecraft recognizes four LEDs attached to the target spacecraft and saves their positions on the CMOS image plane. The camera generates a gray-scale 2D image with a fixed shutter speed and focal length. If we construct a vector from the position at which the light is focused to the center of the lens, the measurement can be written as follows (Alonso et al. 2000).

Table 1. Camera (DFK33UX174) and lens (GMTHR48014MCN)

$$\mathbf{b}_i = A\,\mathbf{r}_i, \qquad i = 1, 2, 3, \ldots, n$$

$$\mathbf{b}_i = \frac{1}{\sqrt{f^2 + x^2 + y^2}}\begin{bmatrix} x \\ y \\ f \end{bmatrix}, \qquad
\mathbf{r}_i = \frac{1}{\sqrt{(X_i + x)^2 + (Y_i + y)^2 + (Z_i + z)^2}}\begin{bmatrix} X_i + x \\ Y_i + y \\ Z_i + z \end{bmatrix}$$
(1)

In Eq. (1), b_i is the i-th measurement, and A is the transformation matrix from the reference coordinate system to the sensor coordinate system. X_i, Y_i, and Z_i are the coordinates of the i-th LED with respect to the target spacecraft, and x, y, and z are the coordinates of the target spacecraft with respect to the tracking spacecraft. These coordinates can be expressed in the LVLH coordinate system, the Earth-centered inertial (ECI) coordinate system, the body-fixed coordinate system of the target spacecraft, etc. The measurement noise matrix of the Gaussian-distributed error was proven to be a diagonal matrix whose diagonal components are the error variance, as in Eq. (2) (Cheng et al. 2005).

$$R_i = \sigma_i^2\, I_{3\times3}$$
(2)
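
As an illustration of Eqs. (1) and (2), a minimal sketch of the measurement model is given below; the LED coordinates, relative position, attitude matrix, focal length, and noise level are assumed placeholder values rather than parameters from this study.

```python
# Sketch of the LOS measurement model of Eqs. (1)-(2).
import numpy as np

def los_measurement(A, led_body, rel_pos):
    """Unit LOS vector b_i = A r_i (Eq. 1): led_body is (X_i, Y_i, Z_i),
    rel_pos is (x, y, z)."""
    v = led_body + rel_pos                # (X_i + x, Y_i + y, Z_i + z)
    return A @ (v / np.linalg.norm(v))    # normalize, rotate to sensor frame

def image_point(b, f):
    """Project a sensor-frame LOS vector onto the image plane with focal
    length f (inverting the normalization of Eq. 1)."""
    return f * b[:2] / b[2]

sigma = 0.5e-3                            # assumed LOS error std. deviation
R_i = sigma**2 * np.eye(3)                # measurement noise matrix (Eq. 2)

A = np.eye(3)                             # camera aligned with reference frame
led = np.array([0.075, 0.04, 0.0])        # corner of a 15 cm x 8 cm pattern
rho = np.array([0.0, 0.0, 0.3])           # 0.3 m docking distance
px = image_point(los_measurement(A, led, rho), f=0.008)  # 8 mm focal length
```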

2.2 LED Configuration

Information provided by the vision sensor consists of the projected LED positions on the CMOS detector and the image size for image processing (Fig. 1). The LEDs have to be sufficiently brighter than the surrounding lights, but at the same time should have moderate intensities to reduce measurement error due to inter-LED interference or light blur. This is achieved by adjusting the camera aperture. The wavelength of the LEDs was selected in the visible range to minimize interference with the infrared markers.

Fig. 1. 2D gray-scale image obtained by image processing.

The measurement values of the vision system consist of row and column positions in the image plane, whereas the LEDs carry 3D position information. Therefore, the matching does not always fulfill a one-to-one correspondence, and a singularity problem can easily occur (Fig. 2). The singularity arises when all of the LEDs are placed on one plane (Rodgers 2006). Therefore, the minimum number of LEDs required to avoid the singularity is four.

Fig. 2. Singularity (Rodgers 2006).

Considering the viewing angle, all LEDs were positioned so as to be projected within 80 % of the image size when the camera and the LEDs were at their closest distance (Fig. 3). When the distance between the camera and the LED center is 5 cm, the dimensions of the LED configuration are 15 cm in width and 8 cm in height. The estimation accuracy with respect to the number and distribution of LEDs was discussed in detail by Xing et al. (2010).

Fig. 3. LED configuration considering singularity, field of view, and minimum distance.
2.3 Image Processing
2.3.1. Calibration

Calibration is the process of experimentally and numerically determining the actual focal length. It is an indispensable process that derives the intrinsic matrix of the measurement model. The process numerically estimates the internal parameters using several grid images (Zhang 2000). In this study, we used the GML C++ Camera Calibration Toolbox v0.72 beta developed by the Graphics and Media Lab of Moscow State University (http://graphics.cs.msu.ru/en/node/909). Calibration yields the radial and tangential distortion coefficients, the position of the principal point, and the pixel error, as well as the focal length. The most influential variable is the focal length, which needs to be obtained every time the camera settings are changed.

2.3.2. Cutoff

An image obtained with a camera gives different 2D information depending on parameter settings such as hue, brightness, and saturation. In order to extract the desired measurement, a cutoff process is necessary (Fig. 4). In the cutoff process, the number of observed objects is obtained at every capture, and the cutoff value is adjusted repeatedly until the desired number is achieved. In this process, if the difference between successive positions of an object (LED) is larger than a specified value, failure is declared and passed to the next step. If failures are declared continuously, some action should be taken, such as reducing surrounding lights by adjusting the experimental environment.

Fig. 4. Cutoff and centroiding.
2.3.3. Centroiding

The center points of the LEDs are found in the LED image after cutoff (Fig. 1) by extracting the center of mass of the concentrated pixels. In the LED structure, the hemispherical part emitting light contains a filament. Therefore, the image distortion of an LED becomes larger as the lateral distance between the center of the lens and the LED increases. The effect of image distortion on the observation error is shown in Fig. 5. If the relative distance in the line-of-sight direction is larger than 1 m, the observation error is less than 1 pixel. In a previous study using the measurement model of Eq. (1) and a position sensitive diode (PSD), the authors assumed an observation error of 0.3 – 0.5 pixels (Crassidis et al. 2000). We assumed observation errors of 0.1 – 5 pixels to analyze the estimation accuracy with respect to the level of observation error. The observation error was set to less than 1 pixel in the PCO formation flying scenario.
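
A compact sketch of the cutoff and centroiding steps is given below; the threshold value, frame size, and blob handling are illustrative, and scipy.ndimage is assumed for connected-component labeling.

```python
# Sketch of the cutoff and centroiding steps of Sections 2.3.2-2.3.3.
import numpy as np
from scipy import ndimage

def find_led_centers(frame, cutoff, expected=4):
    """Threshold the gray-scale frame (cutoff) and return the sub-pixel,
    intensity-weighted centers of the bright blobs (centroiding); None is
    returned when the expected number of objects is not found, so the
    caller can adjust the cutoff value and retry."""
    mask = frame > cutoff                       # cutoff step
    labels, n = ndimage.label(mask)             # connected bright regions
    if n != expected:
        return None                             # failure: adjust cutoff
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))

frame = np.zeros((1200, 1920))                  # synthetic gray-scale image
frame[100:104, 200:204] = 255.0                 # one artificial LED spot
centers = find_led_centers(frame, cutoff=128, expected=1)
```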

Fig. 5. Centroiding error (left); measurement error with respect to relative distance (right).
2.4 Gyro Sensor

The gyro sensor, which measures the angular velocity of the spacecraft, was applied to the filter and the relative attitude estimation. The angular velocity measured by the gyro includes drift and instability errors in addition to the actual angular velocity of the body. The gyro drift error is divided into white noise and a bias error. The instability error is divided into electric noise and a quantization error component. In this study, the most common gyro model, a first-order Markov process, was applied (Crassidis & Junkins 2004).

$$\boldsymbol{\omega} = \tilde{\boldsymbol{\omega}} - \boldsymbol{\beta} - \boldsymbol{\eta}_v, \qquad \dot{\boldsymbol{\beta}} = \boldsymbol{\eta}_u$$
(3)

In Eq. (3), η_v and η_u denote zero-mean Gaussian white noise with spectral densities of σ_v²I and σ_u²I, respectively. β denotes the drift error vector, and ω̃ is the measured angular velocity. The angular velocity can be estimated by propagating the attitude in Eq. (12).
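
Eq. (3) can be simulated discretely as in the sketch below: the measured rate is the true rate plus a random-walk bias and white noise. The noise densities and time step are assumed values for illustration.

```python
# Sketch of the gyro model of Eq. (3) as a discrete simulation.
import numpy as np

def gyro_sample(omega_true, beta, sigma_v, sigma_u, dt, rng):
    """One gyro sample: omega_tilde = omega + beta + eta_v, with the
    drift bias integrating white noise (beta_dot = eta_u)."""
    eta_v = sigma_v * rng.standard_normal(3)            # rate white noise
    omega_meas = omega_true + beta + eta_v              # measured rate
    beta_next = beta + sigma_u * np.sqrt(dt) * rng.standard_normal(3)
    return omega_meas, beta_next

rng = np.random.default_rng(0)
beta = np.deg2rad(np.ones(3)) / 3600.0                  # 1 deg/hr bias
w_meas, beta = gyro_sample(np.zeros(3), beta, 1e-4, 1e-6, 0.1, rng)
```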

2.5 Laser Distance Meter

The laser distance meter measures relative distance from the sensor to the object in the direction of the laser line of sight (Fig. 6). The device can measure relative distance at a rate of up to 50 Hz. The measurement model of the laser distance meter is as follows (Shin 2016).

Fig. 6. LDM41.1 (Jenoptik) (left); distance measurement error of the laser (right).

$$d_{\mathrm{relative}} = \mathbf{n}^T A_{\mathrm{ref}}^{\mathrm{laser}} \begin{bmatrix} R_x \\ R_y \\ R_z \end{bmatrix} \times \begin{cases} 1 & \text{when the laser detects a target} \\ 0 & \text{when the laser loses a target} \end{cases}$$

$$\frac{\partial \hat{d}_{\mathrm{relative}}}{\partial \hat{\boldsymbol{\rho}}} = \mathbf{n}^T A_{\mathrm{ref}}^{\mathrm{laser}} \frac{\partial \hat{\mathbf{r}}}{\partial \hat{\boldsymbol{\rho}}}, \qquad \mathbf{n} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T, \qquad \mathbf{r} = \begin{bmatrix} R_x & R_y & R_z \end{bmatrix}^T$$
(4)

In Eq. (4), A_ref^laser is the matrix converting the coordinate system in which R_x, R_y, and R_z are expressed into the laser coordinate system. The line of sight of the laser coincides with the x-axis of the laser coordinate system. It is assumed that measurements can be obtained when the angle between the x-axis of the laser coordinate system and the position vector pointing from the laser to the target spacecraft is within $1\,\mathrm{m}/\sqrt{R_x^2+R_y^2+R_z^2}$ rad. Therefore, in order to employ the laser, the alignment must first be achieved by the vision sensor and the gyro sensor. The inherent measurement error of this equipment is within 5 mm (Fig. 6).
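
The sketch below evaluates the laser model of Eq. (4), including the angular validity test described above; the frames, range, and 5 mm noise level are illustrative.

```python
# Sketch of the laser distance meter model of Eq. (4).
import numpy as np

def laser_range(A_ref_laser, r, rng, noise_std=0.005):
    """Relative distance along the laser x-axis, or None when the
    off-boresight angle exceeds the ~1 m lateral tolerance."""
    r_laser = A_ref_laser @ r                  # target position, laser frame
    dist = np.linalg.norm(r_laser)
    angle = np.arccos(r_laser[0] / dist)       # angle from laser x-axis
    if angle > 1.0 / dist:                     # 1 m / |r| rad threshold
        return None                            # laser loses the target
    n = np.array([1.0, 0.0, 0.0])
    return n @ r_laser + noise_std * rng.standard_normal()

rng = np.random.default_rng(1)
d = laser_range(np.eye(3), np.array([50.0, 0.2, -0.1]), rng)  # ~50 m range
```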

3 VISION-BASED RELATIVE POSE ESTIMATION

The relative state between two spacecraft can be estimated using the nonlinear least squares method (Junkins et al. 1999) or the extended Kalman filter (Kim et al. 2007), both of which utilize the line-of-sight vector as the measurement. The nonlinear least squares method provides a simple estimation solution by means of a numerical algorithm, but becomes inaccurate with large measurement noise.

The measurement model was constructed from the relationship between the relative pose and the expected measurements derived from the lens equations and triangulation (Rodgers 2006). Using actual measurements, we can obtain a numerical solution for the rotation matrix, which contains the relative pose.

On the other hand, the extended Kalman filter can speed up the calculation, and its accuracy improves as the precision of the dynamic model increases (Junkins et al. 1999). Therefore, it is possible to estimate the state between two spacecraft on-line with the extended Kalman filter.

3.1 Nonlinear Least Squares Method

To estimate the initial relative pose between two spacecraft without prior information, Eq. (1) is solved iteratively with the current measurement only. As this is an unconstrained nonlinear optimization problem, we applied the Levenberg-Marquardt method, a nonlinear least squares method (Moré 1977). The positions of the LEDs must be matched with their projected positions on the image plane so that the solution converges through iterative calculation. Therefore, we terminate the numerical calculation and adopt a final solution when a candidate LED correspondence yields a residual of less than 10 pixels.
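
A minimal sketch of the initial pose fit is given below, using SciPy's Levenberg-Marquardt solver; the Euler-angle attitude parameterization, LED layout, pixel focal length, and pre-matched correspondences are simplifying assumptions, not the exact formulation of this study.

```python
# Sketch: initial relative pose by nonlinear least squares (Section 3.1).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

LEDS = np.array([[ 0.075,  0.04, 0.0], [-0.075,  0.04, 0.0],
                 [ 0.075, -0.04, 0.0], [-0.075, -0.04, 0.0]])
F_PX = 1400.0                               # assumed focal length in pixels

def residuals(p, measured_px):
    """p = [x, y, z, roll, pitch, yaw]; stacked pixel residuals of all LEDs."""
    rho = p[:3]
    A = Rotation.from_euler('xyz', p[3:]).as_matrix()
    res = []
    for led, meas in zip(LEDS, measured_px):
        v = A @ (led + rho)                 # LED position in camera frame
        res.extend(F_PX * v[:2] / v[2] - meas)
    return res

x0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])   # rough guess, 1 m along LOS
sol = least_squares(residuals, x0, args=(np.zeros((4, 2)),), method='lm')
# accept sol.x when the residual norm falls below the 10 pixel threshold
```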

3.2 Extended Kalman Filter

In the extended Kalman filter, the measurement noise matrix is a diagonal matrix whose diagonal components are the covariance of the observation error. When propagating trajectories through the relative orbital equations, a diagonal matrix with components of the acceleration disturbances can be used as the process noise matrix (Kim et al. 2007). Nevertheless, we used Eq. (5), which describes absolute orbital motion, to check the effect of nonlinear perturbations apart from white noise. We then found a proper process noise matrix by trial and error, so that the standard deviation derived from the filter matched the state error derived from the reference state.
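
A generic predict/update skeleton of the extended Kalman filter is sketched below; f, F_jac, h, and H_jac stand for the dynamic and measurement models and their Jacobians, and Q is the process noise matrix tuned by trial and error as described above.

```python
# Skeleton of one extended Kalman filter cycle.
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R, dt):
    """Propagate with the nonlinear dynamics, then update with z."""
    x_pred = x + f(x) * dt                        # Euler propagation step
    Phi = np.eye(len(x)) + F_jac(x) * dt          # discrete transition matrix
    P_pred = Phi @ P @ Phi.T + Q * dt             # covariance propagation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))          # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```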

3.3 Dynamic Model
3.3.1. Absolute Equations of Motion

In addition to Earth's gravity, spacecraft in Earth orbit are also affected by Earth's non-spherical perturbations, atmospheric drag, third-body perturbations, and solar radiation pressure. The absolute equations of motion including these perturbations are as follows (Vallado 2014).

$$\ddot{\mathbf{r}} = -\frac{\mu}{r^3}\,\mathbf{r} + \mathbf{a}_{\mathrm{nonspherical}} + \mathbf{a}_{\mathrm{airdrag}} + \mathbf{a}_{\mathrm{3rdbody}} + \mathbf{a}_{\mathrm{solarradiation}}$$
(5)

In Eq. (5), the position vector of the spacecraft is denoted by r, and μ is the Earth gravitational constant, while a_nonspherical, a_airdrag, a_3rdbody, and a_solarradiation represent the accelerations due to the non-spherical perturbation, atmospheric friction, third-body gravitational perturbation, and solar radiation pressure, respectively. The state vector propagated through this equation was regarded as the true value and used as the reference against which the estimated value was compared in order to evaluate the estimation performance.
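
The sketch below propagates Eq. (5) numerically; only the two-body term and the J2 part of the non-spherical perturbation are written out, with the remaining accelerations left as placeholders, and the orbit is an assumed ~500 km circular example.

```python
# Sketch of propagating the absolute equations of motion (Eq. 5).
import numpy as np
from scipy.integrate import solve_ivp

MU, RE, J2 = 3.986004418e14, 6378137.0, 1.08263e-3   # SI units

def accel(t, y):
    r, v = y[:3], y[3:]
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3                              # two-body gravity
    k = 1.5 * J2 * MU * RE**2 / rn**5                # J2 coefficient
    z2 = (r[2] / rn)**2
    a = a + k * np.array([r[0] * (5 * z2 - 1),
                          r[1] * (5 * z2 - 1),
                          r[2] * (5 * z2 - 3)])      # J2 acceleration
    # + a_airdrag + a_3rdbody + a_solarradiation omitted in this sketch
    return np.concatenate([v, a])

y0 = np.array([RE + 500e3, 0.0, 0.0, 0.0, 7612.6, 0.0])
sol = solve_ivp(accel, (0.0, 5400.0), y0, rtol=1e-9, atol=1e-6)
```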

The attitude of the spacecraft was represented by a quaternion. The angular velocity was represented along the three axes of the body-fixed coordinate system. The attitude dynamic model is expressed as follows (Zhang et al. 2014).

$$\dot{q} = \frac{1}{2}\Xi(q)\,\boldsymbol{\omega} = \frac{1}{2}\Omega(\boldsymbol{\omega})\,q, \qquad q = \begin{bmatrix} q_1 & q_2 & q_3 & q_0 \end{bmatrix}^T$$

$$\Xi(q) \equiv \begin{bmatrix} q_0 & -q_3 & q_2 \\ q_3 & q_0 & -q_1 \\ -q_2 & q_1 & q_0 \\ -q_1 & -q_2 & -q_3 \end{bmatrix}, \qquad
\Omega(\boldsymbol{\omega}) \equiv \begin{bmatrix} 0 & \omega_z & -\omega_y & \omega_x \\ -\omega_z & 0 & \omega_x & \omega_y \\ \omega_y & -\omega_x & 0 & \omega_z \\ -\omega_x & -\omega_y & -\omega_z & 0 \end{bmatrix}$$
(6)

In Eq. (6), the attitude and the angular velocity with respect to the inertial coordinate system are described in the body-fixed coordinate system. The derivative of the angular velocity can be obtained as follows.

$$\dot{\boldsymbol{\omega}} = I^{-1}\left(\mathbf{T} - \boldsymbol{\omega} \times (I\boldsymbol{\omega})\right)$$
(7)

The attitude and the angular velocity are calculated by the state equations, Eqs. (6) and (7). The torque includes the gravity gradient torque and the control torque of the spacecraft. The attitude and angular velocity of the two spacecraft are propagated in the inertial frame, and the true value of the relative attitude is then generated by the following quaternion relationship.

$$q_{d,c} = q_d \otimes q_c^{-1}$$
(8)

In Eq. (8), q_d, q_c, and q_{d,c} are quaternions representing the deputy spacecraft, the chief spacecraft, and the relative attitude of the deputy with respect to the chief, respectively. The symbol '⊗' denotes quaternion multiplication.
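
A sketch of the relative quaternion computation of Eq. (8) is shown below, using the scalar-last convention of Eq. (6); the sign convention of the quaternion product is an assumption consistent with that kinematics.

```python
# Sketch of the relative quaternion of Eq. (8).
import numpy as np

def quat_mult(q, p):
    """Quaternion product q (x) p, scalar-last convention q = [q1,q2,q3,q0]."""
    qv, q0 = q[:3], q[3]
    pv, p0 = p[:3], p[3]
    vec = q0 * pv + p0 * qv - np.cross(qv, pv)
    return np.concatenate([vec, [q0 * p0 - qv @ pv]])

def quat_inv(q):
    """Inverse (conjugate) of a unit quaternion."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def relative_quat(q_d, q_c):
    """q_{d,c} = q_d (x) q_c^{-1}."""
    return quat_mult(q_d, quat_inv(q_c))
```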

The Jacobian matrix F of the filter was derived from Eq. (5). The dynamic model of the filter includes the J2 perturbation. The orbit control includes an input error of about 10 %, which is a realistic value for the simulator thruster. The numerical simulation of docking showed that the filter works properly under this setting.

The absolute equations of motion were mainly used for estimation with a sensor that provides the absolute attitude with respect to the inertial reference frame. Thus, they can also be implemented for the estimation of the spacecraft simulator, since the reference coordinate system of the simulator is assumed to be inertial. The linearized state equation was constructed through Eq. (6) and Eq. (7). It is assumed that the torque in Eq. (7) includes an error of about 10 % of the control input. The detailed matrices of the filter are not described in this paper because of their simplicity.

3.3.2. Relative Equations of Motion

Relative pose estimation with respect to a rotating coordinate frame requires relative equations of motion. Here, we describe the relative pose estimation method based on the LVLH coordinate system (Zhang et al. 2014). This method is used in the numerical simulation of PCO scenarios.

If the relative distance between the spacecraft is much smaller than the orbital radius, it is convenient to establish relative orbital equations representing the relative coordinates of the deputy spacecraft in the chief orbital (LVLH) coordinate system. If the chief orbit is circular, the relative equations of motion simplify to the following Clohessy-Wiltshire (CW) equations (Schaub & Junkins 2002).

$$\ddot{x} - 2n\dot{y} - 3n^2 x = 0, \qquad \ddot{y} + 2n\dot{x} = 0, \qquad \ddot{z} + n^2 z = 0, \qquad n = \dot{f} = \mathrm{constant}$$
(9)

In Eq. (9), x, y, and z are the relative position components in the LVLH coordinate system, and n = ḟ is the orbital angular velocity.
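
A sketch integrating the CW equations of Eq. (9) is given below; the mean motion corresponds to an assumed ~500 km circular chief orbit, and the initial condition uses the well-known bounded-motion constraint ẏ₀ = −2n x₀.

```python
# Sketch of propagating the CW equations (Eq. 9).
import numpy as np
from scipy.integrate import solve_ivp

n = np.sqrt(3.986004418e14 / 6878137.0**3)       # mean motion, rad/s

def cw(t, s):
    """CW dynamics in LVLH coordinates."""
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            2.0 * n * vy + 3.0 * n**2 * x,       # x'' = 2n y' + 3n^2 x
            -2.0 * n * vx,                       # y'' = -2n x'
            -n**2 * z]                           # z'' = -n^2 z

s0 = [100.0, 0.0, 0.0, 0.0, -2.0 * n * 100.0, 0.0]  # bounded relative orbit
sol = solve_ivp(cw, (0.0, 5700.0), s0, max_step=10.0)
```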

The relative attitude equation is expressed with the relative quaternion obtained from Eq. (8), and can be written as Eq. (10) (Lefferts et al. 1982).

$$q_{t,c} = q_t \otimes q_c^{-1}$$

$$\dot{q}_{t,c} = \begin{bmatrix} -[\boldsymbol{\omega}_c \times]\,\boldsymbol{\rho} \\ 0 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} \boldsymbol{\omega}_t - \boldsymbol{\omega}_c \\ 0 \end{bmatrix} \otimes q_{t,c}, \qquad \boldsymbol{\rho} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix}^T$$
(10)

The subscripts c and t in Eq. (10) refer to the reference spacecraft and the other spacecraft, respectively. It was shown that Eq. (10) is identical to the following kinematic equations (Xing et al. 2001).

$$\dot{q}_{t,c} = \frac{1}{2}\Xi(q_{t,c})\,\boldsymbol{\omega}_{t,c}, \qquad \Xi(q) \equiv \begin{bmatrix} q_4 I_{3\times3} + [\boldsymbol{\rho}\times] \\ -\boldsymbol{\rho}^T \end{bmatrix}$$
(11)

Here, we can use some of the quaternion relations given in Kim et al. (2007). The discrete propagation equation is therefore given as follows.

$$q_{t,c,k+1} = \bar{\Omega}(\boldsymbol{\omega}_{t,k})\,\bar{\Gamma}(\boldsymbol{\omega}_{c,k})\,q_{t,c,k}$$

$$\bar{\Omega}(\boldsymbol{\omega}_{t,k}) = \begin{bmatrix} \cos\left(\tfrac{1}{2}|\boldsymbol{\omega}_{t,k}|\Delta t\right) I_{3\times3} - [\boldsymbol{\psi}_k \times] & \boldsymbol{\psi}_k \\ -\boldsymbol{\psi}_k^T & \cos\left(\tfrac{1}{2}|\boldsymbol{\omega}_{t,k}|\Delta t\right) \end{bmatrix}$$

$$\bar{\Gamma}(\boldsymbol{\omega}_{c,k}) = \begin{bmatrix} \cos\left(\tfrac{1}{2}|\boldsymbol{\omega}_{c,k}|\Delta t\right) I_{3\times3} + [\boldsymbol{\zeta}_k \times] & -\boldsymbol{\zeta}_k \\ \boldsymbol{\zeta}_k^T & \cos\left(\tfrac{1}{2}|\boldsymbol{\omega}_{c,k}|\Delta t\right) \end{bmatrix}$$

$$\boldsymbol{\psi}_k = \frac{\sin\left(\tfrac{1}{2}|\boldsymbol{\omega}_{t,k}|\Delta t\right)}{|\boldsymbol{\omega}_{t,k}|}\boldsymbol{\omega}_{t,k}, \qquad \boldsymbol{\zeta}_k = \frac{\sin\left(\tfrac{1}{2}|\boldsymbol{\omega}_{c,k}|\Delta t\right)}{|\boldsymbol{\omega}_{c,k}|}\boldsymbol{\omega}_{c,k}$$
(12)
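
A sketch implementing the discrete propagation of Eq. (12) is given below; skew(·) denotes the cross-product matrix, and the sign conventions follow the reconstruction above.

```python
# Sketch of the discrete relative quaternion propagation (Eq. 12).
import numpy as np

def skew(v):
    """Cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega_bar(w, dt):
    """Transition matrix Omega_bar for the deputy rate w over one step."""
    wn = np.linalg.norm(w)
    c = np.cos(0.5 * wn * dt)
    psi = np.sin(0.5 * wn * dt) * w / wn
    top = np.hstack([c * np.eye(3) - skew(psi), psi[:, None]])
    return np.vstack([top, np.hstack([-psi, [c]])])

def gamma_bar(w, dt):
    """Transition matrix Gamma_bar carrying the chief's inverse rotation."""
    wn = np.linalg.norm(w)
    c = np.cos(0.5 * wn * dt)
    zeta = np.sin(0.5 * wn * dt) * w / wn
    top = np.hstack([c * np.eye(3) + skew(zeta), -zeta[:, None]])
    return np.vstack([top, np.hstack([zeta, [c]])])

def propagate_rel_quat(q_tc, w_t, w_c, dt):
    """One step of Eq. (12): q_{t,c,k+1} = Omega_bar Gamma_bar q_{t,c,k}."""
    return omega_bar(w_t, dt) @ gamma_bar(w_c, dt) @ q_tc
```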

Eq. (12) assumes that the angular velocities are constant during the time interval. The propagation error increases as the time interval lengthens, or when the torque and the changes in angular velocity are large. We constructed a state equation that includes the relative position, velocity, orbital elements, and the derivatives of the orbital elements (Zhang et al. 2014). Since the estimation of absolute orbital components is meaningless when the true value is propagated with nonlinear perturbations, the absolute orbital components and their derivatives are omitted in that case (see Eq. (21)). The state equation can be constructed as follows.

$$X = \begin{bmatrix} x, y, z, \dot{x}, \dot{y}, \dot{z}, r_c, \dot{r}_c, \theta, \dot{\theta} \end{bmatrix}^T = \begin{bmatrix} x_1, x_2, \ldots, x_{10} \end{bmatrix}^T$$

$$\dot{X} = f(X) = \begin{bmatrix} x_4 \\ x_5 \\ x_6 \\ x_1 x_{10}^2 \left(1 + \dfrac{2x_7}{p}\right) + 2x_{10}\left(x_5 - \dfrac{x_2 x_8}{x_7}\right) \\ -2x_{10}\left(x_4 - \dfrac{x_1 x_8}{x_7}\right) + x_2 x_{10}^2\left(1 - \dfrac{x_7}{p}\right) \\ -\dfrac{x_7 x_{10}^2 x_3}{p} \\ x_8 \\ x_7 x_{10}^2 - \dfrac{\mu}{x_7^2} \\ x_{10} \\ -\dfrac{2 x_8 x_{10}}{x_7} \end{bmatrix}$$

$$F_{\mathrm{orbit}} = \frac{\partial f(X)}{\partial X}, \qquad G_{\mathrm{orbit}} = \begin{bmatrix} 0_{3\times3} \\ I_{3\times3} \\ 0_{2\times3} \\ 0_{2\times3} \end{bmatrix}, \qquad Q_{c,\mathrm{orbit}} = \begin{bmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{bmatrix}, \qquad Q_{\mathrm{orbit}} = G_{\mathrm{orbit}} Q_{c,\mathrm{orbit}} G_{\mathrm{orbit}}^T$$
(13)

The sensitivity matrix of the filter is also related to the relative equations of motion. The measurement of the vision sensor has the following relationship with the relative pose.

$$H_{\mathrm{orbit}} = \begin{bmatrix} \dfrac{\partial \mathbf{b}}{\partial \boldsymbol{\rho}} & 0_{3\times7} \end{bmatrix}, \qquad \frac{\partial \mathbf{b}}{\partial \boldsymbol{\rho}} = A(q)\frac{\partial \mathbf{r}}{\partial \boldsymbol{\rho}}$$

$$\frac{\partial \mathbf{r}}{\partial \boldsymbol{\rho}} = \frac{1}{s}\begin{bmatrix} (Y+y)^2 + (Z+z)^2 & -(X+x)(Y+y) & -(X+x)(Z+z) \\ -(X+x)(Y+y) & (X+x)^2 + (Z+z)^2 & -(Y+y)(Z+z) \\ -(X+x)(Z+z) & -(Y+y)(Z+z) & (Y+y)^2 + (X+x)^2 \end{bmatrix}$$

$$s = \left[(X+x)^2 + (Y+y)^2 + (Z+z)^2\right]^{3/2}$$
(14)

Therefore, the relative position can be estimated from the vision-based measurement. To estimate both relative position and attitude, we use Eq. (14) in conjunction with the attitude model.
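
The partial derivative in Eq. (14) can be evaluated directly, as in the short sketch below; X denotes the LED position in the target frame and rho the relative position, both assumed inputs.

```python
# Sketch of the LOS sensitivity of Eq. (14).
import numpy as np

def db_drho(A, X, rho):
    """Jacobian of the unit LOS vector with respect to the relative
    position: (|v|^2 I - v v^T) / |v|^3, rotated by A(q)."""
    v = X + rho                                   # (X + x, Y + y, Z + z)
    s = np.linalg.norm(v)**3
    dr = (np.dot(v, v) * np.eye(3) - np.outer(v, v)) / s
    return A @ dr
```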

The relative attitude equation of the filter is slightly different from the absolute one. First, Eq. (11) has a form similar to Eq. (6). Using the relations of Eq. (11) and some quaternion relations, the relative attitude equation can be summarized as follows.

$$\delta\dot{q}_{\mathrm{rel}} = \frac{1}{2}\begin{bmatrix} \boldsymbol{\omega}_{\mathrm{rel}} \\ 0 \end{bmatrix} \otimes \delta q_{\mathrm{rel}} - \frac{1}{2}\,\delta q_{\mathrm{rel}} \otimes \begin{bmatrix} \hat{\boldsymbol{\omega}}_{\mathrm{rel}} \\ 0 \end{bmatrix}$$
(15)

The angular velocity error is defined as follows.

$$\delta\boldsymbol{\omega}_d = \boldsymbol{\omega}_d - \hat{\boldsymbol{\omega}}_d, \qquad \delta\boldsymbol{\omega}_H = \boldsymbol{\omega}_H - \hat{\boldsymbol{\omega}}_H$$
(16)

In Eq. (16), the subscript H denotes a component expressed in the LVLH coordinate system. The angular velocity errors of Eq. (16) can be expressed using the gyro sensor model of Eq. (3).

$$\hat{\boldsymbol{\omega}} = \tilde{\boldsymbol{\omega}} - \hat{\boldsymbol{\beta}}_d$$

$$\delta\boldsymbol{\omega}_d = \boldsymbol{\omega}_d - \hat{\boldsymbol{\omega}}_d = (\tilde{\boldsymbol{\omega}} - \boldsymbol{\beta}_d - \boldsymbol{\eta}_v) - (\tilde{\boldsymbol{\omega}} - \hat{\boldsymbol{\beta}}_d) = -(\delta\boldsymbol{\beta}_d + \boldsymbol{\eta}_{dv})$$

$$\delta\boldsymbol{\omega}_H = \boldsymbol{\omega}_H - \hat{\boldsymbol{\omega}}_H = \hat{\mathbf{n}}\,\delta\dot{\theta}, \qquad \hat{\mathbf{n}} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T, \qquad \delta\dot{\theta} = \dot{\theta} - \hat{\dot{\theta}}$$
(17)

Following the procedures in Kim et al. (2007), the state equations can be constructed as follows.

$$\delta\dot{\boldsymbol{\alpha}}_{d/H} = -[\hat{\boldsymbol{\omega}}_d \times]\,\delta\boldsymbol{\alpha}_{d/H} - \delta\boldsymbol{\beta}_d - A(\hat{q}_{d/H})\hat{\mathbf{n}}\,\delta\dot{\theta} - \boldsymbol{\eta}_{dv}$$

$$\delta\dot{\boldsymbol{\alpha}}_{c/H} = -[\hat{\boldsymbol{\omega}}_c \times]\,\delta\boldsymbol{\alpha}_{c/H} - \delta\boldsymbol{\beta}_c - A(\hat{q}_{c/H})\hat{\mathbf{n}}\,\delta\dot{\theta} - \boldsymbol{\eta}_{cv}$$
(18)

The error state vector, constructed from both the relative orbit dynamics and the attitude equations of motion, is defined as follows.

$$\Delta\mathbf{x} = \begin{bmatrix} \delta\boldsymbol{\alpha}_{d/H}^T & \delta\boldsymbol{\alpha}_{c/H}^T & \delta\boldsymbol{\beta}_c^T & \delta\boldsymbol{\beta}_d^T & \delta\boldsymbol{\rho}^T & \delta\dot{\boldsymbol{\rho}}^T & \delta r_c & \delta\dot{r}_c & \delta\theta & \delta\dot{\theta} \end{bmatrix}^T$$
(19)

Therefore, the overall error state equation can be summarized as the following Eq. (20).

$$\Delta\dot{\mathbf{x}} = F\,\Delta\mathbf{x} + G\,\mathbf{w}, \qquad \mathbf{w} = \begin{bmatrix} \boldsymbol{\eta}_{dv}^T & \boldsymbol{\eta}_{cv}^T & \boldsymbol{\eta}_{du}^T & \boldsymbol{\eta}_{cu}^T & \mathbf{w}_{\rho}^T \end{bmatrix}^T$$

$$F_1 = \begin{bmatrix} -[\hat{\boldsymbol{\omega}}_d \times] & 0_{3\times3} & 0_{3\times3} & -I_{3\times3} & 0_{3\times9} & -A(\hat{q}_{d/H})\hat{\mathbf{n}} \\ 0_{3\times3} & -[\hat{\boldsymbol{\omega}}_c \times] & -I_{3\times3} & 0_{3\times3} & 0_{3\times9} & -A(\hat{q}_{c/H})\hat{\mathbf{n}} \end{bmatrix}, \qquad F_2 = 0_{6\times22}, \qquad F_3 = \begin{bmatrix} 0_{10\times12} & F_{\mathrm{orbit}} \end{bmatrix}$$

$$G = \begin{bmatrix} G_1 & G_2 \\ G_3 & G_4 \end{bmatrix}, \quad G_1 = \mathrm{diag}\left(-I_{3\times3}, -I_{3\times3}, I_{3\times3}, I_{3\times3}\right), \quad G_2 = 0_{12\times3}, \quad G_3 = 0_{10\times12}, \quad G_4 = \begin{bmatrix} 0_{3\times3} & I_{3\times3} & 0_{3\times4} \end{bmatrix}^T$$

$$Q = \mathrm{diag}\left(\sigma_{dv}^2 I_{3\times3}, \sigma_{cv}^2 I_{3\times3}, \sigma_{du}^2 I_{3\times3}, \sigma_{cu}^2 I_{3\times3}, \sigma_{\rho}^2 I_{3\times3}\right)$$

$$H(\hat{X}) = \begin{bmatrix} \left[A(\hat{q}_{d/H})\hat{\mathbf{r}}_1 \times\right] & \dfrac{\partial \hat{\mathbf{b}}_1}{\partial \delta\boldsymbol{\alpha}_{c/H}} & 0_{3\times3} & 0_{3\times3} & \dfrac{\partial \hat{\mathbf{b}}_1}{\partial \hat{\boldsymbol{\rho}}} & 0_{3\times7} \end{bmatrix}$$
(20)

Eq. (20) represents the relative attitudes of the primary and secondary spacecraft (chief and deputy) with respect to the rotating (LVLH) coordinate system. The relative attitude between the two spacecraft can be obtained by quaternion calculation. Since no orbit control is applied in the PCO formation flying scenario, information on thrust control is not required. If nonlinear perturbations are added to the dynamics, the absolute orbit estimation of the chief spacecraft becomes meaningless. As can be seen from the matrix H in Eq. (20), the absolute orbital elements do not affect the measurement value and only affect the propagated state of each spacecraft. Therefore, in the PCO formation flying scenario with nonlinear perturbations, the error state vector is modified as follows.

$$\Delta\mathbf{x} = \begin{bmatrix} \delta\boldsymbol{\alpha}_{d/H}^T & \delta\boldsymbol{\alpha}_{c/H}^T & \delta\boldsymbol{\beta}_c^T & \delta\boldsymbol{\beta}_d^T & \delta\boldsymbol{\rho}^T & \delta\dot{\boldsymbol{\rho}}^T \end{bmatrix}^T$$
(21)

4 RESULTS

4.1 Numerical Simulation
4.1.1. Docking Scenario

In this scenario, the target (chief) spacecraft with attached LEDs was in low Earth orbit. The scenario assumed that a tracking (deputy) spacecraft, equipped with a camera, was about 6 m away from the chief and approaching to dock with the target. A classical PD controller was used in this simulation. The reference orbital model was based on the absolute equations of motion in Eq. (5), including the JGM 2020 geopotential model, the third-body perturbations of the moon and the sun, atmospheric friction, and solar radiation pressure. The absolute attitude model in Eqs. (6) and (7), including the control input error and the gravity gradient torque, was used in this scenario. The dynamic model in the filter used the absolute trajectory model in Eq. (5) and the absolute attitude model in Eqs. (6) and (7), the same as the reference model but excluding the error components.

The docking aimed at reaching the required state starting from the initial state. At the required position, the relative distance between the camera and the LED center was 0.3 m, and the attitudes of the tracking and target spacecraft were aligned. A 10 % control error for the PD control was applied. The P and D gains of the attitude control were set to 0.1 and 0.04, respectively, and the P and D gains of the trajectory control were set to 2 and 7, respectively. The measurement error was varied from 0.1 pixel to 5 pixels. The docking requirements (Fehse 2008) determining the success of the docking mission were considered, and the simulation results were compared with these values. The initial relative pose between the spacecraft and the docking requirements are shown in Table 2. The initial state was set within the range in which the vision sensor can detect all the LEDs. The initial relative position, velocity, attitude, and angular velocity errors were set to [1, 0.5, 0.5] m, [0.1, 0.1, 0.1] m/s, [1, 1, 1]°, and [1, 1, 1] °/s, respectively.

Table 2. Initial state (above), docking requirement (below)

We found that the estimation errors converged at around 200 s in all cases, for measurement errors of 0.1 – 5 pixels, under the given initial conditions. The sum of the three-sigma values of the control and estimation errors was considered the total accuracy of the procedure.

Table 3 shows the position and velocity estimation errors and the control error for a measurement error of 1 pixel, and shows that the requirements were satisfied. The three sigma of the relative position estimation error was 0.0009 m, and the three sigma of the control error was 0.0010 m. Thus, the total accuracy was about 0.0019 m, which satisfied the docking requirement (Fig. 7). The three sigma of the attitude estimation error was 0.1342°, and the three sigma of the attitude control error was 0.0872° (Fig. 8). Therefore, all required accuracies were satisfied. Table 3 also shows that both the control and estimation errors increase as the measurement error increases. Moreover, if the measurement error exceeds 5 pixels, the attitude control and estimation performance do not meet the requirement, although the position accuracy still meets the docking requirement. Therefore, docking using the vision system should be handled carefully with respect to the attitude accuracy.

Table 3. The performance of position and attitude according to the level of measurement error
Fig. 7. The error of position (above), velocity (below), control (left), and estimation (right) with a 1 pixel measurement error.
Fig. 8. The error of attitude (above), angular velocity (below), control (left), and estimation (right) with a 1 pixel measurement error.
4.1.2. PCO Formation Flying Scenario

The PCO at low Earth altitude had a projection radius of 100 m. Previous studies used a PSD with a standard deviation error of 0.0005° (Zhang et al. 2014), which corresponds to 0.01 pixel on a sensor with an 8 mm focal length. We set the measurement error to 1/3 pixel, which was experimentally confirmed with the CMOS vision sensor used for ASTERIX. The orbital model of the filter used the relative orbital equations in Eq. (9). With initial values satisfying the condition of Eq. (22), the orbit was designed to perform relative orbital motion for a sufficiently long period without any orbit control. The relative pose was estimated through the extended Kalman filter in Eq. (20).

$$\frac{\dot{y}(t_0)}{x(t_0)} = -\frac{n(2+e)}{\sqrt{(1+e)(1-e)^3}}$$
(22)

In order to detect the LEDs even at longer distances, the spacing between the LEDs was set 10 times wider than in the docking scenario (Fig. 3). Thus, the lateral and vertical dimensions of the LED structure were set to 1.5 m and 0.8 m, respectively. The initial relative pose of the PCO scenario is shown in Table 4. The bias error of the gyro sensor was set to 1°/hr for each axis, and the initial position and attitude errors were set to 1 m and 1°. The initial covariance matrix was a diagonal matrix consisting of a relative position block of 5I_{3×3} m², a relative attitude block of I_{3×3} deg², a gyro bias block of 4I_{3×3} (deg/hr)² for each satellite, and a relative velocity block of 0.02I_{3×3} (m/s)². For the given initial conditions, the attitude estimation errors of the two spacecraft converged in about 50 minutes. The dotted line in Fig. 7 represents the three sigma of the accuracy. The attitude estimation error of each satellite was less than 1°, and the three sigma was also less than 1°. The estimate of the gyro sensor bias converged to the true value of 1°/hr at around 50 minutes. The relative attitude estimation error was less than 1°.

Table 4. Initial state of PCO scenario

The estimation results of the numerical simulations are shown in Table 5. The relative position was estimated with a precision better than 1 m, and the velocity with a precision better than 5 mm/s. Compared with the docking scenario, the error increased by several times to several tens of times. This result reveals that the estimation accuracy decreases as the relative distance between the spacecraft increases.

Table 5. The accuracy of attitude estimation (above), position and velocity estimation (below)
4.1.3. Performance Complement Using the Laser Distance Meter

Since the laser distance meter provides distance information in the line-of-sight direction, it is suitable as a sensor for supplementing the vision sensor. The attitude information required to point the laser distance meter can be obtained through the vision system; therefore, the PD controller uses the attitude estimated by the vision system. The three sigma of the attitude control error is shown in Fig. 9. Considering the steady-state attitude control error and the relative distance, the alignment error of the laser was less than 1 m. Therefore, it is assumed that the laser distance measurements can be received continuously after the attitude control reaches the steady state. The error of the equipment itself was assumed to be around 5 mm, which was obtained from the experiment (Fig. 6). We constructed the extended Kalman filter with the measurement model of the laser distance meter in Eq. (4).

Fig. 9. Attitude control error of the chief spacecraft including the camera.

Figs. 10 and 11 depict the results of the estimation algorithm supplemented by the laser distance meter. The graphs show that the relative position and velocity estimation accuracy improved. The position estimation accuracy in the LVLH coordinate system improved by about a factor of 10, from 0.5 m to 0.05 m. The three sigma of the position error in the X, Y, and Z axes of the LVLH coordinate system was 0.0246 m, 0.0428 m, and 0.0421 m, respectively, and the three sigma of the velocity estimation error was 0.0006 m/s, 0.0008 m/s, and 0.0007 m/s, respectively. There was no significant change in the attitude estimation accuracy. From these results, we deduce that this method can serve as an alternative when the estimation performance of the vision system is degraded by the effects of nonlinear perturbations and large relative distances.

Fig. 10. The accuracy of relative position estimation with the laser distance meter.
Fig. 11. The accuracy of relative velocity estimation with the laser distance meter.
4.2 Hardware Experiment

The reference value of ASTERIX was generated by eight infrared cameras, with two cameras at each corner of the laboratory (Fig. 12). The spacecraft simulator was equipped with more than three infrared markers, so that the coordinate system of the simulator could be established. The error in the reference value was 0.3550 mm over a period of 100 seconds. However, the simulator has to avoid the blind spots in order to generate stable references; the attitude error due to the loss of one marker was about 2° (circle B in Fig. 15). The ASTERIX simulator equipped with the vision system was moved by hand rather than by gas thrusters. The state between the two simulators was estimated in real time (5 – 7 Hz). As in the docking scenario, the simulators gradually approached a phase in which the attitudes were aligned.

Fig. 12. Reference using infrared cameras.

The experimental results are shown in Figs. 13-16. The relative position and attitude with respect to the relative distance are shown in Figs. 13 and 14. The x-axis of each graph represents the relative distance in the camera coordinate system. It can be seen that the docking proceeded without being completely aligned about the yaw axis (a-1, a-2 in Fig. 13). Figs. 15 and 16 show the results of real-time state estimation. The y-axis of each graph represents the position errors of the x, y, and z axes or the attitude errors of roll, pitch, and yaw, respectively. Over all points, the ratio satisfying the docking attitude requirement of less than 1° was 95.96 %, and the ratio satisfying the position requirement of less than 1 cm was 99.84 %. When the relative distance was less than 0.4 m, 97.18 % and 100 % of the points satisfied the respective requirements. Circle A in Fig. 15 corresponds to a-1 and a-2 in Fig. 13. The attitude estimation error increased with the relative attitude angle, especially when the relative distance in the LOS direction was small. This was because the LED projection image was distorted by the relative attitude (Fig. 5); an example is shown in Fig. 17. The position error corresponding to circle B in Fig. 15 is due to one of the markers not being detected for an instant. However, the position errors were less than 1 cm even in the absence of one marker. In Fig. 16, the tendency of the position estimation error differed slightly for each axis. This was due to the incompleteness of the assembly, so the estimation error was related to both the unadjusted assembly tolerance and the measurement error simultaneously.

Fig. 13. True relative attitude confirmed by infrared cameras.
Fig. 14. True relative position confirmed by infrared cameras.
Fig. 15. Relative attitude estimation error.
Fig. 16. Relative position estimation error.
Fig. 17. Image distortion due to misalignment of attitude.

5 CONCLUSION

The effectiveness of the constructed algorithm was verified by software simulations of two scenarios. First, it was confirmed that the vision system, composed of a vision-based estimation algorithm and a PD controller, can be used for autonomous docking. When the measurement error of the vision sensor is less than 5 pixels, the required relative position and attitude accuracy can be achieved. In the PCO formation flying scenario, the precision was lower than that of the docking task, but the accuracy was improved to the order of several tens of millimeters by the complementary algorithm using the laser sensor. This result confirmed that the relative position estimation performance of the vision system can be compensated by combining it with a relative position sensor that is sensitive in the relative distance direction. The algorithm demonstrated the possibility of expanding the distance range over which the vision system can be utilized.

The near-field estimation performance of the constructed algorithm was verified by hardware experiments. First, it was confirmed that the accuracy required for docking can be achieved using a vision system composed of real equipment. In addition, except for the assembly tolerance of the hardware, the experimental results were found to be similar to those of the numerical simulations. Even in the presence of unmeasurable errors that may exist in the laboratory, the estimation system satisfied the mission requirements. This implies that the vision system demonstrated its feasibility for real-world estimation.

This study confirmed the performance of both the software algorithm and the hardware system. Ways to supplement the algorithm were also suggested and verified. In addition, by assessing the performance in an actual experimental environment, the feasibility of the constructed algorithm was confirmed. In further research, the performance of the estimation algorithm can be analyzed in more varied scenarios, and a system that reflects the various error factors of the real world can be constructed.

ACKNOWLEDGMENTS

This work was supported by the Global Surveillance Research Center (GSRC) program funded by the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) of Korea.

References

1. Alonso R, Crassidis JL, Junkins JL, Vision-based relative navigation for formation flying of spacecraft, in AIAA Guidance, Navigation, and Control Conference and Exhibit, Denver, CO, 14-17 Aug 2000.

2. Cheng Y, Crassidis JL, Markley FL, Attitude estimation for large field-of-view sensors, J. Astronaut. Sci. 54, 433-448 (2005).

3. Crassidis JL, Alonso R, Junkins JL, Optimal attitude and position determination from line-of-sight measurements, J. Astronaut. Sci. 48, 391-408 (2000).

4. Crassidis JL, Junkins JL, Optimal Estimation of Dynamic Systems (Chapman & Hall/CRC, New York, 2004).

5. Du JW, Vision based navigation system for autonomous proximity operations: an experimental and analytical study, PhD dissertation, Texas A&M University (2004).

6. Eun Y, Park SY, Kim GN, Development of a hardware-in-the-loop testbed to demonstrate multiple spacecraft operations in proximity, Acta Astronaut. 147, 48-58 (2018).

7. Fehse W, Automated Rendezvous and Docking of Spacecraft (Cambridge University Press, Cambridge, 2008).

8. Jung S, Park SY, Park HE, Park C, Kim SW, et al., Real-time determination of relative position between satellites using laser ranging, J. Astron. Space Sci. 29, 351-362 (2012).

9. Junkins JL, Hughes DC, Karim P, Pariyapong WV, Vision-based navigation for rendezvous, docking and proximity operation, in Annual AAS Rocky Mountain Guidance and Control Conference, San Diego, CA, 3-7 Feb 1999.

10. Kim SG, Crassidis JL, Cheng Y, Fosbury AM, Junkins JL, Kalman filtering for relative spacecraft attitude and position estimation, J. Guid. Control Dyn. 30, 133-143 (2007).

11. Kim Y, Park SY, Lee E, Kim M, A deep space orbit determination software: overview and event prediction capability, J. Astron. Space Sci. 34, 139-151 (2017).

12. Lee D, Pernicka HJ, Vision-based relative state estimation using the unscented Kalman filter, Int. J. Aeronaut. Space Sci. 12, 24-36 (2011).

13. Lee E, Park SY, Shin B, Cho S, Choi EJ, et al., Orbit determination of KOMPSAT-1 and Cryosat-2 satellites using optical wide-field patrol network (OWL-Net) data with batch least squares filter, J. Astron. Space Sci. 35, 19-30 (2017a).

14. Lee E, Kim Y, Kim M, Park SY, Development, demonstration and validation of the deep space orbit determination software using lunar prospector tracking data, J. Astron. Space Sci. 34, 213-223 (2017b).

15. Lee J, Park SY, Kang DE, Relative navigation with intermittent laser-based measurement for spacecraft formation flying, J. Astron. Space Sci. 34, 163-173 (2018).

16. Lee K, Oh H, Park HE, Park SY, Park C, Laser-based relative navigation using GPS measurements for spacecraft formation flying, J. Astron. Space Sci. 32, 387-393 (2015).

17. Lefferts EJ, Markley FL, Shuster MD, Kalman filtering for spacecraft attitude estimation, J. Guid. Control Dyn. 5, 417-429 (1982).

18. Moré JJ, The Levenberg-Marquardt algorithm: implementation and theory, in Conference on Numerical Analysis, Dundee, United Kingdom, 28 Jun 1977.

19. Oh H, Park HE, Lee K, Park SY, Park C, Improved GPS-based satellites relative navigation using femtosecond laser relative distance measurements, J. Astron. Space Sci. 33, 45-54 (2016).

20. Olivieri L, Antonello A, Bettiol A, Branz F, Duzzi M, et al., Microgravity tests in preparation of a tethered electromagnetic docking space demonstration, in 68th International Astronautical Congress, Adelaide, Australia, 25-29 Sep 2017.

21. Rodgers LP, Concepts and technology development for the autonomous assembly and reconfiguration of modular space systems, PhD dissertation, MIT (2006).

22. Saenz-Otero A, Miller DW, SPHERES: a platform for formation flight research, in Optics and Photonics 2005, San Diego, CA, 31 Jul - 4 Aug 2005.

23. Schaub H, Junkins JL, Analytical Mechanics of Aerospace Systems (AIAA, Reston, 2002).

24. Shin K, Oh H, Park SY, Park C, Real-time orbit determination of Korea regional navigation satellite system with inter-satellite link, MS dissertation, Yonsei University (2016).

25. Sim SH, Park SY, Choi KH, Autonomous real-time relative navigation for formation flying satellites, J. Astron. Space Sci. 26, 59-74 (2009).

26. Vallado DA, Fundamentals of Astrodynamics and Applications (Springer, Berlin, 2014).

27. Xie Y, Hu J, Wang M, Hu H, Zhang H, Accurate and stable control of Shenzhou spacecraft in rendezvous and docking, in Proceedings of the IFAC Symposium on Automatic Control in Aerospace, Würzburg, Germany, 2-6 Sep 2013.

28. Xing GQ, Parvez SA, Nonlinear attitude state tracking control for spacecraft, J. Guid. Control Dyn. 24, 624-626 (2001).

29. Xing Y, Cao X, Zhang S, Guo H, Wang F, Relative position and attitude estimation for satellite formation with coupled translational and rotational dynamics, Acta Astronaut. 67, 455-467 (2010).

30. Zhang L, Yang H, Zhang S, Cai H, Kalman filtering for relative spacecraft attitude and position estimation: a revisit, J. Guid. Control Dyn. 37, 1706-1711 (2014).

31. Zhang Z, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330-1334 (2000).