Abstract

A spacecraft-borne optical navigation camera is one of the key instruments for optical autonomous navigation, and the line of sight (LOS) of the camera directly affects navigation accuracy. We developed an on-orbit calibration approach for a navigation camera to ensure this accuracy, in which a stepwise calibration is performed: external parameters are estimated first, and internal parameters are then estimated in a generalized camera frame determined by the external parameters. In addition, we proposed a batch and sequential on-orbit estimation method to save on-orbit computing power, and established a strategy to reject misidentified reference stars while keeping consistency in the on-orbit parameter estimation. After calibration, the accuracy of the LOS in the inertial frame satisfies the needs of optical autonomous navigation. The approach proved precise and robust in three experiments.

© 2016 Optical Society of America

1. Introduction

Optical navigation (ON) is an enabling technology for autonomous navigation, which is mandatory whenever there is no time to validate navigation decisions on the ground because of signal delay or communications interruption [1]. Optical navigation is the use of imaging data to aid spacecraft navigation. In the typical case, an optical navigation camera on a spacecraft takes a picture of some nearby beacons. Traditionally, typical beacons are the coordinate centers of nearby solar system objects: planets, asteroids, comets, other spacecraft, or reference stars [2]. From the known and reliable ephemerides of the beacons and the images of the beacons taken by the optical navigation camera, the navigation system can accurately locate the spacecraft in inertial space and plan subsequent maneuvers to accomplish the mission. Among all autonomous navigation technologies, optical navigation has been regarded over the last two decades as one of the most feasible solutions, as it reduces the communication demands on ground-based antennas for high-precision navigation [3].

An optical navigation camera is one of the key instruments on a spacecraft for optical navigation: it takes images of nearby beacons, and the images are then processed to extract various available observations. Among all observations, the line of sight (LOS) of a beacon in the navigation camera frame is one of the most important [4]. During an exposure, the spacecraft attitude is estimated by the ADCS (attitude determination and control system). With the ADCS-estimated attitude in the inertial frame and the installation matrix from the navigation camera to the ADCS, the LOS in the camera frame can be converted into a unit vector from the navigation camera to the beacon in the inertial frame. The estimates of the LOS and the output of other sensors are provided to a GNC (guidance, navigation, and control) system to generate the best possible estimates of the orientation and position of the spacecraft [5].

Methods that convert the LOS from the camera frame into the inertial frame fall into two categories: star-relative and starless. When ON beacons and more than three reference stars are available in a single image, the method is star-relative, which was first validated in the cruise phase of the Deep Space 1 mission [6]. However, reference stars are not always available in navigation images because of orbit geometry, the relative brightness of beacons and stars, or small telescope fields of view; in that case the method is starless, as performed in the Deep Impact mission [7], where beacon observations and attitude information came from a separate source, the ADCS attitude estimator, as shown in Fig. 1 [8]. Traditionally, the navigation platform relies on a rigid connection between the ADCS (star camera) and the optical navigation camera.

Fig. 1 Platform of navigation system of the Deep Impact mission.

Calibration of the navigation camera is a critical issue: the LOS depends on it, so it has a direct impact on the accuracy of ON. Generally, a camera is calibrated to high precision in ground-based laboratories before launch [9], covering camera geometric distortion (internal calibration) and installation errors between the camera and the ADCS (external calibration). However, vibration during launch and variation in the thermal environment may alter the preset camera parameters. It is therefore necessary to repeat these calibrations during the mission [10]. Few papers have discussed this, and no comprehensive study on the issue is available at present.

To solve on-orbit calibration for a spacecraft-borne optical navigation camera, we proposed a stepwise calibration algorithm in which external parameters are estimated first, and internal parameters are then estimated based on the generalized camera frame determined by the external parameters. Background reference stars in the images are used as the control for parameter estimation. After a careful analysis of the issue, three key problems are put forward and solved in this manuscript. First, to overcome the over-parameterized, strongly correlated, and partly insignificant parameters of the traditional physical measurement model, we proposed the detector directional angle model for camera internal calibration. Second, to save on-orbit memory and computing power, we established a batch&sequential estimation method so that more star images can be involved in the estimation to achieve high accuracy. Third, identifying reference stars correctly is the precondition for on-orbit calibration, and misidentified stars may hinder the calibration considerably. We therefore established a strategy that uses statistical information to reject misidentified reference stars, making the on-orbit calibration more stable and reliable in the complex deep-space environment.

After calibration, combined with the ADCS-estimated attitude, every pixel of the navigation camera obtains a high-precision inertial LOS determined by the external and internal calibration parameters, which has two main advantages. First, the accuracy of the LOS in the inertial frame can be guaranteed by the starless method when reference stars are not available in the images. Second, as many on-orbit image processing algorithms are complex and time-consuming, completing the calibration first allows the locations of beacons and reference stars in the images to be well predicted, which confines the detection and identification of beacons and reference stars to a small region of interest (ROI) [2], making these tasks much simpler and faster in emergency situations.

In this paper, we introduce the measurement model of the optical navigation camera and list all the internal and external error sources in Section 2. In Section 3, we present the on-orbit external and internal camera calibration models and the batch&sequential estimation method used to estimate the external and internal calibration parameters. Section 4 details the calibration results, with computer-simulated data tested in three experiments. Section 5 summarizes with conclusions.

2. Measurement model of optical navigation

2.1 Ideal measurement model

The imaging model of an optical navigation camera can be considered a pinhole imaging model [11,12], which assumes that each point on the target emits a single ray and each ray maps to a point on the focal plane. The pinhole imaging model gives a simple relationship between a point on the focal plane and the corresponding LOS unit vector in the camera coordinate system, as shown in Fig. 2. Assuming $f$ is the focal length and $(x_0,y_0)$ the principal point, both determined on the ground, we define $P(x_{fp},y_{fp})$ as the point on the focal plane of the measured beacon. The corresponding LOS unit vector of $P(x_i,y_i,z_i)$ in the camera coordinate system can be expressed as:

Fig. 2 Ideal measurement model of the optical navigation camera.

$$\mathbf{w}_i^C=\frac{1}{\sqrt{(x_{fp}-x_0)^2+(y_{fp}-y_0)^2+f^2}}\begin{bmatrix}x_{fp}-x_0\\ y_{fp}-y_0\\ f\end{bmatrix}$$

The LOS measured in camera coordinate system should be rotated to inertial coordinate system before being provided to GNC system.

$$\mathbf{w}_i^I=R_{ADCS}^{I}\,R_{Camera}^{ADCS}\,\mathbf{w}_i^C$$
where $\mathbf{w}_i^I$ is the LOS in the inertial coordinate system, $R_{ADCS}^{I}$ is the attitude of the ADCS in the inertial coordinate system, provided by the ADCS itself when the LOS is measured, and $R_{Camera}^{ADCS}$ is the installation matrix from the camera coordinate system to the ADCS.
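The pinhole LOS relation and the frame rotation above can be sketched in a few lines of NumPy; the numerical values below are hypothetical and only illustrate the shape of the computation:

```python
import numpy as np

def los_camera(x_fp, y_fp, x0, y0, f):
    """LOS unit vector of a focal-plane point in the camera frame."""
    v = np.array([x_fp - x0, y_fp - y0, f])
    return v / np.linalg.norm(v)

def los_inertial(w_C, R_adcs_to_inertial, R_camera_to_adcs):
    """Rotate the camera-frame LOS into the inertial frame."""
    return R_adcs_to_inertial @ R_camera_to_adcs @ w_C

# Hypothetical focal-plane point, in the same length unit as the focal length.
w_C = los_camera(0.5, -0.3, 0.0, 0.0, 100.0)
```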

In general, the ideal measurement model is not exact due to imperfect instrument manufacture and a variable environment. Error sources from the camera (internal) and the installation (external) should be eliminated to approach the real measurement model as closely as possible.

2.2 Internal camera error sources

Generally, navigation cameras are calibrated to high precision in ground-based laboratories before launch. However, any changes in the instrument or environment in space may alter the preset camera parameters. Three types of distortion errors [13] can occur in a navigation camera: 1. CCD translation, inclination, and rotation; 2. optical distortion of the lenses; and 3. change of focal length. To get the beacon's real coordinate value $P(x_r,y_r,z_r)$ from the measured coordinate value $P(x_i,y_i,z_i)$ in the camera coordinate system, these errors must be calibrated.

2.2.1 Errors in CCD

CCD translation can be described as a translation of the principal point. Assuming $\Delta x_0$ and $\Delta y_0$ denote the change of the initial principal point, we can obtain $P(x_{ti},y_{ti},z_{ti})$ from $P(x_i,y_i,z_i)$ by calibrating the CCD translation:

$$\begin{cases}x_{ti}=x_i-\Delta x_0\\ y_{ti}=y_i-\Delta y_0\\ z_{ti}=z_i\end{cases}$$
CCD inclination and rotation can be represented in the $\varphi$-$\omega$-$\kappa$ rotation system, with counterclockwise as the positive direction. Assuming the horizontal CCD plane rotates by $\Delta\varphi$ around axis $y$, then by $\Delta\omega$ around $x$, and then by $\Delta\kappa$ around $z$, we can obtain $P(x_{t\theta i},y_{t\theta i},z_{t\theta i})$ from $P(x_{ti},y_{ti},z_{ti})$ by calibrating the CCD inclination and rotation:
$$\begin{bmatrix}x_{t\theta i}\\ y_{t\theta i}\\ z_{t\theta i}\end{bmatrix}=\left(R_{\varphi}R_{\omega}R_{\kappa}\right)^T\begin{bmatrix}x_{ti}\\ y_{ti}\\ z_{ti}\end{bmatrix}$$
where $R_{\varphi}$, $R_{\omega}$, and $R_{\kappa}$ are the rotation matrices of $\Delta\varphi$, $\Delta\omega$, and $\Delta\kappa$, respectively.

2.2.2 Lens distortion

Lens distortion is ubiquitous in optical instruments. In order to sense faint beacons, navigation cameras are designed with a narrow field angle. Therefore, a first-order radial distortion model combined with a first-order tangential distortion model is appropriate; more parameters or higher-order models offer no obvious advantage in navigation camera calibration [14,15]. Assuming that a global estimation method is sufficient for the calibration, we can obtain $P(x_{t\theta gi},y_{t\theta gi},z_{t\theta gi})$ from $P(x_{t\theta i},y_{t\theta i},z_{t\theta i})$ by calibrating the optical distortion of the lens:

$$\begin{cases}x_{t\theta gi}=x_{t\theta i}-\left(k_1x_{t\theta i}r^2+p_1(3x_{t\theta i}^2+y_{t\theta i}^2)+2p_2x_{t\theta i}y_{t\theta i}\right)\\ y_{t\theta gi}=y_{t\theta i}-\left(k_1y_{t\theta i}r^2+p_2(3y_{t\theta i}^2+x_{t\theta i}^2)+2p_1x_{t\theta i}y_{t\theta i}\right)\\ z_{t\theta gi}=z_{t\theta i}\end{cases}$$
where $r^2=x_{t\theta i}^2+y_{t\theta i}^2$; $k_1$ is the coefficient of radial distortion, and $p_1$, $p_2$ are the coefficients of tangential distortion.
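A minimal sketch of the lens-distortion correction in Eq. (5), assuming the coordinates are already referred to the principal point; the coefficient values a user would pass in are mission-specific:

```python
import numpy as np

def correct_lens_distortion(x, y, k1, p1, p2):
    """First-order radial plus first-order tangential correction, Eq. (5).

    k1: radial coefficient; p1, p2: tangential coefficients;
    (x, y) are focal-plane coordinates about the principal point.
    """
    r2 = x**2 + y**2
    dx = k1 * x * r2 + p1 * (3 * x**2 + y**2) + 2 * p2 * x * y
    dy = k1 * y * r2 + p2 * (3 * y**2 + x**2) + 2 * p1 * x * y
    return x - dx, y - dy

# With all coefficients zero the correction reduces to the identity.
xc, yc = correct_lens_distortion(1.2, -0.7, 0.0, 0.0, 0.0)
```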

2.2.3 Focal length error

The change of focal length is inevitable. Assuming $f$ represents the focal length determined from on-ground calibration and $\Delta f$ the change of the focal length, we can obtain $P(x_{t\theta gfi},y_{t\theta gfi},z_{t\theta gfi})$ from $P(x_{t\theta gi},y_{t\theta gi},z_{t\theta gi})$ by calibrating the change of the focal length:

$$\begin{cases}x_r=x_{t\theta gfi}=(f+\Delta f)\,x_{t\theta gi}/f\\ y_r=y_{t\theta gfi}=(f+\Delta f)\,y_{t\theta gi}/f\\ z_r=z_{t\theta gfi}=f+\Delta f=z_{t\theta gi}+\Delta f\end{cases}$$

Equation (6) is a rigorous physical measurement model for the camera, by which the real coordinate $P(x_r,y_r,z_r)$ in the camera coordinate system can be obtained.

By compensating all the internal camera errors, the real LOS unit vector of the measured P(xi,yi,zi) in camera coordinate system can be expressed as:

$$\mathbf{w}_i^C=\frac{1}{\sqrt{x_r^2+y_r^2+z_r^2}}\begin{bmatrix}x_r\\ y_r\\ z_r\end{bmatrix}$$

2.3 External installation error sources

Errors in the camera installation angle are often relatively large due to limited laboratory calibration conditions and to changes during launch and the flight phase. These external errors have a greater impact on the accuracy of the LOS than the internal ones, as they are more variable, pronounced, and random in the space flight environment. Assuming $(pitch,roll,yaw)$ is the installation angle determined on the ground and $(\Delta pitch,\Delta roll,\Delta yaw)$ the offset of each angle, we can obtain the real LOS unit vector in the inertial coordinate system:

$$\mathbf{w}_i^I=R_{ADCS}^{I}\,R_{Camera}^{ADCS}(pitch+\Delta pitch,\ roll+\Delta roll,\ yaw+\Delta yaw)\,\mathbf{w}_i^C$$

3. An on-orbit calibration approach

3.1 Geometric calibration model

In order to set up a real navigation measurement model and guarantee the accuracy of LOS in inertial coordinate system, calibrations for both external installation and internal camera errors are necessary. The calibration is done in two steps:

  • 1. Calibration to camera installation angle;
  • 2. Calibration to internal camera distortion based on the external installation angle obtained in step one.

The reference stars serve as the control points for the calibration.

3.1.1 External calibration model

Supposing $\mathbf{v}_i$ represents the unit direction vector of reference star $i$ in the guide star database, and $(\alpha_i,\delta_i)$ the right ascension and declination of reference star $i$ on the celestial sphere, the unit direction vector $\mathbf{v}_i$ in the inertial coordinate system can be calculated by:

$$\mathbf{v}_i=\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}=\begin{bmatrix}\cos\alpha_i\cos\delta_i\\ \sin\alpha_i\cos\delta_i\\ \sin\delta_i\end{bmatrix}$$
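In code, Eq. (9) is a direct transcription (angles in radians):

```python
import numpy as np

def star_unit_vector(alpha, delta):
    """Inertial unit direction of a catalogue star from its right
    ascension alpha and declination delta, Eq. (9)."""
    return np.array([np.cos(alpha) * np.cos(delta),
                     np.sin(alpha) * np.cos(delta),
                     np.sin(delta)])

v = star_unit_vector(np.deg2rad(30.0), np.deg2rad(45.0))
```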

Assuming $(x_{fpi},y_{fpi})$ represents the location of the reference star on the focal plane, its coordinate in the camera coordinate system is $P_i(x_{fpi}-x_0,\ y_{fpi}-y_0,\ f)$. Assuming $P_i$ is the real coordinate on-orbit, we can establish an external calibration model as:

$$\begin{bmatrix}x_{fpi}-x_0\\ y_{fpi}-y_0\\ f\end{bmatrix}=\lambda\,R_{ADCS}^{Camera}\,R_{I}^{ADCS}\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}$$
where $\lambda$ is a proportionality coefficient, and $R_{I}^{ADCS}$ is the rotation matrix from the inertial coordinate system to the ADCS, provided by the ADCS with quaternion $(q_0,q_1,q_2,q_3)$ as follows:

$$R_{I}^{ADCS}=\begin{bmatrix}A_1&B_1&C_1\\ A_2&B_2&C_2\\ A_3&B_3&C_3\end{bmatrix}=\begin{bmatrix}q_1^2-q_2^2-q_3^2+q_0^2 & 2(q_1q_2+q_3q_0) & 2(q_1q_3-q_2q_0)\\ 2(q_1q_2-q_3q_0) & -q_1^2+q_2^2-q_3^2+q_0^2 & 2(q_2q_3+q_1q_0)\\ 2(q_1q_3+q_2q_0) & 2(q_2q_3-q_1q_0) & -q_1^2-q_2^2+q_3^2+q_0^2\end{bmatrix}$$

$R_{ADCS}^{Camera}$ is the installation matrix from the ADCS to the camera coordinate system with the installation angle $(pitch,roll,yaw)$ as follows:

$$R_{ADCS}^{Camera}=\begin{bmatrix}a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3\end{bmatrix}=\left(\begin{bmatrix}\cos(pitch)&0&\sin(pitch)\\ 0&1&0\\ -\sin(pitch)&0&\cos(pitch)\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&\cos(roll)&-\sin(roll)\\ 0&\sin(roll)&\cos(roll)\end{bmatrix}\begin{bmatrix}\cos(yaw)&-\sin(yaw)&0\\ \sin(yaw)&\cos(yaw)&0\\ 0&0&1\end{bmatrix}\right)^T$$
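The two rotation matrices can be sketched as follows. The sign conventions of the elementary rotations are our reconstruction of Eqs. (11) and (12) (the standard scalar-first quaternion attitude matrix and y-x-z elementary rotations), so they should be checked against the actual flight convention:

```python
import numpy as np

def quat_to_dcm(q0, q1, q2, q3):
    """Rotation matrix R_I^ADCS from the ADCS quaternion (q0 scalar), Eq. (11)."""
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q3*q0),             2*(q1*q3 - q2*q0)],
        [2*(q1*q2 - q3*q0),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 + q1*q0)],
        [2*(q1*q3 + q2*q0),             2*(q2*q3 - q1*q0),             q0**2 - q1**2 - q2**2 + q3**2],
    ])

def installation_matrix(pitch, roll, yaw):
    """Installation matrix R_ADCS^Camera from the installation angles, Eq. (12)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about z
    return (Ry @ Rx @ Rz).T
```

Both results are proper rotation matrices, which gives a quick sanity check (orthogonality, identity for zero angles and the unit quaternion).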

According to the reference stars recognized in the star images, the external calibration parameters $X_E(pitch,roll,yaw)$ can be determined, assuming that the coordinates of the reference stars in the camera coordinate system are true.

After external calibration, the attitude of the camera coordinate system in the inertial coordinate system can be determined by $R_{ADCS}^{Camera}R_{I}^{ADCS}$, which serves as the reference coordinate system for internal calibration.

3.1.2 Internal calibration model

Although in theory the camera physical measurement model in Eq. (6) covers the major internal errors, it is not practical as an on-orbit calibration model for a navigation camera due to over-parameterization. Some parameters in the physical measurement model are strongly correlated because of the particular imaging conditions (i.e., long focal length and narrow field angle), and some parameters contribute little to the geometric accuracy of the imagery. If the physical measurement model were used as the internal calibration model to calculate each parameter, the normal equations would be seriously ill-conditioned, so the reliability and accuracy of the calibration could not be ensured. Therefore, although the camera physical measurement model is rigorous in theory, it is not suitable for on-orbit internal calibration.

To solve this problem, a detector directional angle model [16] is adopted as the internal calibration model (as shown in Fig. 3). By calibrating the tangent of the directional angles $(\psi_x,\psi_y)$ of each CCD detector in the reference coordinate system determined by external calibration, the LOS of each CCD detector in the inertial coordinate system can be determined accurately.

Fig. 3 Directional angle of CCD detector.

A polynomial model can be used to model the tangents of the directional angles of the CCD detectors. As the internal distortion is low-order because of the narrow field of view, we use an individual third-order polynomial, which has high orthogonality and low correlation, as the internal calibration model.

$$(\mathbf{V}_{Image})_{cam}=\left(\frac{x}{f},\ \frac{y}{f},\ 1\right)^T=\big(\tan(\psi_x(s,l)),\ \tan(\psi_y(s,l)),\ 1\big)^T$$
where

$$\begin{cases}\tan(\psi_x(s,l))=a_{x0}+a_{x1}s+a_{x2}l+a_{x3}sl+a_{x4}s^2+a_{x5}l^2+a_{x6}s^2l+a_{x7}sl^2+a_{x8}s^3+a_{x9}l^3\\ \tan(\psi_y(s,l))=a_{y0}+a_{y1}s+a_{y2}l+a_{y3}sl+a_{y4}s^2+a_{y5}l^2+a_{y6}s^2l+a_{y7}sl^2+a_{y8}s^3+a_{y9}l^3\end{cases}$$

$(s,l)$ is the CCD detector's image plane coordinate (we define the origin at the center of the CCD plane). $a_{x0},\ldots,a_{x9}$ and $a_{y0},\ldots,a_{y9}$ are the internal calibration parameters $Y_I$.
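Evaluating the third-order polynomial of Eq. (14) amounts to a dot product with a 10-term monomial basis; a sketch:

```python
import numpy as np

def basis(s, l):
    """Monomial basis of the third-order directional-angle polynomial, Eq. (14)."""
    return np.array([1.0, s, l, s*l, s**2, l**2, s**2*l, s*l**2, s**3, l**3])

def tan_directional_angles(ax, ay, s, l):
    """tan(psi_x), tan(psi_y) of detector (s, l); ax and ay hold the ten
    coefficients a_x0..a_x9 and a_y0..a_y9."""
    phi = basis(s, l)
    return float(ax @ phi), float(ay @ phi)

# With ax = (0, 1, 0, ...) the model degenerates to tan(psi_x) = s.
ax = np.zeros(10); ax[1] = 1.0
ay = np.zeros(10); ay[2] = 1.0
tx, ty = tan_directional_angles(ax, ay, 0.25, -0.5)
```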

Then Eq. (10) can be transformed into:

$$\begin{bmatrix}\tan(\psi_x(s,l))\\ \tan(\psi_y(s,l))\\ 1\end{bmatrix}=\lambda\,R_{ADCS}^{Camera}\,R_{I}^{ADCS}\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}$$

According to the identified reference stars, the internal calibration parameters can be obtained based on the installation matrix $R_{ADCS}^{Camera}$ produced by external calibration.

3.1.3 Discussion

As some internal errors are absorbed into the external calibration results, the reference coordinate system does not exactly represent the real camera coordinate system. However, this does not affect the calculation of the internal calibration parameters, because the external and internal calibration parameters are highly correlated on account of the narrow field angle. In addition, the proposed flexible internal calibration model can compensate the residual errors caused by external calibration, which lowers the precision requirement on the external calibration. A high-accuracy LOS of each CCD detector in the inertial coordinate system can be obtained by combining the internal and external calibration parameters as follows:

$$\mathbf{w}_i^I=\lambda\,R_{ADCS}^{I}\,R_{Camera}^{ADCS}\begin{bmatrix}\tan(\psi_x(s,l))\\ \tan(\psi_y(s,l))\\ 1\end{bmatrix}$$
where $\lambda$ is the normalization coefficient, $R_{ADCS}^{I}=(R_{I}^{ADCS})^T$, and $R_{Camera}^{ADCS}=(R_{ADCS}^{Camera})^T$.

Once the internal parameters are accurately determined on-orbit, there is no need to update them frequently, because they are relatively stable and their determination is computationally costly. The external parameters, however, should be updated before the spacecraft photographs beacons, due to changes in ambient conditions.

3.2 Estimation of the camera parameters

With star identification algorithms, we can obtain the right ascension and declination of the reference stars in the images taken by the navigation camera for calibration, and acquire their unit direction vectors $(X_i^{star},Y_i^{star},Z_i^{star})^T$ in the inertial coordinate system by Eq. (9). The 2D coordinates $(s,l)$ of the stars on the image plane are obtained by centroid extraction algorithms. Correctly identified stars act as control points from which the calibration parameters can be estimated. The noise of the ADCS attitude and of star centroiding are the main random errors and should be suppressed in the calibration.

To filter out the noise and satisfy the accuracy requirement, both batch and sequential estimation (B&S estimation) are used to calculate the parameters. Batch estimation works in a least-squares scheme, while sequential estimation uses a Kalman filter. The batch method is employed when a number of measurements have been accumulated, while the sequential method updates the estimate one measurement at a time [10]. A new estimator is therefore developed by combining Least Squares (LS) and the Kalman filter. The LS estimates of the calibration parameters are determined from Eq. (18) and Eq. (23); these LS estimates are then used as "measurements" in Eq. (20) and Eq. (25) for a recursive Kalman filter, which filters out the noise in the LS estimates and combines many of them into a best estimate. In this way, the largest memory consumption comes from a single LS batch estimation, yet the iterative refinement yields a much better estimate than a single LS batch estimation alone.

In this practice, the external parameters are calculated via B&S estimation first, and the internal parameters are then estimated accurately based on the reference coordinate system determined in the external calibration, as Fig. 4 shows. Meanwhile, the external parameters can also be updated based on the determined internal parameters if needed.

Fig. 4 Flowchart of estimation of camera parameters.

Equation (16) can be derived from Eqs. (10)-(12) for external parameters estimation as follows:

$$\begin{cases}F=\dfrac{A_{a1}X_i^{star}+B_{b1}Y_i^{star}+C_{c1}Z_i^{star}}{A_{a3}X_i^{star}+B_{b3}Y_i^{star}+C_{c3}Z_i^{star}}-\dfrac{x_{fpi}-x_0}{f}\\[2ex]G=\dfrac{A_{a2}X_i^{star}+B_{b2}Y_i^{star}+C_{c2}Z_i^{star}}{A_{a3}X_i^{star}+B_{b3}Y_i^{star}+C_{c3}Z_i^{star}}-\dfrac{y_{fpi}-y_0}{f}\end{cases}$$

In Eq. (16), $F$ is the residual error in the $x$-axis direction of the camera frame, and $G$ is the residual error in the $y$-axis direction of the camera frame.

Because Eq. (14) is linear when the internal calibration parameters are the unknowns, Eq. (17) can be derived from Eqs. (11), (12), and (14) for internal parameter estimation as:

$$\begin{cases}f=\dfrac{A_{a1}X_i^{star}+B_{b1}Y_i^{star}+C_{c1}Z_i^{star}}{A_{a3}X_i^{star}+B_{b3}Y_i^{star}+C_{c3}Z_i^{star}}=\tan(\psi_x(s,l))\\[2ex]g=\dfrac{A_{a2}X_i^{star}+B_{b2}Y_i^{star}+C_{c2}Z_i^{star}}{A_{a3}X_i^{star}+B_{b3}Y_i^{star}+C_{c3}Z_i^{star}}=\tan(\psi_y(s,l))\end{cases}$$
in which

$$\begin{aligned}A_{a1}&=A_1a_1+A_2b_1+A_3c_1; & A_{a2}&=A_1a_2+A_2b_2+A_3c_2; & A_{a3}&=A_1a_3+A_2b_3+A_3c_3;\\ B_{b1}&=B_1a_1+B_2b_1+B_3c_1; & B_{b2}&=B_1a_2+B_2b_2+B_3c_2; & B_{b3}&=B_1a_3+B_2b_3+B_3c_3;\\ C_{c1}&=C_1a_1+C_2b_1+C_3c_1; & C_{c2}&=C_1a_2+C_2b_2+C_3c_2; & C_{c3}&=C_1a_3+C_2b_3+C_3c_3.\end{aligned}$$

The external calibration parameters are $X_E(pitch,roll,yaw)$; the internal calibration parameters are $Y_I(a_{x0},\ldots,a_{x9},a_{y0},\ldots,a_{y9})$.

3.2.1 Calibration on external parameters in B&S estimation

To determine the external calibration parameters, we assume the initial internal calibration parameters are "true". We initialize the external and internal calibration parameters $X_E$ and $Y_I$ with the on-ground calibration values $X_{E0}$ and $Y_{I0}$. In the external parameter estimation, we define $k$ as the iteration index and $N_E$ as the number of reference stars used in each iteration of the batch estimation. The reference stars for a batch estimation may be distributed over multiple images taken continuously, because the number of reference stars in one image is limited.

3.2.1.1 Batch estimation

Insert $(X_{E0},Y_{I0})$ into Eq. (16) and linearize it to get Eq. (18) in the $k$th iteration:

$$R_{i,k}^{E}=A_{i,k}\,\Delta X_k$$
in which
$$A_{i,k}=\begin{bmatrix}\dfrac{\partial F_{i,k}}{\partial X_E}\\[1ex]\dfrac{\partial G_{i,k}}{\partial X_E}\end{bmatrix}=\begin{bmatrix}\dfrac{\partial F_{i,k}}{\partial pitch}&\dfrac{\partial F_{i,k}}{\partial roll}&\dfrac{\partial F_{i,k}}{\partial yaw}\\[1ex]\dfrac{\partial G_{i,k}}{\partial pitch}&\dfrac{\partial G_{i,k}}{\partial roll}&\dfrac{\partial G_{i,k}}{\partial yaw}\end{bmatrix},\qquad\Delta X_k=\begin{bmatrix}\Delta pitch\\ \Delta roll\\ \Delta yaw\end{bmatrix}_k,\qquad R_{i,k}^{E}=\begin{bmatrix}F(X_{E0},Y_{I0})\\ G(X_{E0},Y_{I0})\end{bmatrix}_{i,k}$$
where $\Delta X_k$ is the correction of the external calibration parameters obtained in the $k$th iteration, and $R_{i,k}^{E}$ is the residual error vector of the $i$th reference star calculated with the current $(X_{E0},Y_{I0})$ in the $k$th iteration, $i=1,2,\ldots,N_E$.

$p_{i,k}^{E}$ represents the weight of the observation of the $i$th reference star in the $k$th iteration of the external calibration.

$\Delta X_k$ is calculated by the least-squares method:

$$\Delta X_k=\left(A_k^{T}P_k^{E}A_k\right)^{-1}\left(A_k^{T}P_k^{E}R_k^{E}\right)$$
where $A_k=\left[A_1\ \cdots\ A_i\ \cdots\ A_{N_E}\right]_k^{T}$, $P_k^{E}=\mathrm{diag}\left(p_1^{E},\ldots,p_i^{E},\ldots,p_{N_E}^{E}\right)_k$, and $R_k^{E}=\left[R_1^{E}\ \cdots\ R_i^{E}\ \cdots\ R_{N_E}^{E}\right]_k^{T}$.
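The batch step of Eq. (19) is a standard weighted least-squares solve over the stacked Jacobians and residuals; a sketch with synthetic, noise-free data (so the correction is recovered exactly):

```python
import numpy as np

def ls_correction(A, R, p):
    """Weighted least-squares correction of Eq. (19):
    dX = (A^T P A)^-1 (A^T P R), with P = diag(p)."""
    P = np.diag(p)
    return np.linalg.solve(A.T @ P @ A, A.T @ P @ R)

# Synthetic stacked Jacobian rows and a known correction to recover.
A = np.array([[1.0, 0.2, 0.1],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0],
              [0.5, 0.5, 0.5]])
dX_true = np.array([0.01, -0.02, 0.03])
R = A @ dX_true
dX = ls_correction(A, R, np.ones(len(R)))
```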

3.2.1.2 Sequential estimation

The Kalman filter for external calibration includes the measurement equation and the state equation, as in Eq. (20):

$$\begin{cases}Z_k^{E}=H_k^{E}\,\Delta X_k+V_k^{E}\\ \Delta X_{k+1}=\Phi_{k+1,k}^{E}\,\Delta X_k\end{cases}$$

In Eq. (20), the new measurement $Z_k^{E}$ in the measurement equation is the LS estimate $\Delta X_k$ obtained by Eq. (19) from the batch estimation, so the sensitivity matrix is $H_k^{E}=I_{3\times3}$. $V_k^{E}$ is the noise in the LS estimate, with covariance matrix $Q_k^{Kalman(E)}$. The state equation expresses the variation tendency of the LS estimates $\Delta X_k$. As the LS estimates $\Delta X_k$ in each iteration are relatively constant (in other words, the best estimate of the Kalman filter is constant), the transition matrix is $\Phi_{k+1,k}^{E}=I_{3\times3}$.

Then the Kalman filter gives the updated $\Delta X_{k+1}$ by Eq. (21):

$$\begin{cases}K_k^{E}=P_k^{Kalman(E)}{H_k^{E}}^{T}\left(H_k^{E}P_k^{Kalman(E)}{H_k^{E}}^{T}+Q_k^{Kalman(E)}\right)^{-1}\\ P_{k+1}^{Kalman(E)}=\left(I-K_k^{E}H_k^{E}\right)P_k^{Kalman(E)}\\ \Delta X_{k+1}=\Delta X_k+K_k^{E}\left(Z_k^{E}-H_k^{E}\,\Delta X_k\right)\end{cases}$$
where $P_k^{Kalman(E)}$ is the covariance matrix of the a priori estimation error; since the Kalman filter estimates are not guaranteed to be accurate for poor initial guesses, we set $P_0^{Kalman(E)}$ to the identity matrix. $K_k^{E}$ is the gain matrix of the Kalman filter in the $k$th iteration.

Through iteration, the Kalman filter filters out the noise in the LS estimates and combines many LS estimates to achieve the best estimate of $\Delta X_k$.
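A sketch of the sequential step in Eq. (21) with $H=\Phi=I$; the covariance values below are hypothetical:

```python
import numpy as np

def sequential_update(dX, P, z, Q):
    """One Kalman step of Eq. (21) with H = I and a constant-state model.

    z: the latest batch LS estimate, treated as a measurement;
    Q: its noise covariance; P: current estimation-error covariance."""
    K = P @ np.linalg.inv(P + Q)            # gain, Eq. (21) with H = I
    P_next = (np.eye(len(dX)) - K) @ P
    dX_next = dX + K @ (z - dX)
    return dX_next, P_next

# Fusing repeated LS estimates of a constant shrinks the covariance
# and drives the fused estimate toward the measurements.
dX, P = np.zeros(3), np.eye(3)
z = np.array([1.0, 2.0, 3.0])
for _ in range(20):
    dX, P = sequential_update(dX, P, z, 0.1 * np.eye(3))
```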

3.2.1.3 Parameters modification

We repeat the B&S estimation iteratively until $\|\Delta X_{k+1}-\Delta X_k\|\le\varepsilon$, where $\varepsilon$ is a small positive number, and then calibrate the external parameters:

$$X_E=X_{E0}+\Delta X_{k+1}$$

3.2.2 Calibration on internal parameters in B&S estimation

After external calibration, we treat the modified $X_E$ as true and leave the internal calibration parameters to be calibrated. In the internal calibration, we define $N_I$ as the number of reference stars used in each iteration of the batch estimation.

3.2.2.1 Batch estimation

Insert the modified $X_E$ into Eq. (17) and obtain Eq. (23) in the $k$th iteration:

$$R_{j,k}^{I}=B_{j,k}\,Y_k$$
in which

$$B_{j,k}=\begin{bmatrix}\dfrac{\partial\tan(\psi_x(s,l))}{\partial Y_I}\\[1ex]\dfrac{\partial\tan(\psi_y(s,l))}{\partial Y_I}\end{bmatrix}_{j,k}=\begin{bmatrix}\dfrac{\partial\tan\psi_x}{\partial a_{x0}}&\cdots&\dfrac{\partial\tan\psi_x}{\partial a_{x9}}&\dfrac{\partial\tan\psi_x}{\partial a_{y0}}&\cdots&\dfrac{\partial\tan\psi_x}{\partial a_{y9}}\\[1ex]\dfrac{\partial\tan\psi_y}{\partial a_{x0}}&\cdots&\dfrac{\partial\tan\psi_y}{\partial a_{x9}}&\dfrac{\partial\tan\psi_y}{\partial a_{y0}}&\cdots&\dfrac{\partial\tan\psi_y}{\partial a_{y9}}\end{bmatrix}_{j,k}$$

$$Y_k=\left[a_{x0}\ \cdots\ a_{x9}\ a_{y0}\ \cdots\ a_{y9}\right]_k^{T},\qquad R_{j,k}^{I}=\begin{bmatrix}f(X_E)\\ g(X_E)\end{bmatrix}_{j,k}$$

$Y_k$ is the estimate of the internal calibration parameters obtained in the $k$th iteration, and $R_{j,k}^{I}$ is the vector of the $j$th reference star in the camera frame calculated with the current $X_E$ in the $k$th iteration, $j=1,2,\ldots,N_I$.

$p_{j,k}^{I}$ represents the weight of the observation of the $j$th reference star in the $k$th iteration of the internal calibration.

$Y_k$ is calculated by the least-squares method:

$$Y_k=\left(B_k^{T}P_k^{I}B_k\right)^{-1}\left(B_k^{T}P_k^{I}R_k^{I}\right)$$
in which $B_k=\left[B_1\ \cdots\ B_j\ \cdots\ B_{N_I}\right]_k^{T}$, $P_k^{I}=\mathrm{diag}\left(p_1^{I},\ldots,p_j^{I},\ldots,p_{N_I}^{I}\right)_k$, and $R_k^{I}=\left[R_1^{I}\ \cdots\ R_j^{I}\ \cdots\ R_{N_I}^{I}\right]_k^{T}$.

3.2.2.2 Sequential estimation

The Kalman filter for internal calibration includes the measurement equation and the state equation, as in Eq. (25):

$$\begin{cases}Z_k^{I}=H_k^{I}\,Y_k+V_k^{I}\\ Y_{k+1}=\Phi_{k+1,k}^{I}\,Y_k\end{cases}$$

In Eq. (25), the new measurement $Z_k^{I}$ in the measurement equation is the LS estimate $Y_k$ obtained by Eq. (24) from the batch estimation, so the sensitivity matrix is $H_k^{I}=I_{20\times20}$. $V_k^{I}$ is the noise in the LS estimate, with covariance matrix $Q_k^{Kalman(I)}$. The state equation expresses the variation tendency of the LS estimates $Y_k$. As the LS estimates $Y_k$ in each iteration are relatively constant (in other words, the best estimate of the Kalman filter is constant), the transition matrix is $\Phi_{k+1,k}^{I}=I_{20\times20}$.

Then the Kalman filter gives the updated $Y_{k+1}$ by Eq. (26):

$$\begin{cases}K_k^{I}=P_k^{Kalman(I)}{H_k^{I}}^{T}\left(H_k^{I}P_k^{Kalman(I)}{H_k^{I}}^{T}+Q_k^{Kalman(I)}\right)^{-1}\\ P_{k+1}^{Kalman(I)}=\left(I-K_k^{I}H_k^{I}\right)P_k^{Kalman(I)}\\ Y_{k+1}=Y_k+K_k^{I}\left(Z_k^{I}-H_k^{I}\,Y_k\right)\end{cases}$$
where $P_k^{Kalman(I)}$ is the covariance matrix of the a priori estimation error; since the Kalman filter estimates are not guaranteed to be accurate for poor initial guesses, we set $P_0^{Kalman(I)}$ to the identity matrix. $K_k^{I}$ is the gain matrix of the Kalman filter in the $k$th iteration.

Through iteration, the Kalman filter filters out the noise in the LS estimates and combines many LS estimates to achieve the best estimate of $Y_k$.

3.2.2.3 Parameters modification

We perform the B&S estimation iteratively until $\|Y_{k+1}-Y_k\|\le\varepsilon$, where $\varepsilon$ is a small positive number, to get the modified internal parameters:

$$Y_I=Y_{k+1}$$

3.2.3 Detect and reject misidentified stars

Identifying reference stars correctly is the precondition for on-orbit calibration. However, the long exposure needed to image dim reference stars can blur the stars due to vibration of the spacecraft during attitude adjustment, and blurred stars make accurate centroid extraction more difficult. In addition, uncalibrated navigation camera parameters may sharply reduce the success rate of star identification, because most star identification algorithms depend on the camera parameters [17,18]. Misidentified stars may hinder the calibration considerably, so it is necessary to detect and reject them before calibration.

We validate the normality of the measurements using statistical information from the residual errors in the batch estimation, and then reject the reference stars that are most probably misidentified. The principle is as follows.

The coordinates of the reference stars in the image plane obtained by centroid extraction, and the corresponding unit direction vectors from Eq. (9) obtained by star identification, are put into Eqs. (16) and (17), and their residual errors $R_{n,k}$ are calculated before the $k$th batch estimation.

In external calibration:

$$R_{n,k}=\begin{bmatrix}r_1\\ r_2\end{bmatrix}_{n,k}=\begin{bmatrix}F(X_{E0},Y_{I0})\\ G(X_{E0},Y_{I0})\end{bmatrix}_{n,k}$$
where $n=1,2,\ldots,N_E$.

In internal calibration:

$$R_{n,k}=\begin{bmatrix}r_1\\ r_2\end{bmatrix}_{n,k}=\begin{bmatrix}f(X_E)-\tan(\psi_x(s,l))\\ g(X_E)-\tan(\psi_y(s,l))\end{bmatrix}_{n,k}$$
where $n=1,2,\ldots,N_I$.

Rn,k can be treated as a vector, and its magnitude can be written as:

$$|R_{n,k}|=\sqrt{r_1^2+r_2^2}$$

The mean $\mu$ and the standard deviation $\theta$ of $|R_{n,k}|$ are easily obtained, and a normal probability distribution can then be defined to validate the normality of each $|R_{n,k}|$. To judge whether a star is misidentified, we compare the absolute deviation from the mean, $\big||R_{n,k}|-\mu\big|$, with a threshold $F$. The star is likely misidentified when $\big||R_{n,k}|-\mu\big|$ is larger than $F$, in which case its weight should be set to 0 to avoid its negative effect on the estimation:

$$p_n=\begin{cases}1, & \big||R_{n,k}|-\mu\big|<F\\ 0, & \big||R_{n,k}|-\mu\big|\ge F\end{cases}$$

In Eq. (31), we set F=θ.
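The rejection rule of Eqs. (28)-(31) can be sketched as a vectorized weight assignment; the residual values here are synthetic:

```python
import numpy as np

def star_weights(residuals):
    """0/1 weights for reference stars from residual statistics, Eq. (31).

    residuals: (N, 2) array of (r1, r2) per star, Eqs. (28)-(29).
    A star is kept when | |R| - mu | < F, with the threshold F set to
    the standard deviation theta of |R|, as chosen in the text."""
    mag = np.linalg.norm(residuals, axis=1)          # |R_{n,k}|, Eq. (30)
    mu, theta = mag.mean(), mag.std()
    return (np.abs(mag - mu) < theta).astype(int)

# Nine consistent residuals and one gross outlier (a misidentified star).
res = np.vstack([np.full((9, 2), 0.01), [[5.0, 5.0]]])
w = star_weights(res)
```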

3.2.4 Discussion

The combination of least-squares-based batch estimation with Kalman-filter-based sequential estimation reduces the memory load and allows more reference stars to be used in the estimation. Noise in the measurements can thus be suppressed and high calibration accuracy reached. Moreover, misidentified stars can be detected and rejected in the batch estimation, making the calibration more stable and reliable in the complex deep-space environment.

3.3 Accuracy assessment

The accuracy of a beacon's LOS in the inertial frame calculated from the internal and external parameters can be used to assess the accuracy of the on-orbit calibration. The statistical error between the calculated and true LOS in the inertial frame is an index of the calibration accuracy, and can be computed as:

$$\Delta r=\frac{1}{N}\sum_{i=1}^{N}\arccos\left(\mathbf{v}_i\cdot\mathbf{w}_i^I\right)$$
where $N$ is the total number of beacons, $\mathbf{v}_i$ is the true LOS of a beacon, obtained by Eq. (9), and $\mathbf{w}_i^I$ is calculated by Eq. (15).
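A sketch of the error statistic above, with the dot product clipped to guard against round-off outside $[-1,1]$:

```python
import numpy as np

def mean_los_error(v_true, w_est):
    """Mean angular separation (radians) between the true and the
    calibrated LOS unit vectors in the inertial frame.

    v_true, w_est: (N, 3) arrays of unit vectors."""
    cosang = np.clip(np.sum(v_true * w_est, axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cosang)))

v = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
err = mean_los_error(v, v)
```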

4. Experiments

To verify our proposed approach, we designed three experiments. Experiment 1 was designed to test the overall performance and effectiveness of the detector directional angle model in modelling and compensating the internal distortion and the residual error of external calibration. Experiment 2 was designed to test the performance of re-calibrating the external parameters on the basis of the calibrated internal parameters before photographing beacons. Experiment 3 was designed to verify the necessity and effectiveness of detecting and rejecting misidentified stars in on-orbit calibration.

Real on-orbit star image data from a navigation camera are unavailable at present because they are rarely downlinked from spacecraft. The simulated navigation camera model and error sources are determined by Eqs. (3)-(6) and Eq. (8), and the true and initial parameters are set in Table 1. The camera internal parameters are set according to the medium-resolution navigation camera of the Deep Impact mission. The field angle of the camera is about 10 mrad, and the resolution is 512×512 pixels of 21×21 μm, so the angular resolution of one pixel is about 4 arcsec. We use the physical measurement model (Eq. (6)) to simulate the internal distortion of the camera and evaluate the calibration accuracy, and the detector directional angle model (Eq. (13)) to calibrate the internal distortion. The Tycho-2 Catalogue [19] of about 2.5 million stars brighter than visual magnitude 12, with their revised right ascensions and declinations, is taken as the reference star database. Using the true camera specification, installation angle, and star catalog, the positions of reference stars within the field of view of the navigation camera can be simulated for fixed navigation camera attitudes.

Table 1. The initial and the true external and internal parameters

Noise in star centroiding and in the attitudes provided by the ADCS are the main random error sources and should be taken into account in the simulation. The coordinate values of the reference stars in the star images used for calibration are simulated according to Eqs. (3)-(6) and Eq. (8), with zero-mean centroiding noise of standard deviation 0.3 pixel. The corresponding attitudes provided by the ADCS are simulated with zero-mean attitude determination noise of standard deviation 3 arcsec. In order to evaluate the calibration accuracy, we simulated another 100 star images without centroiding or attitude noise for calculating the average deviation of the LOS determined by the calibrated internal and external parameters.

4.1 Experiment 1

Because of the deviation between the true and initial values of the parameters, the same CCD detector of a navigation camera may have a very different LOS in the inertial coordinate system defined by the internal and external parameters. Via on-orbit calibration, a high-precision LOS for each CCD detector can be obtained from the calibrated parameters. We simulated sequence images with 2D coordinates of the corresponding reference stars by setting a random initial attitude and angular velocity; the sampling frequency was set relatively low to enlarge the attitude change between frames so that the reference stars in the sequence images would be distributed evenly over the field of view. Considering the numbers of external and internal calibration parameters, we set NE to 30 and NI to 100.

To assess the ability of the directional angle model to compensate internal camera error, we assume that the external parameters are known and only internal calibration is performed (Figs. 5(a)-5(c)). Figure 5(b) shows that the largest residual deviation of the LOS in the camera frame over all pixels after internal calibration is below 0.2 arcsec; in other words, it is smaller than 0.05 pixel given the angular resolution of one pixel. As Fig. 5(c) shows, greater calibration accuracy is achieved as the B&S estimation iterates. The effectiveness of the directional angle model in describing the camera internal distortion is therefore verified.
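The directional angle model of Eq. (13) assigns each detector (s, l) a pair of viewing angles through ten-coefficient cubic polynomials. A minimal sketch of evaluating that model for the LOS in the camera frame (the coefficient values below are purely illustrative, not calibrated results):

```python
import numpy as np

def directional_angle_los(s, l, ax, ay):
    """LOS unit vector of detector (s, l) in the camera frame.

    ax, ay: ten cubic-polynomial coefficients each, ordered as
    [1, s, l, s*l, s**2, l**2, s**2*l, s*l**2, s**3, l**3], giving
    tan(psi_x) and tan(psi_y) as in the directional angle model.
    """
    terms = np.array([1.0, s, l, s * l, s**2, l**2,
                      s**2 * l, s * l**2, s**3, l**3])
    v = np.array([np.dot(ax, terms), np.dot(ay, terms), 1.0])
    return v / np.linalg.norm(v)  # normalize to a unit LOS vector

# Illustrative coefficients: a purely linear mapping of ~4 arcsec per pixel,
# with all distortion (higher-order) terms set to zero.
ax = np.array([0.0, 1.94e-5, 0, 0, 0, 0, 0, 0, 0, 0])
ay = np.array([0.0, 0, 1.94e-5, 0, 0, 0, 0, 0, 0, 0])
```

With zero higher-order coefficients the model degenerates to an ideal camera; calibration estimates the full coefficient set so that the polynomial absorbs the internal distortion.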


Fig. 5 Results of Experiment 1: (a) the deviation surface of the LOS in the camera frame with internal error; (b) the residual deviation surface of the LOS in the camera frame after internal calibration; (c) the accuracy of the LOS in the camera frame versus the number of internal calibration iterations; (d) the deviation surface of the LOS in the inertial frame with internal and external errors; (e) the residual deviation surface of the LOS in the inertial frame after external and internal calibration; (f) the accuracy of the LOS in the inertial frame of the comprehensive calibration versus the number of internal calibration iterations.


To judge the overall performance of the proposed on-orbit calibration approach, we assume that both the external and internal parameters are undetermined, so that both external and internal calibration must be performed. The calibrated parameters are shown in Table 2 and Figs. 5(d)-5(f). In the external calibration, an obvious deviation appears between the estimated and true external parameters (Tables 1 and 2), because the estimated installation angles compensate part of the internal camera errors and therefore determine a generalized rather than the true camera frame. The internal parameters estimated in this generalized camera frame achieve progressively higher accuracy through iterative computation with more star images (Fig. 5(f)). Moreover, the largest residual deviation of the LOS in the inertial frame determined with the calibrated parameters is below 0.3 arcsec (Fig. 5(e)), or 0.075 pixel. Therefore, a high-accuracy LOS in the inertial frame can be determined in the generalized camera frame after external and internal calibration.


Table 2. The estimated external and internal calibration parameters

To evaluate the advantage of the B&S estimation method, traditional least squares (LS) estimation is performed on the same simulated data used in Figs. 5(d)-5(f). As the accuracy of internal calibration directly influences the final accuracy of on-orbit calibration, the estimation of internal parameters by LS with different numbers of reference stars is designed for comparison with the performance of B&S estimation. The results are shown in Fig. 6.


Fig. 6 The accuracy of calibration by the B&S and LS estimation methods.


In Fig. 6, because the number of reference stars in each internal-calibration iteration of B&S estimation, NI, is set to 100, after 10 iterations 1000 (1K) reference stars have participated in the B&S estimation. We then use the same 1000 reference stars with LS estimation for internal calibration as a comparison. In the same way, we can feed the same number of reference stars to B&S or LS estimation and compare their accuracy; the resulting accuracy curves are shown in Fig. 6. Clearly, B&S and LS estimation with the same number of reference stars achieve quite similar accuracy, and the accuracy improves as more stars participate in the estimation. The essential difference is that the memory consumption of B&S estimation remains constant over the iterations no matter how many stars participate, with the peak consumption always set by the single initial batch estimation. LS estimation, in contrast, needs ever larger memory as more reference stars participate, and the higher-dimensional matrices demand more computing power. As memory and computing power are very limited on a spacecraft, the B&S estimation method makes it feasible to include more star images in the estimation and thus to achieve higher accuracy.
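The constant-memory property can be illustrated with a recursive least squares update: each new batch of measurements is folded into fixed-size normal-equation accumulators, so storage depends only on the number of parameters, whereas plain LS must hold the entire growing design matrix. This is a generic sketch of the idea, not the paper's exact B&S equations:

```python
import numpy as np

class SequentialLS:
    """Recursive least squares via normal-equation accumulators.

    Memory is O(n^2) in the number of parameters n, independent of how
    many measurements (reference stars) have been processed so far.
    """
    def __init__(self, n_params):
        self.N = np.zeros((n_params, n_params))  # accumulated A^T A
        self.b = np.zeros(n_params)              # accumulated A^T y

    def update(self, A, y):
        """Fold one batch of measurements into the accumulators."""
        self.N += A.T @ A
        self.b += A.T @ y

    def solve(self):
        """Current LS estimate over all batches seen so far."""
        return np.linalg.solve(self.N, self.b)

# Fit y = 2x + 1 from two sequential batches of noiseless measurements.
rng = np.random.default_rng(0)
est = SequentialLS(2)
for _ in range(2):
    x = rng.uniform(-1.0, 1.0, size=100)
    A = np.column_stack([x, np.ones_like(x)])
    est.update(A, 2.0 * x + 1.0)
theta = est.solve()  # ≈ [2.0, 1.0]
```

A full LS solve over both batches would give the identical estimate, but only by stacking all 200 rows into one matrix; the sequential form reaches the same answer with two fixed-size accumulators.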

4.2 Experiment 2

External calibration is demanded more frequently than internal calibration because the external parameters are relatively variable. To evaluate the performance of external recalibration based on calibrated internal parameters, we designed true and initial external parameters (Table 3). The internal parameters and the initial external parameters are taken from Experiment 1. Sequence images are simulated for B&S estimation of the external parameters, with NE set to 30.


Table 3. External parameters in Experiment 2

The iteration convergence is satisfactory: as shown in Figs. 7(a) and 7(c), the accuracy of calibration increases with the number of Kalman filtering iterations, with only a small fluctuation at the beginning. Distinct deviations occur between the true external parameters and the estimates obtained at the 100th iteration (Table 3); however, the installation matrix determined by Eq. (12) still reaches high accuracy (Fig. 7(b)), because a high-accuracy installation matrix does not require every installation angle to be individually accurate. As more star images are used in the B&S iterations to estimate the external parameters, the installation matrix becomes more accurate (Fig. 7(c)). The residual deviation surface of the LOS in Fig. 7(b) is similar to that obtained in Experiment 1, which means that external calibration eliminates most of the external error, and the remaining residual deviation stems from the internal calibration of Experiment 1.


Fig. 7 Results of Experiment 2: (a) corrections to the external calibration parameters obtained in each Kalman filtering iteration; (b) the residual deviation surface of the LOS in the inertial frame after external calibration; (c) the accuracy of the overall calibration with the external parameters obtained at different numbers of Kalman filtering iterations.


4.3 Experiment 3

Misidentified stars bring incorrect vectors into the estimation and decrease its accuracy. To test the performance of the proposed method in detecting and rejecting misidentified stars, different correct-identification rates of reference stars are designed, in which the right ascensions and declinations of adjacent stars in the star images are changed to create misidentified stars. The external-calibration-only setup of Experiment 2 and the internal-calibration-only setup of Experiment 1 are used for this test.

As shown in Fig. 8(a), the accuracy of external and internal calibration is severely degraded even when the correct identification rate reaches 96.7% for external calibration and 99% for internal calibration, i.e., when only one misidentified star exists among the 30 reference stars of an external batch estimation or the 100 reference stars of an internal batch estimation. This situation is very likely to occur in practice, because star identification algorithms are not totally reliable. Batch estimation is the foundation of the sequential estimation; therefore, such disturbing data must be discarded.


Fig. 8 Results of Experiment 3: (a) the accuracy of the batch estimation when misidentified stars exist; (b) the accuracy of the batch estimation when detecting and rejecting the misidentified stars.


We set F=θ to detect and reject the misidentified stars. As shown in Fig. 8(b), when the identification rate is above about 65%, the accuracy of external or internal calibration remains high despite the incorrect vectors caused by misidentified stars. However, the method cannot perform well when the identification rate is too low, because too many gross errors make it difficult to set an appropriate threshold in Eq. (31). Nevertheless, the proposed method effectively eliminates gross errors during estimation in most cases, and the resulting estimates are consistent.
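The rejection strategy amounts to screening residuals against a threshold before accepting measurements into the batch solve. The following generic sketch illustrates the principle on a linear system with one gross error (it is not the exact statistic of Eq. (31); the function and threshold are illustrative):

```python
import numpy as np

def reject_outliers(A, y, x_hat, threshold):
    """Keep only measurements whose residual against the current
    estimate x_hat is below the threshold; return the filtered system."""
    residuals = np.abs(y - A @ x_hat)
    keep = residuals < threshold
    return A[keep], y[keep], keep

# Clean measurements of y = 2x + 1, plus one grossly "misidentified" point.
x_true = np.array([2.0, 1.0])
x = np.linspace(-1.0, 1.0, 30)
A = np.column_stack([x, np.ones_like(x)])
y = A @ x_true
y[7] += 50.0  # gross error, as from a misidentified star

x0 = np.linalg.lstsq(A, y, rcond=None)[0]       # estimate polluted by the outlier
A2, y2, keep = reject_outliers(A, y, x0, threshold=5.0)
x1 = np.linalg.lstsq(A2, y2, rcond=None)[0]     # re-solve on the screened data
```

Even though the initial estimate x0 is biased by the outlier, the gross error's residual is so much larger than those of correct measurements that a moderate threshold isolates it, and the re-solved estimate recovers the truth; with many gross errors, however, the residuals mix and the threshold becomes hard to set, matching the behavior reported above.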

5. Conclusions

We proposed a stepwise calibration combined with batch & sequential (B&S) estimation, with which on-orbit autonomous calibration of a navigation camera can be realized by first estimating the external parameters and then estimating the internal parameters in the generalized camera frame determined by the external parameters. The B&S estimation lowers the demand on on-orbit computing power; in addition, a gross-error rejection method should be combined with the estimation to guarantee its stability.

In three validation experiments, the results indicate that the LOS of each CCD detector can be obtained with high accuracy from the external and internal calibration parameters. The B&S estimation reaches higher accuracy as the iterations increase even when misidentified reference stars exist, and can process more star images in the estimation without being limited by memory. Overall, the proposed methods have proven accurate, robust, and effective for on-orbit calibration for optical autonomous navigation in deep space.

Acknowledgments

The authors would like to thank our colleagues at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing. We also thank the National Basic Research Program of China (973 Program) (2014CB744201), the National Natural Science Foundation of China (NSFC) (41371430, 91438111), and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1278).

References and links

1. W. M. Owen Jr, “Methods of optical navigation,” in Spaceflight Mechanics 140 (2011).

2. J. M. Rebordão, “Space optical navigation techniques: an overview,” 8th Ibero American Optics Meeting/11th Latin American Meeting on Optics, Lasers, and Applications. International Society for Optics and Photonics, 2013.

3. S. Li, R. K. Lu, L. Zhang, and Y. M. Peng, “Image Processing Algorithms For Deep-Space Autonomous Optical Navigation,” J. Navig. 66(04), 605–623 (2013). [CrossRef]  

4. J. A. Christian and G. E. Lightsey, “Onboard image-processing algorithm for a spacecraft optical navigation sensor system,” J. Spacecr. Rockets 49(2), 337–352 (2012). [CrossRef]  

5. J. I. Kawaguchi, T. Hashimoto, T. Misu, and S. Sawai, “An autonomous optical guidance and navigation around asteroids,” Acta Astronaut. 44(5), 267–280 (1999). [CrossRef]  

6. J. E. Riedel, S. Bhaskara, S. Desai, D. Han, B. Kennedy, and G. W. Null, “Autonomous optical navigation DS1 technology validation report,” Jet Propulsion Laboratory, California, USA (2000).

7. D. L. Hampton, J. W. Baer, M. A. Huisjen, C. C. Varner, A. Delamere, D. D. Wellnitz, and K. P. Klaasen, “An overview of the instrument suite for the Deep Impact mission,” Space Sci. Rev. 117(1–2), 43–93 (2005). [CrossRef]  

8. M. P. Hughes and C. N. Schira, “Deep impact attitude estimator design and flight performance,” Adv. Astronaut. Sci. 125(441), 042802 (2006).

9. J. Oberst, B. Brinkmann, B. Giese, “Geometric calibration of the MICAS CCD sensor on the DS1 (Deep Space One) spacecraft: laboratory vs. in-flight data analysis,” International Archives of Photogrammetry and Remote Sensing 33.B1; PART 1: 221–230 (2000).

10. M. A. Samaan, T. Griffith, P. Singla, and J. L. Junkins, “Autonomous on-orbit calibration of star trackers,” In Core Technologies for Space Systems Conference (Communication and Navigation Session) (2001, November).

11. Y. Hong, G. Ren, and E. Liu, “Non-iterative method for camera calibration,” Opt. Express 23(18), 23992–24003 (2015). [CrossRef]   [PubMed]  

12. P. D. Lin and C. K. Sung, “Comparing two new camera calibration methods with traditional pinhole calibrations,” Opt. Express 15(6), 3012–3022 (2007). [CrossRef]   [PubMed]  

13. T. Sun, F. Xing, and Z. You, “Optical system error analysis and calibration method of high-accuracy star trackers,” Sensors (Basel) 13(4), 4598–4623 (2013). [CrossRef]   [PubMed]  

14. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Lens distortion models evaluation,” Appl. Opt. 49(30), 5914–5928 (2010). [CrossRef]   [PubMed]  

15. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]  

16. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014). [CrossRef]  

17. M. Kolomenkin, S. Pollak, I. Shimshoni, and M. Lindenbaum, “Geometric voting algorithm for star trackers,” IEEE Trans. Aerosp. Electron. Syst. 44(2), 441–456 (2008). [CrossRef]  

18. J. Yang, G. J. Zhang, and J. Jiang, “A star identification algorithm for un-calibrated star sensor cameras,” Opt. Technol. 34, 26–32 (2008).

19. HEASARC, “TYCHO2,” http://heasarc.nasa.gov/W3Browse/all/tycho2.html.



Figures (8)

Fig. 1
Fig. 1 Platform of navigation system of the Deep Impact mission.
Fig. 2
Fig. 2 Ideal measurement model of the optical navigation camera.
Fig. 3
Fig. 3 Directional angle of CCD detector.
Fig. 4
Fig. 4 Flowchart of estimation of camera parameters.
Fig. 5
Fig. 5 Results of the experiment 1: (a) - The deviation surface of LOS in camera frame with internal error; (b) -The residual deviation surface of the LOS in camera frame after internal calibration; (c) – The curve of the accuracy of LOS in camera frame of internal calibration with different times of iteration; (d) –The deviation surface of LOS in inertial frame with internal and external errors; (e) –The residual deviation surface of LOS in inertial frame after external and internal calibrations; (f) –The curve of the accuracy of LOS in inertial frame of comprehensive calibration with different times of iteration in internal calibration.
Fig. 6
Fig. 6 The accuracy of the calibration by B&S and LS estimation method.
Fig. 7
Fig. 7 Results of the experiment 2: (a) – Correction of external calibration parameters obtained in each iteration by Kalman filtering; (b) –The residual deviation surface of LOS in inertial frame after external calibration; (c) – The accuracy of overall calibration with external parameters obtained in different times of iteration by Kalman filtering in external calibration.
Fig. 8
Fig. 8 Results of the experiment 3: (a) The accuracy of the batch estimation when misidentified stars exist; (b) The accuracy of the batch estimation by detecting and rejecting the misidentified stars.

Tables (3)

Tables Icon

Table 1 The initial and the true external and internal parameters

Tables Icon

Table 2 The estimated external and internal calibration parameters

Tables Icon

Table 3 External parameters in Experiment 2

Equations (37)

Equations on this page are rendered with MathJax. Learn more.

$$\mathbf{w}_i^C=\frac{1}{\sqrt{(x_{fp}-x_0)^2+(y_{fp}-y_0)^2+f^2}}\begin{bmatrix}x_{fp}-x_0\\ y_{fp}-y_0\\ f\end{bmatrix}.$$
$$\mathbf{w}_i^I=R_{\mathrm{ADCS}}^{I}\,R_{\mathrm{Camera}}^{\mathrm{ADCS}}\,\mathbf{w}_i^C$$
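The two equations above map a star's focal-plane centroid to a unit line-of-sight (LOS) vector in the camera frame and then rotate it into the inertial frame. A minimal NumPy sketch of this ideal measurement model follows; all function and variable names (and the example principal point and focal length, both in pixel units) are illustrative, not from the paper.

```python
import numpy as np

def los_camera(x_fp, y_fp, x0, y0, f):
    """Unit LOS vector in the camera frame: normalize [x_fp - x0, y_fp - y0, f].
    (x0, y0) is the principal point; f is the focal length in the same units."""
    v = np.array([x_fp - x0, y_fp - y0, f])
    return v / np.linalg.norm(v)

def los_inertial(R_adcs_to_i, R_cam_to_adcs, w_c):
    """Rotate the camera-frame LOS into the inertial frame:
    w^I = R_ADCS^I @ R_Camera^ADCS @ w^C."""
    return R_adcs_to_i @ R_cam_to_adcs @ w_c

# Illustrative numbers: principal point at (512, 512), focal length 2000 pixels
w_c = los_camera(612.0, 512.0, 512.0, 512.0, 2000.0)
w_i = los_inertial(np.eye(3), np.eye(3), w_c)  # identity attitude for the demo
```

With identity rotations the inertial LOS equals the camera-frame LOS, which is a convenient sanity check for the frame chain.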
$$\begin{cases}x_{t_i}=x_i-\Delta x_0\\ y_{t_i}=y_i-\Delta y_0\\ z_{t_i}=z_i\end{cases}$$
$$\begin{bmatrix}x_{t\theta_i}\\ y_{t\theta_i}\\ z_{t\theta_i}\end{bmatrix}=\left(R_\varphi R_\omega R_\kappa\right)^{T}\begin{bmatrix}x_{t_i}\\ y_{t_i}\\ z_{t_i}\end{bmatrix}$$
$$\begin{cases}x_{t\theta g_i}=x_{t\theta_i}-\left(k_1x_{t\theta_i}r^2+p_1\left(3x_{t\theta_i}^2+y_{t\theta_i}^2\right)+2p_2x_{t\theta_i}y_{t\theta_i}\right)\\ y_{t\theta g_i}=y_{t\theta_i}-\left(k_1y_{t\theta_i}r^2+p_2\left(3y_{t\theta_i}^2+x_{t\theta_i}^2\right)+2p_1x_{t\theta_i}y_{t\theta_i}\right)\\ z_{t\theta g_i}=z_{t\theta_i}\end{cases}$$
$$\begin{cases}x_r=x_{t\theta gf_i}=(f+\Delta f)\,x_{t\theta g_i}/f\\ y_r=y_{t\theta gf_i}=(f+\Delta f)\,y_{t\theta g_i}/f\\ z_r=z_{t\theta gf_i}=f+\Delta f=z_{t\theta g_i}+\Delta f\end{cases}$$
$$\mathbf{w}_i^C=\frac{1}{\sqrt{x_r^2+y_r^2+z_r^2}}\begin{bmatrix}x_r\\ y_r\\ z_r\end{bmatrix}.$$
$$\mathbf{w}_i^I=R_{\mathrm{ADCS}}^{I}\,R_{\mathrm{Camera}}^{\mathrm{ADCS}}(pitch+\Delta pitch,\;roll+\Delta roll,\;yaw+\Delta yaw)\,\mathbf{w}_i^C$$
$$\mathbf{v}_i=\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}=\begin{bmatrix}\cos\alpha_i\cos\delta_i\\ \sin\alpha_i\cos\delta_i\\ \sin\delta_i\end{bmatrix}.$$
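The catalog direction of reference star $i$ is the unit vector built from its right ascension $\alpha_i$ and declination $\delta_i$. A one-function sketch (name illustrative; angles in radians):

```python
import numpy as np

def star_vector(alpha, delta):
    """Reference-star unit vector v_i in the inertial frame from
    right ascension alpha and declination delta (radians)."""
    return np.array([np.cos(alpha) * np.cos(delta),
                     np.sin(alpha) * np.cos(delta),
                     np.sin(delta)])
```

Because $\cos^2\alpha\cos^2\delta+\sin^2\alpha\cos^2\delta+\sin^2\delta=1$, the result is a unit vector by construction.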
$$\begin{bmatrix}x_{fp_i}-x_0\\ y_{fp_i}-y_0\\ f\end{bmatrix}=\lambda\,R_{\mathrm{ADCS}}^{\mathrm{Camera}}R_{I}^{\mathrm{ADCS}}\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}$$
$$R_{I}^{\mathrm{ADCS}}=\begin{bmatrix}A_1&B_1&C_1\\ A_2&B_2&C_2\\ A_3&B_3&C_3\end{bmatrix}=\begin{bmatrix}q_1^2-q_2^2-q_3^2+q_0^2&2(q_1q_2+q_3q_0)&2(q_1q_3-q_2q_0)\\ 2(q_1q_2-q_3q_0)&-q_1^2+q_2^2-q_3^2+q_0^2&2(q_2q_3+q_1q_0)\\ 2(q_1q_3+q_2q_0)&2(q_2q_3-q_1q_0)&-q_1^2-q_2^2+q_3^2+q_0^2\end{bmatrix}$$
$$R_{\mathrm{ADCS}}^{\mathrm{Camera}}=\begin{bmatrix}a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3\end{bmatrix}=\left(\begin{bmatrix}\cos(pitch)&0&-\sin(pitch)\\ 0&1&0\\ \sin(pitch)&0&\cos(pitch)\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&\cos(roll)&\sin(roll)\\ 0&-\sin(roll)&\cos(roll)\end{bmatrix}\begin{bmatrix}\cos(yaw)&\sin(yaw)&0\\ -\sin(yaw)&\cos(yaw)&0\\ 0&0&1\end{bmatrix}\right)^{T}$$
$$(V_{\mathrm{Image}})_{cam}=(x_f,\;y_f,\;1)^T=\left(\tan(\psi_x(s,l)),\;\tan(\psi_y(s,l)),\;1\right)^T$$
$$\begin{cases}\tan(\psi_x(s,l))=a_{x0}+a_{x1}s+a_{x2}l+a_{x3}sl+a_{x4}s^2+a_{x5}l^2+a_{x6}s^2l+a_{x7}sl^2+a_{x8}s^3+a_{x9}l^3\\ \tan(\psi_y(s,l))=a_{y0}+a_{y1}s+a_{y2}l+a_{y3}sl+a_{y4}s^2+a_{y5}l^2+a_{y6}s^2l+a_{y7}sl^2+a_{y8}s^3+a_{y9}l^3\end{cases}$$
$$\begin{bmatrix}\tan(\psi_x(s,l))\\ \tan(\psi_y(s,l))\\ 1\end{bmatrix}=\lambda\,R_{\mathrm{ADCS}}^{\mathrm{Camera}}R_{I}^{\mathrm{ADCS}}\begin{bmatrix}X_i^{star}\\ Y_i^{star}\\ Z_i^{star}\end{bmatrix}$$
$$\mathbf{w}_i^I=\lambda\,R_{\mathrm{ADCS}}^{I}R_{\mathrm{Camera}}^{\mathrm{ADCS}}\begin{bmatrix}\tan(\psi_x(s,l))\\ \tan(\psi_y(s,l))\\ 1\end{bmatrix}$$
$$\begin{cases}F=\dfrac{Aa_1X_i^{star}+Bb_1Y_i^{star}+Cc_1Z_i^{star}}{Aa_3X_i^{star}+Bb_3Y_i^{star}+Cc_3Z_i^{star}}-\dfrac{x_{fp_i}-x_0}{f}\\[2ex]G=\dfrac{Aa_2X_i^{star}+Bb_2Y_i^{star}+Cc_2Z_i^{star}}{Aa_3X_i^{star}+Bb_3Y_i^{star}+Cc_3Z_i^{star}}-\dfrac{y_{fp_i}-y_0}{f}\end{cases}$$
$$\begin{cases}f=\dfrac{Aa_1X_i^{star}+Bb_1Y_i^{star}+Cc_1Z_i^{star}}{Aa_3X_i^{star}+Bb_3Y_i^{star}+Cc_3Z_i^{star}}=\tan(\psi_x(s,l))\\[2ex]g=\dfrac{Aa_2X_i^{star}+Bb_2Y_i^{star}+Cc_2Z_i^{star}}{Aa_3X_i^{star}+Bb_3Y_i^{star}+Cc_3Z_i^{star}}=\tan(\psi_y(s,l))\end{cases}$$
$$\begin{aligned}Aa_1&=A_1a_1+A_2b_1+A_3c_1; & Aa_2&=A_1a_2+A_2b_2+A_3c_2; & Aa_3&=A_1a_3+A_2b_3+A_3c_3;\\ Bb_1&=B_1a_1+B_2b_1+B_3c_1; & Bb_2&=B_1a_2+B_2b_2+B_3c_2; & Bb_3&=B_1a_3+B_2b_3+B_3c_3;\\ Cc_1&=C_1a_1+C_2b_1+C_3c_1; & Cc_2&=C_1a_2+C_2b_2+C_3c_2; & Cc_3&=C_1a_3+C_2b_3+C_3c_3.\end{aligned}$$
$$R_{i,k}^{E}=A_{i,k}\,\Delta X_k$$
$$A_{i,k}=\begin{bmatrix}\dfrac{\partial F_{i,k}}{\partial X_E}\\[1ex]\dfrac{\partial G_{i,k}}{\partial X_E}\end{bmatrix}=\begin{bmatrix}\dfrac{\partial F_{i,k}}{\partial pitch}&\dfrac{\partial F_{i,k}}{\partial roll}&\dfrac{\partial F_{i,k}}{\partial yaw}\\[1ex]\dfrac{\partial G_{i,k}}{\partial pitch}&\dfrac{\partial G_{i,k}}{\partial roll}&\dfrac{\partial G_{i,k}}{\partial yaw}\end{bmatrix},\quad \Delta X_k=\begin{bmatrix}\Delta pitch\\ \Delta roll\\ \Delta yaw\end{bmatrix}_k,\quad R_{i,k}^{E}=-\begin{bmatrix}F(X_{E0},Y_{I0})\\ G(X_{E0},Y_{I0})\end{bmatrix}_{i,k}$$
$$\Delta X_k=\left(A_k^T P_k^E A_k\right)^{-1}\left(A_k^T P_k^E R_k^E\right)$$
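The last equation is a standard weighted batch least-squares solution for the external-parameter correction: stack the per-star Jacobian rows into $A_k$, weight the residuals with $P_k^E$, and solve the normal equations. A minimal sketch, with illustrative shapes (2n residual rows for n stars, 3 external parameters):

```python
import numpy as np

def batch_ls_correction(A, P, R):
    """Weighted batch least squares: dX = (A^T P A)^{-1} A^T P R.
    A: (2n, 3) stacked Jacobian, P: (2n, 2n) weight matrix, R: (2n,) residuals.
    Solving the normal equations avoids forming an explicit inverse."""
    N = A.T @ P @ A
    return np.linalg.solve(N, A.T @ P @ R)

# Synthetic check: residuals generated from a known correction are recovered exactly
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
dX_true = np.array([0.1, -0.2, 0.3])
dX_est = batch_ls_correction(A, np.eye(4), A @ dX_true)
```

In practice the same solve would be iterated, relinearizing $A_k$ about the updated parameters each time.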
$$\begin{cases}Z_k^E=H_{Ek}\,\Delta X_k+V_k^E\\ \Delta X_{k+1}=\Phi_{k+1,k}^E\,\Delta X_k\end{cases}$$
$$\begin{cases}K_k^E=P_k^{\mathrm{Kalman}(E)}H_{Ek}^T\left(H_{Ek}P_k^{\mathrm{Kalman}(E)}H_{Ek}^T+Q_k^{\mathrm{Kalman}(E)}\right)^{-1}\\ P_{k+1}^{\mathrm{Kalman}(E)}=\left(I-K_k^E H_{Ek}\right)P_k^{\mathrm{Kalman}(E)}\\ \Delta X_{k+1}=\Delta X_k+K_k^E\left(Z_k^E-H_{Ek}\,\Delta X_k\right)\end{cases}$$
$$X_E=X_{E0}+\Delta X_{k+1}$$
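The sequential branch refines the correction $\Delta X$ one measurement batch at a time with the Kalman gain, covariance, and state updates shown above. A minimal sketch of a single update step, assuming (as the notation suggests) that $Q_k^{\mathrm{Kalman}(E)}$ plays the role of the measurement-noise covariance in the gain; all names are illustrative:

```python
import numpy as np

def kalman_update(dX, P, H, z, Q):
    """One sequential update:
    K = P H^T (H P H^T + Q)^{-1};  P <- (I - K H) P;  dX <- dX + K (z - H dX)."""
    S = H @ P @ H.T + Q                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    P_new = (np.eye(P.shape[0]) - K @ H) @ P  # covariance update
    dX_new = dX + K @ (z - H @ dX)           # state update
    return dX_new, P_new

# With a very diffuse prior, one update essentially adopts the measurement
dX, P = kalman_update(np.zeros(3), 1e6 * np.eye(3), np.eye(3),
                      np.array([1.0, 2.0, 3.0]), np.eye(3))
```

Successive frames then shrink $P$ and damp the influence of any single noisy measurement, which is the point of the batch-and-sequential (B&S) scheme.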
$$R_{j,k}^{I}=B_{j,k}\,Y_k$$
$$B_{j,k}=\begin{bmatrix}\dfrac{d\left(\tan(\psi_x(s,l))\right)}{dY_I}\\[1ex]\dfrac{d\left(\tan(\psi_y(s,l))\right)}{dY_I}\end{bmatrix}_{j,k}=\begin{bmatrix}\dfrac{d\tan\psi_x}{da_{x0}}&\cdots&\dfrac{d\tan\psi_x}{da_{xm}}&\cdots&\dfrac{d\tan\psi_x}{da_{x9}}&\dfrac{d\tan\psi_x}{da_{y0}}&\cdots&\dfrac{d\tan\psi_x}{da_{ym}}&\cdots&\dfrac{d\tan\psi_x}{da_{y9}}\\ \dfrac{d\tan\psi_y}{da_{x0}}&\cdots&\dfrac{d\tan\psi_y}{da_{xm}}&\cdots&\dfrac{d\tan\psi_y}{da_{x9}}&\dfrac{d\tan\psi_y}{da_{y0}}&\cdots&\dfrac{d\tan\psi_y}{da_{ym}}&\cdots&\dfrac{d\tan\psi_y}{da_{y9}}\end{bmatrix}_{j,k}$$
$$Y_k=\begin{bmatrix}a_{x0}&\cdots&a_{xm}&\cdots&a_{x9}&a_{y0}&\cdots&a_{ym}&\cdots&a_{y9}\end{bmatrix}_k^T,\quad R_{j,k}^{I}=\begin{bmatrix}f(X_E)\\ g(X_E)\end{bmatrix}_{j,k},\quad m=0,1,2,\ldots,9$$
$$Y_k=\left(B_k^T P_k^I B_k\right)^{-1}\left(B_k^T P_k^I R_k^I\right)$$
$$\begin{cases}Z_k^I=H_{Ik}\,Y_k+V_k^I\\ Y_{k+1}=\Phi_{k+1,k}^I\,Y_k\end{cases}$$
$$\begin{cases}K_k^I=P_k^{\mathrm{Kalman}(I)}H_{Ik}^T\left(H_{Ik}P_k^{\mathrm{Kalman}(I)}H_{Ik}^T+Q_k^{\mathrm{Kalman}(I)}\right)^{-1}\\ P_{k+1}^{\mathrm{Kalman}(I)}=\left(I-K_k^I H_{Ik}\right)P_k^{\mathrm{Kalman}(I)}\\ Y_{k+1}=Y_k+K_k^I\left(Z_k^I-H_{Ik}\,Y_k\right)\end{cases}$$
$$Y_I=Y_{k+1}$$
$$R_{n,k}=\begin{bmatrix}r_1\\ r_2\end{bmatrix}_{n,k}=-\begin{bmatrix}F(X_{E0},Y_{I0})\\ G(X_{E0},Y_{I0})\end{bmatrix}_{n,k}$$
$$R_{n,k}=\begin{bmatrix}r_1\\ r_2\end{bmatrix}_{n,k}=\begin{bmatrix}f(X_E)-\tan(\psi_x(s,l))\\ g(X_E)-\tan(\psi_y(s,l))\end{bmatrix}_{n,k}$$
$$\left|R_{n,k}\right|=\sqrt{r_1^2+r_2^2}$$
$$p_n=\begin{cases}1,&\left|\,|R_{n,k}|-\mu\,\right|<F\\ 0,&\left|\,|R_{n,k}|-\mu\,\right|>F\end{cases}$$
$$\Delta r=\frac{1}{N}\sum_{i=1}^{N}\arccos\left(\mathbf{v}_i\cdot\mathbf{w}_i^I\right)$$
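The last two equations give the misidentified-star rejection rule (keep star $n$ only if its residual magnitude lies within a threshold $F$ of the mean $\mu$) and the accuracy metric (mean angular separation between catalog directions $\mathbf{v}_i$ and measured LOS vectors $\mathbf{w}_i^I$). A minimal sketch of both, with illustrative names:

```python
import numpy as np

def reject_outliers(residual_norms, threshold):
    """p_n = 1 keeps star n when | |R_n| - mu | < threshold, else p_n = 0.
    mu is the mean residual magnitude over the current frame."""
    mu = np.mean(residual_norms)
    return (np.abs(residual_norms - mu) < threshold).astype(int)

def mean_angular_error(v, w):
    """Accuracy metric: mean of arccos(v_i . w_i^I) over N unit-vector pairs,
    in radians. v, w: (N, 3) arrays; clipping guards against rounding past 1."""
    dots = np.clip(np.sum(v * w, axis=1), -1.0, 1.0)
    return np.mean(np.arccos(dots))

flags = reject_outliers(np.array([1.0, 1.0, 1.0, 5.0]), 2.0)  # last star flagged
```

A common choice for such a threshold is a multiple of the residual standard deviation (e.g., 3-sigma), though the paper's specific value of $F$ is not given in this excerpt.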
