A spacecraft-borne optical navigation camera is one of the key instruments for optical autonomous navigation, and the line of sight (LOS) of the camera directly affects navigation accuracy. We developed an on-orbit calibration approach for a navigation camera in which calibration is performed stepwise: the external parameters are estimated first, and the internal parameters are then estimated in a generalized camera frame determined by the external parameters. In addition, we proposed a batch and sequential on-orbit estimation method to save on-orbit computing power, and established a strategy to reject misidentified reference stars while keeping the on-orbit parameter estimation consistent. After calibration, the accuracy of the LOS in the inertial frame satisfies the needs of optical autonomous navigation. The approach proved precise and robust in three experiments.
© 2016 Optical Society of America
Optical navigation (ON) is an enabling technology for autonomous navigation, which is mandatory whenever there is no time to validate a spacecraft's navigation decisions on Earth because of signal delay or communication interruption. Optical navigation is the use of imaging data to aid spacecraft navigation. In the typical case, an optical navigation camera on a spacecraft takes a picture of some nearby beacons. Traditionally, typical beacons are the coordinate centers of nearby solar-system objects: planets, asteroids, comets, other spacecraft, or reference stars. From the known and reliable ephemerides of the beacons and the images of the beacons taken by the optical navigation camera, the navigation system can accurately locate the spacecraft in inertial space and plan subsequent maneuvers to accomplish the mission. Among all autonomous navigation technologies, optical navigation has been regarded over the last two decades as one of the most feasible solutions, as it reduces the communication demands on ground-based antennas for high-precision navigation.
An optical navigation camera is one of the key instruments on a spacecraft for optical navigation: images of some nearby beacons are taken and then processed to extract various available observations from the raw images. Among all observations, the line of sight (LOS) of a beacon in the navigation camera frame is one of the most important. During an exposure by the camera, the spacecraft attitude is estimated by the ADCS (attitude determination and control system). With the ADCS-estimated attitude in the inertial frame and the installation matrix that rotates from the navigation camera to the ADCS, the LOS in the camera frame can be converted into a unit vector from the navigation camera to the beacon in the inertial frame. The LOS estimates and the output of other sensors are provided to a GNC (guidance, navigation, and control) system to generate the best possible estimate of the orientation and position of the spacecraft.
Methods that convert the LOS from the camera frame into the inertial frame fall into two categories: star-relative and starless. When ON beacons and more than 3 reference stars are available in a single image, the method is star-relative, which was first validated in the cruise phase of the Deep Space 1 mission. However, reference stars are not always available in navigation images because of orbit geometry, the relative brightness of beacons and stars, or small telescope fields of view; in that case the method is starless, as performed in the Deep Impact mission, where beacon observations and attitude information came from a separate source offered by an ADCS attitude estimator, as shown in Fig. 1. Traditionally, the navigation platform relies on a rigid connection between the ADCS (star camera) and the optical navigation camera.
Calibration of the navigation camera is a critical issue: the LOS depends on it, so it has a direct impact on the accuracy of ON. Generally, a camera is calibrated to high precision in ground-based laboratories before launch, covering camera geometric distortion (internal calibration) and installation errors between the camera and the ADCS (external calibration). However, vibration during launch and variation in the thermal environment may alter the preset camera parameters. Therefore, it is necessary to redo these jobs during the mission. Few papers have addressed this problem, however, and a comprehensive study of the issue is not available at present.
To solve the on-orbit calibration of a spacecraft-borne optical navigation camera, we proposed a stepwise calibration algorithm in which the external parameters are estimated first, and the internal parameters second, based on the generalized camera frame determined by the external parameters. Background reference stars on the images are used as the control for parameter estimation. After a careful analysis of the issue, three key problems are put forward and solved in this manuscript. First, to overcome the over-parameterization, strong correlation, and low significance of parameters in the traditional physical measurement model, we proposed the detector directional angle model for camera internal calibration. Second, to save on-orbit memory storage and computing power, we established a batch&sequential estimation method, so that more star images can be involved in the estimation to achieve high accuracy. Third, identifying reference stars correctly is a precondition for on-orbit calibration, and misidentified stars may hinder the calibration considerably. Therefore, we established a strategy using statistical information to reject misidentified reference stars, making the on-orbit calibration more stable and reliable in the complex deep-space environment.
After calibration, combined with the ADCS-estimated attitude, every pixel of the navigation camera obtains a high-precision inertial LOS determined by the external and internal calibration parameters, which has two main advantages. First, the accuracy of the LOS in the inertial frame can be guaranteed by the starless method when reference stars are not available in the images. Second, as many on-orbit image processing algorithms are complex and time-consuming, having the calibration done first allows the locations of beacons and reference stars in the images to be well predicted, which confines the detection and identification of beacons and reference stars to a small region of interest (ROI), making the job much easier, simpler, and faster in emergency situations.
In this paper, we introduce the measurement model of the optical navigation camera and list all the internal and external error sources in Section 2. In Section 3, we present the on-orbit external and internal camera calibration models and the batch&sequential estimation method used to estimate the external and internal calibration parameters. Section 4 details the calibration results of three experiments on computer-simulated data. Finally, Section 5 summarizes the conclusions.
2. Measurement model of optical navigation
2.1 Ideal measurement model
The imaging model of an optical navigation camera can be considered a pinhole imaging model [11,12], which assumes that each point on the target emits a single ray and each ray maps to a point on the focal plane. The pinhole imaging model gives a simple relationship between a point on the focal plane and the corresponding LOS unit vector in the camera coordinate system, as shown in Fig. 2. Assuming f is the focal length and (x0,y0) the principal point, both determined on-ground, we define P(xfp,yfp) as the point on the focal plane of the measured beacon. The corresponding LOS unit vector in the camera coordinate system can be expressed as:
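As a minimal sketch of this relationship (assuming NumPy, with the boresight along +z; the actual axis conventions are mission-specific), the LOS unit vector for a focal-plane point can be computed as:

```python
import numpy as np

def los_unit_vector(x_fp, y_fp, f, x0=0.0, y0=0.0):
    """Unit LOS vector in the camera frame for a focal-plane point
    P(x_fp, y_fp), given focal length f and principal point (x0, y0).
    The boresight is taken along +z here (an assumption)."""
    v = np.array([x_fp - x0, y_fp - y0, f])
    return v / np.linalg.norm(v)
```

For example, a point at the principal point maps to the boresight direction, and any off-axis point yields a unit vector tilted toward it.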
The LOS measured in camera coordinate system should be rotated to inertial coordinate system before being provided to GNC system.
In general, the ideal measurement model is not exact, owing to imperfect instrument manufacture and a variable environment. Error sources from the camera (internal) and the installation (external) should be eliminated to approach the real measurement model as closely as possible.
2.2 Internal camera error sources
Generally, navigation cameras are calibrated to high precision in ground-based laboratories before launch. However, any changes in the instrument or environment in space may alter the preset camera parameters. Three types of distortion errors may occur in a navigation camera: 1. CCD translation, inclination, and rotation; 2. optical distortion of the lenses; and 3. change of the focal length. To recover the beacon's real coordinate value from the measured coordinate value in the camera coordinate system, these errors must be calibrated.
2.2.1 Errors in CCD
CCD translation can be described as a translation of the principal point. Assuming (Δx0, Δy0) indicates the change of the initial principal point (x0, y0), we can obtain the corrected coordinates by calibrating the CCD translation:
2.2.2 Lens distortion
Lens distortion is ubiquitous in optical instruments. In order to sense faint beacons, navigation cameras are designed with a narrow field angle. Therefore, the first-order radial distortion model combined with the first-order tangential distortion model is appropriate; more parameters or higher-order models have no obvious advantage in navigation camera calibration [14,15]. Assuming that a global estimation method is sufficient for the calibration, we can obtain the corrected coordinates by calibrating the optical distortion of the lens:
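A plausible form of this correction, written here in the Brown-Conrady convention (the exact signs in the paper's equations may differ; k1 is a hypothetical first-order radial coefficient and p1, p2 hypothetical tangential coefficients), is:

```python
import numpy as np

def correct_distortion(x, y, k1, p1, p2, x0=0.0, y0=0.0):
    """First-order radial (k1) plus first-order tangential (p1, p2)
    distortion correction about the principal point (x0, y0),
    in the Brown-Conrady form (an assumed convention).
    Returns corrected focal-plane coordinates."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy                      # squared radial distance
    x_c = x + dx * k1 * r2 + p1 * (r2 + 2 * dx * dx) + 2 * p2 * dx * dy
    y_c = y + dy * k1 * r2 + p2 * (r2 + 2 * dy * dy) + 2 * p1 * dx * dy
    return x_c, y_c
```

With all coefficients zero the correction is the identity, which is a convenient sanity check for the calibrated model.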
2.2.3 Focal length error
The change of the focal length is inevitable. Assuming f represents the focal length determined by on-ground calibration and Δf its change, we can obtain the real focal length by calibrating the change of the focal length.
Equation (6) is a rigorous physical measurement model for camera, by which the real coordinate in camera coordinate can be obtained.
By compensating all the internal camera errors, the real LOS unit vector of the measured beacon in the camera coordinate system can be expressed as:
2.3 External installation error sources
Errors in the camera installation angles are often relatively considerable, owing to limited laboratory conditions in calibration technology and to changes during launch and in the flight phase. These external errors have a greater impact on the accuracy of the LOS than the internal ones, because they are more variable, pronounced, and random in the space flight environment. Assuming the installation angles determined on-ground and the offset of each angle are known, we can obtain the real LOS unit vector in the inertial coordinate system:
3 An on-orbit calibration approach
3.1 Geometric calibration model
In order to set up the real navigation measurement model and guarantee the accuracy of the LOS in the inertial coordinate system, calibrations of both the external installation and the internal camera errors are necessary. The calibration is done in two steps:
- 1. Calibration of the camera installation angles;
- 2. Calibration of the internal camera distortion, based on the external installation angles obtained in step one.
The reference stars serve as the control points for the calibration.
3.1.1 External calibration model
Supposing a reference star in the guide star database has known right ascension and declination on the celestial sphere, its observed unit direction vector in the inertial coordinate system can be calculated by:
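The standard celestial-sphere conversion implied by Eq. (9) can be sketched as follows (right ascension `ra` and declination `dec` in radians):

```python
import numpy as np

def star_unit_vector(ra, dec):
    """Unit direction vector in the inertial frame from right ascension
    and declination (radians) -- the standard celestial-sphere mapping."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])
```

The result is a unit vector by construction, so no renormalization is needed before it is used as a control observation.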
Given the measured location of the reference star on the focal plane, its coordinate in the camera coordinate system follows; taking this coordinate as the real on-orbit value, we can establish the external calibration model as:
The installation matrix from the ADCS to the camera coordinate system, parameterized by the installation angles, is as follows:
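For illustration, an installation matrix can be assembled from three installation angles as a product of elementary rotations. The 1-2-3 rotation order and the angle names below are assumptions; the paper's Eq. (12) fixes the actual convention:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def installation_matrix(phi, theta, psi):
    """Installation matrix from the ADCS frame to the camera frame,
    assembled as a 1-2-3 Euler sequence (an assumed convention)."""
    return rot_x(phi) @ rot_y(theta) @ rot_z(psi)
```

Whatever the convention, the matrix must be orthonormal, which gives a cheap on-orbit consistency check on the estimated angles.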
According to the reference stars recognized in the star images, the external calibration parameters can be determined under the assumption that the coordinates of the reference stars in the camera coordinate system are real.
After external calibration, the attitude of the camera coordinate system in the inertial coordinate system can be determined; this will be the reference coordinate system for internal calibration.
3.1.2 Internal calibration model
Although the camera physical measurement model in Eq. (6) covers the major internal errors in theory, it is not practical as an on-orbit calibration model for a navigation camera because of over-parameterization. Some parameters in the physical measurement model are strongly correlated owing to the particular imaging conditions (i.e., long focal length and narrow field angle), and some parameters contribute little to the imagery's geometric accuracy. If the physical measurement model were used as the internal calibration model to solve for each parameter, the calculation equations would be seriously ill-conditioned, and neither the reliability nor the accuracy of the calibration could be ensured. Therefore, although the camera physical measurement model is rigorous in theory, it is not suitable for on-orbit internal calibration.
To solve this problem, a detector directional angle model is adopted as the internal calibration model (as shown in Fig. 3). By calibrating the tangent of the directional angle of each CCD detector in the reference coordinate system determined by the external calibration, the LOS of each CCD detector in the inertial coordinate system can be determined accurately.
A polynomial model can be used to describe the tangents of the directional angles of the CCD detectors. Because the narrow field of view keeps the internal distortion low-order, we use an individual third-order polynomial, which has high orthogonality and low correlation, as the internal calibration model.
(x, y) is the CCD detector's image-plane coordinate (we define the origin as the center of the CCD plane), and the two sets of polynomial coefficients are the internal calibration parameters.
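The third-order polynomial model above can be sketched as follows; the coefficient vectors `ax` and `ay` are hypothetical names for the two sets of ten internal calibration parameters:

```python
import numpy as np

def design_row(x, y):
    """The ten third-order bivariate polynomial terms in the image-plane
    coordinates (x, y), with the origin at the center of the CCD plane."""
    return np.array([1.0, x, y, x * x, x * y, y * y,
                     x**3, x * x * y, x * y * y, y**3])

def directional_angle_tangents(x, y, ax, ay):
    """Tangents of the two directional angles of a detector, each a
    third-order polynomial in (x, y) with coefficient vectors ax, ay."""
    row = design_row(x, y)
    return row @ ax, row @ ay
```

Because the model is linear in the coefficients, each batch of identified stars yields an ordinary least-squares problem for `ax` and `ay`.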
Then Eq. (10) can be transformed:
According to the identified reference stars, internal calibration parameters can be obtained based on the installation matrix produced by external calibration.
Because some internal errors are absorbed into the external calibration results, the reference coordinate system cannot fully represent the real camera coordinate system. However, this does not affect the calculation of the internal calibration parameters, owing to the high correlation between the external and internal calibration parameters that results from the narrow field angle. In addition, the proposed flexible internal calibration model can well compensate the residual errors caused by the external calibration, which lowers the precision requirement on the external calibration. A high-accuracy LOS of each CCD detector in the inertial coordinate system can be obtained by combining the internal and external calibration parameters as follows:
Once the internal parameters are determined accurately on-orbit, there is no need to update them frequently, because they are relatively stable and their determination is computationally costly. The external parameters, however, should be updated before the spacecraft photographs beacons, owing to changes in ambient conditions.
3.2 Estimation of the camera parameters
With star identification algorithms, we can obtain the right ascension and declination of the reference stars in the images taken by the navigation camera for calibration, and acquire their unit direction vectors in the inertial coordinate system by Eq. (9). The 2D coordinates of the stars on the image plane are obtained by centroid extraction algorithms. Correctly identified stars act as control points from which the calibration parameters can be estimated. Noise in the ADCS attitude and in star centroiding constitutes the main random error and should be suppressed in the calibration.
To filter out the noise and meet the accuracy requirement, batch and sequential estimation (B&S estimation) are combined to calculate the parameters. Batch estimation works in a least-squares scheme, while sequential estimation uses a Kalman filter. The batch method is employed once a number of measurements have been accumulated, whereas the sequential method updates the estimate one measurement at a time. A new estimator is therefore developed by combining least squares (LS) with the Kalman filter: the LS estimates of the calibration parameters are determined from Eqs. (18) and (23), and these LS estimates are then used as "measurements" in Eqs. (20) and (25), where a recursive Kalman filter suppresses the noise in the LS estimates and fuses many of them into a best estimate. In this way, the peak memory consumption is that of a single LS batch estimation, yet the iterative refinement yields a much better estimate than a single LS batch estimation could.
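A minimal sketch of this B&S scheme, assuming NumPy: each batch of star measurements is reduced by least squares, and the resulting LS estimate is fed as a "measurement" into a Kalman filter whose sensitivity and transition matrices are identities, since the calibration parameters are treated as constant:

```python
import numpy as np

def batch_ls(A, b):
    """One batch least-squares solve over a block of star measurements
    (A: design matrix, b: residual observations)."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

class SequentialFilter:
    """Kalman filter with identity transition and sensitivity matrices:
    each LS batch estimate is treated as a noisy measurement of the
    constant parameter vector and blended with the running estimate."""
    def __init__(self, x0, P0, R):
        self.x = np.asarray(x0, float)   # current best estimate
        self.P = np.asarray(P0, float)   # estimate covariance
        self.R = np.asarray(R, float)    # covariance of LS-estimate noise

    def update(self, z):
        """Blend one LS batch estimate z into the running estimate."""
        K = self.P @ np.linalg.inv(self.P + self.R)          # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P
        return self.x
```

Only one batch of measurements is ever held in memory at a time, which matches the paper's point that the peak memory cost is that of a single LS batch.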
In this practice, the external parameters are calculated via B&S estimation first, and the internal parameters are then estimated accurately based on the reference coordinate system determined in the external calibration, as Fig. 4 shows. Meanwhile, the external parameters can also be updated based on the determined internal parameters if needed.
In Eq. (16), F is the residual error vector in the x-axis direction of the camera frame, and G is the residual error vector in the y-axis direction of the camera frame.
The external calibration parameters and the internal calibration parameters are collected into vectors, and
3.2.1 Calibration on external parameters in B&S estimation
To determine the external calibration parameters, we assume the initial internal calibration parameters are "true". We initialize the external and internal calibration parameters with the on-ground calibration values. In the external parameter estimation, we define the number of iterations and the number of reference stars used in each iteration of the batch estimation. The reference stars for a batch estimation may be distributed over multiple images taken in succession, because the number of reference stars in a single image is limited.
3.2.1.1 Batch estimation
A weight is assigned to the observation of each reference star in each iteration of the external calibration.
The LS estimate is calculated by the least-squares method:
3.2.1.2 Sequential estimation
Kalman filter for external calibration includes measurement equation and state equation as Eq. (20).
In Eq. (20), the new measurement in the measurement equation is the LS estimate obtained from the batch estimation by Eq. (19), so the sensitivity matrix is the identity matrix. The measurement noise is the noise in the LS estimate, with its corresponding covariance matrix. The state equation expresses the variation tendency of the LS estimates; as the states of the LS estimates in each iteration are relatively constant (in other words, the best estimate of the Kalman filter is constant), the transition matrix is also the identity matrix.
Then we can use the Kalman filter to obtain the updated estimate by Eq. (21) as:
Through iteration, the Kalman filter filters out the noise in the LS estimates and combines many LS estimates to achieve the best estimate of the external parameters.
3.2.1.3 Parameters modification
We repeat the B&S estimation iteratively, until the update falls below a small positive threshold, to calibrate the external parameters:
3.2.2 Calibration on internal parameters in B&S estimation
After the external calibration, we take the modified external parameters as true, leaving the internal calibration parameters to be calibrated. In the internal calibration, we define the number of reference stars used in each iteration of the batch estimation.
3.2.2.1 Batch estimation
The estimated internal calibration parameters are obtained in each iteration, together with the vector of each reference star in the camera frame calculated from the current parameter estimates.
A weight is assigned to the observation of each reference star in each iteration of the internal calibration.
The LS estimate is calculated by the least-squares method:
3.2.2.2 Sequential estimation
Kalman filter for internal calibration includes measurement equation and state equation as Eq. (25).
In Eq. (25), the new measurement in the measurement equation is the LS estimate obtained from the batch estimation by Eq. (24), so the sensitivity matrix is the identity matrix. The measurement noise is the noise in the LS estimate, with its corresponding covariance matrix. The state equation expresses the variation tendency of the LS estimates; as the states of the LS estimates in each iteration are relatively constant (in other words, the best estimate of the Kalman filter is constant), the transition matrix is also the identity matrix.
Then we can use the Kalman filter to obtain the updated estimate by Eq. (26) as:
Through iteration, the Kalman filter filters out the noise in the LS estimates and combines many LS estimates to achieve the best estimate of the internal parameters.
3.2.2.3 Parameters modification
We perform the B&S estimation iteratively, until the update falls below a small positive threshold, to obtain the modified internal parameters:
3.2.3 Detecting and rejecting misidentified stars
Identifying reference stars correctly is a precondition for on-orbit calibration. However, the long exposure needed to image dim reference stars may blur them, owing to vibration of the spacecraft during attitude adjustment, and blurred stars make accurate centroid extraction more difficult. In addition, uncalibrated navigation camera parameters may sharply reduce the success rate of star identification, because most star identification algorithms depend on the camera parameters [17,18]. Misidentified stars may hinder the calibration considerably. Therefore, it is necessary to detect and reject misidentified stars before calibration.
We validate the normality of the measurements according to statistical information on the residual errors in the batch estimation, and then reject the reference stars most likely to be misidentified. The principle is as follows.
The coordinates of the reference stars in the image plane obtained by centroid extraction, and the corresponding unit direction vectors from Eq. (9) obtained by star identification, are put into Eqs. (16) and (17), and their residual errors are calculated before the batch estimation.
In external calibration, as
In internal calibration, as
The residual error of each reference star can be treated as a vector whose magnitude can be written as:
The mean and standard deviation of the residual magnitudes are easily obtained. A normal probability distribution can then be defined to validate the normality of each residual. To judge whether a star is misidentified, we compare its absolute deviation from the mean with a threshold. When the deviation exceeds the threshold, the star may well be misidentified, and its weight should be set to 0 to avoid a negative effect on the estimation.
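This "compare the absolute deviation from the mean with a threshold" rule can be sketched as follows, with the threshold taken as a hypothetical multiple `c` of the standard deviation of the residual magnitudes:

```python
import numpy as np

def reject_misidentified(residual_norms, c=3.0):
    """Zero the weight of any star whose residual magnitude deviates
    from the mean by more than c standard deviations, so it does not
    enter the batch estimation. c is a hypothetical threshold constant;
    the paper's Eq. (31) defines the actual threshold."""
    r = np.asarray(residual_norms, float)
    mean, std = r.mean(), r.std()
    return np.where(np.abs(r - mean) > c * std, 0.0, 1.0)
```

Because a single gross outlier inflates both the mean and the standard deviation, this simple test works best when misidentified stars are a small fraction of the batch, consistent with the paper's observation that the method degrades at low identification rates.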
The threshold in Eq. (31) is set accordingly.
Combining least-squares batch estimation with Kalman-filter sequential estimation reduces the memory load and allows more reference stars to be used in the estimation. Noise in the measurements is therefore suppressed and high calibration accuracy is reached. Moreover, misidentified stars can be detected and rejected in the batch estimation, making the calibration more stable and reliable in the complex deep-space environment.
3.3 Accuracy assessment
The accuracy of a beacon's LOS in the inertial frame, calculated from the internal and external parameters, indicates the accuracy of the on-orbit calibration. The statistical error between the calculated and true LOS in the inertial frame serves as an index of the calibration accuracy; the true LOS can be calculated by Eq. (9), and the calculated LOS by Eq. (15).
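The angular deviation between the calculated and true inertial LOS, used here as the accuracy index, can be computed as:

```python
import numpy as np

def los_error_arcsec(v_calc, v_true):
    """Angular separation in arcseconds between the calculated and true
    inertial LOS unit vectors; the clip guards against round-off
    pushing the dot product outside [-1, 1]."""
    c = np.clip(np.dot(v_calc, v_true), -1.0, 1.0)
    return np.degrees(np.arccos(c)) * 3600.0
```

Averaging this quantity over many simulated stars gives the mean LOS deviation reported in the experiments below.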
4 Experiments and results
To verify the proposed approach, we designed three experiments. Experiment 1 tests the overall performance and the effectiveness of the detector directional angle model in modelling and compensating internal distortion and the residual error of the external calibration. Experiment 2 tests the performance of recalibrating the external parameters on the basis of the calibrated internal parameters before photographing beacons. Experiment 3 verifies the necessity and effectiveness of detecting and rejecting misidentified stars in on-orbit calibration.
Real star image data obtained by a navigation camera on-orbit are unavailable at present, because such data are rarely downloaded from spacecraft. The simulated navigation camera model and error sources are determined by Eqs. (3)-(6) and Eq. (8), and the true and initial parameters are set (Table 1). The camera internal parameters are set according to the medium-resolution navigation camera of the Deep Impact mission. The field angle of the camera is about 10, and the angular resolution of one pixel is about 4 arcsec. We use the physical measurement model (Eq. (6)) to simulate the internal distortion of the camera and evaluate the calibration accuracy, and the detector directional angle model (Eq. (13)) to calibrate the internal distortion. The Tycho-2 Catalogue of about 2.5 million of the brightest stars, with visual magnitudes brighter than 12 and revised right ascensions and declinations, is taken as the reference star database. Using the true camera specification, installation angles, and star catalog, the positions of the reference stars within the field of view of the navigation camera can be simulated once the attitudes of the navigation camera are fixed.
Noise in star centroiding and in the attitudes provided by the ADCS are the main random error sources and should be taken into account in the simulation. The coordinate values of the reference stars in the star images used for calibration are simulated according to Eqs. (3)-(6) and Eq. (8), with zero-mean centroiding noise of standard deviation 0.3 pixel. In addition, the corresponding attitudes provided by the ADCS are simulated with zero-mean attitude determination noise of standard deviation 3 arcsec. In order to evaluate the calibration accuracy, we simulated another 100 star images without centroiding or attitude noise for calculating the average deviation of the LOS determined by the calibrated internal and external parameters.
4.1 Experiment 1
Owing to the deviation between the true and initial values of a parameter, the same CCD detector of a navigation camera may have a very different LOS in the inertial coordinate system defined by the internal and external parameters. Via on-orbit calibration, a high-precision LOS of each CCD detector can be obtained from the calibrated parameters. We simulated sequence images with the 2D coordinate values of the corresponding reference stars by setting a random initial attitude and angular velocity; the sampling frequency was set relatively low to enlarge the attitude change so that the reference stars in the sequence images would be located evenly in the field of view. Given the numbers of external and internal calibration parameters, we set the number of reference stars per batch at 30 for the external and 100 for the internal calibration.
To assess the ability of the directional angle model to compensate the internal camera error, we assume that the external parameters are known and only internal calibration is performed (shown in Figs. 5(a)-5(c)). Figure 5(b) shows that the biggest residual deviation of the LOS in the camera frame over all pixels after internal calibration is below 0.2 arcsec; in other words, it is smaller than 0.05 pixel according to the angular resolution of one pixel. As Fig. 5(c) shows, greater calibration accuracy is achieved as the B&S estimation iterates. The effectiveness of the directional angle model in describing the camera internal distortion is thus verified.
To judge the overall performance of the proposed on-orbit calibration approach, we assume that both the external and internal parameters are undetermined, so that both external and internal calibration must be performed. The resulting calibration parameters are shown in Table 2 and Figs. 5(d)-5(f). In the external calibration, an obvious deviation appears between the estimated and true external parameters (Tables 1 and 2), because the estimated installation angles compensate part of the internal camera errors and determine a generalized rather than the true camera frame. The internal parameters estimated in this generalized camera frame achieve higher and higher accuracy through iterative computation with more star images (shown in Fig. 5(f)). Moreover, the biggest residual deviation of the LOS in the inertial frame determined with the calibrated parameters is below 0.3 arcsec (shown in Fig. 5(e)), or 0.075 pixel. Therefore, a high-accuracy LOS in the inertial frame can be determined in the generalized camera frame after the external and internal calibrations.
In order to evaluate the advantage of the B&S estimation method, the traditional least-squares estimation method was run on the same simulated data used in Figs. 5(d)-5(f). As the accuracy of the internal calibration directly influences the final accuracy of the on-orbit calibration, the estimation of the internal parameters by the LS method with different numbers of reference stars is compared with the performance of B&S estimation. The result is shown in Fig. 6.
In Fig. 6, because the number of reference stars in each iteration of the internal calibration in B&S estimation is set to 100, after 10 iterations the number of reference stars participating in the B&S estimation is 1000 (1K). We then use the same 1000 reference stars with LS estimation for the internal calibration as a comparison. In the same way, we can use the same number of reference stars for B&S or LS estimation and compare their accuracy; the accuracy curve is shown in Fig. 6. Clearly, B&S and LS estimation with the same number of reference stars achieve quite similar accuracy, and the accuracy improves as more stars participate in the estimation. The essential difference is that the memory consumption of B&S estimation remains constant across iterations no matter how many stars participate, with the peak consumption always that of a single batch estimation. LS estimation, in contrast, needs ever larger memory storage as more reference stars participate, and the higher-dimensional matrices require more computing power. As memory storage and computing power are very limited on a spacecraft, the B&S estimation method makes it feasible to include more star images in the estimation and thus achieve higher accuracy.
4.2 Experiment 2
External calibration is demanded more frequently than internal calibration, as the external parameters are relatively variable. To evaluate the performance of external recalibration based on the calibrated internal parameters, we designed true and initial external parameters (shown in Table 3). The internal parameters and the initial external parameters are taken from Experiment 1. Sequence images are simulated for B&S estimation of the external parameters, with the number of reference stars per batch set at 30.
The iteration converges satisfactorily: as shown in Figs. 7(a) and 7(c), the calibration accuracy increases with the number of Kalman filtering iterations, with a small fluctuation at the beginning. Distinct deviations occur between each true external parameter and the corresponding estimate obtained in the 100th iteration (shown in Table 3); however, the installation matrix determined by Eq. (12) reaches high accuracy (shown in Fig. 7(b)), because a high-accuracy installation matrix does not require high accuracy in every installation angle. When more star images are used in the iterations to estimate the external parameters by the B&S estimation method, the installation matrix becomes more accurate (shown in Fig. 7(c)). The residual deviation surface of the LOS in Fig. 7(b) is similar to that obtained in Experiment 1, which means that the external calibration eliminates most of the external error, the remaining residual deviation stemming from the internal calibration in Experiment 1.
4.3 Experiment 3
Misidentified stars bring incorrect vectors into the estimation and decrease its accuracy. To test the performance of the proposed method in detecting and rejecting misidentified stars, different correct identification rates of the reference stars were designed by altering the right ascension and declination of adjacent stars in the star images to create misidentified stars. External calibration alone, as designed in Experiment 2, and internal calibration alone, as designed in Experiment 1, are used to test the performance.
As shown in Fig. 8(a), the external and internal calibration become wildly inaccurate even when the correct identification rate reaches 96.7% in the external calibration and 99% in the internal calibration, i.e., when only one misidentified star exists among the 30 reference stars of the external batch estimation or the 100 reference stars of the internal batch estimation. This is in fact quite likely to happen, because star identification algorithms are not totally reliable. Batch estimation is the foundation of the sequential estimation; therefore, such disturbing data should be discarded.
We then enabled the detection and rejection of misidentified stars. As shown in Fig. 8(b), when the identification rate is above about 65%, the accuracy of both external and internal calibration remains high regardless of the incorrect vectors caused by misidentified stars. However, the method cannot perform well when the identification rate is too low, because too many gross errors make it difficult to set an appropriate threshold in Eq. (31). Nevertheless, the proposed method effectively eliminates gross errors during estimation in most cases, and the estimation results remain consistent.
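The rejection step can be illustrated with a minimal sketch. Here the gross-error test is approximated by a robust residual threshold (median plus a multiple of the median absolute deviation); the exact threshold form of Eq. (31) is not reproduced here, so the multiplier k and the residual definition below are illustrative assumptions.

```python
import numpy as np

def reject_misidentified(residuals, k=3.0):
    """Flag reference stars whose LOS residuals look like gross errors.

    residuals : (N,) array of angular residuals (rad) between the measured
        and catalog-predicted star directions.
    k : threshold multiplier (assumed for illustration; the paper's
        Eq. (31) defines its own threshold).
    Returns a boolean mask of stars to keep for the batch estimation.
    """
    med = np.median(residuals)
    # Median absolute deviation is robust to a minority of gross errors,
    # unlike the sample standard deviation.
    mad = np.median(np.abs(residuals - med))
    sigma = 1.4826 * mad  # consistent with std under Gaussian noise
    return np.abs(residuals - med) <= k * max(sigma, 1e-12)
```

As the experiment suggests, such a threshold works only while correctly identified stars dominate the sample; once gross errors become the majority, the robust spread estimate itself is contaminated and no threshold separates them cleanly.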
We proposed a stepwise calibration combined with batch and sequential (B&S) estimation, with which on-orbit autonomous calibration of a navigation camera is realized by estimating the external parameters first and then the internal parameters in the generalized camera frame determined beforehand by the external parameters. The B&S estimation lowers the requirement on on-orbit computing power; in addition, a gross-error rejection method should be combined with the estimation to guarantee its stability.
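The memory saving of the sequential stage can be seen in a minimal sketch of a recursive (Kalman-style) least-squares update: each new star image contributes one linearized measurement batch and is then discarded, instead of all images being stored for a single large batch solve. The linearized model H x ≈ z below is a generic assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def sequential_update(x, P, H, z, R):
    """One recursive least-squares update with a new measurement batch.

    x : (n,) current parameter estimate (e.g., calibration parameters)
    P : (n, n) covariance of the current estimate
    H : (m, n) linearized measurement matrix for the new star image
    z : (m,) measurement residual (observed minus predicted)
    R : (m, m) measurement noise covariance
    """
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # gain weighting new data vs. prior
    x = x + K @ z                    # refine the estimate
    P = (np.eye(len(x)) - K @ H) @ P # shrink the uncertainty
    return x, P
```

Because only x and P persist between star images, the memory cost is independent of how many images are processed, which is the property the B&S scheme exploits on orbit.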
In three validation experiments, the results indicate that the LOS of each CCD detector can be obtained with high accuracy using the external and internal calibration parameters. The B&S estimation reaches higher accuracy as the iteration proceeds, even when misidentified reference stars exist, and it can process more star images in the estimation without memory restrictions. Overall, the methods proposed in this paper have proven accurate, robust, and effective for on-orbit calibration in optical autonomous navigation in deep space.
The authors would like to thank our colleagues in the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing. This work was supported by the National Basic Research Program of China (973 Program) (2014CB744201), the National Natural Science Foundation of China (NSFC) (41371430, 91438111), and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1278).
References and links
1. W. M. Owen, Jr., “Methods of optical navigation,” in Spaceflight Mechanics 140 (2011).
2. J. M. Rebordão, “Space optical navigation techniques: an overview,” in 8th Ibero-American Optics Meeting/11th Latin American Meeting on Optics, Lasers, and Applications (International Society for Optics and Photonics, 2013).
3. S. Li, R. K. Lu, L. Zhang, and Y. M. Peng, “Image processing algorithms for deep-space autonomous optical navigation,” J. Navig. 66(4), 605–623 (2013). [CrossRef]
4. J. A. Christian and G. E. Lightsey, “Onboard image-processing algorithm for a spacecraft optical navigation sensor system,” J. Spacecr. Rockets 49(2), 337–352 (2012). [CrossRef]
5. J. I. Kawaguchi, T. Hashimoto, T. Misu, and S. Sawai, “An autonomous optical guidance and navigation around asteroids,” Acta Astronaut. 44(5), 267–280 (1999). [CrossRef]
6. J. E. Riedel, S. Bhaskara, S. Desai, D. Han, B. Kennedy, and G. W. Null, “Autonomous optical navigation DS1 technology validation report,” Jet Propulsion Laboratory, California, USA (2000).
7. D. L. Hampton, J. W. Baer, M. A. Huisjen, C. C. Varner, A. Delamere, D. D. Wellnitz, and K. P. Klaasen, “An overview of the instrument suite for the Deep Impact mission,” Space Sci. Rev. 117(1–2), 43–93 (2005). [CrossRef]
8. M. P. Hughes and C. N. Schira, “Deep impact attitude estimator design and flight performance,” Adv. Astronaut. Sci. 125(441), 042802 (2006).
9. J. Oberst, B. Brinkmann, and B. Giese, “Geometric calibration of the MICAS CCD sensor on the DS1 (Deep Space One) spacecraft: laboratory vs. in-flight data analysis,” Int. Arch. Photogramm. Remote Sens. 33(B1), 221–230 (2000).
10. M. A. Samaan, T. Griffith, P. Singla, and J. L. Junkins, “Autonomous on-orbit calibration of star trackers,” in Core Technologies for Space Systems Conference (Communication and Navigation Session) (2001).
15. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]
16. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014). [CrossRef]
17. M. Kolomenkin, S. Pollak, I. Shimshoni, and M. Lindenbaum, “Geometric voting algorithm for star trackers,” IEEE Trans. Aerosp. Electron. Syst. 44(2), 441–456 (2008). [CrossRef]
18. J. Yang, G. J. Zhang, and J. Jiang, “A star identification algorithm for un-calibrated star sensor cameras,” Opt. Technol. 34, 26–32 (2008).
19. HEASARC, “TYCHO2,” http://heasarc.nasa.gov/W3Browse/all/tycho2.html.