## Abstract

The star tracker is one of the most promising attitude measurement devices used in spacecraft due to its extremely high accuracy. However, high dynamic performance remains one of its constraints: smearing appears, making it more difficult to distinguish the energy-dispersed star point from the noise. An effective star acquisition approach for motion-blurred star images is proposed in this work. The correlation filter and the mathematical morphology algorithm are combined to enhance the signal energy and evaluate slowly varying background noise. The star point can be separated from most types of noise in this manner, making extraction and recognition easier. Partial image differentiation is then utilized to obtain the motion parameters from only one image of the star tracker based on the above process. Considering the motion model, a reference window is adopted to perform centroid determination. Star acquisition results on real on-orbit star images and laboratory validation experiments demonstrate that the method described in this work is effective and that the dynamic performance of the star tracker can be improved, with more identified stars and guaranteed position accuracy of the star point.

© 2013 OSA

## 1. Introduction

The development of earth-observing and deep-space exploration satellites has resulted in an increase in the requirements for attitude measurement accuracy. The star tracker is an undoubtedly essential choice due to its high accuracy [1]. The main steps for attitude determination of a star tracker under a celestial coordinate system include star extraction, star point centroid determination, star identification, and attitude estimation [2]. Star acquisition discussed in this work includes star extraction and star point centroid determination. The pointing accuracy of the star tracker depends largely on the number of identified stars and the centroiding accuracy of the star points in a star image [1, 3]. An effective star acquisition method is crucial to the entire system and to ensuring the identified star number and star point position accuracy. Star extraction methods and centroid determination algorithms have matured under static conditions based on CCD or APS CMOS image sensors [4–9]. However, when the star tracker is under dynamic conditions, smearing of the star point can occur during the exposure time, leading to unfavorable results. First, the limited star energy disperses into more pixels, so the signal to noise ratio (SNR) of blurred star points may decrease. Star extraction and identification become more difficult or even fail when star points are submerged by noise at a high angular velocity. Second, the accuracy of the star point position may decrease, causing inaccuracies in attitude measurement. Thus, requirements for threshold estimation have become even more demanding, not to mention the subsequent centroid determination method. The increase in requirements has likewise made studies on star acquisition methods for motion-blurred star images under high dynamic conditions both urgent and meaningful.

In this work, we present an effective star acquisition approach for motion-blurred star images. This method is applicable to both CCD and CMOS sensors, but is more useful for the APS CMOS-based star tracker, as CMOS sensors usually have more noise that may affect dynamic performance. We use a combination of the correlation filter and the mathematical morphology algorithm in Section 2 to perform denoising, star point energy enhancement, background noise estimation, and adaptive threshold setting. In this manner, the image lag of the APS CMOS sensor can be a positive factor at the beginning, as it can help us extract smeared star point information, the lag pixels being always adjacent to the star point under dynamic conditions. Partial image differentiation is utilized in Section 3 to obtain motion information, including motion direction and blur extent, from only one image of the star tracker based on the extraction results. Further processing or restoration can then be completed based on the motion model. In our work, we adopt the motion parameters to provide a reference window for centroid determination and eliminate the effects of image lag, as discussed in Section 4.

The entire star acquisition procedure can be summarized in the flow diagram in Fig. 1.

Real on-orbit star images are processed using the approach proposed in this work. The processing results and laboratory validation experiments demonstrate that our method makes good use of the motion characteristics and effectively solves the problem of dynamic performance deterioration caused by motion, guaranteeing both the identified star number and the position accuracy of the star point.

## 2. Star extraction approach based on correlation filter and mathematical morphology algorithm

The correlation filter and the open operation of mathematical morphology are adopted in the star extraction method. The correlation filter is used for preliminary noise elimination. The open operation is adopted to evaluate the background and to set the extraction threshold. This procedure is the analytical foundation for subsequent processing.

#### 2.1 Denoising and star point enhancement

Denoising of the star image may be ignored under static conditions because the star energy is usually sufficiently obvious to be extracted. However, denoising is essential for further processing under dynamic conditions. Denoising and signal enhancement have been widely studied in the field of image processing. Gonzalez [10] described different methods, but not all image processing methods for cameras are suitable for the star image because of their different imaging targets, application environments, image characteristics, and evaluation systems. First, as the star centroid position is the most important concern of the star tracker, the processing method should be a linear and symmetric filter to ensure that the weighted centroid is not shifted after processing. Non-linear filters such as the median filter [11, 12] or histogram equalization [13] could cause unpredictable deviations of the centroid position. Second, the processing method should have an enhancement effect on the star signal while providing noise suppression.

From the above considerations, we choose the correlation filter with a Gaussian distribution filter template. We use *f*(*x*, *y*) to represent the original gray image of the star tracker, with *h*(*x*, *y*) as the operator of the correlation filter. Similar to the convolution operation, the one-dimensional correlation function between *f* and *h* can be expressed as Eq. (1):

$$(f\circ h)(x)=\sum\limits_{t}f(t)h(x+t)$$

The discrete two-dimensional correlation function between *f* and *h* can be expressed as Eq. (2):

$$(f\circ h)(x,y)=\sum\limits_{s}\sum\limits_{t}f(s,t)h(x+s,y+t)$$

where *x* = 0, 1, 2,…, *M*−1; *y* = 0, 1, 2,…, *N*−1. The formula shows that the most important difference between convolution and correlation is that correlation requires no folding (flipping) of the template.

The process of correlation filtering of the original star image *F _{org}* can be expressed as Eq. (3):

$${F}_{filter}={F}_{org}\circ {H}_{filter}$$

The filter template *H _{filter}* adopts a 3$\times $3 window and is basically consistent with the star point energy distribution. On one hand, the correlation filter is a linear low-pass filter with a good filtering effect on Gaussian and salt-and-pepper noise. On the other hand, through the correlation filter, a peak value appears at a highly correlated position. Thus, the star point energy is enhanced without changing the centroid.
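As a concrete illustration of this step, the following sketch applies a 3$\times $3 Gaussian correlation template to a star image. The function names, kernel width σ, and boundary mode are our own illustrative choices, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import correlate

def gaussian_template(size=3, sigma=1.0):
    # Normalized Gaussian kernel, roughly matching the star point
    # energy distribution (size and sigma are illustrative choices).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def correlation_filter(image, sigma=1.0):
    # Linear, symmetric low-pass filtering: suppresses Gaussian and
    # salt-and-pepper noise while raising a peak where the local
    # neighborhood matches the Gaussian star profile. Because the
    # kernel is symmetric, the weighted centroid is not shifted.
    return correlate(image.astype(float), gaussian_template(3, sigma),
                     mode="nearest")
```

Note that because the template is symmetric, correlation and convolution coincide here, which is why no folding of the template is needed.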

#### 2.2 Adaptive extraction threshold evaluation

Although salt-and-pepper noise and Gaussian noise can be suppressed through the correlation filter, gradually varying background noise remains and complicates threshold evaluation and star extraction. Global threshold methods such as Otsu's method [14] are no longer suitable under dynamic conditions. In recent years, several studies have focused on adaptive local threshold segmentation methods [15–19]. The most popular method uses a linear or nonlinear combination of surrounding pixel gray values to determine the background value of one pixel [18]. However, when the pixel is near a star point, the evaluation result tends to be affected by the star gray value and thus cannot obtain a satisfactory evaluated background. The open operator of mathematical morphology solves this problem well. The two-dimensional image morphological operation is described as follows.

We use *c*(*x*, *y*) to represent the gray value of the image at point (*x*, *y*) after the correlation filter, *b*(*x*, *y*) is the value of structural element *b* at point (*x*, *y*), *D _{b}* is the domain of *b*, and *D _{c}* is the domain of *c*.

Erosion is a basic operation of mathematical morphology. *c* eroded by *b* can be expressed as $c\Theta b$ in Eq. (4):

$$(c\Theta b)(x,y)=\underset{(s,t)\in {D}_{b}}{\min}\{c(x+s,y+t)-b(s,t)\}$$

Dilation is the inverse operation of erosion, which can be expressed as $c\oplus b$ in Eq. (5):

$$(c\oplus b)(x,y)=\underset{(s,t)\in {D}_{b}}{\max}\{c(x-s,y-t)+b(s,t)\}$$

The erosion operation is similar to a two-dimensional convolution, but uses subtraction instead of the convolution multiplication and takes the minimum value instead of the convolution summation.

The dilation operation is likewise similar to a two-dimensional convolution, but uses addition instead of the convolution multiplication and takes the maximum value instead of the convolution summation.

The open operation *t* between *c* and *b* can be calculated by the combination of the erosion and dilation operations: $t=(c\Theta b)\oplus b$. When the radius of structural element *b* is larger than the radius of the stellar image, the open operation can conveniently acquire the initial background *t* of the star image.

Figure 2 is a sketch map with a small-radius signal to provide an intuitive display. Supposing the pixel value 85 is signal, we can see that as long as the radius of *b* is larger than the signal radius in *c*, the open operation acquires the background without signal information.

Considering the blurred characteristics of the stellar image under dynamic conditions, the structural element *b* in this paper is set to a disk with a radius of 25 pixels. Values outside the structural element *b*(*x*, *y*) are set to zero, while values inside are set to one for convenient calculation. *t* expresses the evaluated slowly varying background noise and is not influenced by the star points. Background *B*(*x*, *y*) can be obtained through the simple average in Eq. (6), where *K* and *L* are the size of the average window:

$$B(x,y)=\frac{1}{KL}\sum\limits_{k=0}^{K-1}\sum\limits_{l=0}^{L-1}t(x+k,y+l)$$
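A minimal sketch of this background evaluation, assuming SciPy's grey-scale morphology. The helper names `disk` and `estimate_background` are hypothetical, and for simplicity a flat (boolean) disk footprint is used, which is equivalent to the zero/one structural element described above up to a constant offset:

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter

def disk(radius):
    # Boolean disk-shaped footprint for the structural element b.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    return xx**2 + yy**2 <= radius**2

def estimate_background(c, radius=25, avg_size=9):
    # Grey opening (erosion followed by dilation) with a disk larger
    # than the blurred star radius removes the star signal and keeps
    # only the slowly varying background t.
    t = grey_opening(c.astype(float), footprint=disk(radius))
    # Eq. (6): a simple K x L moving average smooths the opened image.
    return uniform_filter(t, size=avg_size)
```

The averaging window size `avg_size` is an illustrative choice; the paper does not state *K* and *L* numerically.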

Thus, the adaptive threshold for each pixel can be expressed as Eq. (7):

$$T(x,y)=B(x,y)+\sigma $$

where σ is a constant, with a typical value between 10 and 20. Taking *T* as the threshold, binarization and connected component analysis can be conducted further. Thus, star extraction is completed and the star identification method can be performed. Information on the labeled star point number *p*, such as the boundary of the star point area ${\mathbb{Z}}_{p}$, pixel coordinates (*x _{p,k}*, *y _{p,k}*), gray values ${\Gamma}_{p}$ of the labeled star points, and the star ID *N _{p}* corresponding to the star catalog, can be obtained. We use the result of the subtraction *g* between *c* and *B* in the following motion parameter analysis and centroid determination process to avoid the impact of the background noise. *g* can be expressed in Eq. (8), and the same *g* is applicable to both motion parameter estimation and centroid determination:

$$g(x,y)=c(x,y)-B(x,y)$$

As star points are distributed sparsely on the star image, the star information obtained above can provide an analysis window for the following method.
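Putting the adaptive threshold of Eq. (7) and the subtraction of Eq. (8) together, star extraction reduces to thresholding the background-subtracted image and labeling connected components. A minimal sketch, assuming SciPy's labeling routines (the function name and return layout are our own):

```python
import numpy as np
from scipy.ndimage import label, find_objects

def extract_stars(c, B, sigma=15):
    # Eq. (8): background-subtracted image g = c - B.
    g = c.astype(float) - B
    # Eq. (7): per-pixel adaptive threshold T = B + sigma,
    # so a pixel is signal when c > T, i.e. when g > sigma.
    mask = g > sigma
    # Connected component analysis yields the star point areas Z_p.
    labels, n_stars = label(mask)
    regions = find_objects(labels)  # bounding slices, one per star
    return g, labels, regions
```

The labeled regions provide the analysis windows ${\mathbb{Z}}_{p}$ for the motion parameter identification of the next section.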

## 3. Motion parameter identification based on partial image differentiation

The identification of motion parameters is significant for a motion-blurred image, as it characterizes the point spread function (PSF) of the blur, with which further processing or restoration can be conducted. We use the motion parameters to provide a reference window for centroid determination, enabling the blurred star point to be distinguished from the attached APS CMOS image lag. The star position is then determined by centroid determination within the reference window. Several blur parameter estimation methods have been investigated in previous studies. Cannon [20] used a square PSF and its periodic-zeros property in the spectral domain to deal with blurred images in the case of uniform linear motion. Blur extent was estimated by measuring the separations between the zeros. However, the assumption of zeros in the spectral domain is difficult to satisfy in various cases of motion degradation. Maximum likelihood estimation is used in references [21, 22] to model the original image, blur, and noise process. However, the method is incorporated into the restoration algorithm and requires heavy computation. Savakis and Trussell [23] proposed another blur identification method, which aimed to determine the best match between the restoration residual power and the expected residual power spectra by choosing the PSF from a collection of candidate PSFs. However, their method is too sensitive to noise, making it difficult to achieve good results.

Star points are sparsely distributed on the star image, so the motion feature usually cannot be detected over the entire star image. We propose an improved motion parameter identification method for motion-blurred images of the star tracker that takes reference from the solution by Yitzhaky et al. [24, 25] based on partial image differentiation. The method utilizes the fact that the homogeneity and smoothness of the blurred image are greater in the motion direction than in other directions. Motion reduces the high-frequency component in the motion direction and has limited influence in other directions. When we apply a directional high-pass filter (image differentiation), if the filtering direction is the motion direction, the summation of gray values in this direction is supposed to be the minimum.

Image rotation based on bicubic interpolation is adopted to make the motion direction coincide with the horizontal axis for convenient calculation. We use *g*(*i*, *j*) _{α} to express the processed blurred image to be analyzed. *g*(*i*, *j*) _{α} in this work does not include the entire star image, but rather represents the 3$\times $3 window from ${\mathbb{Z}}_{p}$ because of the characteristics of the star points. The center of the analysis window is located on pixel ${\zeta}_{i,j}$, which has the maximum pixel value of ${\mathbb{Z}}_{p}$. The rotated result can be expressed as *g'*(*u*, *v*) _{0}. The result after the directional differential can be expressed as Eq. (9):

Summation of gray value in horizontal direction can be expressed as Eq. (10):

The motion direction *α* can be calculated as Eq. (11):

After obtaining the motion direction *α*, a first-order differential can be conducted in the horizontal direction after rotating the region of ${\mathbb{Z}}_{p}$. Here, we use the Sobel operator instead of the plain first-order differential to resist noise. A correlation curve can be obtained by utilizing autocorrelation, and a positive peak will appear on the correlation curve. A pair of conjugate negative correlation peaks will likewise appear on both sides of the peak position. The distance between the two negative correlation peaks is equal to twice the motion extent.
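The direction search described above can be sketched as follows: rotate the analysis window over candidate angles, apply a horizontal first difference as the directional high-pass filter, and take the angle that minimizes the summed response (the idea behind Eqs. (9)–(11)). The step size and window handling are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import rotate

def motion_direction(window, angles=np.arange(0.0, 180.0, 2.0)):
    # For each candidate angle, rotate the window (bicubic, order=3)
    # so that the candidate direction becomes horizontal, then sum the
    # absolute horizontal first differences. The blurred image is
    # smoothest along the true motion direction, which therefore
    # minimizes the summed response.
    totals = [np.abs(np.diff(rotate(window.astype(float), -a,
                                    reshape=False, order=3),
                             axis=1)).sum()
              for a in angles]
    return float(angles[int(np.argmin(totals))])
```

In practice a coarse step (e.g. 10°) followed by a fine step around the minimum, as in Figs. 6 and 7, saves computation.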

The Sobel operator can be expressed as:

The process of image differential can be expressed as Eq. (12):

where $\chi {(i,j)}_{0}$ represents the rotated result of ${\mathbb{Z}}_{p}$, * represents the convolution operation, and ${\chi}^{\prime}{(i,j)}_{0}$ is the differential result.

The autocorrelation result *S*(*i*, *j*) in line *i* can be obtained as Eq. (13):

*j* = − (*N*−1),…, −1, 0, 1,…, *N*−1; *N* is the row number of $\chi {(i,j)}_{0}$.

Usually, averaging several lines leads to more reliable results, and two or three lines around the line number of ${\zeta}_{i,j}$ are enough. The calculation can be expressed in Eq. (14):

where *i* = 0, 1,…, *M*−1; *M* is relevant line number.

The motion extent can be obtained conveniently from the curve of *S _{sum}* by considering the distance between the two negative correlation peaks.
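The extent detection of Eqs. (12)–(14) can be sketched with a plain first difference in place of the Sobel row (the principle is identical): for an ideal uniform blur, the autocorrelation of the derivative has its two negative peaks at plus and minus the motion extent. The function name and peak-picking strategy are our own:

```python
import numpy as np

def motion_extent(rows):
    # rows: 2D array holding a few rows of the rotated star window
    # around zeta_{i,j}. Sum the autocorrelations of each row's
    # horizontal derivative (Eqs. (13)-(14)); the two negative peaks
    # of the summed curve are separated by twice the motion extent.
    n = rows.shape[1]
    s_sum = np.zeros(2 * n - 1)
    for row in rows:
        d = np.diff(row, prepend=row[:1])          # first-order differential
        s_sum += np.correlate(d, d, mode="full")   # full autocorrelation
    lags = np.arange(-(n - 1), n)
    neg = np.argsort(s_sum)[:2]                    # two most negative samples
    return abs(lags[neg[0]] - lags[neg[1]]) / 2.0
```

For a boxcar-shaped blur of width *L*, the derivative is a +1/−1 impulse pair separated by *L*, so the negative peaks fall at lags ±*L* and the function returns *L*.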

## 4. Centroid determination method with reference window and adaptive threshold

Under high dynamic conditions, the star tracker moves significantly during the image exposure, so the centroid algorithm cannot be used directly. In addition, image lag will appear during the exposure time. Image lag is a common phenomenon that is difficult to avoid: it is an incomplete reset of the previous frame, in which saturated parts of the previous frame are not completely reset to the dark level at the beginning of the current frame. This always produces bright areas adjoining the stellar image. A reference window is necessary to determine the position of the pixel distribution and avoid image lag. The size and position of the reference window have to be adjusted according to the motion direction and extent. Gyro information or multi-frame accumulated star images can provide motion information, but our approach has the advantage of requiring only one star image and no other external information. Based on the star information obtained in Section 2 and the motion parameters in Section 3, we can determine the reference window ${\mathbb{R}}_{p}$ for centroid determination as shown in Fig. 3. The center of the reference window is ${\zeta}_{i,j}$, and the boundary of ${\mathbb{R}}_{p}$ is determined according to the motion direction and extent. Thus, the star point area ${\mathbb{Z}}_{p}$ obtained in Section 2 can be narrowed down to ${\mathbb{R}}_{p}$, thereby avoiding the effect of adjacent image lag. Centroid determination is performed on the adaptively thresholded image *g*(*x*, *y*) produced by Eq. (8), and the attitude matrix *A _{q}* is obtained after the attitude algorithm [26].
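A minimal sketch of windowed centroiding of this kind. The exact window geometry used in the paper is defined by Fig. 3, so the box shape, `margin` parameter, and function name here are our own illustrative assumptions:

```python
import numpy as np

def windowed_centroid(g, center, direction_deg, extent, margin=2):
    # Hypothetical reference window R_p: a box centered on the
    # brightest pixel zeta_{i,j}, elongated along the motion direction
    # by the blur extent, so adjacent image-lag pixels are excluded
    # before the weighted centroid is computed on g = c - B.
    ci, cj = center
    half_i = int(abs(extent * np.sin(np.radians(direction_deg))) / 2) + margin
    half_j = int(abs(extent * np.cos(np.radians(direction_deg))) / 2) + margin
    i0, j0 = max(ci - half_i, 0), max(cj - half_j, 0)
    win = g[i0:ci + half_i + 1, j0:cj + half_j + 1]
    # Intensity-weighted centroid inside the window, mapped back to
    # full-image coordinates.
    ii, jj = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    total = win.sum()
    return (win * ii).sum() / total + i0, (win * jj).sum() / total + j0
```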

We use the difference value (D-value) between the mean value of the signal and the mean value of the noise instead of the SNR to represent the difficulty level of star extraction. A comparison of the extracted star point position and the inverse-calculated star point position utilizing the obtained attitude matrix *A _{q}* is used to evaluate the accuracy of the star point position. This evaluation method is suitable for on-orbit star images and can represent the accuracy of attitude measurement.

## 5. Real on-orbit star image analysis and processing result

The image is captured from the LEO satellite, and detailed information on the image is shown in Table 1.

The following figures display the results with and without our extraction method. Red numbers in the figures represent extracted stars and green numbers represent identified stars.

Figure 4 and Fig. 5 show that more stars can be extracted and identified using our star extraction method. Panels (b), (c), and (d) of Fig. 4 and Fig. 5 show the gray values of one star point. The star point is difficult to observe in Fig. 4 and is more obvious in Fig. 5, as the energy of the star point is enhanced and the noise suppressed with the method in Section 2.

The following table provides detailed information of recognized star points.

Table 2 shows that with our extraction method, some stars that previously could not be extracted can now be extracted and identified as a result of the improvement in the D-value.

Motion direction results are shown in Fig. 6 and Fig. 7. An approximate range can be obtained from Fig. 6 with a coarse step, and the accurate motion direction can be obtained using the fine step in Fig. 7.

Figure 6 shows that the minimum value occurs at 50° with a 10° step; thus, the motion direction is in the range of 40° to 60°. The subsequent fine search is therefore concentrated in the range of 40° to 60°.

We can see from Fig. 6 and Fig. 7 that the motion direction is 52° from the horizontal direction in the counterclockwise direction.

As described in Section 3, the distance between the two negative correlation peaks is equal to twice the motion extent. The motion extent can be obtained conveniently from the following results.

The motion extent can be obtained from Fig. 8, which indicates that the distance between the two negative correlation peaks is 32 pixels for star 1527 and 27 pixels for star 1474. Thus, the motion extent is 16 pixels for star 1527 and 13.5 pixels for star 1474. As the gray values of the stars differ according to their star magnitudes, the detected smear extents differ slightly, although in theory they are the same under the same angular velocity. In this work, the detected smear extent is needed to provide a reference window, and thus the motion extent is determined for every star point separately. As the mean of the smear extents of all star points is 16.87 pixels, the roughly estimated motion angular velocity is 0.8°/*s*.

The following table displays the accuracy of the star point position in different star acquisition methods.

Table 3 illustrates that position accuracy may decrease with only the initial extraction method if the star tracker adopts a CMOS sensor, because the combined effect of the correlation filter and image lag expands the scope of the star point and causes error in centroid determination. However, this situation improves considerably with the reference window obtained from the motion parameters. With the entire star acquisition method, the identified star number increases from four to sixteen, and the average position accuracy remains within 0.8 pixels at the estimated angular velocity of 0.8°/*s*.

## 6. Laboratory validation experiment

We conduct laboratory experiments to verify the star acquisition method and to supplement the real on-orbit results. We adjust the brightness of the collimator to approximate the star brightness of the on-orbit star image. The exposure time in the laboratory experiment is 200 ms. The turntable speed is controlled at 0.4°/*s* and 1.2°/*s* separately to show the results under different angular velocity conditions. Figure 9 shows the experiment system, which includes a turntable, star tracker, collimator, and other ancillary equipment.

Figure 10 shows the gray image before and after star extraction. The motion trajectory of star point is more obvious with the correlation filter [Fig. 10(b)].

The motion direction as shown in Fig. 11 is approximately 90°, which coincides with the experiment conditions. Figure 12 is the autocorrelation curve of the differential of the rotated image. The estimated motion extent is approximately 5 pixels. Considering that the focal length is equal to 3320 pixels, the exposure time is 200 ms, and the motion speed of the turntable is 0.4°/*s*, the theoretical extent is 4.64 pixels. The error is 0.36 pixels, which is sufficiently accurate for further processing such as restoration.
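The theoretical extent quoted above follows directly from the imaging geometry; writing the focal length in pixel units, the check is:

$$L_{theory}=f\,\omega\,t_{exp}=3320~\text{px}\times \left(0.4{}^{\circ}/s\times \frac{\pi}{180{}^{\circ}}\right)\times 0.2~\text{s}\approx 4.64~\text{px}$$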

We also conduct another set of experiments at a 1.2°/*s* motion speed in the vertical direction to show the results under more demanding conditions [Fig. 13]. The same conclusion can be drawn: the motion trajectory of the star point is more obvious with the correlation filter and background removal proposed in this paper. The acquired motion direction and extent are consistent with the theoretical values.

## 7. Conclusion

An effective motion-blurred star acquisition approach under high dynamic conditions is presented in this work. According to the motion characteristics of the star point, we adopt a correlation filter for denoising and signal enhancement. On this basis, the open operator of mathematical morphology is adopted to evaluate slowly varying background noise and set adaptive thresholds for better star extraction. This method is especially suitable for the APS CMOS star tracker because image lag can assist the extraction process. Partial image differentiation is adopted to obtain the motion direction and blur extent from one image of the star tracker based on the above extraction region. We then use the motion parameters to set a reference window for centroid determination and to eliminate the influence of image lag. A real on-orbit star image is processed through the approach proposed in this work. The processing results and laboratory validation experiments demonstrate that our method is effective in improving the identified star number and the position accuracy of the star point, thereby improving the dynamic performance of the star tracker.

## Acknowledgments

This work is partially supported by a grant from the National High Technology Research and Development Program of China (863 Program) (No. 2012AA121503). The laboratory experiments were performed at the State Key Laboratory of Precision Measurement Technology and Instruments at Tsinghua University. We gratefully acknowledge the support of both institutions.

## References and links

**1. **C. C. Liebe, “Accuracy performance of star trackers-A tutorial,” IEEE Trans. Aerosp. Electron. Syst. **38**(2), 587–599 (2002). [CrossRef]

**2. **J. Gwanghyeok, *Autonomous star sensing, pattern identification, and attitude determination for spacecraft: an analytical and experimental study*, Doctoral thesis, Texas A&M University, 2001.

**3. **T. Sun, F. Xing, and Z. You, “Optical system error analysis and calibration method of high-accuracy star trackers,” Sensors (Basel) **13**(4), 4598–4623 (2013). [CrossRef] [PubMed]

**4. **B. M. Quine, V. Tarasyuk, H. Mebrahtu, and R. Hornsey, “Determining star-image location: A new sub-pixel interpolation technique to process image centroids,” Comput. Phys. Commun. **177**(9), 700–706 (2007). [CrossRef]

**5. **G. Rufino and D. Accardo, “Enhancement of the centroiding algorithm for star tracker measure refinement,” Acta Astronaut. **53**(2), 135–147 (2003). [CrossRef]

**6. **D. S. Anderson, *Autonomous star sensing and pattern recognition for spacecraft attitude determination*, Doctoral thesis, Texas A&M University, 1991.

**7. **M. R. Shortis, T. A. Clarke, and T. Short, “A comparison of some techniques for the subpixel location of discrete target image,” Proc. SPIE **2350**, 239–250 (1994). [CrossRef]

**8. **M. A. Samaan, *Toward faster and more accurate star sensors using recursive centroiding and star identification*, Doctoral thesis, Texas A&M University, 2003.

**9. **S. Zheng, Y. Tian, J. Tian, and J. Liu, “Facet-based star acquisition method,” Opt. Eng. **43**(11), 2796–2805 (2004). [CrossRef]

**10. **R. C. Gonzalez, *Digital Image Processing* (Pearson Education, 2009).

**11. **S. J. Ko and Y. H. Lee, “Center weighted median filters and their applications to image enhancement,” IEEE Trans. Circ. Syst. **38**(9), 984–993 (1991). [CrossRef]

**12. **Z. Wang and D. Zhang, “Progressive switching median filter for the removal of impulse noise from highly corrupted images,” IEEE Trans. Circuits Syst. II-Analog Digital Sig. Process. **46**(1), 78–80 (1999). [CrossRef]

**13. **J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. Image Process. **9**(5), 889–896 (2000). [CrossRef] [PubMed]

**14. **N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. **9**(1), 62–66 (1979). [CrossRef]

**15. **J. Bernsen, “Dynamic thresholding of grey-level images,” in Proceedings 8th International Conference on Pattern Recognition, Paris, pp. 1251–1255, 1986.

**16. **L. L. Kontsevich and C. W. Tyler, “Bayesian adaptive estimation of psychometric slope and threshold,” Vision Res. **39**(16), 2729–2737 (1999). [CrossRef] [PubMed]

**17. **S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process. **9**(9), 1532–1546 (2000). [CrossRef] [PubMed]

**18. **A. B. Katake, *Modeling, image processing and attitude estimation of high speed star sensors*, Doctoral thesis, Texas A&M University, 2006.

**19. **W. Zhang, W. Quan, and L. Guo, “Blurred star image processing for star sensors under dynamic conditions,” Sensors (Basel) **12**(5), 6712–6726 (2012). [CrossRef] [PubMed]

**20. **M. Cannon, “Blind deconvolution of spatially invariant image blurs with phase,” IEEE Trans. Acoust. Speech Signal Process. **24**(1), 58–63 (1976). [CrossRef]

**21. **A. K. Katsaggelos, *Digital Image Restoration* (Springer-Verlag, 1991).

**22. **J. Biemond, A. M. Tekalp, and R. L. Lagendijk, “Maximum likelihood image and blur identification: A unifying approach,” Opt. Eng. **29**(5), 422–435 (1990). [CrossRef]

**23. **A. E. Savakis and H. J. Trussell, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. **2**(2), 141–151 (1993). [CrossRef] [PubMed]

**24. **Y. Yitzhaky and N. S. Kopeika, “Identification of blur parameters from motion blurred images,” Graphical Models Image Process. **59**(5), 310–320 (1997). [CrossRef]

**25. **Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, “Comparison of direct blind deconvolution methods for motion-blurred images,” Appl. Opt. **38**(20), 4325–4332 (1999). [CrossRef] [PubMed]

**26. **G. Wahba, “A least squares estimate of satellite attitude,” SIAM Rev. **7**(3), 409–409 (1965). [CrossRef]