## Abstract

When applied inside Earth’s atmosphere, the star tracker is sensitive to the sky
background produced by atmospheric scattering and stray light. The
shot noise induced by the strong background reduces the star detection
capability and can even render the tracker completely inoperative. To improve
the star detection capability, an attitude-correlated frames adding
(ACFA) approach is proposed in this paper. Firstly, the attitude
changes of the star tracker are measured by three gyroscope units
(GUs). Then the mathematical relationship between the image
coordinates at different times and the attitude changes of the star
tracker is constructed (namely the attitude-correlated transformation,
ACT). Using the ACT, the image regions in different frames that
correspond to the same star can be extracted and added to the current
frame. After attitude-correlated frames adding, the intensity of the
star signal increases by *n* times, while the shot
noise increases by only $\sqrt{n}/2$ to $\sqrt{n}$ times due to its stochastic
characteristic. Consequently, the signal-to-noise ratio (SNR) of the
star image is enhanced by a factor of $\sqrt{n}$ to $2\sqrt{n}$. Simulations and experimental results
indicate that the proposed method can effectively improve the star
detection ability. Hence, more dim stars are detected and used
for attitude determination. In addition, the star centroiding error
induced by the background noise is also reduced.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

A star tracker is an avionics instrument that provides the absolute three-axis attitude of a spacecraft from star observations [1, 2]. First-generation star trackers are characterized by outputting the positions of a few bright stars in sensor-referenced coordinates and require external processing [3]. Their star observation for navigation purposes is performed by aiming the optoelectronic tools of a stellar INS at the brightest stars and holding them for long periods using gyro-stabilized platforms [4].

The present second-generation star tracker, also called a star imager, performs this task by pattern recognition of the star constellations in the field of view (FOV) using a star catalog covering the entire firmament [5]. It offers smoother and more robust operation, lower cost, and higher accuracy compared to first-generation star trackers, and has become the preferred autonomous navigator onboard most spacecraft [6–9]. However, when used inside the atmosphere, it is susceptible to the sky background produced by atmospheric scattering and stray light, and the star image has an extremely low SNR. The dramatic decline in SNR of the star image reduces the detection capability and puts the tracker out of operation during daytime [10–13]. Therefore, it is crucial to improve the SNR of star images and to overcome the obstacle of daytime star detection under strong noise in order to adapt the star tracker to near-ground conditions [14–16].

Selecting an image sensor whose spectral response lies in the short-wave infrared band is the mainstream solution for daytime star trackers [17]. It yields a higher SNR than a camera detecting visible light because the background radiation in the infrared spectral region is much weaker than that in the visible region. To further enhance the star detection ability, there are three ways to improve the SNR of infrared star images. The first is to improve the optical system by using a lens with a large aperture and a long focal length, which inevitably increases the cost and size of the system. For example, the daytime star tracker developed by Trex Enterprises Corp. adopts three optical lenses of 200 mm aperture and 0.5° × 0.4° FOV to observe stars in the H-band, enabling the detection of stars of magnitude 6.8 at sea level during daytime, but the total weight of the assembly is up to 100 pounds [18,19]. Extending the integration time is the second way, because the star signal grows faster than the noise with integration time [20]. The main problem is that the pixels saturate easily under strong background radiation, and a long exposure time blurs the star image under dynamic conditions. Thirdly, Mark O’Malley proposed a frame-adding approach as an alternative to long integration times to improve the SNR for low-light detection [21]. By shortening the exposure time of each single-frame image and adding the frames directly, it effectively prevents saturation of the detector under strong noise and works well for image enhancement of static targets. However, under dynamic conditions, the star shifts on the image with the angular motion of the vehicle [22,23]. Adding the star images directly then results in motion blurring of the image and a deterioration in star centroiding accuracy [24,25].

To solve the problems of the third way and adapt it to dynamic conditions, a star-image adding method based on attitude-correlated frames (ACF) is proposed in this paper. In the ACF method [26], the star image frames measured at different times are correlated through the angular motions of the star tracker, calculated from the angular rates sensed by the strap-down GUs. Then the translation and rotation transformations between different frames (namely the correlated transformation) are conducted using the gyro-measured attitude changes. By correlating and adding successive image frames, the SNR of the star image is enhanced while the background noise is suppressed under dynamic conditions, and the star detection capability is improved.

The paper is organized as follows. The ACFA approach is depicted in detail in Section 2. Simulation results are presented in Section 3. In Section 4, the experimental result is given to verify the approach. Finally, the conclusion is drawn in Section 5.

## 2. Basic theory

#### 2.1. Frame adding to improve SNR under static conditions

For optical sensors, shot noise arises from random fluctuations in the number of photons detected (Poisson-distributed, well approximated by a Gaussian at high count levels), and its standard deviation is proportional to the square root of the average intensity. During daytime, the performance of the focal plane array (FPA) is limited by the shot noise induced by strong sky background radiation. The SNR of the image can be enhanced by increasing the integration time; however, the pixels saturate easily under strong radiation. Frame adding is a viable way to improve the SNR with successive short-integration-time images while keeping the pixels from saturating.

Considering a single frame obtained for low-light detection, the SNR is given by [21]:

$$\mathrm{SNR}_{1}=\frac{\bar{s}_{i}}{\sqrt{N_{\tau}^{2}+2N_{0}^{2}}} \tag{1}$$

where $\bar{s}_i$ is an estimate of the input signal, $N_\tau$ is the integration-dependent noise, including shot noise, dark current noise, and transfer noise, and $N_0$ is the output noise, such as readout, amplifier, and quantization noise. It should be noted that shot noise is the main source under a strong sky background.

Consider *n* frames taken one after another and added together. The mean input signal increases to $n\bar{s}_i$. Assuming mean-square fluctuations add, the noise for *n* frames added together becomes $\sqrt{n\left({N}_{\tau}^{2}+2{N}_{0}^{2}\right)}$. Therefore, the SNR for frame adding is given by [21]:

$$\mathrm{SNR}_{n}=\frac{n\bar{s}_{i}}{\sqrt{n\left(N_{\tau}^{2}+2N_{0}^{2}\right)}}=\sqrt{n}\,\mathrm{SNR}_{1} \tag{2}$$

There is an obvious improvement in SNR ($\sqrt{n}$) for *n* added frames over a single frame. Theoretically, adding successive star images can improve the SNR of the image; in practice, however, the attitude of the vehicle is constantly changing, causing a given star to shift across different images. Consequently, if the images are added directly, the energy of a star is no longer concentrated within a small area. The frame-adding method for improving SNR is therefore restricted to target signals that are static across images, and it must be modified before being applied to dynamic conditions.
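The $\sqrt{n}$ gain from adding statically aligned frames can be checked with a short Monte-Carlo sketch (a hypothetical Python illustration, not part of the original paper; zero-mean Gaussian noise stands in for shot noise):

```python
import numpy as np

def stacked_snr(n_frames, signal=5.0, noise_sigma=20.0, n_trials=2000):
    """Monte-Carlo estimate of the SNR after directly adding n_frames
    frames of a static scene: the signal adds coherently while the
    zero-mean Gaussian noise adds in quadrature."""
    rng = np.random.default_rng(0)
    # Sum of n frames: mean n*signal, noise std noise_sigma*sqrt(n).
    sums = n_frames * signal + np.sqrt(n_frames) * rng.normal(0.0, noise_sigma, n_trials)
    return (n_frames * signal) / np.std(sums)

# The stacked SNR grows as sqrt(n): 64 frames give close to 8x.
ratio = stacked_snr(64) / stacked_snr(1)
```

The parameter values here are invented purely for illustration.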

#### 2.2. Attitude-correlated transformation process

Under dynamic conditions, the attitude of the camera changes with the vehicle; therefore, the position of a certain star on the FPA shifts with time, as shown in Fig. 1. Adding dynamic frames directly results in motion blurring of the star image and deterioration in star centroiding accuracy. The attitude-correlated frames adding (ACFA) approach is proposed to solve this problem. Considering *n* successive frames, to make the input signal after addition increase to *n* times that of a single frame, the pixels corresponding to the same star on different images should be added. As shown in Fig. 1(b), for a certain point on the celestial sphere, its coordinate on the FPA of the star tracker is $(u_j, v_j)$ at $t_j$, while its coordinate is $(u_k, v_k)$ at $t_k$. Because of the attitude change of the star tracker from $t_j$ to $t_k$, there is a correlated transformation between $(u_j, v_j)$ and $(u_k, v_k)$. The information from the GUs is used to calculate the attitude change between the two moments and to eliminate its impact on the star coordinates on the FPA. By transforming the coordinate at $t_k$ to find its position at other moments, the corresponding pixels can be added. In this way, star radiation can be regarded as a static signal on the image, and adding successive frames after the attitude-correlated transformation can improve the SNR of the star image and the star detection capability under dynamic conditions.

The coordinate systems used in this paper are defined as follows:

The Earth-centered inertial coordinate system, fixed in inertial space, is represented by the *i*-frame. Its $X_i$ axis lies in the equatorial plane and points to the vernal equinox. The $Z_i$ axis is aligned with the Earth’s rotation axis and perpendicular to the equatorial plane. The $Y_i$ axis completes a right-handed frame with the other two axes. The star tracker coordinate system (*s*-frame) has its origin at the detector center. The $X_s$ and $Y_s$ axes are parallel to a row and a column of the detector, respectively, and the three axes satisfy the right-hand rule. The strap-down GUs system, represented by the *b*-frame, has its $X_b$, $Y_b$, and $Z_b$ axes consistent with the three mutually orthogonal sensitive axes of the GUs; it is rigidly mounted to the star tracker platform.

The vector of a certain star in the *i*-frame is $\mathbf{r}^i$, and it can be transformed to the *s*-frame given the attitude matrix of the star tracker at times $t_k$ and $t_j$, respectively:

$$\mathbf{r}_{k}^{s}={\mathbf{C}}_{i}^{s(k)}\mathbf{r}^{i},\qquad \mathbf{r}_{j}^{s}={\mathbf{C}}_{i}^{s(j)}\mathbf{r}^{i} \tag{3}$$

The attitude matrix of the star tracker can be expressed by the installation matrix (from *b*-frame to *s*-frame) ${\mathbf{C}}_{b}^{s}$ and the attitude matrix of the GUs ${\mathbf{C}}_{i}^{b}$:

$${\mathbf{C}}_{i}^{s}={\mathbf{C}}_{b}^{s}{\mathbf{C}}_{i}^{b} \tag{4}$$

It can be deduced that:

$${\mathbf{C}}_{s(j)}^{s(k)}={\mathbf{C}}_{i}^{s(k)}\left({\mathbf{C}}_{i}^{s(j)}\right)^{\mathrm{T}}={\mathbf{C}}_{b}^{s}\left(\mathbf{I}-\left[\mathbf{\Phi}\times\right]\right)\left({\mathbf{C}}_{b}^{s}\right)^{\mathrm{T}} \tag{5}$$

where $[\mathbf{\Phi}\times]$ is the skew-symmetric matrix of the angle increment:

$$\left[\mathbf{\Phi}\times\right]=\begin{bmatrix}0&-{\phi}_{z}&{\phi}_{y}\\{\phi}_{z}&0&-{\phi}_{x}\\-{\phi}_{y}&{\phi}_{x}&0\end{bmatrix} \tag{6}$$

For the sampling time $\Delta t$, the angle increment can be calculated using the angular rate of the GUs in the *b*-frame, ${\mathit{\omega}}_{ib}^{b}$:

$$\mathbf{\Phi}=\int_{t_{j}}^{t_{k}}{\mathit{\omega}}_{ib}^{b}\,dt\approx{\mathit{\omega}}_{ib}^{b}\,\Delta t \tag{7}$$

The vector of a certain star in the *i*-frame is fixed at different times:

$$\mathbf{r}_{k}^{i}=\mathbf{r}_{j}^{i}=\mathbf{r}^{i} \tag{8}$$

Thus, the vectors of the star in the *s*-frame at $t_k$ and $t_j$ have the following linear transformation relationship:

$$\mathbf{r}_{k}^{s}={\mathbf{C}}_{s(j)}^{s(k)}\mathbf{r}_{j}^{s} \tag{9}$$

At two different moments, the vectors of a certain star, ${r}_{k}^{s}$ and ${r}_{j}^{s}$, become correlated through ${\mathbf{C}}_{s(j)}^{s(k)}$. So ${\mathbf{C}}_{s(j)}^{s(k)}$ is defined as the attitude-correlated matrix (ACM) from $t_j$ to $t_k$. Writing $T_{mn}$ for the entry in row *m* and column *n* of ${\mathbf{C}}_{s(j)}^{s(k)}$, Eq. (9) can be rewritten as:

$$\begin{bmatrix}x_{k}^{s}\\y_{k}^{s}\\z_{k}^{s}\end{bmatrix}=\begin{bmatrix}T_{11}&T_{12}&T_{13}\\T_{21}&T_{22}&T_{23}\\T_{31}&T_{32}&T_{33}\end{bmatrix}\begin{bmatrix}x_{j}^{s}\\y_{j}^{s}\\z_{j}^{s}\end{bmatrix} \tag{10}$$

The ideal imaging of a star tracker can be simplified to a pinhole model [28]. If the principal point coordinates $(u_0, v_0)$, the focal length $f$, and the aberrations $(\delta u, \delta v)$ of the camera are known, the relationship between the vector of a star in the *s*-frame and its coordinate $(u, v)$ on the FPA is given by:

$$\mathbf{r}^{s}=\frac{1}{\sqrt{{\left(u-{u}_{0}-\delta u\right)}^{2}+{\left(v-{v}_{0}-\delta v\right)}^{2}+{f}^{2}}}\begin{bmatrix}u-{u}_{0}-\delta u\\v-{v}_{0}-\delta v\\f\end{bmatrix} \tag{11}$$

Hence, for $(u_j, v_j)$, the coordinate on the FPA in frame #*j*, and $(u_k, v_k)$, the coordinate on the FPA in frame #*k*:

$$\mathbf{r}_{j}^{s}=\frac{1}{\sqrt{{\left({u}_{j}-{u}_{0}-\delta u\right)}^{2}+{\left({v}_{j}-{v}_{0}-\delta v\right)}^{2}+{f}^{2}}}\begin{bmatrix}{u}_{j}-{u}_{0}-\delta u\\{v}_{j}-{v}_{0}-\delta v\\f\end{bmatrix} \tag{12}$$

$$\mathbf{r}_{k}^{s}=\frac{1}{\sqrt{{\left({u}_{k}-{u}_{0}-\delta u\right)}^{2}+{\left({v}_{k}-{v}_{0}-\delta v\right)}^{2}+{f}^{2}}}\begin{bmatrix}{u}_{k}-{u}_{0}-\delta u\\{v}_{k}-{v}_{0}-\delta v\\f\end{bmatrix} \tag{13}$$

Since $(u_k, v_k)$ is the coordinate corresponding to $(u_j, v_j)$ at $t_k$ under dynamic conditions, Eq. (9) can be rewritten as:

$$\begin{aligned}{u}_{k}&={u}_{0}+\delta u+f\,\frac{{T}_{11}\left({u}_{j}-{u}_{0}-\delta u\right)+{T}_{12}\left({v}_{j}-{v}_{0}-\delta v\right)+{T}_{13}f}{{T}_{31}\left({u}_{j}-{u}_{0}-\delta u\right)+{T}_{32}\left({v}_{j}-{v}_{0}-\delta v\right)+{T}_{33}f}\\{v}_{k}&={v}_{0}+\delta v+f\,\frac{{T}_{21}\left({u}_{j}-{u}_{0}-\delta u\right)+{T}_{22}\left({v}_{j}-{v}_{0}-\delta v\right)+{T}_{23}f}{{T}_{31}\left({u}_{j}-{u}_{0}-\delta u\right)+{T}_{32}\left({v}_{j}-{v}_{0}-\delta v\right)+{T}_{33}f}\end{aligned} \tag{14}$$

We transform each pixel of frame #*j* through Eq. (14) to find the coordinate of the corresponding pixel on the FPA in frame #*k*.

For a moment $t_n$ at which the attitude is to be measured, the former $n-1$ successive star images are correlated to the *n*th image by the attitude-correlated matrices before being added. The exact number of correlated images should be chosen considering factors such as the dynamic condition of the vehicle, the integration time, the frame rate of the star tracker, and the SNR threshold for star detection.
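The attitude-correlated transformation can be sketched in a few lines (hypothetical Python, not from the paper; an ideal pinhole with aberrations neglected and a first-order body-frame update valid for small angle increments; all names and parameter values are ours):

```python
import numpy as np

def skew(phi):
    """Skew-symmetric matrix [phi x] of an angle-increment vector."""
    x, y, z = phi
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def acm(phi, C_bs):
    """Attitude-correlated matrix C_{s(j)}^{s(k)} from a small gyro angle
    increment phi (b-frame, rad) and the installation matrix C_bs."""
    C_bjbk = np.eye(3) - skew(phi)      # first-order body-frame update
    return C_bs @ C_bjbk @ C_bs.T

def transform_pixel(u, v, T, u0, v0, f):
    """Map a frame-j pixel (u, v) to its frame-k location through the
    ACM T, using an ideal pinhole model (aberrations neglected)."""
    r_j = np.array([u - u0, v - v0, f])
    r_k = T @ r_j
    scale = f / r_k[2]
    return u0 + scale * r_k[0], v0 + scale * r_k[1]

# Example: a 1 mrad rotation about X_b shifts a star at the principal
# point by about f*phi = 5 px along v (f = 5000 px, C_b^s = identity).
T = acm(np.array([1e-3, 0.0, 0.0]), np.eye(3))
uk, vk = transform_pixel(512.0, 512.0, T, 512.0, 512.0, 5000.0)
```

In practice the increment between distant frames would be chained from per-sample gyro updates rather than treated as one small angle.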

#### 2.3. Area-weighted averaging adding method

For static star images, there is a one-to-one correspondence of the pixels between different frames. However, in reality, the integer coordinate becomes fractional after transformation. Consequently, the pixels between the correlated images are not perfectly overlapping. Considering that each row and column has hundreds of pixels and we add the entire image, the transformed pixels located at the edge of the image can be ignored. A certain transformed pixel in the *j*th image (*j* = 1, 2, ..., *n* − 1) is located in a window in the *n*th image. The transformed pixel of the *j*th image centered at *A*_{0} has overlapping areas with four pixels of the *n*th image with the center of *A*_{1}, *A*_{2}, *A*_{3} and *A*_{4} respectively, which is shown in Fig. 2.

The gray value of pixel $A_0$ is divided into four parts according to the overlapping areas $S_1$–$S_4$ and then added to the four pixels $A_1$, $A_2$, $A_3$, and $A_4$ of the *n*th image by the area-weighted averaging adding method, which is given by:

$$I\left({A}_{m}\right)={I}_{n}\left({A}_{m}\right)+\sum_{j=1}^{n-1}\frac{{S}_{m}}{S}{I}_{j}\left({A}_{0}\right),\qquad m=1,2,3,4 \tag{15}$$

where $I_n$, $I_j$, and $I$ are the gray value functions of the image at $t_n$, at $t_j$ ($j = 1, 2, \ldots, n-1$), and after addition, respectively, and $S$ is the pixel area.

With the area-weighted averaging adding method, the gray value of each original pixel is divided into four parts, so its standard deviation is split linearly as well. When the images are added after this division, it is the variances of each frame that superimpose linearly. With the standard deviation of the image noise in a single frame defined as $\sigma$, the noise standard deviation of the correlated image is between $\sqrt{n}/2$ and $\sqrt{n}$ times that of a single frame (the derivation is elaborated in the appendix):

$$\frac{\sqrt{n}}{2}\sigma\le{\sigma}_{add}\le\sqrt{n}\sigma \tag{16}$$

The star signal is still proportional to the number of correlated frames *n*. Therefore, the SNR after addition satisfies the following inequality:

$$\sqrt{n}\,\mathrm{SNR}_{1}\le\mathrm{SNR}_{add}\le2\sqrt{n}\,\mathrm{SNR}_{1} \tag{17}$$

The area-weighted averaging method solves the problem that the pixels of correlated images are not perfectly overlapping, while keeping the noise standard deviation after addition lower than $\sqrt{n}$ times that of a single frame.
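The area-weighted deposit of one transformed pixel can be sketched as follows (a hypothetical Python illustration; the bilinear overlap weights correspond to the four overlap areas and sum to 1):

```python
import numpy as np

def accumulate_weighted(acc, u, v, value):
    """Deposit `value` at fractional coordinate (u, v) into accumulator
    `acc`, splitting it over the four overlapped pixels in proportion
    to the overlap areas (bilinear weights that sum to 1)."""
    c, r = int(np.floor(u)), int(np.floor(v))
    du, dv = u - c, v - r
    parts = [(r, c, (1 - du) * (1 - dv)), (r, c + 1, du * (1 - dv)),
             (r + 1, c, (1 - du) * dv), (r + 1, c + 1, du * dv)]
    for rr, cc, k in parts:
        if 0 <= rr < acc.shape[0] and 0 <= cc < acc.shape[1]:
            acc[rr, cc] += k * value    # edge pixels fall off and are ignored
    return acc

# Example: a gray value of 8 at (u, v) = (1.25, 2.5) splits 3/1/3/1
# over the four neighbouring pixels.
acc = np.zeros((4, 4))
accumulate_weighted(acc, 1.25, 2.5, 8.0)
```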

The flowchart in Fig. 3 depicts the procedure of the ACFA approach. The system consists of a star tracker and GUs rigidly fixed together. Firstly, the star tracker takes successive star images while the GUs output the angular rates used to calculate the ACM. Then the mathematical relationship between the image coordinates at different times and the ACM is constructed. Using the ACT, the image regions in different frames that correspond to the same star are correlated to the current frame. Next, the area-weighted averaging method is adopted to add the correlated frames. Finally, the added frame with increased SNR is used for star extraction and identification.

## 3. Simulation

#### 3.1. Simulation parameters

The star images are generated by the star tracker simulator when given the intrinsic parameters and attitude of the star tracker. The performance parameters of a typical star tracker and GUs used for this simulation are listed in Table 1:

The three-axis attitude of the star tracker in *i*-frame is shown in Fig. 4. The GUs data is generated based on the given attitude of the star tracker at a sample rate of 50 Hz.

The feasibility of the proposed method is validated by the calculated SNR of the star images and the star centroiding error. The gray values of the image pixels reflect the levels of star signal and noise charges, which are used to calculate the SNR. Firstly, a sub-image of 50 × 50 pixels containing the navigation star is selected, and the mean of the gray values is subtracted from each pixel in the sub-image. Then a window of 5 × 5 pixels containing the star is extracted, and its mean gray value is taken as the signal intensity. Next, the standard deviation of the remaining pixels in the sub-image (excluding the 5 × 5 window) is calculated as the noise intensity. Finally, the SNR is the signal intensity divided by the noise intensity.
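The SNR metric described above can be written out directly (a hypothetical Python sketch of our reading of the procedure; the synthetic star in the example is invented for illustration):

```python
import numpy as np

def measure_snr(image, star_rc, sub=50, win=5):
    """SNR metric from the simulations: mean of the background-subtracted
    win x win star window divided by the std of the rest of the sub-image."""
    r, c = star_rc
    h = sub // 2
    patch = image[r - h:r + h, c - h:c + h].astype(float)
    patch -= patch.mean()                      # remove the mean background
    w = win // 2
    rows = np.arange(r - h, r + h)
    cols = np.arange(c - h, c + h)
    in_win = (np.abs(rows[:, None] - r) <= w) & (np.abs(cols[None, :] - c) <= w)
    signal = patch[in_win].mean()              # 5 x 5 star window
    noise = patch[~in_win].std()               # remaining sub-image pixels
    return signal / noise

# Example: a bright 5 x 5 synthetic star on unit-variance Gaussian noise.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (200, 200))
img[98:103, 98:103] += 100.0
snr = measure_snr(img, (100, 100))
```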

#### 3.2. Adding result of correlated images

Using the approach proposed in Section 2, the simulated successive star images are correlated and added. As seen in Fig. 5(a), the star signal is overwhelmed by strong noise in a single frame. When 60 frames are correlated and added, as in Fig. 5(b), the star signal shows an obvious improvement over the noise and is easier to detect. Moreover, the star point still keeps its shape after addition.

Then the star signal intensity and the noise intensity of the correlated image are calculated. The growth of the signal of a certain star and of the noise with respect to the number of correlated frames is shown in Fig. 6. The star signal intensity in the window is proportional to the number of correlated frames, which indicates that the star point energy remains concentrated within a small region dominated by that star. The noise after correlation and addition is between $\sqrt{n}/2$ and $\sqrt{n}$ times that of a single frame, which verifies the result of Eq. (16). The SNR of the star image by the ACFA approach, as shown in Fig. 7(a), is between $\sqrt{n}$ and $2\sqrt{n}$ times that of a single frame, which agrees with Eq. (17).

The star centroids are subsequently determined and compared with the preset true values. As shown in Fig. 7(b), the centroiding error is always less than 0.1 pixel, declines with the number of correlated frames, and gradually approaches the limit set by systematic errors. Therefore, the ACFA approach improves the SNR of the star image, making dim stars detectable, while keeping the shape of the stars, which guarantees the centroiding accuracy.

#### 3.3. Influence of attitude-correlated matrix error

The accuracy of the attitude-correlated matrix (calculated by Eq. (5)) is one of the main factors affecting the adding result of the ACFA approach. Since the GUs information is used to calculate the ACM, the gyro error is the leading error source. In practice, the GUs have two dominant errors: bias and noise. Typical performances of gyroscopes of different grades are listed in Table 2 [29]. GUs errors of three different grades (navigation, tactical, and automotive) are added while correlating the images, and the SNR of the star image is analyzed. The simulation result is shown in Fig. 8(a). Compared with the error-free condition, the effect of navigation- and tactical-grade gyroscope errors on the SNR after correlation is small enough to be ignored. As for the automotive grade, when the number of correlated frames is small, the simulation result agrees with theory, but when more frames are added, the SNR slightly decreases.

As indicated in Eq. (5), the installation matrix ${\mathbf{C}}_{b}^{s}$ is used to calculate the attitude-correlated matrix. Since the fixed mounting angle is used to calculate the installation matrix, if it deviates from the actual value, the calculated attitude-correlated matrix is inaccurate, and the correlating result of the ACFA approach is affected as well. Different levels of fixed-angle error are added while correlating the images, and the SNR of the star image is analyzed. The simulation result is shown in Fig. 8(b). For fixed-angle errors of less than [500, 500, 1000]^{″}, the effect on the SNR growth is small enough to be ignored. For larger fixed-angle errors, the SNR shows a decreasing trend as more frames are correlated. Therefore, the ACFA approach performs well with moderate fixed-angle errors.

The effects of the two kinds of errors are then analyzed in detail. When the GUs error is automotive-grade, 10 frames containing only the star signal are correlated and added by the ACFA approach; the result is shown in Fig. 9(a). The added star signal is no longer concentrated within a small region, and with the increased noise, the SNR is reduced.

The result of ACFA depends on the accuracy of the attitude-correlated matrix, so we further analyze the effects of the GUs error and the fixed-angle error on it. The error of the attitude-correlated matrix can be calculated as follows [27]:

$$\mathbf{E}={\overline{\mathbf{C}}}_{s(j)}^{s(k)}\left({\mathbf{C}}_{s(j)}^{s(k)}\right)^{\mathrm{T}}-\mathbf{I} \tag{18}$$

where ${\mathbf{C}}_{s(j)}^{s(k)}$ is the true value, ${\overline{\mathbf{C}}}_{s(j)}^{s(k)}$ is the calculated matrix with errors, and **I** is the unit matrix. The corresponding attitude error angles can be obtained by transforming **E** into three-axis Euler angles. The error of the attitude-correlated matrix caused by automotive-grade GUs, shown in Fig. 10(a), accumulates with navigation time. For a fixed-angle error of [500, 500, 1000]^{″}, the error of the attitude-correlated matrix is shown in Fig. 10(b). It fluctuates around zero and is less than 0.02 arc-seconds, so it does not accumulate with time and is too small to affect the ACFA approach.
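For small errors, the three-axis error angles can be read from the skew part of the error matrix **E** defined above (a hypothetical Python sketch; the rotation helper and the numbers in the example are ours):

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def acm_error_angles(C_true, C_calc):
    """Small three-axis error angles (rad) between the true and the
    gyro-derived attitude-correlated matrix, read from the skew part
    of E = C_calc @ C_true^T - I."""
    E = C_calc @ C_true.T - np.eye(3)
    return np.array([E[2, 1], E[0, 2], E[1, 0]])

# Example: a 1e-4 rad excess rotation about z appears in the z component.
err = acm_error_angles(rot_z(0.1), rot_z(0.1 + 1e-4))
```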

## 4. Experiment

An experiment was conducted to verify the proposed method at the laboratory building (Changsha, China) on 28th Nov. at 2 am. The equipment used is shown in Fig. 11. The star tracker and GUs are rigidly fixed together and mounted on a rotary table, and their performance specifications are listed in Table 3. The rotary table provided a dynamic condition for the system. The experiment lasted about 40 minutes, and the three-axis attitude of the star tracker in the *i*-frame is shown in Fig. 12.

During the process, 4556 successive star images were taken by the star tracker. The three-axis angular rate sensed by the GUs is transformed to the *s*-frame through the pre-calibrated installation matrix ${\mathbf{C}}_{b}^{s}$, and the attitude-correlated matrix between different frames is then calculated. Once the attitude-correlated matrix is obtained, the star images can be correlated and added by the ACFA approach.

The contrast of the star images before and after correlation is shown in Fig. 13. The star on the image after addition still keeps its shape, as shown in Fig. 13(b), which guarantees the subsequent star centroiding accuracy. There is a dim star on the original image that can be detected after adding 5 frames. The star identification result is then analyzed. For the single image in Fig. 13(a), 9 stars can be identified. When 5 successive frames are correlated and added, as shown in Fig. 13(b), the number of identified stars increases to 24. Since the dim stars become “brighter”, the number of detectable stars increases as well.

The growth of noise and SNR with the number of correlated frames is shown in Fig. 14. The noise after correlation and addition is between $\sqrt{n}/2$ and $\sqrt{n}$ times that of a single frame, which verifies the simulation result. The SNR of *n* correlated frames is $\sqrt{n}$ to $2\sqrt{n}$ times that of a single frame, so the experimental results are consistent with the theory.

The 4556 images taken under dynamic conditions are divided into more than 400 groups of 10 successive images, and each group is correlated by the ACFA approach. The statistical result is shown in Fig. 15. For a single frame, stars with magnitude lower than 5 are detectable. As the SNR of the star signals increases with the number of correlated frames, the limiting magnitude of detectable stars rises as well, reaching 5.64 for 10 correlated frames. Accordingly, the number of identified stars increases sharply from 1 to 9 when 7 frames are added, while the star identification failure rate shows the opposite trend, decreasing from 80% to 20%.

When fewer than 3 bright stars are present, star identification is likely to fail, and the star tracker cannot output the attitude of the vehicle normally. The ACFA approach improves the threshold of detectable star magnitude and the number of identified stars, and with the failure rate accordingly lower, the application of the star tracker can be extended to conditions of strong background radiation. Besides, with more identified stars on the image, the output attitude accuracy of the star tracker is improved as well [9].

Since the true coordinates of the stars on the focal plane are unknown, the angular distance (the angular separation between two stars as observed from the detector) is used to estimate the star centroiding error. The angular distance between two given stars is constant across coordinate systems. In the *i*-frame (taken as reference), it can be calculated from the right ascension and declination of the identified stars. The corresponding angular distance in the *s*-frame (the measured value) is obtained from the coordinates on the FPA and the focal length, and their difference is the angular distance error. The root mean square (RMS) of the star angular distance error is shown in Fig. 15(d). The RMS for a single frame is up to 13.1 arc-seconds. It declines to approximately 7.5 arc-seconds as the number of correlated frames increases, but it cannot keep decreasing to zero due to the limit of systematic errors, which is consistent with the simulation result.
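The measured angular distance in the *s*-frame can be computed from the FPA coordinates and focal length (a hypothetical Python sketch with an ideal pinhole; the principal point and focal length values are invented for illustration):

```python
import numpy as np

def los_vector(u, v, u0, v0, f):
    """Unit line-of-sight vector in the s-frame for FPA coordinate
    (u, v) under an ideal pinhole model (aberrations neglected)."""
    r = np.array([u - u0, v - v0, f])
    return r / np.linalg.norm(r)

def angular_distance(p1, p2, u0=512.0, v0=512.0, f=3000.0):
    """Angular separation (rad) between two star centroids on the FPA."""
    a = los_vector(*p1, u0, v0, f)
    b = los_vector(*p2, u0, v0, f)
    return np.arccos(np.clip(a @ b, -1.0, 1.0))

# Example: a star at the principal point and one 300 px away along u
# are separated by arctan(300/3000) ~ 0.0997 rad.
theta = angular_distance((512.0, 512.0), (812.0, 512.0))
```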

## 5. Conclusions

The star tracker is sensitive to strong background noise and can be rendered completely inoperative by it. In this paper, an attitude-correlated frames adding (ACFA) approach is proposed to break this bottleneck. The attitude-correlated matrices between different times are calculated using the information of the GUs. Successive star images are then correlated by transforming the pixels to find the regions of a given star on different images, and the correlated images are added to enhance the star signal intensity and improve the SNR of the star images. Finally, the area-weighted averaging method is adopted to add the transformed pixels, which are not perfectly overlapping. Simulations under dynamic conditions and experimental results under oscillating conditions indicate that the star signal of *n* frames correlated by the ACFA approach is *n* times that of a single frame and the SNR increases to $\sqrt{n}$ to $2\sqrt{n}$ times. The method also improves the star centroiding accuracy. Moreover, the impact of GUs errors (including bias and noise) and fixed-angle errors can be ignored at low angular rates of the vehicle, which verifies the effectiveness of the proposed approach.

For the hardware implementation, the ACFA method, which uses successive frames for attitude determination, requires larger memory to store the image data and more computational resources for image adding. This costs more hardware resources and time for image processing, and the increased processing time induces more delay in the attitude output. To mitigate this, a high-speed central processing unit (CPU), with a clock speed up to several GHz, can be used for image processing. Moreover, the computational cost can be reduced by processing only the sub-images containing the signals of the navigation stars. Because the integration of the star tracker and GUs provides the attitude information, the coordinates of the navigation stars on each frame can be predicted. Then the sub-image (for example, a window of 15 × 15 pixels) containing a star can be extracted, with its center located at the predicted star coordinates. The ACFA method then only needs to store and process the successive sub-images, and the hardware cost is greatly reduced.
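The sub-image strategy above can be sketched as follows (hypothetical Python; the 15 × 15 window follows the example in the text, while the function and dictionary layout are ours):

```python
import numpy as np

def extract_subimages(frame, predicted_centers, win=15):
    """Cut win x win sub-images around predicted star coordinates so
    that only these patches, rather than full frames, need to be
    stored and added by the ACFA pipeline."""
    h = win // 2
    rows, cols = frame.shape
    patches = {}
    for star_id, (r, c) in predicted_centers.items():
        r, c = int(round(r)), int(round(c))
        # Skip stars whose window would run off the detector edge.
        if h <= r < rows - h and h <= c < cols - h:
            patches[star_id] = frame[r - h:r + h + 1, c - h:c + h + 1].copy()
    return patches

# Example on a 100 x 100 frame with one valid and one edge star.
frame = np.arange(100 * 100, dtype=float).reshape(100, 100)
patches = extract_subimages(frame, {"star_a": (50.2, 60.7),
                                    "edge_star": (2.0, 2.0)})
```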

The method can be applied to integrated star tracker/INS navigation systems. The GUs of the INS can provide accurate attitude changes for this approach; in return, the accumulating error of the inertial navigation system can be compensated with the more accurate attitude information of the star tracker. Using a short-wave infrared camera to detect stars during the daytime is another trend, but the noise of the infrared image is also high. The feasibility of this method for improving the SNR of infrared star images should be further assessed.

## Appendix

After the correlated images are added by the area weighted averaging method, the SNR of the star image is analyzed.

Assume that the noise of an image of $M \times N$ pixels is Gaussian, with mean value $\bar{I}$ and standard deviation $\sigma$:

$$E\left({I}_{i,j}\right)=\bar{I},\qquad D\left({I}_{i,j}\right)={\sigma}^{2},\qquad i=1,\ldots,M,\ j=1,\ldots,N$$

Considering that the pixels of correlated images after the ACT are not perfectly overlapping, as shown in the figure below, a pixel is divided into four parts: $S_1$, $S_2$, $S_3$, and $S_4$. Before addition, each pixel of a single frame is divided into the same four parts, with area ratios $k_1$, $k_2$, $k_3$, and $k_4$, respectively. So for each sub-pixel we have:

$${I}_{i,j}^{(m)}={k}_{m}{I}_{i,j},\qquad {k}_{m}=\frac{{S}_{m}}{S},\quad m=1,2,3,4$$

where ${I}_{i,j}^{(m)}$ is the $S_m$ portion of the pixel $I_{i,j}$, $0\le{k}_{1},{k}_{2},{k}_{3},{k}_{4}\le1$, and ${k}_{1}+{k}_{2}+{k}_{3}+{k}_{4}=1$. The mean value and standard deviation of all the sub-pixels with area $S_1$ can then be calculated:

$$E\left({I}_{i,j}^{(1)}\right)={k}_{1}\bar{I},\qquad {\sigma}_{1}={k}_{1}\sigma$$

Similarly, the standard deviations of the $S_2$, $S_3$, and $S_4$ sub-pixels are:

$${\sigma}_{2}={k}_{2}\sigma,\qquad {\sigma}_{3}={k}_{3}\sigma,\qquad {\sigma}_{4}={k}_{4}\sigma$$

Therefore, the standard deviations of the four arrays satisfy:

$${\sigma}_{1}+{\sigma}_{2}+{\sigma}_{3}+{\sigma}_{4}=\left({k}_{1}+{k}_{2}+{k}_{3}+{k}_{4}\right)\sigma=\sigma$$

When these divided parts of *n* frames with the same noise intensity (standard deviation $\sigma$) are added together, it is their variances that add linearly, and the noise intensity of the added frame is ${\sigma}_{add}$:

$${\sigma}_{add}=\sqrt{\sum_{f=1}^{n}\left({\sigma}_{f1}^{2}+{\sigma}_{f2}^{2}+{\sigma}_{f3}^{2}+{\sigma}_{f4}^{2}\right)}$$

where ${\sigma}_{f1}$, ${\sigma}_{f2}$, ${\sigma}_{f3}$, and ${\sigma}_{f4}$ are the standard deviations of the sub-pixels with areas $S_1$, $S_2$, $S_3$, and $S_4$ in the *f*th frame, respectively. We have:

$${\sigma}_{add}=\sqrt{\sum_{f=1}^{n}\left({k}_{f1}^{2}+{k}_{f2}^{2}+{k}_{f3}^{2}+{k}_{f4}^{2}\right)}\,\sigma$$

For any real numbers with ${k}_{f1}+{k}_{f2}+{k}_{f3}+{k}_{f4}=1$ and $0\le{k}_{fm}\le1$, there exists the inequality:

$$\frac{1}{4}\le{k}_{f1}^{2}+{k}_{f2}^{2}+{k}_{f3}^{2}+{k}_{f4}^{2}\le1,\qquad f=1,2,\ldots,n$$

Thus,

$$\frac{\sqrt{n}}{2}\sigma\le{\sigma}_{add}\le\sqrt{n}\sigma$$

The star signal is proportional to the number of correlated frames *n*. Therefore, the SNR of the correlated frame after addition satisfies the following inequality:

$$\sqrt{n}\,\mathrm{SNR}_{1}\le\mathrm{SNR}_{add}\le2\sqrt{n}\,\mathrm{SNR}_{1}$$
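The bound $1/4\le k_1^2+k_2^2+k_3^2+k_4^2\le1$, which yields the $\sqrt{n}/2$ to $\sqrt{n}$ noise range, can be verified numerically (a hypothetical Python sketch using bilinear overlap weights):

```python
import numpy as np

def weight_square_sum(du, dv):
    """Sum of squared bilinear area weights k1..k4 for a sub-pixel
    offset (du, dv) in [0, 1)."""
    k = np.array([(1 - du) * (1 - dv), du * (1 - dv),
                  (1 - du) * dv, du * dv])
    assert abs(k.sum() - 1.0) < 1e-12       # the weights always sum to 1
    return float((k ** 2).sum())

# Scan sub-pixel offsets: the squared sum stays within [1/4, 1], with the
# minimum at (0.5, 0.5) and the maximum at integer alignment (0, 0).
vals = [weight_square_sum(du, dv)
        for du in np.linspace(0.0, 0.999, 25)
        for dv in np.linspace(0.0, 0.999, 25)]
```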

## Funding

National Natural Science Foundation of China (NSFC) (61803378, 61573368).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **C. C. Liebe, “Accuracy performance of star trackers - a tutorial,” IEEE Trans. Aerosp. Electron. Syst. **38**(2), 587–599 (2002). [CrossRef]

**2. **S. Levine, R. Dennis, and K. L. Bachman, “Strapdown astro-Inertial navigation utilizing the optical Wide-angle lens startracker,” Navigation **37**(2), 347–362 (1990). [CrossRef]

**3. **A. R. Eisenman, C. C. Liebe, and J. L. Joergensen, “New generation of autonomous star trackers,” Proc. SPIE **3221**, 524–535 (1997). [CrossRef]

**4. **G. A. Avanesov, R. V. Bessonov, A. N. Kurkina, M. B. Lyudomirskii, I. S. Kayutin, and N. E. Yamshchikov, “Autonomous strapdown stellar-inertial navigation systems: design principles, operating modes and operational experience,” Gyroscopy and Navig. **4**(4), 204–215 (2013). [CrossRef]

**5. **C. C. Liebe, “Star trackers for attitude determination,” IEEE Aerosp. Electron. Syst. Mag. **10**(6), 10–16 (1995). [CrossRef]

**6. **J. L. Jorgensen, T. Denver, M. Betto, and P. V. d. Braembussche, “The PROBA satellite star tracker performance,” Acta Astronaut. **56**(1), 153–159 (2005). [CrossRef]

**7. **E. F. Young, R. Mellon, J. W. Percival, K. P. Jaehnig, J. Fox, T. Lachenmeier, B. Oglevie, and M. Bingenheimer, “Sub-arcsecond performance of the ST5000 star tracker on a balloon-borne platform,” in *2012 IEEE Aerospace Conference* (IEEE, 2012), pp. 1–7.

**8. **I. S. Kruzhilov, “Evaluation of instrument stellar magnitudes without recourse to data as to star spectral classes,” Proc. SPIE **0635**, 063537 (2012).

**9. **W. Tan, S. Qin, R. M. Myers, T. J. Morris, G. Jiang, Y. Zhao, X. Wang, L. Ma, and D. Dai, “Centroid error compensation method for a star tracker under complex dynamic conditions,” Opt. Express **25**(26), 33559–33574 (2017). [CrossRef]

**10. **W. Wang, X. Wei, J. Li, and G. Wang, “Noise suppression algorithm of short-wave infrared star image for daytime star sensor,” Infrared Phys. Technol. **85**, 382–394 (2017). [CrossRef]

**11. **N. Truesdale, M. Skeen, J. Diller, K. Dinkel, Z. Dischner, A. Holt, T. Murphy, S. Schuette, and A. Zizzi, “DayStar: modeling the daytime performance of a star tracker for high altitude balloons,” in *51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition* (American Institute of Aeronautics and Astronautics, 2013), https://arc.aiaa.org/doi/abs/10.2514/6.2013-139. [CrossRef]

**12. **G. Wang, F. Xing, M. Wei, and Z. You, “Rapid optimization method of the strong stray light elimination for extremely weak light signal detection,” Opt. Express **25**(21), 26175–26185 (2017). [CrossRef] [PubMed]

**13. **M. Rex, E. Chapin, M. J. Devlin, J. Gundersen, J. Klein, E. Pascale, and D. Wiebe, “BLAST autonomous daytime star cameras,” Proc. SPIE **6269**, 62693H (2006). [CrossRef]

**14. **K. Ho and S. Nakasuka, “Novel star identification algorithm utilizing images of two star trackers,” in *2010 IEEE Aerospace Conference* (IEEE, 2010), pp. 1–10.

**15. **M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images,” Light Sci. Appl. **7**, 18006 (2018). [CrossRef]

**16. **J. Lu and L. Yang, “Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation,” Rev. Sci. Instruments **89**(5), 054501(2018). [CrossRef]

**17. **W. Wang, X. Wei, J. Li, and G. Zhang, “Guide star catalog generation for short-wave infrared (SWIR) All-Time star sensor,” Rev. Sci. Instruments **89**, 075003 (2018). [CrossRef]

**18. **M. Belenkii, D. G. Bruns, V. A. Rye, and T. Brinkley, “Daytime stellar imager,” U.S. Patent 7,349,804 B2 (Mar. 25, 2008).

**19. **Trex Enterprises Corporation Products/Services, “Optical GPS” (Trex Enterprises Corporation, 2016), http://www.trexenterprises.com/Pages/Products

**20. **N. A. Truesdale, K. J. Dinkel, Z. J. B. Dischner, J. H. Diller, and E. F. Young, “DayStar: Modeling and test results of a balloon-borne daytime star tracker,” in *IEEE Aerospace Conference* (IEEE, 2013), pp. 1–12.

**21. **M. J. O’Malley and E. O’Mongain, “Charge-coupled devices: frame adding as an alternative to long integration times and cooling,” Opt. Eng. **31**(3), 522–526 (1992). [CrossRef]

**22. **J. Yan, J. Jiang, and G. Zhang, “Dynamic imaging model and parameter optimization for a star tracker,” Opt. Express **24**(6), 5961–5983 (2016). [CrossRef] [PubMed]

**23. **J. Yan, J. Jiang, and G. Zhang, “Modeling of intensified high dynamic star tracker,” Opt. Express **25**(2), 927–948 (2017). [CrossRef] [PubMed]

**24. **R. A. Fowell, S. I. Saeed, R. Li, and Y.-W. A. Wu, “Mitigation of angular acceleration effects on optical sensor data,” U.S. Patent 6,863,244 (2005).

**25. **R. A. Fowell, R. Li, and Y.-W. A. Wu, “Method for compensating star motion induced error in a stellar inertial attitude determination system,” U.S. Patent 7,487,016 (2009).

**26. **L. Ma, D. Zhan, G. Jiang, S. Fu, H. Jia, X. Wang, Z. Huang, J. Zheng, F. Hu, W. Wu, and S. Qin, “Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions,” Appl. Opt. **54**(25), 7559–7566 (2015). [CrossRef] [PubMed]

**27. **D. H. Titterton and J. L. Weston, *Strapdown Inertial Navigation Technology* (The Institution of Engineering and Technology, 2004), chap. 12. [CrossRef]

**28. **P. Sturm, S. Ramalingam, J. P. Tardif, S. Gasparini, and J. Barreto, *Camera Models and Fundamental Concepts Used in Geometric Computer Vision* (Now Publishers, 2004), chap. 3.

**29. **J. D. Gautier, *GPS/INS Generalized Evaluation Tool (GIGET) for the Design and Testing of Integrated Navigation Systems* (Stanford University, 2003).