An algorithm for determining satellite track end points with subpixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the subpixel end point determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.
© 2011 Optical Society of America
The issue of autonomously detecting satellite and airplane tracks in images is by no means a new one. For decades, these tracks have been nothing more than a nuisance for astronomers, foreground artifacts that must be disposed of in the preprocessing of data, and several methods for identifying and removing them have been discussed in the literature. For instance, the Recognition by Adaptive Subdivision of Transformation Space algorithm [1] removes satellite streaks directly from images using a geometric approach that assumes the tracks are straight lines, and Storkey et al. [2] use the Random Sampling and Consensus algorithm to allow for postprocessing removal of curved tracks and scratches as well.
While these streaks may be a source of noise in the field of astronomy, for applications such as the Space Surveillance Network (SSN), they are the signal. A track from a satellite or piece of debris, along with time-stamp information, allows the SSN to make an equatorial angles-only determination of its orbit. One can conceive of several ways of obtaining the time-stamp information, but the most straightforward approach is to measure the start and end times of an exposure and extract the end points of the imaged track(s).
The precision of such a measurement, of course, is dependent on how well one can determine the track end points on the detector. As Earl notes, the error in detecting the end points may very well dominate the other sources of error in the measurement [3]. Increasing the detector resolution (number of pixels per unit area) to mitigate this error is not a viable option because doing so decreases the dwell time per pixel of the target, effectively lowering its signal-to-noise ratio (SNR). But even with low resolution detectors, subpixel information is still available since the time spent by the satellite “in” the pixel translates to intensity, so all hope is not lost.
In fact, several methods to obtain subpixel track end points are available. For instance, Levesque presents an algorithm for accurate end point detection that has been successfully used on images obtained with the Canadian Automated Small Telescope for Orbital Research system [4]. However, the problem with these methods is that they generally make the assumptions that (1) the track is straight and (2) previously obtained orbital information is available to predict the appearance of the streak in the newly acquired image.
The motivation behind the method that will be discussed in this paper is a mission called the Space-based Telescopes for Actionable Refinement of Ephemeris (STARE), for which neither of these assumptions is valid [5]. The purpose of STARE is to refine orbital information for satellites and debris by directly imaging them with CMOS imagers onboard a constellation of cube-satellites (CubeSats). The images acquired by a given sensor will be run through an algorithm in the onboard microprocessor that is tasked with extracting star and track end point coordinates and sending them to the ground (without the accompanying image). Since the attitude of the STARE satellites will not be precisely controlled, the telescopes may be rotating about the pointing axis, and any uncertainty in the initial orbits means the location of the tracks on the imager will not be well known. The STARE algorithm must therefore deliver subpixel end point determination for tracks with arbitrary curvature and location.
It should be emphasized that the algorithm is not concerned with detection of faint streaks, but rather high fidelity end point determination for streaks with ample SNR. Also, to avoid confusion while describing the algorithm in the following Sections, the term satellite will be reserved for the STARE CubeSat. The imaged debris or satellite will be referred to as the target.
2. Curved Target Tracks in STARE Images
During a STARE observation, the satellite will stare at a fixed star background and allow the target to streak across the field of view. Changes in the orientation of the satellite during the observation are unwanted since they could potentially reduce the dwell time per pixel of the stars and the target (the exception being the case where the motion of the satellite causes inadvertent rate tracking of the target). But rotation of the satellite about the two axes perpendicular to the telescope pointing is of less concern, because it simply adds to the transverse velocity component of the target and causes the stars to streak in a uniform manner across the detector. It will not produce curvature in the streak left by the target.
Rotation about the pointing axis, on the other hand, could potentially induce significant curvature. If the satellite has a rotational velocity ω about the pointing axis, which will be taken as z, and the target has velocity components (v_x, v_y) and coordinates

x_r(t) = x_0 + v_x t,  y_r(t) = y_0 + v_y t  (1)

in the nonrotating frame [6], then in the rotating detector frame the track traced out by the target is

x(t) = x_r(t) cos(ωt) + y_r(t) sin(ωt),  (2)

y(t) = −x_r(t) sin(ωt) + y_r(t) cos(ωt).  (3)
One can gain an appreciation for the form of Eqs. (2) and (3) by considering that, for a rotation large enough to carry the detector through a full turn during the exposure, they form the parametric representation of a spiral. Telescope angular velocities of that magnitude are not anticipated, so a spiral pattern should never be observed in STARE images. But the expected rotation rates are still large enough to make a Hough transform ineffective for basic detection and to create an error as large as two pixels, if a global linear fit is used, for a track that extends all the way across the image.
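To make the geometry concrete, the rotating-frame track can be sketched numerically. This is a minimal illustration, not the flight code; the function and parameter names are assumptions.

```python
import numpy as np

def track_points(x0, y0, vx, vy, omega, t):
    """Detector-frame path of a target moving linearly from (x0, y0) with
    velocity (vx, vy), seen by a sensor rotating at rate omega about z."""
    xr = x0 + vx * t                      # target path in the inertial frame
    yr = y0 + vy * t
    c, s = np.cos(omega * t), np.sin(omega * t)
    # rotate the inertial path into the rotating detector frame
    x = c * xr + s * yr
    y = -s * xr + c * yr
    return x, y

t = np.linspace(0.0, 1.0, 200)
# a modest rotation rate: the track is visibly curved but far from a spiral
x, y = track_points(100.0, 0.0, 50.0, 0.0, 0.2, t)
```

Setting omega to zero recovers a perfectly straight track, the limiting case in which a global linear fit would suffice.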
Fortunately, fitting the entire track is not necessary. As long as the relevant instrument and observation parameters are known reasonably well [7], the track end points (x_1, y_1) and (x_2, y_2) are sufficient to refine the orbit of the target. The primary intent of the STARE algorithm is to find these coordinates.
3. STARE End Point Determination Algorithm
The following Subsections follow the numbering in Fig. 1, which gives an overview of the STARE algorithm.
3A. Image Correction
Before the images are searched for stars and tracks, they must first be cleaned. Because the STARE algorithm identifies stars and tracks as a contiguous set of pixels above a noise threshold, T, preprocessing of the data is crucial to its success. The basic steps of the image correction, shown in box 1 of Fig. 1, are as follows.
- Sky Image or Background Subtraction. In the case of the STARE mission, 10 raw images slightly offset from each other will be median filtered to produce a sky image. Subtracting this sky image from a raw image very accurately removes both the dark current and the sky background. If a sky image is not available, modal subtraction or other methods of background subtraction can be used.
- Bad Pixel Masking and Interpolation. Bad pixels are problematic for thresholding. These pixels can easily be mapped during routine calibration of the detector and stored as a mask in nonvolatile memory. They are first zeroed in each of the background subtracted images and then corrected with a nearest neighbor filter that uses the local values and gradients across the pixel to fill in a reasonable value.
- Low Pass Filter. The corrected image is optionally smoothed using a Gaussian kernel with a FWHM on the order of 1–2 pixels. The smoothing helps ensure that the tracks are contiguous. If the bad pixel density becomes excessive, the kernel can be extended at the expense of increasing the error in end point estimation.
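The correction steps above can be sketched as follows. This is a simplified stand-in: bad pixels are filled with a local median rather than the gradient-aware nearest neighbor filter described above, and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def correct_image(raw, sky, bad_mask, fwhm=1.5):
    """Sky subtraction, bad pixel repair, and optional low pass filtering.

    raw, sky : 2-D arrays (raw exposure and median sky image)
    bad_mask : boolean array, True where a pixel is known to be bad
    fwhm     : width of the Gaussian smoothing kernel, in pixels
    """
    img = raw - sky                          # removes dark current + sky background
    # repair bad pixels with the median of their 3x3 neighborhood
    # (a simplification of the gradient-aware nearest neighbor filter)
    med = ndimage.median_filter(img, size=3)
    img = np.where(bad_mask, med, img)
    # optional low pass filter: Gaussian with FWHM of ~1-2 pixels
    sigma = fwhm / 2.355                     # FWHM = 2*sqrt(2 ln 2) * sigma
    return ndimage.gaussian_filter(img, sigma)
```

The Gaussian FWHM maps directly to the kernel that the matched filter of step 4 must later reproduce.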
3B. Object Detection
After the image is corrected, it is searched for contiguous sets of pixels that have a value above T. This step is shown in box 2 of Fig. 1. With both real and simulated images, a threshold of a few times RN, where RN is the read noise of the detector, typically produces good results. If the sky noise or dark current shot noise dominates the read noise, this must be taken into account in setting T.
Once a contiguous set of pixels has been identified, it is characterized as a star, track, or unknown object (such as a delta ray or Compton-scattered worm) based upon its ellipticity (e) and the number of pixels (N) it contains. The appropriate cut values depend on the optical system and detector used, but for STARE, a cut requiring a high ellipticity and N greater than 20 pixels should effectively identify all real tracks. A perfectly straight track has the maximum possible ellipticity; the margin below it allows for curvature and the possibility of overlapping stars or cosmic rays. The chance of a muon hit producing a track greater than 20 pixels long is extremely low.
Confusion of cosmic rays and stars could potentially be more troublesome. For instance, the STARE optical system produces a subpixel point spread function (PSF), and most stars will actually appear as 1–4 pixel points rather than the nice Gaussian profiles encountered in astronomy applications. Based on previous space-based measurements, though, a significant number of 1–4 pixel cosmic ray events are not anticipated in the STARE exposures [8,9]. At low geomagnetic latitudes, about 0.706 events per exposure are expected, and at higher latitudes this number may go up to 12. With these rates, an astrometric solution from the list of star centroids is possible even with the contamination.
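A minimal version of this detection and classification step might look like the following sketch; the moment-based ellipticity estimate and the specific cut values (n_track, e_track) are illustrative assumptions, not the flight values.

```python
import numpy as np
from scipy import ndimage

def find_objects(img, T, n_track=20, e_track=0.9):
    """Label contiguous pixels above threshold T and classify each blob
    as a 'star' or 'track' from its pixel count and ellipticity."""
    labels, n = ndimage.label(img > T)       # contiguous sets above threshold
    objects = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        npix = xs.size
        # ellipticity from second moments: e = 1 - b/a for semi-axes a >= b
        cov = np.cov(np.vstack([xs, ys])) if npix > 1 else np.zeros((2, 2))
        evals = np.sort(np.linalg.eigvalsh(cov))
        a = np.sqrt(max(evals[1], 1e-12))
        b = np.sqrt(max(evals[0], 0.0))
        e = 1.0 - b / a
        kind = 'track' if (npix >= n_track and e >= e_track) else 'star'
        objects.append((kind, npix, e, xs, ys))
    return objects
```

A perfectly straight row of pixels has b = 0 and hence e = 1, consistent with the ellipticity argument above.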
3C. Iterative Local Fitting at Track End Points (Transverse Degree of Freedom)
The next step, step 3, is to find the end points for each of the tracks identified in step 2 above. As previously mentioned, applying a global linear fit to the track to find its end points may result in large errors. But a local linear fit to the track at each end point can still help in constraining their possible locations. The question that then arises is how many pixels to use in the fit. If too many are used, the curvature of the track will force the slope toward the global average. If too few are used, the estimate is vulnerable to detector noise, bad pixels, etc.
One might consider using the local second derivative of the track as a criterion for choosing the number of pixels, but such an estimate is itself highly sensitive to noise.
A solution to the problem is to use an iterative weighted least squares fit to each track end point until the rms deviation of the distance from the included track pixels to the line is below a certain threshold, T_rms. Starting with all pixels identified in the track, a line y = mx + b is fit by minimizing

χ² = Σ_i w_i [y_i − (m x_i + b)]²,  (5)

where (x_i, y_i) are the included pixel coordinates and the weights w_i are either unity or the pixel intensities.
The threshold and whether intensity weighting is used in Eq. (5) will depend on the potential curvature and actual PSF of the system. Figure 2 shows results for a simulated track analyzed without weighted fitting. The eventual error in end point estimation was less than 0.1 pixels in both x and y. One can imagine extreme cases in which the target traces out a path perfectly centered over the dividing boundary between two rows of pixels, but this will be a very rare occurrence.
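The iterative trimming described above can be sketched as follows, assuming an unweighted fit, a track parameterized as y(x), and a placeholder threshold t_rms; pixels farthest from the end point are dropped until the remainder is consistent with a line.

```python
import numpy as np

def local_end_fit(xs, ys, end_x, t_rms=0.5, min_pix=5):
    """Iterative local linear fit near one track end point.

    xs, ys : pixel coordinates of the detected track
    end_x  : x coordinate of the roughly estimated end point
    Returns the local slope m, intercept b, and pixels retained n.
    """
    order = np.argsort(np.abs(xs - end_x))    # nearest-to-end pixels first
    xs, ys = xs[order], ys[order]
    n = xs.size
    while True:
        m, b = np.polyfit(xs[:n], ys[:n], 1)  # unweighted least squares line
        rms = np.sqrt(np.mean((ys[:n] - (m * xs[:n] + b)) ** 2))
        if rms < t_rms or n <= min_pix:
            return m, b, n
        n -= 1                                # drop the farthest pixel and refit
```

For a straight track all pixels survive the cut, while for a curved track the fit automatically shrinks to a window short enough that curvature no longer biases the local slope.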
3D. Matched Filter at Track End Points (Longitudinal Degree of Freedom)
Once the track has been fit at each end point, the path the target took along the detector near that point is well approximated. What is left is to determine precisely where the target was along this path at the start (or end) of the exposure (step 4). Simply recording the first or last pixel with a value above T will obviously result in errors. Accurately determining the location of the target requires taking into account the PSF of the optical system and the kernel used in the low pass filter of step 1.
To do this, a region of interest (ROI) of n × n pixels around the roughly estimated end point is first considered. An example ROI is shown in Fig. 3a. The goal is to reproduce this ROI with a simulated one obtained by convolving a line segment with a filter that matches the PSF and kernel described above. The form of the line segment is already known from the fit obtained in step 3. Its length will indicate exactly where the end point is located.
After dividing each simulated pixel into r subpixels, a short line segment is created at the edge of the simulated ROI from which the track emerges. The segment is convolved with the filter to produce a track in the simulated ROI, as shown in Fig. 3b. The simulated ROI is then subtracted from the real one and the residual is squared. The length of the line segment is increased by 1/r pixels, and the process is repeated, producing one squared residual per trial length. The minimum of these, as shown in Fig. 3d, indicates where the end point is located.
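The length search can be illustrated with a simplified forward model. Here a Gaussian stands in for the combined PSF/kernel filter, the track is assumed to enter the ROI at its corner, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy import ndimage

def refine_end(roi, angle, psf_sigma, r=5):
    """Grow a subpixel line segment in steps of 1/r pixel, blur it to match
    the PSF/kernel, bin it back to detector pixels, and return the segment
    length (in pixels) whose simulated ROI best matches the real one."""
    n = roi.shape[0]
    best_len, best_resid = 0.0, np.inf
    for k in range(1, n * r + 1):
        # candidate segment of length k/r pixels on an r-times finer grid
        fine = np.zeros((n * r, n * r))
        for step in range(k):
            s = step / r                              # distance along the track
            ix = int(round(s * np.cos(angle) * r))
            iy = int(round(s * np.sin(angle) * r))
            if 0 <= iy < n * r and 0 <= ix < n * r:
                fine[iy, ix] = 1.0
        sim = ndimage.gaussian_filter(fine, psf_sigma * r)
        sim = sim.reshape(n, r, n, r).sum(axis=(1, 3))  # bin back to pixels
        sim *= roi.sum() / max(sim.sum(), 1e-12)        # match total flux
        resid = np.sum((roi - sim) ** 2)                # squared residual
        if resid < best_resid:
            best_len, best_resid = k / r, resid
    return best_len
```

Because the candidate lengths step by 1/r pixel, the grid resolution r directly sets the quantization floor on the end point error.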
4. Results for Simulated and Real Images
The results from testing the STARE algorithm on real images obtained by ground-based telescopes are encouraging. For these images, a median sky frame and bad pixel map could not be obtained, but subtraction of the mode sufficed for image correction. In Fig. 4, tracks found in three separate Oceanit images are shown after being analyzed by the algorithm. The ends of the green line segment indicate where the extracted end points are located. Although there are no official coordinates for these reported in the Oceanit data, inspection by eye shows that they line up well with the locations expected from the 1.9 pixel FWHM PSF.
Extensive testing on simulated tracks and star fields has also been performed. These tests are especially useful because the measured end point can be compared to the true end point to determine the accuracy of the algorithm as a function of track length, orientation, brightness, etc. To comprehensively measure the error in the estimated end points, a run was performed in which 400 images were generated and analyzed. Real star fields were sampled and then tracks with random orientation and length were generated in a number of different brightness intervals. As a proxy for brightness, the quantity of photons per micrometer, which is the x axis of Fig. 5, was used. The reason for this is that a track of a given brightness will produce varying SNRs depending on how it is oriented relative to the detector. For instance, if a track is centered over the boundary between two rows of pixels, it will produce roughly half the SNR it would when centered directly over one of the two rows.
On the y axis of Fig. 5 is the total error in the end point estimate, ε = (ε_x² + ε_y²)^1/2, where ε_x and ε_y are simply the differences between the real and measured coordinates. The plot shows that at a level of about 600 photons per micrometer, the error approaches a near constant value of about 0.14 pixels. This is expected from the choice of subpixel resolution r for the simulated grid, which should produce an error of roughly 0.1 pixels in each coordinate (the step in length at each iteration is 1/r pixels). The value of 600 photons per micrometer corresponds to an SNR in the range of 6–12, depending on the track orientation. One can see that at a value of 250 photons per micrometer, which is roughly an SNR of 2–4, the error is slightly larger. But it is still subpixel and will serve well for the purpose of orbital refinement.
The results in the previous Section show that the STARE algorithm is capable of extracting accurate track end points for both straight and curved tracks. The error of 0.14 pixels obtained for the simulated data can be decreased further by increasing the number of grid points per pixel r at the expense of increased computation time as long as the track has a sufficient SNR. The accuracy cannot be improved indefinitely for very bright objects, of course, as it will be limited by the dynamic range of the detector, dwell time per pixel of the target, etc.
One may point out that a disadvantage of the technique is that it requires the track to be a contiguous set of pixels. A large fraction of satellites oscillate in brightness as they cross the sky, and their signal may fall below the noise threshold as a result. However, as long as the value of N is set low enough, the only implication is that the end points for a number of subtracks will be reported instead of just two (this would also be the case if the track happens to cross an extensive region of bad pixels). If the target happens to reach a minimum in brightness at the start and end of the exposure, there is no hope of accurately measuring the end points anyway.
Another important point to consider is that, although it will not be available in the STARE mission, a priori knowledge of the target could potentially be used to enhance the performance of the algorithm. A rough estimate of the velocity and position at the exposure start can be used in Eq. (3) to help determine the number of pixels used to fit each end of the track directly or to help in choosing the fitting threshold. And if the information is accurate enough, it may be possible to generate a matched filter for initially finding the track even in the case of high curvature.
It should also be mentioned that the numbers presented in the previous Section are for a very idealized scenario. A number of other errors (global positioning system measurement errors, timing errors, attitude control uncertainty, etc.) come into play in the game of orbital refinement. The simulations have neglected these. The simulations also ignore the low fill factor of the CMOS detector that will be used for the STARE pathfinder mission. Because the pixels are not sensitive over their entire area, information is lost every time the target spot passes over the pixel boundaries, and this alone can produce 0.3–0.7 pixel errors [10]. As long as these systematics remain reasonably well behaved, though, the subpixel results provided by the STARE algorithm will allow for orbital refinement.
An algorithm for determining the end points of satellite and debris tracks in space-based images has been presented. The algorithm is capable of delivering subpixel accuracy even for the case of curved tracks resulting from rotation of the imaging spacecraft. The underlying methodology and motivation for the algorithm have been discussed, and results for both real and simulated data showing high quality performance have been presented. Results from real data obtained by the STARE satellites will soon follow.
1. H. Ali, C. Lampert, and T. Breuel, “Satellite tracks removal in astronomical images,” in Progress in Pattern Recognition, Image Analysis and Applications, Lecture Notes in Computer Science (Springer, 2006), Vol. 4225, pp. 892–901. [CrossRef]
2. A. J. Storkey, N. C. Hambly, C. K. I. Williams, and R. G. Mann, “Cleaning sky survey databases using Hough transform and renewal string approaches,” Mon. Not. R. Astron. Soc. 347, 36–51 (2004). [CrossRef]
3. M. A. Earl, “Determining the range of an artificial satellite using its observed trigonometric parallax,” J. R. Astron. Soc. Can. 99, 50–55 (2005).
4. M. Levesque, “Automatic reacquisition of satellite positions by detecting their expected streaks in astronomical images,” presented at the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, Hawaii, 1–4 Sept. 2009.
5. L. Simms, V. Riot, W. De Vries, S. Olivier, A. Pertica, B. Bauman, D. Phillion, and S. Nikolaev, “Optical payload for the STARE mission,” Proc. SPIE 8044, 804406 (2011).
6. Note that a simplification has been made by approximating the path of the target as a straight line during the exposure, which it is not.
7. This information will be available from calibration data taken before the observation.
8. AMS Collaboration, “Protons in near Earth orbit,” Phys. Lett. B 472, 215–226 (2000). [CrossRef]
9. D. Shaw and P. Hodge, “Cosmic ray rejection in STIS CCD images,” Instrument Science Rep. STIS 98-22 (Space Telescope Science Institute, 1998).
10. While the STARE pathfinder satellites will suffer from this problem, the future STARE constellation CubeSats will carry high quality sensors that will not.