
Near earth space object detection using parallax as multi-hypothesis test criterion


Abstract

The US Strategic Command (USSTRATCOM) operated Space Surveillance Network (SSN) is tasked with Space Situational Awareness (SSA) for the U.S. military. This system is made up of electro-optic sensors, such as the Ground-based Electro-Optical Deep Space Surveillance (GEODSS) system, and RADAR-based sensors, such as the Space Fence. Gaps remain in the tracking of Resident Space Objects (RSOs) in Geosynchronous Orbit (GEO) due to limitations of the SST and GEODSS system implementations. This research explores a reliable, ground-based technique used to quickly determine an RSO's altitude from a single or limited set of observations. Implementation of such sensors into the SSN would mitigate GEO SSA performance gaps. The research entails a method to distinguish between the point spread function (PSF) observed from a star and the PSF observed from an RSO by using multi-hypothesis testing with parallax as the test criterion. Parallax is the effect whereby an observed object appears to shift when viewed from different positions. This effect is explored by generating PSFs from telescope observations of space objects at different baselines. The research has shown that the PSF of an RSO can be distinguished from that of a star using single, simultaneous observations from reference and parallax sensing telescopes. This report validates these techniques with both simulations and experimental data from the SST and Naval Observatory sensors.

1. Introduction

SSA is paramount in any military operation because the warfighter's success hinges on the space environment remaining a sanctuary; that dependence is a serious weakness of the US military's current strategy. The official USAF description of the SSA mission is that "SSA encompasses intelligence on adversary space operations; surveillance of all space objects and activities; detailed reconnaissance of specific space assets; monitoring space environmental conditions; monitoring cooperative space assets; and conducting integrated command, control, communications, processing, analysis, dissemination, and archiving activities" [1]. In reality, the space operational environment of the future will be contested, degraded and operationally limited, which necessitates that the US maintain comprehensive situational awareness of it at all times [2]. These challenges necessitate the evolution of SSA techniques to ensure that there are no gaps in the SSN allowing unpredicted collisions or exploitation by adversaries. Protecting our nation's space-based reconnaissance and command, control and communication (C3) assets is paramount for our ability to project power to the warfighter.

Certain paradigms are prevalent in academia and industry for the characterization of space objects. The LINEAR (Lincoln Near-Earth Asteroid Research) program, for example, has used velocity matched filtering to classify asteroids [3]. Velocity matched filtering in general entails monitoring an object over a period of time to decide whether the object is of interest [4]. The velocity matched filtering technique may miss objects that are moving with un-hypothesized velocities or orbits. The technique also requires a detection in multiple frames, so dim objects with wavering brightness can be lost, and objects that are not bright enough over an extended time can be missed as well. If the SSN contained a sensor with the capability to determine an RSO's state vector from single or limited observations, the sensor would produce smaller data sets and potentially detect dimmer objects. Even better would be a method to utilize a network of ground-based electro-optic sensors and leverage their simultaneous data sets to produce a more perceptive detection scheme for RSOs. Space-based assets may be more capable of detecting dimmer objects over extended periods but suffer from cost, reliability and C2 limitations. The need for a cost-effective, ground-based technique to quickly determine the RSO state vector of an observation from single or limited data sets is apparent. A network of sensors may present itself as the ideal machinery for such a technique.

2. Background

Spacewatch was the first program dedicated to improving the detection, tracking and cataloging of space objects. Spacewatch was a scientific success for the astronomical community as it was the first to use a CCD to actively scan and survey the sky. Prior to this program, astronomers and those working in the space community were using photographic plates to image and detect objects. The use of CCDs led the program to develop the first software algorithms designed to improve space object detection in 1990 [5]. Since then, CCDs have greatly increased the number of smaller and fainter space objects detected, tracked and cataloged, aided by significant advances in computing power, memory and storage. These advances have resulted in further research programs to develop advanced algorithms to detect faint space objects.

2.1 Traditional space object detection techniques

One of the earlier programs dedicated to improving detection algorithms was the mid-1990's Lincoln Near Earth Asteroid Research (LINEAR) program. The LINEAR detection algorithm utilizes imagery obtained from a ground-based electro-optic telescope to detect space objects using a binary hypothesis test (BHT) point detector [3]. Currently the SST and other assets within the SSN use a modified version of the BHT point detector developed for LINEAR to make a detection decision on a single pixel in a given frame of data [3]. The SNR level from the point detector (PD), shown in Eq. (1), is calculated by examining the received intensity at the point d(x−x_o, y−y_o) in a single frame of data. This algorithm is designed to create a binary mask identifying pixels that represent an object with an intensity over a set threshold. This method of detection is optimal under the assumption that the noise is Gaussian distributed. Thus, when the background, B, is subtracted and the result divided by the standard deviation, σ, the result is the number of standard deviations the intensity of the tested pixel is from the mean.

$$\mathrm{SNR}_{PD} = \frac{d(x-x_o,\, y-y_o) - B}{\sigma}. \tag{1}$$

The SNR from the point detector is the sufficient statistic used in the BHT. The two hypotheses of the BHT are the null hypothesis (H0), that an object is not located at the tested pixel and the alternative hypothesis (H1), that an object is located at the tested pixel point. The SNR is compared to a set threshold to determine if an object potentially exists at the tested point.
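As a concrete illustration of Eq. (1), a minimal Python sketch of the point detector and its binary hypothesis test is given below. The background and noise estimators (median and standard deviation of the window) and the threshold value are assumptions for illustration, not the exact SST pipeline settings.

import numpy as np

def point_detector_snr(frame, x0, y0):
    # Eq. (1): background-subtracted intensity at the tested pixel,
    # expressed in units of the local noise standard deviation.
    background = np.median(frame)   # simple background estimate (assumption)
    sigma = np.std(frame)           # local noise estimate (assumption)
    return (frame[y0, x0] - background) / sigma

def point_detector_decision(frame, x0, y0, threshold=6.0):
    # Binary hypothesis test: choose H1 if the SNR exceeds the threshold, else H0.
    return point_detector_snr(frame, x0, y0) > threshold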

While computationally simple, the point detector's performance suffers when the intensity from the object is not concentrated in a single pixel of the detector. The intensity of the object can be spread across multiple pixels due to atmospheric turbulence. This reduces the intensity in the tested pixel and produces a lower SNR, which degrades the performance of an algorithm dependent on testing each individual pixel. To reduce the probability of false alarm, the SST utilizes a modified version of the LINEAR algorithm across multiple frames of data to detect unknown space objects [5].

A more computationally complex algorithm known as the matched filter has been developed to improve space object detection performance. This algorithm is used by the highly successful Pan-STARRS telescope for making space object detection decisions [6]. A matched filter approach is based on matching the observed data with the expected PSF. The expected PSF can be determined from measurable statistical parameters of the atmosphere or can be measured by viewing a nearby star [7]. This approach assumes that the object is either small enough or far away enough that it is essentially a point source when viewed through the optical system. The expected PSF is correlated with the data to determine if an object is present at a given pixel point. However, unlike the point detector, multiple pixels are used to make a detection decision.

A matched filter algorithm, shown in Eq. (2), is implemented in the SExtractor software suite, which is widely used within the SSA community for detecting, measuring and classifying objects in astronomical images [8]. The detection piece of this program is a filter designed to detect faint space objects, assuming Gaussian distributed noise, by correlating the data with the expected PSF and dividing by the standard deviation, σ_MF, of the noise in the N × N pixel region. SNR_MF is the sufficient statistic compared to a threshold in a BHT to make a detection decision.

$$\mathrm{SNR}_{MF} = \frac{\sum_x \sum_y \big(d(x-x_o,\, y-y_o) - B\big)\, h(x,y)}{\sigma_{MF}}. \tag{2}$$
$$\sigma_{MF} = \sqrt{\frac{1}{N}\sum_x \sum_y \big(d(x-x_o,\, y-y_o) - B\big)^2}. \tag{3}$$
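A short sketch of the matched-filter statistic of Eqs. (2)-(3) follows, assuming the expected PSF h has already been generated for the window and that the background is estimated with a median; as written, the noise term uses the mean over all window pixels, which is one reading of the 1/N normalization in Eq. (3).

import numpy as np

def matched_filter_snr(window, h):
    # Eqs. (2)-(3): correlate the background-subtracted window with the
    # expected PSF and normalize by the noise standard deviation.
    background = np.median(window)            # background estimate (assumption)
    residual = window - background
    sigma_mf = np.sqrt(np.mean(residual**2))  # Eq. (3) noise estimate
    return np.sum(residual * h) / sigma_mf    # Eq. (2)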

Each of these detection algorithms is designed for long-exposure data with low background light. A long-exposure image is generally used because it allows a lower-SNR object to be detected while averaging out lower-order atmospheric turbulence and random spikes in intensity due to the Poisson nature of the background light. In this paper, we restrict our analysis to RSOs in geosynchronous or geostationary orbits as they are of high importance to national security, present a significant challenge to detect with ground-based RADAR such as the Space Fence, and do not move significantly during an integration time of 10 milliseconds (the minimal time associated with the use of long-exposure atmospheric models [7, pp. 434]).

2.2 Atmospheric model

In many ground-based SSA telescope systems, the integration time is long enough that the system operates in the long-exposure regime. The long-exposure PSF and optical transfer function (OTF) are used to model the average size and spatial frequency content of a point source object viewed through a telescope [7]. The long-exposure OTF for a circular aperture telescope is defined as

$$H_L(f_x, f_y) = \exp\!\left[-\left(\frac{\bar{\lambda}\, z\, \sqrt{f_x^2 + f_y^2}}{r_o}\right)^{5/3}\right], \tag{4}$$

where λ̄ is the mean wavelength, z is the focal length, and f_x and f_y are spatial frequencies. The Fried atmospheric seeing parameter, r_o, is a measure of the quality of optical propagation through the atmosphere [9]. Values of 5–15 cm are observed at most operational sites. As defined in Eq. (5), the long-exposure OTF can be converted into the spatial domain to obtain the long-exposure PSF, h(x,y), using the inverse Fourier transform of the OTF. This represents the spatial image expected when viewing a distant point source.

$$h(x,y) = \mathcal{F}^{-1}\{H_L(f_x, f_y)\}. \tag{5}$$

The long exposure PSF averages the random phase fluctuations due to the atmosphere to produce a spatially large PSF. On average, a long exposure PSF will contain zero tilt and will be an even and symmetric function.
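The long-exposure PSF of Eqs. (4)-(5) can be generated numerically by evaluating the OTF on a spatial-frequency grid and applying an inverse FFT. The sketch below is illustrative; the grid size, sample spacing, wavelength, focal length and Fried parameter are assumed values.

import numpy as np

def long_exposure_psf(n=256, dx=5e-6, lam=550e-9, z=0.5, r0=0.10):
    # n: grid size; dx: detector sample spacing [m]; lam: mean wavelength [m];
    # z: focal length [m]; r0: Fried parameter [m] (all assumed values).
    fx = np.fft.fftfreq(n, d=dx)                       # spatial-frequency axis
    FX, FY = np.meshgrid(fx, fx)
    rho = np.hypot(FX, FY)
    H = np.exp(-(lam * z * rho / r0) ** (5.0 / 3.0))   # Eq. (4), long-exposure OTF
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(H)))     # Eq. (5), inverse transform
    return psf / psf.sum()                             # normalize to unit sum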

2.3 PSF generation

The goal of this research effort is to determine whether an observed object is an RSO, and it is hypothesized that distant objects such as stars will not produce tilt aberrations detectable by any observation equipment on Earth. This is because the phase-front from a star is flat across very large distances. Parallax is detected by measuring the tilt aberration in a phase-front that is not flat across the aperture of an observing telescope. Although GEO objects produce a limited amount of phase-front curvature, the curvature across a single aperture is so small that even massive telescopes would not be able to detect any parallax on their own. An experimental setup to test this hypothesis is to utilize a reference telescope pointed directly at the detected object and a secondary telescope pointed straight up to detect any parallax effects in the phase-front, as seen in Fig. 1. Such a setup provides the large baseline needed to detect parallax from RSO observations.

Fig. 1 Visual Depiction of Telescope Arrangement for Parallax Detection.

Using the Fraunhofer approximation, the phase from a distant but not infinitely far object is propagated to a plane containing both the reference telescope and the parallax sensing telescope. If there is curvature in the phase of an observed object across the telescope aperture, the resulting PSF will not be located at the center of the CCD due to the tilt aberration in the phase. Equation (6) shows the mathematical form of the field, U', at the aperture of both the reference and the parallax sensing telescopes.

$$U'(x', y') = \frac{e^{j2\pi z/\lambda}\; e^{j\pi(x'^2 + y'^2)/\lambda z}}{j\lambda z}. \tag{6}$$

Now that the field in the plane of the two telescopes has been obtained, the PSFs from each telescope need to be generated. Because of the small distance between the aperture and CCD planes, the Fresnel approximation is utilized. For the reference telescope, the PSF is the Fourier transform of the aperture, located at the center of the CCD array behind the telescope. This occurs because the quadratic term in the integrand of the Fresnel propagator is canceled by the phase delay of the lens, and it is known that the PSF will be at the focal point at the center of the CCD. However, this is not the case for the parallax sensing telescope, and special considerations are needed to generate its PSF. A new coordinate system is created, centered on where the PSF should be located as predicted by geometric optics, as seen in Fig. 2.

Fig. 2 New Coordinate System for Non-Paraxial Propagation.

The field at the detector plane of the parallax sensing telescope is computed using an off-axis Fourier propagation tool introduced in [10]. In Eq. (7) the field, U, is the result of a Fourier transform of the field at the aperture, U’, and the phase aberrations generated by the off-axis propagation geometry. The term Ro(xs,ys) is the distance from every point in the aperture plane to the point (xd,yd) in the detector plane [9].

$$U(x_d, y_d) = \frac{1}{j\lambda z}\iint U'(x_s, y_s)\, e^{j2\pi R_0(x_s, y_s)/\lambda}\, e^{-j2\pi(x_d x_s + y_d y_s)/\lambda z}\, dx_s\, dy_s. \tag{7}$$
The PSF, ho(xd,yd), is obtained by taking the squared magnitude of U and normalizing it so it sums to one.
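To make the off-axis step concrete, the sketch below evaluates the reconstructed Eq. (7) by brute-force summation over the aperture grid for each detector pixel. It is an illustrative, unoptimized stand-in for the propagation tool of [10]; the aperture field U_ap, the coordinate vectors and the sign convention on the Fourier kernel follow the equation as reconstructed here.

import numpy as np

def off_axis_psf(U_ap, xs, ys, xd, yd, lam, z):
    # U_ap: complex field across the aperture (e.g., Eq. (6) evaluated over the pupil).
    # xs, ys: aperture-plane coordinates [m]; xd, yd: detector-plane coordinates [m]
    # centered on the geometrically predicted PSF location (Fig. 2).
    dxs, dys = xs[1] - xs[0], ys[1] - ys[0]
    XS, YS = np.meshgrid(xs, ys)
    U_det = np.zeros((len(yd), len(xd)), dtype=complex)
    for iy, ydi in enumerate(yd):
        for ix, xdi in enumerate(xd):
            # R0: distance from every aperture point to this detector pixel
            R0 = np.sqrt(z**2 + (XS - xdi)**2 + (YS - ydi)**2)
            kernel = np.exp(1j * 2 * np.pi * R0 / lam) \
                   * np.exp(-1j * 2 * np.pi * (xdi * XS + ydi * YS) / (lam * z))
            U_det[iy, ix] = np.sum(U_ap * kernel) * dxs * dys / (1j * lam * z)
    psf = np.abs(U_det)**2
    return psf / psf.sum()   # normalized PSF h_o(x_d, y_d)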

3. Algorithm development

Space object detection algorithms utilize both single and multiple spatial images obtained from ground-based telescopes. Many SSA platforms use multiple frames in their processing chains; however, they begin by detecting on a single frame and use the multiple frames of follow-up data to confirm or reject the detection decision. Additionally, multiple frames can be tested to reduce the false alarm rate to an acceptable level. This research is focused on improving the ability to detect a dim space object from a single frame of data.

To derive the LRT, the joint probability density function of the two telescopes' data is utilized; because the telescopes are geographically separated systems, it is assumed that their independently collected data are also independently distributed. The joint probability density function of two independent random variables is the product of the two marginal probability density functions. The LRT, Λ, is formed by taking the ratio of the product of these two distributions given the condition that an observation is present in the data, H1, in the numerator, and the condition that an observation is not present in the data, H0, in the denominator [11].

$$\Lambda_i = \frac{P\big(d_1(x,y)\,|\,H_i\big)\; P\big(d_2(x,y)\,|\,H_i\big)}{P\big(d_1(x,y)\,|\,H_0\big)\; P\big(d_2(x,y)\,|\,H_0\big)} \;\gtrless\; 1. \tag{8}$$

In this equation d_1 and d_2 represent images gathered by the first and second telescopes. Because the parallax sensing telescope is pointed straight up rather than directly at an observation, tilt aberrations will affect the location of its PSF depending on whether a stellar object or an RSO is being observed. The test PSFs are defined as h_ref(x,y) for the reference telescope, h_star(x,y) for stellar observations and h_NEO(x,y) for RSOs. There are now two hypotheses given an observation in a frame of data: the hypothesis that an object has been detected in a given frame of data and that object is an RSO, (H1), and the hypothesis that an object has been detected in a given frame of data and that object is a stellar object, (H2). The null hypothesis, (H0), is the hypothesis that no object is observed and the frame or pixel only contains background data.

In order to compute the general LRT Λ_i, the joint probability density function of the two frames of data is assumed to be Gaussian in both the H_i and H_0 cases. The two joint probability density functions are used to form a ratio, and the natural logarithm is taken to produce the relationship shown in Eq. (9). In this equation, the ratio of S_2 to S_1 represents the ratio of the light-gathering power of telescope two to that of telescope one, which can be approximated by the ratio of their aperture areas.

$$\Lambda_i = \sum_{x=1}^{N}\sum_{y=1}^{N}\left[\frac{h_{ref}(x,y)\,\big(d_1(x,y) - B_1\big)}{\sigma_1} + \frac{S_2\, h_i(x,y)\,\big(d_2(x,y) - B_2\big)}{S_1\,\sigma_2}\right]. \tag{9}$$

Substituting the PSFs associated with each hypothesis into Eq. (9), and normalizing each sum by the standard deviation of the corresponding LRT, σ_Λi, the following LRTs are obtained:

$$\mathrm{SNR}_1 = \sum_{x=1}^{N}\sum_{y=1}^{N}\left[\frac{h_{ref}(x,y)\,\big(d_1(x,y) - B_1\big)}{\sigma_{\Lambda_1}\,\sigma_1} + \frac{S_2\, h_{NEO}(x,y)\,\big(d_2(x,y) - B_2\big)}{S_1\,\sigma_2\,\sigma_{\Lambda_1}}\right] \gtrless \tau. \tag{10}$$
$$\mathrm{SNR}_2 = \sum_{x=1}^{N}\sum_{y=1}^{N}\left[\frac{h_{ref}(x,y)\,\big(d_1(x,y) - B_1\big)}{\sigma_{\Lambda_2}\,\sigma_1} + \frac{S_2\, h_{star}(x,y)\,\big(d_2(x,y) - B_2\big)}{S_1\,\sigma_2\,\sigma_{\Lambda_2}}\right] \gtrless \tau. \tag{11}$$

Detection decisions are made by comparing the SNR for each hypothesis to a common threshold and either picking the hypothesis with the higher SNR as the detection decision or if neither hypothesis SNR exceeds the threshold, then the null hypothesis is selected.
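A sketch of the resulting decision rule is given below: Eq. (9) is evaluated once with the RSO PSF and once with the stellar PSF, each sum is normalized by its own standard deviation as in Eqs. (10)-(11), and the larger statistic is compared with the common threshold. The background, noise and σ_Λ estimates are assumed to be supplied by a separate calibration step.

import numpy as np

def lrt(d1, d2, h_ref, h_i, S_ratio, sigma1, sigma2, B1, B2):
    # Eq. (9): background-subtracted correlation from each telescope, with the
    # parallax-telescope term scaled by the light-gathering ratio S2/S1.
    term1 = np.sum(h_ref * (d1 - B1)) / sigma1
    term2 = S_ratio * np.sum(h_i * (d2 - B2)) / sigma2
    return term1 + term2

def decide(d1, d2, h_ref, h_neo, h_star, S_ratio, sigma1, sigma2, B1, B2,
           sigma_L1, sigma_L2, tau):
    snr1 = lrt(d1, d2, h_ref, h_neo,  S_ratio, sigma1, sigma2, B1, B2) / sigma_L1  # Eq. (10)
    snr2 = lrt(d1, d2, h_ref, h_star, S_ratio, sigma1, sigma2, B1, B2) / sigma_L2  # Eq. (11)
    if max(snr1, snr2) < tau:
        return "H0: background only"
    return "H1: RSO" if snr1 >= snr2 else "H2: star"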

4. Performance metrics and experimental setup

Simulated and experimental data were used to analyze the performance of the proposed space object detection algorithm. This section describes the setup used to collect the two data sets in detail. In practice, it is likely that a wide field of view (FOV) camera capturing data on a telescope will contain many thousands of objects. These objects will include stars, satellites and potentially space debris and will all have varying levels of intensity. These objects are treated identically by the algorithm since they would appear as point sources to the optical system. Additionally, the frame of data collected by the optical system is reduced to only test a small subset of the entire frame. This increases the likelihood that multiple objects do not exist within the subset windowed data while decreasing the computational complexity involved in processing large frames of data.

This approach is used with other space object detection algorithms. The data collected from the SST contains 6144 × 4096 pixels; however, subsets or windows as small as 15 × 15 pixels centered on the test location are used in object detection algorithms to determine if there is an object in the center of the window [5]. A window of this size allows a PSF to be contained within the window while providing enough pixels for the background statistics to be calculated. Operationally, the 15 × 15 window slides across the entire image as each individual pixel is tested. The window used for testing this algorithm was set at 20 × 20 pixels. Larger window sizes allow for potentially better computation of background statistics since more samples are used, but a larger window may also incorporate more optical sources, which can hamper the accurate calculation of background statistics. These two competing factors drive the choice of window size, with no obviously optimal choice.
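A minimal sketch of the windowing step described above (border handling and the choice of candidate pixels are assumptions):

import numpy as np

def extract_window(frame, x0, y0, size=20):
    # Return a size-by-size window centered on the tested pixel (x0, y0).
    half = size // 2
    return frame[y0 - half : y0 + half, x0 - half : x0 + half]

# Operationally the window slides across the frame so that every pixel is tested:
# for y0 in range(half, frame.shape[0] - half):
#     for x0 in range(half, frame.shape[1] - half):
#         window = extract_window(frame, x0, y0)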

4.1 Performance metrics

The objective of the proposed algorithm is to detect whether an object is present in a given frame of data and to correctly distinguish whether that detection is of a satellite or a star. The three hypotheses are 1) the null hypothesis that a space object has not been detected in a given frame of data, 2) the hypothesis that an object has been detected in a given frame of data and that object is an RSO and 3) the hypothesis that an object has been detected in a given frame of data and that object is a stellar object. Each of these hypotheses is defined using the statistical distribution which the data is expected to take in each of the three scenarios, and the performance of the algorithm is quantified by probabilities which arise from these statistical distributions.

There are five probabilities of interest, which are shown in Table 1. They include the probability of detection, P_D, the probability of false alarm, P_fa, the probability of satellite detection, P_Dneo, the probability of incorrectly detecting a star when a satellite is in the data, P_IDstar, and the probability of missing a satellite detection, P_miss. The probability of detection is the probability that either Λ1 or Λ2 is above the threshold when a target is present. The probability of false alarm is the probability that either Λ1 or Λ2 is above the threshold when no target is present. The probability of satellite detection is the probability that Λ1 is above the threshold and is above Λ2 when a satellite observation is present. The probability of incorrectly detecting a star is the probability that Λ2 is above the threshold and is also above Λ1 when a satellite is in the data. The probability of a satellite miss is the probability that Λ2 and Λ1 are both below the threshold when an observation is present.

Table 1. Performance Metrics for Satellite Detection.

Due to the presence of noise in the data frames being run through the proposed algorithm, the value of Λ1 and Λ2 will differ with every run. Because the LRT is made up of a sum of many pixels, each of which is a random realization, the central limit theorem dictates that the LRT follows a Gaussian distribution. The range of values which the LRT can take on makes up all possible thresholds. These thresholds depend on whether the data being tested has background or an observation present. There is a unique probability of detection for every distinct threshold value and a plot of these probabilities versus threshold values is called the CDF plot.

A Monte Carlo method is utilized to determine the mean, m, and variance, σ2, of the LRT so that probabilities can be computed using the Gaussian CDF. All the possible thresholds are also determined by taking the minimum and maximum of a data set containing many realized LRT values given many random data frame inputs under the hypotheses that an object is present or absent. With the introduction of multiple hypotheses, the computation of probability of detection becomes more challenging. The probability needs to consider that the LRT is larger than some threshold and larger than the LRT with the other detection hypothesis.
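A sketch of that Monte Carlo step follows; the frame simulator and LRT evaluator are placeholders for whatever forward model and Eq. (9) implementation are in use.

import numpy as np

def monte_carlo_lrt_stats(simulate_frames, compute_lrt, n_trials=10000):
    # simulate_frames(): returns one random (d1, d2) frame pair under a chosen hypothesis.
    # compute_lrt(d1, d2): evaluates Eq. (9) for that pair.
    samples = np.array([compute_lrt(*simulate_frames()) for _ in range(n_trials)])
    # Sample mean/variance parameterize the Gaussian model; min/max bound the thresholds.
    return samples.mean(), samples.var(), samples.min(), samples.max()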

Each LRT takes the form of a Gaussian random number, Λ1 and Λ2, defined respectively by its mean, m1 and m2, and standard deviation, σ1 and σ2. If the LRT given the hypothesis that a satellite is present minus the LRT given the hypothesis that a star is present is greater than zero, a satellite observation has taken place. The probability that a satellite observation has taken place is found using the new random variable Z1, which is the difference between the Gaussian random numbers Λ1 and Λ2. Z1 is therefore also a Gaussian random number because any linear combination of Gaussian random numbers produces another Gaussian random number; the mean of Z1, given in Eq. (12), is the difference between the means of Λ1 and Λ2. Equation (13) shows how to compute the variance of Z1, and Eq. (14) the correlation coefficient between Λ1 and Λ2, ρ12.

$$\bar{Z}_1 = E[Z_1] = m_{Z_1} = m_1 - m_2. \tag{12}$$
$$\sigma_{Z_1}^2 = E\big[(Z_1 - \bar{Z}_1)^2\big] = \sigma_{\Lambda_1}^2 + \sigma_{\Lambda_2}^2 - 2\,\sigma_{\Lambda_1}\sigma_{\Lambda_2}\,\rho_{12}. \tag{13}$$
$$\rho_{12} = \frac{E\big[(\Lambda_1 - m_1)(\Lambda_2 - m_2)\big]}{\sigma_{\Lambda_1}\sigma_{\Lambda_2}}. \tag{14}$$

The probability that Λ1> Λ2 is the same as the probability that Z1 > 0. This probability can be computed using the Gaussian CDF given the appropriate inputs.

$$P(Z_1 > 0) = \int_{0}^{\infty} \frac{1}{\sqrt{2\pi\sigma_{Z_1}^2}}\, e^{-(x - m_{Z_1})^2 / 2\sigma_{Z_1}^2}\, dx. \tag{15}$$

The probability of detecting a satellite when a satellite is present in the data is found by multiplying the probability that Z1 > 0 by the probability that an object has been detected, P_D. Because the two events are treated as independent, the probability of their intersection is the product of their probabilities.

$$P_{Dneo} = P(Z_1 > 0)\, P_D. \tag{16}$$

In the same way, the probability of detecting a star is computed as the product of the probability that Z1 < 0 and the probability of detection.

$$P_{Dstar} = P(Z_1 < 0)\, P_D. \tag{17}$$
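A sketch of Eqs. (12)-(17) follows: P(Z1 > 0) is computed from the estimated LRT means, standard deviations and correlation coefficient, and then combined with the detection probability. SciPy's normal CDF stands in for the Gaussian integral of Eq. (15).

import numpy as np
from scipy.stats import norm

def categorization_probabilities(m1, m2, s1, s2, rho12, P_D):
    # Eq. (12): mean of Z1; Eq. (13): variance of Z1 using the correlation of Eq. (14).
    m_z = m1 - m2
    var_z = s1**2 + s2**2 - 2.0 * s1 * s2 * rho12
    # Eq. (15): probability that Z1 > 0.
    p_z_pos = 1.0 - norm.cdf(0.0, loc=m_z, scale=np.sqrt(var_z))
    P_Dneo  = p_z_pos * P_D          # Eq. (16)
    P_Dstar = (1.0 - p_z_pos) * P_D  # Eq. (17)
    return P_Dneo, P_Dstar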

The computation of P_fa is also more complex in the multi-hypothesis case. A false alarm occurs when only background data is present in the frame being tested but the algorithm declares that a satellite or stellar object is present. Either of the LRTs needs to be above the threshold in the presence of background for a detection to be made erroneously.

$$P_{fa} = P(\Lambda_1 > \tau) + P(\Lambda_2 > \tau) - P(\Lambda_1 > \tau \cap \Lambda_2 > \tau). \tag{18}$$

The probability that each LRT individually is above the threshold is computed using the same methodology as the probability of detection, as seen in Eqs. (19)–(20).

$$P_{\Lambda_1 > \tau}(\tau) = \int_{\tau}^{\infty} \frac{1}{\sqrt{2\pi\sigma_1^2}}\, e^{-(\Lambda_1 - m_1)^2 / 2\sigma_1^2}\, d\Lambda_1. \tag{19}$$
$$P_{\Lambda_2 > \tau}(\tau) = \int_{\tau}^{\infty} \frac{1}{\sqrt{2\pi\sigma_2^2}}\, e^{-(\Lambda_2 - m_2)^2 / 2\sigma_2^2}\, d\Lambda_2. \tag{20}$$

Computing the joint probability that both LRTs are above the threshold involves the bi-variate Gaussian CDF: the joint density of the bi-variate Gaussian distribution is integrated above the threshold in both variables, as seen in Eq. (21).

$$P(\Lambda_1 > \tau \cap \Lambda_2 > \tau) = \int_{\tau}^{\infty}\!\int_{\tau}^{\infty} \frac{\exp\!\left[-\dfrac{1}{2(1-\rho_{12}^2)}\left(\dfrac{(\Lambda_1 - m_1)^2}{\sigma_1^2} + \dfrac{(\Lambda_2 - m_2)^2}{\sigma_2^2} - \dfrac{2\rho_{12}(\Lambda_1 - m_1)(\Lambda_2 - m_2)}{\sigma_1\sigma_2}\right)\right]}{2\pi\,\sigma_1\sigma_2\sqrt{1-\rho_{12}^2}}\; d\Lambda_1\, d\Lambda_2. \tag{21}$$
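A sketch of Eqs. (18)-(21) is given below; the single-threshold tail probabilities come from the univariate Gaussian CDF and the joint term from the bivariate Gaussian CDF, with scipy.stats.multivariate_normal standing in for the double integral of Eq. (21).

import numpy as np
from scipy.stats import norm, multivariate_normal

def false_alarm_probability(tau, m1, m2, s1, s2, rho12):
    p1 = 1.0 - norm.cdf(tau, loc=m1, scale=s1)    # Eq. (19)
    p2 = 1.0 - norm.cdf(tau, loc=m2, scale=s2)    # Eq. (20)
    cov = np.array([[s1**2,        rho12*s1*s2],
                    [rho12*s1*s2,  s2**2      ]])
    # Bivariate CDF evaluated at (tau, tau): P(L1 <= tau, L2 <= tau).
    F = multivariate_normal(mean=[m1, m2], cov=cov).cdf([tau, tau])
    p_joint = p1 + p2 - 1.0 + F                   # Eq. (21): P(L1 > tau and L2 > tau)
    return p1 + p2 - p_joint                      # Eq. (18)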

Each of the probabilities can be plotted together over the entire range of possible thresholds to show whether the algorithm is effective at distinguishing between stellar and RSO detections. As the baseline distance between the parallax and reference telescopes increases, the probability of correctly detecting a satellite given that a satellite is present in the data becomes higher than the probability of incorrectly detecting that a star is present or the probability of missing the detection of an object in the data altogether. Generally, the performance of a detection algorithm is quantified using a receiver operating characteristic (ROC) curve. The ROC curve is a plot of the probability of detection versus the probability of false alarm. Such a curve is very useful when trying to compare the performance of different detection schemes. When the ROC curves for different detection schemes are plotted together, the algorithm which produces the higher probability of detection for the same probability of false alarm has superior performance [11]. ROC curves will show the dependence of the CD MHT algorithm's performance on the amount of separation between the reference and parallax sensing telescopes. Each iteration will have a different baseline and produce a unique ROC curve. The ROC curve is also useful for quickly determining what the probability of detection will be for a given probability of false alarm.

4.2 Experimental data

The purpose of the experiment is to test whether the new detection algorithm correctly determines that an observation is an RSO based on the presence of parallax, as hypothesized in the H1 case. The data collection took place simultaneously from the SST and the Naval Observatory with both the ANIK-F1 and ANIK-F1R satellites in view as the two satellites were eclipsed by the Earth. As the two satellites fall into the shadow of the Earth, their luminosity decreases until the satellites are dimmer than the background noise. The parallax effects due to tilt are present in the aperture of the parallax sensing telescope when the telescope's optical axis is not aligned with the target. If the reference telescope is set to track the target while the parallax sensing telescope is set to maintain its optical axis parallel to that of the reference telescope, the parallax effect in the parallax sensing telescope is maximized. Because astronomical telescope systems in use today are not configured to track in such a manner, it was necessary to make do by observing two targets with two telescopes simultaneously. Henceforward, ANIK-F1 will be referred to as the primary target and ANIK-F1R as the secondary target. Figure 3 shows an exaggerated drawing of the system being tested. In the figure, the primary target's phase-front is centered on both telescopes because they are aligned to track it, causing its PSFs to be centered at the focal point on each CCD. The phase-front from the secondary target is not centered on either aperture because neither telescope is set to track it, which causes its PSF to be non-paraxial, i.e., not at the focus of the mirrors, on either CCD array. Parallax is observable in the frames of SST and Naval Observatory telescope data because the PSF from the secondary target is in a different position relative to that of the primary target on each respective CCD.

Fig. 3 Two Telescopes Tracking Primary Target and Secondary Target in FOV.

The motion of the stars also allowed for the proper orientation of the data frames, because stars move in a westward direction along the East–west axis. If the CCD's horizontal axis were North–south and its vertical axis were East–west, the star motion would be along the vertical axis. The amount of parallax in the apertures of the reference and parallax sensing telescopes depends on the latitude, longitude and altitude of the secondary target as compared to those of the primary target, which is aligned with both telescopes' optical axes as viewed from both telescope sites. Table 2 shows the coordinates of the primary target, secondary target, reference telescope site and parallax sensing telescope site.

Table 2. Locations of Both Targets and Both Telescopes.

As the crow flies, the Naval Observatory is 550.32 km from the SST, which breaks down into 498.7593 km along the East–west axis and 232.5753 km along the North–south axis. The distance between the two satellites was found to be 30.27 km by calculating the tangential distance between their coordinates on a sphere with a radius of approximately 36,000 km. The distance between the two telescopes is an order of magnitude greater than the distance between the two satellites, and the parallax effect will be proportional to the East–west and North–south distance differences between the two observation sites. Because star motion appears to be westward and the star motion on the CCD is in the vertical direction, it is expected that the difference in secondary target PSF location between the two telescopes in the vertical direction of the CCD will be approximately twice that in the horizontal direction. The actual secondary target PSF shift due to parallax, in degrees, is found for each telescope system by determining the position in pixels for both telescopes, converting those positions to degrees and then taking the difference between the two. Figure 4 shows a non-saturated data frame from the SST telescope and Fig. 5 shows a non-saturated data frame from the Naval Observatory telescope. The PSFs from the two telescopes look different because of differences in the optical setup of the respective telescopes, including aperture diameter, focal length, CCD pixel size, differing optical designs, different optical corrections and integration times.

Fig. 4 Observation from SST Telescope of ANIK-F1 and ANIK-F1R.

Fig. 5 Observation from Naval Observatory Telescope of ANIK-F1 and ANIK-F1R.

In both data frames, ANIK-F1 is the primary target and ANIK-F1R is the secondary target; ANIK-F1 is brighter than ANIK-F1R. It is difficult to tell from the data frames alone that the secondary target's PSF is in a different location. Table 3 shows the location of the secondary target's PSF, in pixels and in degrees, as seen on the CCD of each telescope relative to the location of the primary target's PSF. It also shows the difference between the secondary target's relative position on the CCD as seen from the SST location and its relative position on the CCD as seen from the Naval Observatory location. The difference in where the secondary target's PSF appears is due to parallax and, as expected, the amount of PSF shift along the vertical axis is over twice that seen along the horizontal axis. This is due to the observation locations being roughly twice as far apart in the East–west direction as in the North–south direction.

Table 3. Registration of Parallax Effect due to Different Observation Locations.

The difference in Table 3 is shown in units of SST CCD pixels and in units of Naval Observatory CCD pixels; it is the number of pixels the secondary target's PSF needs to be shifted to put it where it would be on the other telescope's CCD relative to the primary target's PSF. The Naval Observatory has smaller pixels and would more accurately quantify the PSF shift due to parallax; however, the SST has a faster integration time and thus more data frames to run the CD MHT algorithm against. As seen in the simulation section, single-pixel shifts are adequate to distinguish an RSO target from a stellar target, so the quantity of data frames makes the SST the best choice for the parallax sensing telescope in the experiment, given the dimming target luminosity throughout the data collect. Because small shifts can be used to determine the presence of an RSO, any unaccounted-for small shift in the position of an object can cause the technique to fail. Good calibration of non-linear distortion is therefore required for success of the proposed technique. Effects of non-linear optical distortion were minimized in our experiments because the telescopes were pointed at ANIK-F1, placing the objects under study near the optic axis where these distortion effects are smallest.
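A sketch of the registration arithmetic described above: the secondary target's offset from the primary target is measured in each telescope's own pixels, converted to degrees with each instrument's plate scale, and differenced to give the parallax shift. The plate-scale values below are placeholders, not the actual SST or Naval Observatory numbers.

def parallax_shift_deg(offset_sst_px, offset_nofs_px,
                       sst_plate_scale_deg=1.0e-3, nofs_plate_scale_deg=3.0e-4):
    # offset_*_px: (horizontal, vertical) offset of the secondary target from the
    # primary target in that telescope's pixels; plate scales in degrees per pixel
    # (placeholder values).
    sst_deg  = (offset_sst_px[0]  * sst_plate_scale_deg,
                offset_sst_px[1]  * sst_plate_scale_deg)
    nofs_deg = (offset_nofs_px[0] * nofs_plate_scale_deg,
                offset_nofs_px[1] * nofs_plate_scale_deg)
    # Difference of the relative positions (in degrees) is the observed parallax shift.
    return (sst_deg[0] - nofs_deg[0], sst_deg[1] - nofs_deg[1])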

5. Results and analysis

The Naval Observatory telescope is set to be the reference telescope and the SST is set to be the parallax sensing telescope. For each hypothesis, there is a unique set of test PSFs against which the collected data is correlated. The LRT of Eq. (9) was chosen to implement the new algorithm because the satellites being observed become dimmer during the data collect. To compute the ratio of the image intensity seen in the parallax sensing telescope to that seen in the reference telescope, the total photo-counts in each aperture were summed and then divided after removing the background photon contribution. The data read in by the reference telescope is correlated with the reference telescope's expected PSF, and the data read in by the parallax sensing telescope is correlated with the PSFs corresponding to the RSO and stellar object hypotheses, H1 and H2 respectively. The test PSFs used in the algorithm are created from the collected data by selecting un-saturated data frames which better represent the spatial distribution of expected observations. The background of the frame of data is computed using a median filter [12] and then subtracted from all pixels in that frame. Any negative pixels are set to zero and the test PSF is created by normalizing this frame of data. The h_ref, h_neo and h_star test PSFs required to test for H1 and H2 are created using this technique. The reference telescope's test PSF, h_ref, does not depend on the presence of parallax, as seen in Fig. 6. Proper registration of the remaining test PSFs is determined by the orientation of the observation locations of the reference and parallax sensing telescopes as defined in Table 2 and Table 3.

Fig. 6 Test PSF for Reference Telescope.

Throughout the experiment, the location of the geostationary RSO target, ANIK-F1R, does not change, but due to parallax, the location of ANIK-F1R as observed on the CCDs of the reference and parallax sensing telescopes is different. The parallax sensing telescope's observations of ANIK-F1R are registered to be centered on the SST telescope's CCD, and stellar observations are shifted to where they would be located as viewed by the reference telescope. Alternatively, the location of a stellar observation, which has un-detectable parallax in the parallax sensing telescope's aperture, could be registered at the CCD's center to achieve the same result by shifting the ANIK-F1R observations to where they would appear on the parallax sensing telescope's CCD. The remaining test PSFs, h_neo and h_star, are shown in Figs. 7 and 8 respectively. These PSFs look very different from h_ref because the reference and parallax sensing optical configurations differ.

Fig. 7 Test PSF for Parallax Sensing Telescope Hypothesizing H1 is true.

Fig. 8 Test PSF for Parallax Sensing Telescope Hypothesizing H2 is true.

SST data frames are generated at a rate of one per second and Naval Observatory data frames at a rate of one per seven seconds because the integration times differ between the two telescopes. The new algorithm takes in raw data from simultaneous observations of the target collected by the two telescopes; to achieve this, every frame of SST data is fed into the test and a new frame of Naval Observatory data is read in once the seventh SST data frame has been registered. Data frames were chosen to be 20 by 20 windows around the observed target. Equation (9) is implemented with the test PSFs by correlating the data read in by the Naval Observatory telescope's CCD with its test PSF, h_ref, and correlating the data read in by the SST's CCD with the test PSF corresponding to either the H1 or H2 hypothesis, h_neo or h_star. The second correlation is multiplied by the ratio of target intensities and then the two correlations are added together. In this manner, the two LRTs, Λ1 and Λ2, are computed using Eq. (9) to produce the two random variables. The LRT given H1 is divided by the local standard deviation, making it a unit-normal Gaussian random variable, and the effects of noise on this SNR are reduced using a moving average over 10 frames [12]. The Point Detector (PD) algorithm [3,13] was also used to determine what the SNR would be for SSN sensors used by the USAF given the same data. Both the SNR from the LRT and from the PD algorithm are plotted in Fig. 9; this value is the number of standard deviations the signal is above the background. From the figure, it is clear the new algorithm produces a higher SNR at lower target luminosities as compared to the PD algorithm. Figure 9 also shows that stars passing through the data, as seen in frames 424 and 735, have a lower SNR value when using the new algorithm as compared to when using the traditional PD detection algorithm.

Fig. 9 SNR using CD MHT and SNR using PD vs. SST Frame Number.

The new algorithm performs better than the PD algorithm due to the correlator's ability to compare an observed target's PSF spatial distribution to the hypothesized shape. Another advantage of the proposed algorithm is that it is also able to distinguish whether a detection is an RSO or a stellar object by using multiple hypotheses. In this case, there are two hypotheses being tested: the hypothesis that an RSO is present in the observed data, H1, and the hypothesis that a star is present in the observed data, H2. When run through the new algorithm, an LRT is produced for each of the two hypotheses. The two LRTs, Λ1 and Λ2, are differenced to produce the new random variable, Z1, which is then normalized and averaged the same way the LRT was, producing a unit-normal Gaussian random variable. The SNR of the Gaussian random variable Z1 is seen in Fig. 10. The figure shows that the proposed algorithm correctly categorized the detections as an RSO.

Fig. 10 SNR of Z1 vs. SST Frame Number.

Using the SNR to generate a probability curve is much more powerful than just using the SNR values to show that correct detections have been made, because it represents the likelihood of making the correct choice given a desired P_FA. The SNR values are used to find the probability of correctly detecting an RSO, P_D NEO, when it is present in the observed data. P(Z1>0) is computed by using the MATLAB CDF command with a threshold of zero and inputs of the SNR value at each frame found using the Gaussian random variable Z1, which is seen in Fig. 10. P_D is computed by using the MATLAB CDF command with a threshold of six standard deviations and inputs of the SNR value at each frame using the Gaussian random variable Λ1 as computed by the new algorithm, which is seen in Fig. 9. P_D NEO is found for each SST frame by multiplying P(Z1>0) by P_D as seen in Eq. (16). A detection threshold of 6 standard deviations was selected to produce a P_FA of 10^-9, which is desired for USAF SSN applications. The probability of detecting a target, P_D, using the standard PD algorithm was also computed using the MATLAB CDF command with a threshold of six standard deviations and inputs of the SNR value at each frame found using the Gaussian random variable computed by the PD algorithm. Figure 11 is the plot of P_D NEO found using the proposed algorithm and P_D found using the PD algorithm for this selected P_FA. In the figure, the CD MHT algorithm shows superior performance compared to the PD algorithm currently used by USAF SSN sensors, with the additional benefit that the new algorithm also determines the type of observation being detected. The probability of detecting a star when an RSO is hypothesized using the proposed algorithm is also much smaller than the probability of detecting a star in the data using the PD algorithm.
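A sketch of the per-frame probability computation described above, using SciPy's normal CDF as the analogue of the MATLAB CDF command; the six-sigma threshold follows the text and the SNR values are assumed to be the unit-variance statistics of Figs. 9 and 10.

from scipy.stats import norm

def frame_probabilities(snr_lrt1, snr_z1, tau=6.0):
    # Probability the LRT exceeds the 6-sigma threshold for this frame.
    P_D = 1.0 - norm.cdf(tau - snr_lrt1)
    # Probability that Z1 > 0 for this frame (equivalently norm.cdf(snr_z1)).
    P_Z_pos = 1.0 - norm.cdf(-snr_z1)
    return P_Z_pos * P_D   # Eq. (16): P_D_NEO for this frame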

Fig. 11 PD NEO using CD MHT and PD using PD vs. SST Frame Number.

The data from the SST is used directly to test the ability of the new algorithm to detect an RSO; however, observations of stellar objects were not obtained because the telescopes were set to track the geostationary ANIK-F1 RSO, so any observed stars streak through the data frames, and the new algorithm was not designed to test against streaks. These results demonstrate that heritage USAF SSN resources can implement this new detection process to determine whether an observation is an RSO in a single frame of data. The SPOT facility at Lockheed Martin, Santa Cruz, would be the ideal arrangement to test the performance of this algorithm: that facility has telescopes on railroad tracks allowing for an adjustable baseline, the telescopes are identical, which would allow the proposed CD MHT algorithm to be independent of signal luminosity, and there are different CCD and telescope configurations which would allow the experiment to be optimized by maximizing the PSF shift caused by parallax.

6. Conclusions

The experiment showed that a system of different telescopes collecting simultaneous frames of data and separated by some physical baseline is effective at detecting parallax, and that the nature of a detected target can be categorized as either a stellar or an RSO observation in a single frame of data. The simulation showed that increasing the baseline between the reference and parallax sensing telescopes increases the performance of the proposed algorithm when the luminosity of the target is close to that of the background. When the target is much brighter than the background, as seen in the SNR 6 simulation, all system baselines correctly detect and categorize the target with assuredness. The experiment showed that real DOD assets can be utilized in their present state to detect the parallax effect in observations and correctly identify a target in single data frames. Moreover, the probability of detecting an RSO target and correctly categorizing that detection, P_D NEO, is higher than the probability of target detection, P_D, using heritage techniques. This is a significant improvement over heritage PD algorithms, which also require multiple frames to register a detection. The new algorithm could also be used to determine the altitude of a given observation by using more hypothesized PSFs, or the speed of an observation by using successive frames and the effects of horizontal or vertical tilt. With the altitude, speed, right ascension and declination, all orbital parameters of the satellite can be computed. Further research examining the sensitivity of the proposed algorithm, the required baselines between telescopes as a function of object illumination and other system parameters, as well as its applicability to space surveillance operations, remains a goal of future efforts.

Funding

Air Force Office of Scientific Research funding document F4FGA08052J005.

Acknowledgements

We would like to thank the Naval Observatory and the DARPA SST program for providing the measured telescope data used in our study.

Disclosures

The views and opinions expressed in this article do not necessarily reflect those of the United States Air Force or the Department of Defense.

References

1. US Department of Defense and Office of the Director of National Intelligence, "National Security Space Strategy Unclassified Summary," p. 14, 2011.

2. Office of the President of the United States of America, "National Space Policy of the United States of America," United States Gov., p. 18, 2010.

3. H. Viggh, G. Stokes, F. Shelly, M. Blythe, and J. Stuart, "Applying electro-optical space surveillance technology to asteroid search and detection: the LINEAR program results," Space 98, pp. 373–381, 1998.

4. P. S. Gural, P. R. Otto, and E. F. Tedesco, "Moving object detection using a parallax shift vector algorithm," Publ. Astron. Soc. Pac. 130(989), 074504 (2018).

5. J. S. Ruprecht, J. S. Stuart, D. F. Woods, and R. Y. Shah, "Detecting small asteroids with the Space Surveillance Telescope," Icarus 239, 253–259 (2014).

6. Pan-STARRS, http://pan-starrs.ifa.hawaii.edu/public/home.html [accessed 16-Nov-2016].

7. J. W. Goodman, Statistical Optics (John Wiley & Sons, New York, 2000).

8. E. Bertin and S. Arnouts, "SExtractor: software for source extraction," Astron. Astrophys. Suppl. Ser. 117(2), 393–404 (1996).

9. D. L. Fried, "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures," J. Opt. Soc. Am. 56(10), 1372 (1966).

10. S. C. Cain and T. Watts, "Nonparaxial Fourier propagation tool for aberration analysis and point spread function calculation," Opt. Eng. 55(8), 085104 (2016).

11. H. L. Van Trees, Detection, Estimation, and Modulation Theory (John Wiley & Sons, New York, 2001).

12. T. Hardy, "Optical Theory Improvements to Space Domain Awareness," Air Force Institute of Technology, 2016.

13. J. C. Zingarelli, E. Pearce, R. Lambour, T. Blake, C. J. R. Peterson, and S. Cain, "Improving the Space Surveillance Telescope's performance using multi-hypothesis testing," Astron. J. 147(5), 111 (2014).
