## Abstract

Systems for the detection and positioning of point targets have critical applications in many fields. However, the spatial and temporal resolution of image-based systems is limited by the large amount of data they generate. In this work, an image-free system with a small data volume and a high update rate is proposed for the detection and positioning of point targets. The system uses a digital micromirror device (DMD) for light modulation and a pixel array as the light intensity detector, and the DMD is divided into multiple blocks to selectively acquire the intensity information in the region of interest. The centroid position of a point target is calculated from the intensity on adjacent rows or columns of micromirrors. Simulation indicates that the performance of the proposed method is close to or better than that of the traditional methods. In static experiments, the centroiding accuracy of the proposed system is about 0.013 pixel. In dynamic experiments, the centroiding accuracy is better than 0.07 pixel when the signal-to-noise ratio (SNR) is greater than 35.2 dB. Meanwhile, the built system has an update rate of 1 kHz over a range of 1024×768 pixels, and the method acquires only 8 bytes of data for a single positioning of a point target, making it applicable to real-time detection and positioning of point targets.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

## 1. Introduction

The detection and positioning of point targets are essential in various fields, including astronomical target observation [1–4], biomedical particle tracking [5–7], and industrial measurement [8]. For example, in astronomical navigation, the accurate detection and positioning of star point targets is vital in determining spacecraft attitude [1,2]. In structural cell biology studies, the resolution of single-molecule localization microscopy depends on the ability to detect and position fluorophores in a short time [7].

Among these applications, image-based methods have been widely used in the past decades with the development of image sensor technology. Image sensors record the spatial intensity information of point targets. The temporal and spatial resolution of image-based methods is determined by the pixel count and data transmission rate of the image sensor, as well as the performance of the algorithms. The design of high-speed cameras makes it possible to capture high-resolution images at high frame rates [9]. However, this typically results in huge data throughput for storage and transmission, accompanied by an exponential increase in image processing computation [10,11]. For instance, in high-speed particle tracking, a high-speed camera (500 Hz, 1280×1024 pixels) fills 4 GB of dedicated video RAM in 6.5 seconds [10]. Meanwhile, the short exposure time caused by the high frame rate often leads to low signal-to-noise ratios (SNR) for point targets and reduces measurement accuracy [12,13]. In addition, high-speed camera systems are complex and costly. Hence, image-based methods are not suitable for real-time detection and positioning of point targets that require high spatial and temporal resolution. In fact, the spatial distribution of point targets is sparse, so most of the field of view carries no useful information. This makes it feasible to reduce the amount of data through compression.

Compressed sensing theory has been proposed and applied to the detection and positioning of point targets. It acquires the spatial intensity information of targets and reconstructs them from a much smaller amount of data than the Nyquist criterion requires. Single-pixel imaging (SPI) [14–16] is implemented based on compressed sensing to detect targets. For example, Omar et al. [16] and Gregory et al. [14] achieved the detection and positioning of targets with 256×256 pixels at 14 Hz using the compressed sensing method, producing only 2.44% of the data amount required by the Nyquist criterion. Shi et al. used the Hadamard transform to convert two-dimensional images into one-dimensional projection curves and detected point targets with 256×256 pixels at ∼177 Hz [17]. Recently, several multi-pixel extensions of SPI have been proposed [18–20], and this multi-pixel parallel computation can further improve the temporal resolution. Even though the amount of data produced by compressed sensing is small, the computation used to reconstruct targets still limits the spatial and temporal resolution. In the meantime, the motion of point targets is continuous, so their detection and positioning can be achieved by tracking the target region and acquiring its intensity information [21,22].

Here we propose an image-free system that produces a small amount of data through selective compression for the real-time detection and positioning of point targets. It innovatively uses a digital micromirror device (DMD) divided into multiple blocks to selectively acquire the spatial intensity information in the regions where point targets are located, and a pixel array is used as the detector. To ensure both the accuracy of the centroid position and the real-time performance of the system, a ratio centroid method based on the Gaussian distribution is proposed. The effects of the Gaussian radius (*σ*) and SNR on the accuracy of the centroid position are simulated and compared among three methods: the weighted centroid method, the Gaussian fitting method, and the proposed method. The system is built, and the performance of the proposed method is verified by laboratory experiments.

## 2. Principles and methods

#### 2.1 DMD-based image-free system design for point target detection and positioning

The schematic diagram of the proposed DMD-based image-free system for point targets detection and positioning is shown in Fig. 1(a), which consists of two lenses, a DMD chip, a total internal reflection (TIR) prism, the data acquisition and processing modules, and the DMD drive module. Point targets are imaged on the DMD chip by Lens1, and the modulated light intensity distribution is passed through the TIR prism and later converged to the pixel array by Lens2. The acquired light intensity data is processed and the result determines the coding mask of the DMD at the next moment.

As an efficient and fast spatial light modulator, the DMD chip is a micro-electro-mechanical system (MEMS) consisting of hundreds of thousands of tiny switchable micromirrors, which have two stable states (+12° and -12°) and a flip frequency of over 20 kHz. The micromirror array on the DMD chip is divided into multiple blocks that correspond to the pixels on the pixel array, as shown in Fig. 1(b). The combination of a blocked DMD and a pixel array in this work reduces the requirements for detector sensitivity and A/D conversion accuracy of the data acquisition module. This approach enables simultaneous transmission and parallel computation of data and substantially reduces computation time.

Since the paths of the incident light and the +12° branch light are very close in front of the DMD, the TIR prism is used in the optical path to avoid the interference between them, as shown in Fig. 1(c). When the micromirror is operated at +12°, the light reflected by the micromirror goes through the TIR prism and is converged by Lens2. When the micromirror works at -12°, the light is absorbed by the absorbing coating of the TIR prism. In the meantime, micromirrors and the TIR prism cause the deflection of light, resulting in object and image planes that are not perpendicular to the optical axis. Thus the pixel array should be adjusted to the proper tilt angle in the system to meet the oblique field imaging conditions.

#### 2.2 Modeling of point target intensity distribution

The intensity distribution of a point target imaged on the micromirror plane, $g(x,y)$, can be expressed as

$$g(x,y) = f(x,y) \otimes h(x,y) \qquad (1)$$

where $f(x,y)$ is the intensity distribution of a point target on the object plane, $h(x,y)$ is the point spread function (PSF) of Lens1, and ${\otimes}$ denotes the convolution operator. In many cases, $g(x,y)$ is approximately a two-dimensional Gaussian distribution function [23]. Its probability density distribution function, $p(x,y)$, can be expressed as follows,

$$p(x,y) = \frac{1}{2\pi{\sigma_x}{\sigma_y}\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-{\mu_x})^2}{\sigma_x^2} - \frac{2\rho(x-{\mu_x})(y-{\mu_y})}{{\sigma_x}{\sigma_y}} + \frac{(y-{\mu_y})^2}{\sigma_y^2}\right]\right\} \qquad (2)$$

where ${\mu_x}$ and ${\mu_y}$ are the mean values of the intensity distribution on the *x*-axis and *y*-axis, respectively; ${\sigma_x}$ and ${\sigma_y}$ are the standard deviations of the intensity distribution on the *x*-axis and *y*-axis, respectively; $\rho$ is the correlation coefficient between the distributions on the *x*-axis and *y*-axis. In this work, $\rho$ is 0, meaning the distributions on the *x*-axis and *y*-axis are independent of each other. $p(x)$ and $p(y)$ are the marginal distributions of $p(x,y)$ on the *x*-axis and *y*-axis, respectively.
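Since $\rho$ is 0, the bivariate density factorizes into the product of its two marginals; a minimal numerical sketch of this property (function names here are illustrative, not from the paper):

```python
import math

def gauss1d(t, mu, sigma):
    """One-dimensional Gaussian density: the marginal p(x) or p(y)."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def p_xy(x, y, mu_x, mu_y, sigma_x, sigma_y):
    """Bivariate Gaussian density with rho = 0: it factorizes into the
    product of the two marginal densities p(x) * p(y)."""
    return gauss1d(x, mu_x, sigma_x) * gauss1d(y, mu_y, sigma_y)
```

This factorization is what later allows the *x*-axis and *y*-axis centroid offsets to be solved independently.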

#### 2.3 Parallel acquisition of intensity information

Assuming that the total intensity received by the light pupil of the lens is ${I_{total}}$, the density distribution of the point target on the DMD can be expressed as follows,

$$g(x,y) = {I_{total}}\,p(x,y) \qquad (3)$$

Driving the flip states of the micromirrors on the DMD generates the coding mask function, $w(x,y)$. The modulated result $G(x,y)$ can be expressed as the following,

$$G(x,y) = w(x,y)\,g(x,y) \qquad (4)$$

where the coding mask function $w(x,y)$ is 1 when the corresponding micromirror works at +12° and 0 when it works at -12°. By dividing the DMD chip into multiple blocks corresponding to the pixel array, parallel selective compression is achieved. The modulated density distribution, $G(x,y)$, is received by the pixel array. Assuming that the pixels have uniform intensity sensitivity over their effective surface area and a linear response, the integral intensity $I(u,v)$ received over the area ${S_{(u,v)}}$ of a pixel is calculated according to Eq. (5),

$$I(u,v) = \iint_{{S_{(u,v)}}} G(x,y)\,\mathrm{d}x\,\mathrm{d}y \qquad (5)$$

The conversion relationship from $g(x,y)$ to $G(x,y)$ to $I(u,v)$ during the parallel acquisition of intensity information is shown in Fig. 2.
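As an illustrative sketch of this modulate-and-integrate step (the array shapes and block size are hypothetical, chosen small for clarity): the mask $w$ gates the intensity on the micromirror plane, and each DMD block integrates onto one pixel of the array.

```python
def pixel_readout(g, w, block):
    """I(u, v): the masked intensity G = w * g summed over each DMD
    block, i.e. the value received by the pixel covering that block."""
    rows, cols = len(g), len(g[0])
    I = [[0.0] * (cols // block) for _ in range(rows // block)]
    for x in range(rows):
        for y in range(cols):
            I[x // block][y // block] += w[x][y] * g[x][y]
    return I

# Illustrative 8x8 grid split into 2x2 blocks of 4x4 micromirrors each.
g = [[1.0] * 8 for _ in range(8)]        # uniform intensity, for clarity
w = [[1] * 8 for _ in range(8)]          # all micromirrors at +12 degrees
readout = pixel_readout(g, w, block=4)   # every pixel integrates 16 mirrors
```

Flipping all micromirrors of a block to -12° (mask value 0) zeros that block's pixel readout, which is the mechanism the detection pass below relies on.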

To detect the point targets, all micromirrors are first driven to +12° so that the intensity information of every block is received by the pixel array. Then, when the $I(u,v)$ of a block is below the threshold, the flip state of the corresponding micromirrors changes to -12°, and the intensity information of that block is no longer received by the pixel array. If $I(u,v)$ is higher than the threshold, the corresponding block is scanned row by row and column by column to determine the region of the point target, which is achieved by controlling the flip states of the corresponding micromirrors. The threshold is determined by the total intensity of a point target.
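The detection pass just described might be sketched as follows (the threshold value and per-block readouts are hypothetical):

```python
def next_mask_state(readout, threshold):
    """One detection pass: blocks whose integral intensity falls below
    the threshold are flipped to -12 deg (False) and dropped from the
    acquisition; blocks above it (True) are kept for row/column scans."""
    return [[v >= threshold for v in row] for row in readout]

# Hypothetical per-block readouts: only one block holds a point target.
readout = [[0.2, 3.1],
           [0.1, 0.3]]
active = next_mask_state(readout, threshold=1.0)   # only block (0, 1) kept
```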

By scanning the intensity distribution of point targets on the filtered blocks, the marginal distributions of the point target intensity on both the *x*-axis and *y*-axis are obtained, and the regions where the targets are located are determined. As shown in Fig. 3, since the intensity distribution of a point target satisfies a two-dimensional Gaussian distribution, its marginal distributions on both the *x*-axis and *y*-axis satisfy one-dimensional Gaussian distributions. The $I(u,v)$ on the ${x_k}$ column and ${y_k}$ row of the micromirror block can be expressed as:

$$I({x_k}) = {I_{total}}\int_{{x_k}-0.5}^{{x_k}+0.5} p(x)\,\mathrm{d}x,\qquad I({y_k}) = {I_{total}}\int_{{y_k}-0.5}^{{y_k}+0.5} p(y)\,\mathrm{d}y \qquad (6)$$
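A discrete sketch of such a scan (grid values and names are illustrative): summing a block's intensity along one axis at a time yields the two marginal profiles, and both sum to the same total intensity.

```python
def column_scan(g):
    """Marginal intensity along the x-axis: per measurement, only one
    column of micromirrors is held at +12 deg, so the pixel reads that
    column's integral intensity."""
    return [sum(g[r][c] for r in range(len(g))) for c in range(len(g[0]))]

def row_scan(g):
    """Marginal intensity along the y-axis, one row at a time."""
    return [sum(row) for row in g]

block = [[0.0, 1.0, 0.0],
         [1.0, 4.0, 1.0],
         [0.0, 1.0, 0.0]]    # a small Gaussian-like spot
cols = column_scan(block)    # [1.0, 6.0, 1.0]
rows = row_scan(block)       # [1.0, 6.0, 1.0]
```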

#### 2.4 Ratio centroid method based on the Gaussian distribution

The centroid of the point target is further determined by the proposed ratio centroid method. As shown in Fig. 4(a), the coordinate of the center of the micromirror with the highest integral intensity is identified as $({x_0},{y_0})$, and $({x_c},{y_c})$, the coordinate of the centroid, is located within this micromirror. The centroid offset is calculated as follows,

$$\Delta x = {x_c} - {x_0},\qquad \Delta y = {y_c} - {y_0} \qquad (7)$$

${I_{ab}}$, ${I_{cd}}$, and ${I_{ef}}$ are the integral intensities of the ${x_k}$ column on the *x*-axis when ${x_k}$ is ${x_0}$-1, ${x_0}$, and ${x_0}$+1, respectively. Figure 4(b) displays the variation of ${I_{ab}}$, ${I_{cd}}$, and ${I_{ef}}$ as the centroid offset $\Delta x$ changes from -0.5 to 0.5 pixel; they are expected to be portions of Gaussian distribution curves. ${I_{cd}}$ is defined as ${I_1}$, and the larger of ${I_{ab}}$ and ${I_{ef}}$ is defined as ${I_2}$. At the next moment, the approximate region in which the centroid is located can be determined from its previous state of motion, and scanning this region gives the new $({x_0},{y_0})$ for the next moment.

As the intensity distributions on the *x*-axis and *y*-axis are independent of each other, according to Eq. (2) and Eq. (6), ${I_1}$ can be expressed as the product of the integrals over *x* and *y* by Eq. (8),

$$I_1 = {I_{total}}\,k({\Delta x,{\sigma_x},c,d})\,k({\Delta y,{\sigma_y},g,h}) \qquad (8)$$

where *c* and *d* are the boundaries of the integration region on the *x*-axis; *g* and *h* are the boundaries of the integration region on the *y*-axis; ${\sigma_x}$ and ${\sigma_y}$ are estimated from the marginal distributions of intensity on the *x*-axis and *y*-axis [24]; *k* is the integral function and can be expressed in the form of the error function as in Eq. (9) [4].

$$k({\Delta x,{\sigma_x},c,d}) = \frac{1}{2}\left[\operatorname{erf}\left(\frac{d-\Delta x}{\sqrt{2}\,{\sigma_x}}\right) - \operatorname{erf}\left(\frac{c-\Delta x}{\sqrt{2}\,{\sigma_x}}\right)\right] \qquad (9)$$

For ${I_2}$, when $\Delta x$ is between -0.5 and 0 pixel, the boundaries are *a* and *b*, while when $\Delta x$ is between 0 and 0.5 pixel, the boundaries are *e* and *f*. Taking the first case as an example, ${I_2}$ and ${I_2}/{I_1}$ can be expressed as below,

$$I_2 = {I_{total}}\,k({\Delta x,{\sigma_x},a,b})\,k({\Delta y,{\sigma_y},g,h}) \qquad (10)$$

$$\frac{I_2}{I_1} = \frac{k({\Delta x,{\sigma_x},a,b})}{k({\Delta x,{\sigma_x},c,d})} \qquad (11)$$

Since *k* is highly non-linear, its approximate expression, Eq. (12), is obtained by a Taylor series expansion and is used to solve for $\Delta x$.

Let $T_0^{(1)},T_1^{(1)},T_2^{(1)},T_3^{(1)},T_4^{(1)}$ and $T_0^{(2)},T_1^{(2)},T_2^{(2)},T_3^{(2)},T_4^{(2)}$ represent the zeroth- to fourth-order coefficients in the approximate expressions of $k({\Delta x,{\sigma_x},c,d} )$ and $k({\Delta x,{\sigma_x},a,b} )$, respectively. Equation (11) can then be approximated as the following,

$$\frac{I_2}{I_1} \approx \frac{\sum\nolimits_{i=0}^{4} T_i^{(2)}\,\Delta x^i}{\sum\nolimits_{i=0}^{4} T_i^{(1)}\,\Delta x^i} \qquad (13)$$

By solving Eq. (13) and choosing the solution between -0.5 and 0.5 pixel, the centroid offset $\Delta x$ can be obtained, and then ${x_c}$ is given by ${x_0} + \Delta x$. The determination of ${y_c}$ is similar to that of ${x_c}$.
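As a numerical sketch of this ratio centroid inversion (assuming the Gaussian column integral takes the usual error-function form; simple bisection stands in for the paper's Taylor-series solution, and all names are illustrative):

```python
import math

def k(dx, sigma, lo, hi):
    """Integral of a unit-area Gaussian centred at dx over [lo, hi]."""
    s = math.sqrt(2.0) * sigma
    return 0.5 * (math.erf((hi - dx) / s) - math.erf((lo - dx) / s))

def ratio(dx, sigma):
    """I2/I1 for -0.5 <= dx <= 0 in local coordinates of the centre
    micromirror: I1 over the centre mirror [-0.5, 0.5], I2 over the
    left neighbour [-1.5, -0.5] (the larger of the two neighbours)."""
    return k(dx, sigma, -1.5, -0.5) / k(dx, sigma, -0.5, 0.5)

def solve_offset(r, sigma, tol=1e-9):
    """Recover the centroid offset from a measured ratio r by bisection;
    ratio() decreases monotonically as dx moves from -0.5 toward 0."""
    lo, hi = -0.5, 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ratio(mid, sigma) > r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\Delta x$ between 0 and 0.5 pixel the right neighbour plays the role of ${I_2}$, and the solution is mirrored by symmetry.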

## 3. Results and analysis

#### 3.1 Simulation

Simulating the motion of a point target with different Gaussian radii at uniform velocity, ${{{I_2}} / {{I_1}}}$ varies regularly as a function of the centroid offset from 0 to 0.5 pixel with or without noise. This indicates the possibility of determining the centroid position by the proposed method. As shown in Fig. 5, when the centroid offset changes from 0 to 0.5 pixel, ${{{I_2}} / {{I_1}}}$ increases nonlinearly with it and finally approaches 1. In detail, ${{{I_2}} / {{I_1}}}$ changes faster when the centroid offset is close to 0.5 pixel, indicating that ${{{I_2}} / {{I_1}}}$ is more sensitive to the centroid offset there. Considering point targets with different *σ*, ${{{I_2}} / {{I_1}}}$ increases with the increase of *σ*, so that the variation of ${{{I_2}} / {{I_1}}}$ becomes smaller. *σ* is considered to be either ${\sigma_x}$ or ${\sigma_y}$ since these two are approximately equal for point targets. In the meantime, adding noise makes the curve of ${{{I_2}} / {{I_1}}}$ fluctuate. It should be noted that the variation of ${{{I_2}} / {{I_1}}}$ is symmetric about the center of the micromirror when the centroid offset varies from -0.5 to 0.5 pixel, and it repeats cyclically on the next micromirror.

The noise of the intensity signal mainly includes dark current noise, shot noise, and readout noise. The SNR expresses the quality of the point target intensity signal and is defined as Eq. (14) [25],

$$\mathrm{SNR} = 20{\log_{10}}\left(\frac{{\bar{I}_{signal}} - {\bar{I}_{back}}}{{\sigma_{{I_{signal}}}}}\right) \qquad (14)$$

where ${I_{signal}}$ is the readout value of the pixel when the total intensity of the point target is received, ${\bar{I}_{signal}}$ is the mean value of ${I_{signal}}$, ${\sigma _{{I_{signal}}}}$ is the standard deviation of ${I_{signal}}$, and ${\bar{I}_{back}}$ is the mean readout value of the pixel when only background intensity is received. The noise considered here includes the integration-related noise and the output noise. The integration-related noise includes shot noise, dark current noise, and transfer noise, and the output noise includes readout, amplifier, and quantization noise.

The accuracy of the proposed method is mainly determined by *σ* and SNR and is evaluated by the root mean square error (RMSE) of the centroid offset. As shown in Fig. 6, the RMSE first decreases and then increases as *σ* changes from 0.3 to 1.5 pixels, and the RMSE is lowest when *σ* is 0.5 pixel. This is because when *σ* is small, the corresponding ${{{I_2}} / {{I_1}}}$ is too low and easily influenced by noise, and when *σ* is large, ${{{I_2}} / {{I_1}}}$ is close to 1, so its variation with the centroid offset is small. For SNR, the RMSE decreases with increasing SNR. This is reasonable because as SNR increases, the interference of noise becomes smaller. Simulation results indicate that the proposed method performs best for point targets with a *σ* of 0.5 pixel. Meanwhile, when *σ* is between 0.5 and 1 pixel and SNR is greater than 36 dB, the RMSE is within 0.05 pixel.
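For reference, the SNR estimate of Eq. (14) might be computed as follows (a minimal sketch; function and variable names are illustrative):

```python
import math
import statistics

def snr_db(signal_reads, back_mean):
    """SNR in dB: the background-subtracted mean signal over its
    standard deviation, on a 20*log10 scale, after Eq. (14)."""
    mean_signal = statistics.fmean(signal_reads)
    noise = statistics.stdev(signal_reads)
    return 20.0 * math.log10((mean_signal - back_mean) / noise)
```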

The accuracy of the proposed method is further compared with that of the weighted centroid method and the Gaussian fitting method, with and without noise. Point targets with *σ* of 0.5 and 1.5 pixels are selected for analysis. In the condition without noise, the residual error of the centroid position calculated with the proposed method is the lowest among the three methods, whether *σ* is 0.5 or 1.5 pixels, as shown in Fig. 7(a) and (b). The residual error of the Gaussian fitting method is the highest when *σ* is 0.5 pixel, while that of the weighted centroid method is the highest when *σ* is 1.5 pixels. Correspondingly, the RMSE of the centroid offset for the three methods is listed in Table 1. The proposed method has the lowest RMSE, which is 1.544×10^{−4} and 2.132×10^{−4} pixel at *σ* of 0.5 and 1.5 pixels, respectively. When the SNR is 48 dB, i.e., in the presence of noise, the residual errors of the proposed method and the Gaussian fitting method become close to each other, and both are lower than that of the weighted centroid method (Fig. 7(c) and (d)). For the proposed method, at an SNR of 48 dB, the RMSE of the centroid offset when *σ* is 0.5 and 1.5 pixels is 0.0049 and 0.0221 pixel, respectively. According to Table 1, the RMSE of the proposed method is lower than or close to those of the other two conventional methods in the absence or presence of noise. Thus the accuracy of the proposed method is acceptable.

Simulation results show that among the three methods, the proposed method can simultaneously satisfy the requirements of accuracy and computational efficiency when applied to the DMD-based system. The advantages of the weighted centroid method are the simplicity of the algorithm and the small amount of computation [26]. However, it has an obvious systematic error and is highly influenced by noise [27,28], as its RMSE is the highest after adding noise. In applications [29,30], its RMSE is approximately between 0.05 and 0.1 pixel. The Gaussian fitting method is considered a high-accuracy method in applications [31,32], and its accuracy is similar to that of the proposed method in the presence of noise; its disadvantage, however, is the large amount of computation. In contrast, the proposed method significantly reduces the amount of data and computation compared with the Gaussian fitting method while achieving relatively high accuracy, indicating that it is suitable for real-time detection and positioning of point targets.

#### 3.2 Laboratory experiments

We built a DMD-based image-free system for point target detection and positioning in the laboratory, as shown in Fig. 8. It includes Lens1 (Nikon, 35-105mm F/3.5-4.5), Lens2 (Computar, 12mm F/1.4), the DMD chip (Texas Instruments, array size 1024×768), the TIR prism, and the pixel array. The 1024×768 micromirror array on the DMD chip is divided into 16×12 blocks; each block is a 64×64 micromirror array and corresponds to one pixel on the pixel array. In this work, the pixel array is implemented by combining the binning readout mode and the window readout mode of a detector (Hamamatsu, ORCA-Flash 4.0 V2). Every 4×4 original pixels on the detector are binned into a new pixel, and a 16×12 pixel array is then generated through the window readout mode. During the experiments, point targets are generated by a high-precision dynamic target simulator with a field of view of 10.5°×7.5°. The simulator has a resolution of 1280×914 pixels, an angular accuracy of 30 arc seconds (3σ), and a maximum refresh rate of 4225 Hz.

Since measurements on both the *x*-axis and *y*-axis are needed to position a point target, and 2 measurements are required on each axis, 4 measurements are required to position a point target once. Because the data readout frequency is lower than the flip frequency of the DMD, the update rate of the system is determined by the data readout frequency of the detector and the number of measurements needed to position a point target. The data readout frequency is 6 kHz in our work, and the number of measurements is at least 4. Thus the update rate of the system can reach 1.5 kHz (6 k measurements/s ${\div}$ 4 measurements) under ideal conditions. Meanwhile, positioning a point target consumes 8 bytes of data (16 bits (2 bytes)/measurement × 4 measurements), so the data throughput required for each point target in our experiments is 12,000 bytes per second. According to the experimental results, the update rate of the system is about 1 kHz, because additional measurements are sometimes needed when detecting point targets and when point targets move across blocks. In addition, by acquiring data with the corresponding 16×12 pixels, detection and positioning of multiple point targets can be achieved simultaneously.
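The data budget above can be checked in a few lines:

```python
# Worked data budget from the text (ideal conditions).
readout_rate_hz = 6000        # detector data readout frequency
measurements_per_fix = 4      # 2 per axis, x and y
bytes_per_measurement = 2     # one 16-bit intensity value

update_rate_hz = readout_rate_hz // measurements_per_fix       # 1500 Hz
bytes_per_fix = measurements_per_fix * bytes_per_measurement   # 8 bytes
throughput_bps = update_rate_hz * bytes_per_fix                # 12000 bytes/s
```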

To evaluate the accuracy of centroid positioning, a static point target is generated by the simulator and employed for the system performance test. The Gaussian radius of the static point target is estimated to be 0.51 pixel according to the marginal distribution of intensity. The SNR is set to 47.4 dB by adjusting the light source brightness of the simulator. One frame is defined as detecting and positioning the point target for one time. As shown in Fig. 9, the centroid position of the point target has been measured 500 times, and the standard deviation of centroid error is calculated to be 0.0133 and 0.0128 pixel (1σ) on the *x*-axis and *y*-axis, respectively.

Experiments in which the point target moves at a uniform velocity are conducted as well to evaluate the performance of the system under dynamic conditions. The size of the point target and the SNR are the same as those in the static experiment. The position of the point target is measured 200 times, and the variation of ${I_1}$, ${I_2}$, and their ratio ${{{I_2}} / {{I_1}}}$ during the motion is shown in Fig. 10. On both the *x*-axis and *y*-axis, ${I_1}$, ${I_2}$, and ${{{I_2}} / {{I_1}}}$ change cyclically. This is because the centroid offset varies cyclically when the point target moves uniformly, and ${I_1}$, ${I_2}$, and ${{{I_2}} / {{I_1}}}$ are functions of the centroid offset. Meanwhile, as the point target has different motion intervals on the *x*-axis and *y*-axis, the change cycles of ${I_1}$, ${I_2}$, and ${{{I_2}} / {{I_1}}}$ on the two axes differ as well. Calculating the centroid position of the point target, its measured movement trajectory and the fitted trajectory are shown in Fig. 11. Based on the fitted trajectory, the average motion intervals of the point target are 2.68 and 1.12 pixels on the *x*-axis and *y*-axis, respectively. The centroid position derived from the fitted trajectory is taken as the true value in the following centroiding accuracy calculation.

The experimental results also indicate that the proposed system is robust under different SNR conditions. By adjusting the brightness of the target, different SNRs are achieved in the experiments and estimated according to Eq. (14) as 35.2, 41.6, and 47.4 dB, respectively. The Gaussian radius of the point target is 0.51 pixel, the same as in the static experiment. Figure 12 displays the residual error of the centroid, i.e., the difference between the measured centroid position and the fitted position during the motion of the point targets. For dynamic point targets with different SNRs, the RMSE of the centroid offset decreases as the SNR increases, indicating an increase in centroiding accuracy (Table 2). When the SNR is 35.2 dB, the RMSEs of the centroid offset on the *x*-axis and *y*-axis are 0.0686 and 0.0682 pixel, respectively. When the SNR is 47.4 dB, they are 0.0175 and 0.0176 pixel, respectively. Thus, in the common condition of SNR greater than 35.2 dB, the RMSE of the centroid offset is better than 0.0686 pixel.

## 4. Conclusion

In this work, a DMD-based image-free system is proposed for real-time point target detection and positioning. It overcomes the limitations of data storage and processing computation and achieves high spatial and temporal resolution of measurements. The simulation results indicate that the accuracy of the proposed method is close to or better than that of the conventional methods, i.e., the weighted centroid method and the Gaussian fitting method, under different simulation conditions. It is also shown that the proposed method performs best when *σ* is 0.5 pixel. Static experiments show that the centroiding accuracy of the proposed system is 0.013 pixel. For dynamic point targets, experimental results show that the centroiding accuracy is better than 0.07 pixel when the SNR is greater than 35.2 dB. Additionally, the update rate of the proposed system is approximately 1 kHz over a range of 1024×768 pixels, and the method acquires only 8 bytes of data for a single positioning of a point target, indicating its feasibility for real-time detection and positioning of point targets.

## Funding

National Key Research and Development Program of China (No. 2016YFB0501201); National Natural Science Foundation of China (No. 51827806).

## Acknowledgments

The authors acknowledge the support from TY-Space Technology (Beijing) Ltd. for the cooperation in the experiment.

## Disclosures

The authors declare no conflicts of interest.

## Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## References

**1. **M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images,” Light: Sci. Appl. **7**(5), 18006 (2018). [CrossRef]

**2. **T. Delabie, “Star position estimation improvements for accurate star tracker attitude estimation,” in AIAA Guidance, Navigation, and Control Conference, (AIAA, 2015), 1332.

**3. **T. Sun, F. Xing, X. Wang, J. Li, M. Wei, and Z. You, “Effective star tracking method based on optical flow analysis for star trackers,” Appl. Opt. **55**(36), 10335–10340 (2016). [CrossRef]

**4. **B. M. Quine, V. Tarasyuk, H. Mebrahtu, and R. Hornsey, “Determining star-image location: A new sub-pixel interpolation technique to process image centroids,” Comput. Phys. Commun. **177**(9), 700–706 (2007). [CrossRef]

**5. **N. Ogawa, H. Oku, K. Hashimoto, and M. Ishikawa, “Microrobotic visual control of motile cells using high-speed tracking system,” IEEE Trans. Robot. **21**(4), 704–712 (2005). [CrossRef]

**6. **M. Maška, V. Ulman, D. Svoboda, P. Matula, P. Matula, C. Ederra, A. Urbiola, T. España, S. Venkatesan, D. M. W. Balak, P. Karas, T. Bolcková, M. Štreitová, C. Carthel, S. Coraluppi, N. Harder, K. Rohr, K. E. G. Magnusson, J. Jaldén, H. M. Blau, O. Dzyubachyk, P. Křížek, G. M. Hagen, D. Pastor-Escuredo, D. Jimenez-Carretero, M. J. Ledesma-Carbayo, A. Muñoz-Barrutia, E. Meijering, M. Kozubek, and C. Ortiz-de-Solorzano, “A benchmark for comparison of cell tracking algorithms,” Bioinformatics **30**(11), 1609–1617 (2014). [CrossRef]

**7. **N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, “Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations,” Nat. Commun. **7**(1), 12471 (2016). [CrossRef]

**8. **F. Viani, P. Rocca, G. Oliveri, D. Trinchero, and A. Massa, “Localization, tracking, and imaging of targets in wireless sensor networks: An invited review,” Radio Sci. **46**(5), 1 (2011). [CrossRef]

**9. **A. Hijazi and V. Madhavan, “A novel ultra-high speed camera for digital image processing applications,” Meas. Sci. Technol. **19**(8), 085503 (2008). [CrossRef]

**10. **K.-Y. Chan, D. Stich, and G. A. Voth, “Real-time image compression for high-speed particle tracking,” Rev. Sci. Instrum. **78**(2), 023704 (2007). [CrossRef]

**11. **S. Puttinger, G. Holzinger, and S. Pirker, “Investigation of highly laden particle jet dispersion by the use of a high-speed camera and parameter-independent image analysis,” Powder Technol. **234**, 46–57 (2013). [CrossRef]

**12. **F. Xue, W. He, F. Xu, M. Zhang, L. Chen, and P. Xu, “Hessian single-molecule localization microscopy using sCMOS camera,” Biophys Rep **4**(4), 215–221 (2018). [CrossRef]

**13. **J. Fan, X. Huang, L. Li, S. Tan, and L. Chen, “A protocol for structured illumination microscopy with minimal reconstruction artifacts,” Biophys Rep **5**(2), 80–90 (2019). [CrossRef]

**14. **G. A. Howland, D. J. Lum, M. R. Ware, and J. C. Howell, “Photon counting compressive depth mapping,” Opt. Express **21**(20), 23822–23837 (2013). [CrossRef]

**15. **M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics **13**(1), 13–20 (2019). [CrossRef]

**16. **O. S. Magaña-Loaiza, G. A. Howland, M. Malik, J. C. Howell, and R. W. Boyd, “Compressive object tracking using entangled photons,” Appl. Phys. Lett. **102**(23), 231104 (2013). [CrossRef]

**17. **D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. **440**, 155–162 (2019). [CrossRef]

**18. **S. Uttam, N. A. Goodman, M. A. Neifeld, C. Kim, R. John, J. Kim, and D. Brady, “Optically multiplexed imaging with superposition space tracking,” Opt. Express **17**(3), 1691–1713 (2009). [CrossRef]

**19. **J. Ke and E. Y. Lam, “Object reconstruction in block-based compressive imaging,” Opt. Express **20**(20), 22102–22117 (2012). [CrossRef]

**20. **S.-H. Cho, S.-H. Lee, C. Nam-Gung, S.-J. Oh, J.-H. Son, H. Park, and C.-B. Ahn, “Fast terahertz reflection tomography using block-based compressed sensing,” Opt. Express **19**(17), 16401–16409 (2011). [CrossRef]

**21. **Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express **27**(24), 35394–35401 (2019). [CrossRef]

**22. **G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics **10**(1), 23–26 (2016). [CrossRef]

**23. **L. Auer and W. Van Altena, “Digital image centering. II,” The Astronomical Journal **83**, 531–537 (1978). [CrossRef]

**24. **Z. Hegedus and G. Small, “Shape measurement in industry with sub-pixel definition,” Acta Polytech. Scand. Appl. Phys. **150**, 101–104 (1985).

**25. **“Standard for Characterization of Image Sensors and Cameras, Release 3.1,” EMVA Standard 1288 (2016)

**26. **S. B. Grossman and R. B. Emmons, “Performance Analysis And Size Optimization Of Focal Planes For Point-Source Tracking Algorithm Applications,” Opt. Eng. **23**(2), 167–176 (1984). [CrossRef]

**27. **V. Akondi, M. Roopashree, and R. P. Budihala, “Improved iteratively weighted centroiding for accurate spot detection in laser guide star based Shack Hartmann sensor,” in *Atmospheric and Oceanic Propagation of Electromagnetic Waves IV*, (SPIE, 2010), 758806.

**28. **X. Wei, J. Xu, J. Li, J. Yan, and G. Zhang, “S-curve centroiding error correction for star sensor,” Acta Astronaut. **99**, 231–241 (2014). [CrossRef]

**29. **R. C. Stone, “A comparison of digital centering algorithms,” The Astronomical Journal **97**, 1227–1237 (1989). [CrossRef]

**30. **M. R. Shortis, T. A. Clarke, and T. Short, “Comparison of some techniques for the subpixel location of discrete target images,” in *Videometrics III*, (SPIE, 1994), 239–250.

**31. **T. Delabie, J. D. Schutter, and B. Vandenbussche, “An Accurate and Efficient Gaussian Fit Centroiding Algorithm for Star Trackers,” J of Astronaut Sci **61**(1), 60–84 (2014). [CrossRef]

**32. **H. Wang, E. Xu, Z. Li, J. Li, and T. Qin, “Gaussian Analytic Centroiding method of star image of star tracker,” Adv. Space Res. **56**(10), 2196–2205 (2015). [CrossRef]