
Real-time on-orbit image quality improvement for a wide-field imaging system with detector arrays


Abstract

Wide-field imaging systems face the problem of processing and transmitting massive amounts of image information. Due to the limitations of data bandwidth and other factors, current technology struggles to process and transmit massive images in real time. With the requirement for fast response, the demand for real-time on-orbit image processing is increasing. In practice, nonuniformity correction is an important preprocessing step for improving the quality of surveillance images. This paper presents a new real-time on-orbit nonuniform background correction method that uses only the local pixels of a single row as they are output, breaking the dependence of traditional algorithms on whole-image information. Combined with an FPGA pipeline design, processing completes as the local pixels of a single row are read out, and no cache is required at all, which saves resource overhead in the hardware design. The method achieves microsecond-level ultra-low latency. Experimental results show that under the influence of strong stray light and strong dark current, our real-time algorithm improves image quality more than the traditional algorithm. It will greatly help on-orbit real-time moving target recognition and tracking.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The Inter-Agency Space Debris Coordination Committee (IADC) defines space debris as "all man-made objects including fragments and elements thereof, in Earth orbit or re-entering the atmosphere, that are non-functional" [1]. Since the early days of space flight, space has become increasingly crowded with active satellites and space debris. The US Space Surveillance Network currently catalogues more than 17,000 objects [2], but it is estimated that more than 170 million objects larger than 1 mm are currently orbiting the Earth [3]. The Space Surveillance and Tracking segment is responsible for detecting and predicting the movement of space debris in Earth orbit in order to avoid the degradation of space activities due to collisions [4]. Space environment surveillance is therefore essential for space security [5-7].

Space surveillance telescopes with a wide field of view (FOV) and long exposure times can perceive faint targets in space well, which is the development trend of space target detection systems. A concentric objective optical system can meet the high-quality imaging requirements of a wide-field imaging system: it has a compact structure and satisfies the strict resource constraints of space payloads. However, the spherical image surface formed by a concentric lens optical system cannot be coupled directly to a planar detector, a technical bottleneck that has limited its application. With the development of optical fiber manufacturing technology, optical fiber panels or fiber taper devices make it possible to flatten the spherical image surface. However, fiber-optic image transmission components introduce losses during image transmission, and their discrete sampling structure affects the signal-to-noise ratio, modulation transfer function, optical transmittance, image quality, and other characteristics of the wide-field imaging system. When an imaging system coupled with an image-transmitting fiber bundle is illuminated uniformly, the resulting surveillance image therefore exhibits nonuniformity caused by the manufacturing process, fiber panel bonding, the light transmission characteristics of the fibers, and the optical system itself.

At the same time, because the space optical imaging system must be compact, miniaturized, and lightweight, and because of its working environment, the mechanical structures designed to eliminate stray light have only a limited suppression capability; the influence of stray light on a space target detection system remains inevitable. Under stray light, the background of the surveillance image shows severely nonuniform characteristics, which prevents segmentation of targets from the image background and can even cause targets with a low signal-to-noise ratio to be lost entirely during threshold segmentation. How to accurately correct the nonuniform background caused by stray light while retaining target information as effectively as possible is the key problem addressed in this paper. In addition, long exposure times introduce strong dark current. Together, these factors produce the nonuniform background of surveillance images, which seriously hinders the effective identification of stars and targets.
It is extremely important to correct the nonuniform background of the detector array. Due to the complexity and particularity of this type of surveillance image, traditional flat-field correction, which is a static correction, cannot remove nonuniformity caused by unknown factors and has difficulty meeting the task requirements. This paper mainly uses a background estimation method to correct the nonuniform background.

Common classical filtering algorithms include the max-median filter, the max-mean filter [8], the two-dimensional least mean square (TDLMS) filter [9-11], and morphological filters [12,13]. These algorithms perform unstably on low-SNR images [14]. To address this problem, Bai [15] proposed a method based on modified top-hat transformations, which reduces the filter's sensitivity to stray light by introducing a threshold. Bai et al. [16] proposed the new top-hat transformation, which realizes multi-scale adjustment of the top-hat transformation through two different structural elements while also considering the difference between the target area and its surroundings. Deng et al. [17] optimized the structural elements of the top-hat filter using a quantum genetic algorithm. These methods outperform typical filters, but they remain limited by stray light nonuniform backgrounds with strong noise and cannot achieve good detection results. More recently, Xu et al. [18] proposed the improved new top-hat transformation (INTHT), based on the new top-hat transformation, to correct the stray light nonuniform background of a wide-field surveillance system, and Xu et al. [19] proposed recursion multi-scale gray-scale morphology (RMGM), also based on the new top-hat transformation, for wide-field surveillance. Both achieve good stray light nonuniform background correction and stray light elimination, basically realizing accurate background elimination and high-precision target retention for the space surveillance camera.

For an image processed by morphological operations with specific structural elements, eliminating the background and suppressing the noise to a low level can effectively enhance the region of interest and thus enable target recognition. However, the structural elements of M × N pixels used in traditional morphological operations introduce correlation between rows, and mathematical morphology often requires repeated dilation and erosion operations on the image. This approach needs a large memory to store the intermediate images generated during processing; these intermediate images serve only as inputs to subsequent morphological calculations and are never output as final images, which wastes hardware resources and incurs long latency. The wide-field imaging system produces a huge volume of images and is limited by onboard storage resources, computing resources, algorithm complexity, and the whole-image processing mode. These factors prevent traditional algorithms from achieving real-time correction on orbit.

To improve the real-time performance of imaging systems, Wei et al. [20] proposed a real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Ding et al. [21] proposed a real-time star centroid extraction algorithm with high speed and superior denoising ability based on a 1D top-hat filter. However, these algorithms usually require threshold segmentation, which inevitably requires whole-image information to compute the threshold.

To solve the above problems and realize real-time on-orbit correction, this paper proposes a high-precision background estimation method that uses only the local pixels of a single row. The method needs no threshold calculation, which breaks the dependence of traditional background estimation on global image information: processing completes and the result is output as each pixel is read out. The method achieves microsecond-level ultra-low latency with extremely high real-time performance, reducing output latency by at least five orders of magnitude. This will greatly help on-orbit real-time moving target detection and is of great significance in engineering practice.

2. Imaging characteristics

To meet the lightweight design requirements of microsatellites, the optical telescope must have a miniaturized and compact structure while providing a wide FOV. To obtain better imaging quality, the wide-field imaging system adopts a concentric optical system coupled with optical fiber image bundles. Figure 1 shows the optical system and the focal plane array.

Fig. 1. (a) Concentric optical system; (b) Focal plane array.

The optical image is modeled as follows:

$$F(i,j) = T(i,j) + S(i,j) + B(i,j) + N(i,j)$$
where $(i,j)$ denotes the pixel coordinates of the image, $F(i,j)$ is the image grayscale value at coordinate $(i,j)$, $T(i,j)$ and $S(i,j)$ represent the space targets and stars, respectively, and $B(i,j)$ is the space background, which is often nonuniform because of the effects of stray light. $N(i,j)$ is the noise, including thermal noise, readout noise, dark current, etc.

The space targets processed in this paper take star points as an example; the energy distribution of a star point can be approximated by a Gaussian point spread function:

$$T(x,y) = \frac{{T_0}}{{2\pi \delta_{PSF}^2}}\exp \left\{ { - \frac{{{{(x - {x_0})}^2} + {{(y - {y_0})}^2}}}{{2\delta_{PSF}^2}}} \right\}$$
where ${T_0}$ represents the total energy radiated by the star onto the image sensor, $({x_0},{y_0})$ is the centroid coordinate of the star, and $\delta_{PSF}$ is the Gaussian radius of the star point.

For an ideal star, the energy distribution follows a two-dimensional Gaussian distribution; that is, the gray values of pixels in the star's dispersion area are symmetrically distributed, decreasing gradually from the center to the surrounding area, as shown in Fig. 2.
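As a small illustration of Eqs. (1)-(2), the following sketch (not the paper's data; all parameter values here are hypothetical, chosen only for display) synthesizes a frame as a Gaussian star on a slowly varying stray-light ramp plus noise:

```python
# Synthetic frame F = T + B + N (Eq. (1)), with T a Gaussian star (Eq. (2)).
# Assumed values: T0, sigma (= delta_PSF), ramp slope, and noise level.
import numpy as np

def gaussian_star(shape, x0, y0, T0=5000.0, sigma=1.5):
    y, x = np.indices(shape)
    return T0 / (2 * np.pi * sigma**2) * np.exp(
        -((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

h, w = 100, 100
B = 40.0 + 0.5 * np.indices((h, w))[1]              # smooth stray-light ramp
N = np.random.default_rng(0).normal(0, 3, (h, w))   # readout/dark-current noise
F = gaussian_star((h, w), 50, 50) + B + N           # observed image, Eq. (1)
```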

Fig. 2. (a) Imaging model of an ideal star; (b) 3D plot of a star.

3. Analysis of the surveillance image

The images used in this paper were taken by the wide-field imaging system with CMOS detector arrays. The sensor is the GSENSE6060 CMOS image sensor from ChangGuangChenXin. Its spectral response is shown in Fig. 3, and the sensor specifications are listed in Table 1.

Fig. 3. Spectral response of the GSENSE6060 CMOS sensor.

Table 1. Sensor specifications

First, we obtained surveillance images through ground-based observations. Figure 4 shows images taken by the wide-field imaging system with detector arrays. Figure 5 details the imaging characteristics of the wide-field camera under long exposure: Fig. 5(a) is the original two-dimensional image, Fig. 5(b) is a one-dimensional profile, Fig. 5(c) is a 3D view of a 30 × 30 pixel region, and Fig. 5(d) is a 3D view of a 100 × 100 pixel region, showing more detail in multiple dimensions. From Fig. 5(a), we can see that the wide-field imaging system is seriously affected by nonuniform background. This is because our wide-field imaging system uses fiber-optic image transmission components, which introduce serious nonuniformity. At the same time, the transmission losses of these components and their discrete sampling structure inevitably degrade the signal-to-noise ratio, modulation transfer function, and optical transmittance.

Fig. 4. Images taken by the wide-field imaging system.

Fig. 5. Influence of nonuniform background on wide-field surveillance in complicated situations. (a) Original surveillance image, (b) One-dimensional analysis of nonuniform background, (c) 3D display of the target submerged in nonuniform background, (d) Three-dimensional analysis of nonuniform background.

Second, considerable noise is visible in the surveillance image. This noise comes mainly from the sensor and its readout circuit, including reset noise, quantization noise, photon shot noise, and dark current; the image is also affected by space radiation noise. Because of the long exposure times characteristic of space surveillance, space targets and stars are largely submerged in strong dark current, as shown in Fig. 5(b). Dark current thus forms part of the nonuniform background of the surveillance image.

The star in Fig. 5(c) and (d) is the same star, and it is strongly affected by the nonuniform background. In general, a wide FOV and long exposure improve target detection capability for space surveillance, but they also introduce serious nonuniformity into the image, mainly from the fiber-optic image transmission components, dark current, and unavoidable stray light. These factors pose serious challenges to space target detection and tracking. To detect space targets better, we must correct the nonuniform background of the surveillance image, which is also an essential part of improving the imaging performance of the wide-field imaging system.

4. Nonuniform background correction

In traditional algorithms, morphological filtering requires repeated dilation and erosion operations. The structural elements are usually M × N pixels in size, so there is correlation between rows: to process one row of the image, information from other rows is needed. Therefore, the whole image must be cached before processing, as shown in Fig. 6(b), and the whole intermediate image of the previous step must be cached before each morphological operation. Traditional algorithms thus store, read, and process repeatedly, which leads to excessive processing time and large latency in the imaging system. Figure 7(b) shows the processing flow of the traditional algorithm. The latency is at least the transmission time of one image: for an image of 6k × 6k pixels, the output latency of the traditional method is at least 6k × 6k pixel clock cycles.
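A back-of-envelope comparison (using the paper's own numbers, not a new measurement) makes the gap concrete: one full-frame cache costs about $3.6 \times 10^{7}$ pixel clock cycles, whereas the single-row method of Section 4 incurs on the order of $10^{2}$ cycles,

$$\frac{6\,\textrm{k} \times 6\,\textrm{k}\ \textrm{cycles}}{\sim\! 10^{2}\ \textrm{cycles}} \approx \frac{3.6 \times 10^{7}}{10^{2}} \approx 4 \times 10^{5},$$

which is the source of the five-orders-of-magnitude latency saving claimed above.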

Fig. 6. (a) Latency of the real-time correction algorithm; (b) Latency of the traditional algorithm.

Fig. 7. (a) Processing flow of the real-time correction algorithm; (b) Processing flow of the traditional algorithm.

The real-time correction algorithm proposed in this paper uses only the local pixels of a single row in its mathematical morphology operations and does not need threshold segmentation, completely removing the dependence on whole-image information. Combined with the FPGA pipeline design, nonuniform background correction is completed at the same time as the image is output. That is, we do not need to acquire the whole image before operating; we operate on the local pixels of a single row as they are read out and output the processed pixels in real time. As Fig. 6(a) shows, the pixel readout time, image processing time, and image readout time of our real-time correction algorithm essentially coincide, which greatly improves the real-time performance of the system. Because the traditional algorithm uses whole-image information, it requires repeated caching, reading, and processing, which imposes huge latency on the system, as shown in Fig. 6(b). Figure 7(a) shows the processing flow of the real-time correction algorithm: no whole-image cache is needed, and the system latency is usually hundreds of pixel clock cycles, i.e., microsecond-level. Figure 7(b) shows the processing flow of the traditional algorithm: whole-image information is required before each morphological operation, so a cache is needed, which brings large latency to the system. Our proposed algorithm reduces output latency by at least five orders of magnitude.

The flow chart of the real-time correction algorithm is shown in Fig. 8.

Fig. 8. Flow chart of the real-time correction algorithm.

4.1 Real-time correction algorithm based on local pixels

The nonuniform background of surveillance images is relatively complex, being caused mainly by the fiber coupling devices, dark current, and stray light. In this paper, mathematical morphology is used for background estimation to eliminate the nonuniform background. Mathematical morphology is built on two basic operations, dilation and erosion, from which the opening operation, closing operation, and top-hat transformation are derived.

$$f \oplus \Phi = \max \{ f(i - m,j - n) + \Phi (m,n) \mid (m,n) \in {D_\Phi } \}$$
$$f\,\Theta\,\Phi = \min \{ f(i + m,j + n) - \Phi (m,n) \mid (m,n) \in {D_\Phi } \}$$
$$f \bullet \Phi = (f \oplus \Phi )\,\Theta\,\Phi$$
$$f \circ \Phi = (f\,\Theta\,\Phi ) \oplus \Phi$$
where ${\oplus}$ and $\Theta$ represent dilation and erosion, respectively, and $\Phi$ is the structural element (SE), an essential component of both operations. An SE is a matrix containing only 1s and 0s, of any size and shape. Dilation makes the image gray values larger than those of the original image and enlarges bright regions; erosion makes the gray values smaller and shrinks bright regions. ${\circ}$ denotes the opening operation, which smooths small bright regions of the image, and ${\bullet}$ denotes the closing operation, which eliminates small dark holes.
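To make Eqs. (3)-(6) concrete, the following minimal NumPy sketch (our illustration, not code from the paper) implements them for a one-row signal and a flat linear SE of length n; because the SE's additive term $\Phi(m,n)$ is zero over its support, dilation and erosion reduce to running maximum and minimum:

```python
# 1D gray-scale morphology with a flat linear SE of length n (assumed odd).
import numpy as np

def dilate(f, n):   # Eq. (3): running maximum over the SE support
    pad = np.pad(f, n // 2, mode="edge")
    return np.array([pad[i:i + n].max() for i in range(len(f))])

def erode(f, n):    # Eq. (4): running minimum over the SE support
    pad = np.pad(f, n // 2, mode="edge")
    return np.array([pad[i:i + n].min() for i in range(len(f))])

def closing(f, n):  # Eq. (5): dilation followed by erosion (fills dark holes)
    return erode(dilate(f, n), n)

def opening(f, n):  # Eq. (6): erosion followed by dilation (removes bright peaks)
    return dilate(erode(f, n), n)
```

An opening whose SE is longer than a bright feature removes that feature entirely, which is exactly the property the background estimation below exploits.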

When the top-hat transform removes the nonuniform background, residual background remains, so threshold segmentation of the image is necessary; computing the threshold requires global image information, which precludes real-time image processing. The new top-hat transform does not need to calculate a threshold: it uses a composite structural element, formed from two structural elements, for the dilation in the first step, which raises the gray value of the background so that the subsequent erosion achieves an accurate background estimate. However, the new top-hat transform cannot use linear structural elements and therefore also cannot achieve real-time image processing. To solve this problem and correct the background using only the local pixels of a single row in real time, we propose an improved top-hat transform. First, we perform a closing operation on the image to raise the gray value of the nonuniform background without destroying the energy distribution of the target.

$${T_0}(i,j) = f(i,j) \bullet {\Phi _\alpha }$$
where $f(i,j)$ is the surveillance image and ${\Phi_\alpha}$ is the structural element of the closing operation. Then, we perform an opening operation to estimate the nonuniform background:
$${T_1}(i,j) = {T_0}(i,j) \circ {\Phi_\beta} = f(i,j) \bullet {\Phi_\alpha} \circ {\Phi_\beta}$$
where ${T_1}(i,j)$ is the background estimated by the improved top-hat transform and ${\Phi_\beta}$ is the structural element of the opening operation. In the traditional method, the opening result itself is used as the background; in our improved algorithm, we first perform the closing operation and then the opening operation to obtain the background of the image. This has two advantages: it strengthens the background estimation, so a whole-image threshold is no longer needed, and the closing structural element adds a fine adjustment for noise suppression, yielding a more accurate background estimate and more accurate target segmentation.

Then the image after nonuniform background elimination is obtained:

$${f_{ITH}} = f(i,j) - {T_1}(i,j) = f(i,j) - f(i,j) \bullet {\Phi_\alpha} \circ {\Phi_\beta}$$

The improved top-hat transform avoids the whole-image threshold calculation because it strengthens the background estimation. At the same time, it can use the linear structural elements that the new top-hat transformation cannot. Therefore, we can process the local pixels of a single row independently as they are output, and, combined with the FPGA pipeline design, achieve real-time correction of the nonuniform background in the true sense.
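A minimal sketch of the improved top-hat transform of Eqs. (7)-(9), assuming SciPy's grey_closing/grey_opening as stand-ins for the FPGA operators (this is our illustration, not the flight code). A flat SE of size (1, n) confines every operation to the current row, which is what makes single-row streaming possible:

```python
# Row-wise improved top-hat: closing (Eq. (7)), opening (Eq. (8)), subtraction (Eq. (9)).
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def improved_tophat(image, se_close=30, se_open=30):
    closed = grey_closing(image, size=(1, se_close))       # raise the background, Eq. (7)
    background = grey_opening(closed, size=(1, se_open))   # estimate the background, Eq. (8)
    return np.clip(image.astype(np.int32) - background, 0, None)  # Eq. (9)
```

The default sizes follow the 1 × 30 pixel structural elements chosen later in this subsection; each row can be processed as soon as it is available, independently of every other row.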

It should be noted that, in the background estimation algorithm, strong dark current is protected as if it were a target and cannot be effectively eliminated, so the algorithm must be improved further: the noise must be preprocessed before correcting the nonuniform background of the image. The traditional median filter is a nonlinear signal processing technique based on order statistics. Although it suppresses noise effectively, it also weakens weak space targets, which hinders subsequent image processing and makes weak target recognition much more difficult. Therefore, this paper adopts a method based on dark frame subtraction. We captured dark frames at different integration times and stored them in FLASH; then, before background estimation, the image is pre-corrected for dark current.

$$f(i,j) = {I_0}(i,j,t) - {I_D}(i,j,t)$$
where ${I_0}(i,j,t)$ is the original surveillance image, ${I_D}(i,j,t)$ is the dark frame image, and $t$ is the integration time. Finally, we obtain the real-time correction algorithm of this paper.
$${f_{RTH}} = {I_0}(i,j,t) - {I_D}(i,j,t) - ({I_0}(i,j,t) - {I_D}(i,j,t)) \bullet {\Phi _\alpha } \circ {\Phi _\beta }$$
i.e.
$$\begin{aligned} {f_{RTH}} &= {I_0}(i,j,t) - {I_D}(i,j,t) - \\ &\quad (({I_0}(i,j,t) - {I_D}(i,j,t)) \oplus {\Phi_\alpha}\,\Theta\,{\Phi_\alpha}\,\Theta\,{\Phi_\beta} \oplus {\Phi_\beta}) \end{aligned}$$
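Putting Eqs. (10)-(11) together, the complete correction is dark-frame subtraction followed by the row-wise improved top-hat; a sketch building on `improved_tophat` above (the dark frame for the matching integration time is assumed to have been read back from FLASH):

```python
# Full RTH chain, Eq. (11): subtract the dark frame, then remove the
# estimated background produced by closing followed by opening.
def rth_correct(I0, ID, se_close=30, se_open=30):
    f = np.clip(I0.astype(np.int32) - ID.astype(np.int32), 0, None)  # Eq. (10)
    return improved_tophat(f, se_close, se_open)                     # Eq. (11)
```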

Since the diameter of the targets of interest is less than 30 pixels, we choose a structural element of 1 × 30 pixels for the opening operation; in this way, no target information remains in the background during background estimation. The purpose of the closing operation is to enhance the background estimation while suppressing background noise. The closing structural element is usually the same size as the opening one, but because the closing operation also provides the fine adjustment of noise suppression, its size can be tuned for better noise suppression performance.

Figure 9 shows the noise-suppression fine-adjustment capability of our proposed real-time correction algorithm, using a star target as a reference. Figure 9(a) is the original image, and Figs. 9(b)-(d) are the background correction results using closing structural elements of different sizes. The larger the structural element, the stronger the background noise suppression. This added capability makes our estimate of the nonuniform background more accurate.

Fig. 9. Noise suppression results using closing structural elements of different sizes. (a) Original surveillance image, (b) Result with a 1 × 20 pixel structural element, (c) Result with a 1 × 30 pixel structural element, (d) Result with a 1 × 40 pixel structural element.

4.2 Pipeline design for real-time correction algorithm

Figure 10 is the structure block diagram of the imaging control system. The detector driver unit drives the CMOS detector. The data transmission unit transmits image data within the system. The communication unit conveys control commands within the system. The time synchronization unit synchronizes all detector driver units, ensuring that all detector units capture images synchronously. The RTH algorithm implementation module realizes the nonuniform background real-time correction algorithm, based on the local pixels of a single row, proposed in this paper.

Fig. 10. Structure block diagram of the imaging control system.

To realize a cache-free, ultra-low-latency design, we built an FPGA pipeline. As mentioned above, the improved top-hat transform combined with the FPGA pipeline design achieves real-time correction. Our real-time correction algorithm based on local pixels has a latency of only hundreds of pixel clock cycles, determined mainly by the structural elements:

$${T_{\textrm{r-delay}}} = \frac{1}{CLK} \times ((SE_\bullet + SE_\circ) \times 2 - 1)$$
where ${T_{\textrm{r-delay}}}$ is the latency of the real-time correction algorithm based on the local pixels of a single row, for which we consider mainly the contribution of the structural elements; $CLK$ is the pixel clock; $SE_\bullet$ is the size of the closing structural element; and $SE_\circ$ is the size of the opening structural element. Taking linear structural elements of 1 × 3 pixels as an example, we describe the origin of the latency in the algorithm's operation. Figure 11 shows the FPGA pipeline design using a 1 × 3 pixel linear structural element. It can be seen that the proposed algorithm has strong real-time processing performance.
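As a worked example with the 1 × 30 structural elements of Section 4.1 (the 50 MHz pixel clock here is an assumed figure for illustration, not a parameter reported for the system):

$$T_{\textrm{r-delay}} = \frac{(30 + 30) \times 2 - 1}{50\ \textrm{MHz}} = \frac{119\ \textrm{cycles}}{5 \times 10^{7}\ \textrm{Hz}} \approx 2.4\ \mu\textrm{s},$$

on the order of a hundred pixel clock cycles, i.e., microsecond-level, as stated above.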

Fig. 11. Pipeline design of the real-time correction algorithm.

The block diagram of the RTH algorithm is shown in Fig. 12. To facilitate understanding, we again use structural elements of 1 × 3 pixels. DIN is the input pixel and DOUT the output pixel. MAX is a comparator that outputs the larger of its inputs; MIN is a comparator that outputs the smaller. The block diagram shows that when the structural element is 1 × 3 pixels, the latency of the wide-field imaging system is only 11 pixel clock cycles.
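The following generator-based sketch is a conceptual software model of this streaming pipeline (our illustration with our own alignment choices, not the authors' FPGA code; the hardware's 11-cycle figure also counts comparator register stages). Each stage holds only a tiny sliding window, so no frame buffer is ever needed, and the corrected pixel emerges a fixed number of steps after the raw pixel enters:

```python
# Streaming single-row RTH: dilate -> erode (closing) -> erode -> dilate (opening),
# with a matched delay line aligning raw pixels to the background estimate.
from collections import deque
from itertools import tee

def sliding_stage(stream, size, op):
    """Running min/max over a size-pixel window; emits once the window fills."""
    window = deque(maxlen=size)
    for px in stream:
        window.append(px)
        if len(window) == size:
            yield op(window)

def rth_row(pixels, se_close=3, se_open=3):
    raw, chain = tee(pixels)
    closed = sliding_stage(sliding_stage(chain, se_close, max), se_close, min)
    background = sliding_stage(sliding_stage(closed, se_open, min), se_open, max)
    skip = (se_close - 1) + (se_open - 1)              # center-alignment offset
    aligned = (p for i, p in enumerate(raw) if i >= skip)
    for p, b in zip(aligned, background):
        yield max(p - b, 0)

row = [10] * 50
row[25] = 200                       # a 1-pixel bright "target" on a flat row
out = list(rth_row(iter(row)))
print(max(out))                     # ~190: target kept, background removed
```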

Fig. 12. Block diagram of the RTH algorithm.

This section has described the technical barriers that keep traditional algorithms from on-orbit real-time correction in the wide-field imaging system with CMOS detector arrays. Traditional background estimation requires threshold segmentation, and selecting a threshold is very difficult for space images with complex backgrounds: weak targets are very sensitive to the threshold, and an inappropriate threshold segmentation algorithm can make weak space targets disappear completely. In addition, threshold segmentation requires global image information and is unsuitable for real-time image processing. The new top-hat transform cannot use linear structural elements and still needs global image information, so it cannot complete processing as each pixel is output. We give a reasonable solution: an innovative real-time correction algorithm based on the local pixels of a single row, whose principle and performance have been described in detail. It breaks the dependence of traditional algorithms on whole-image information and completes nonuniformity correction using only the local pixel information output in real time. Figure 13 shows the processing results of the real-time correction algorithm. Figure 13(a) is the original image. Figure 13(b) shows two stars in two rows, revealing more detail. Figure 13(c) is a 3D display of Fig. 13(b), which shows the nonuniformity of the background intuitively. Figure 13(d) shows the two rows before and after correction: the real-time correction algorithm eliminates their nonuniform background well. Figures 13(e)-(f) show enlargements of Fig. 13(d) around the targets in the two rows. The stars are well preserved and the residual background noise around the two targets is very low, which fully demonstrates the feasibility of the real-time correction algorithm in application.

Fig. 13. Processing results of the real-time correction algorithm. (a) Original surveillance image, (b) Two targets in the original surveillance image, (c) 3D display of the targets submerged in nonuniform background, (d) Correction results based on the local pixels of a single row.

5. Experiments and discussions

To objectively evaluate the validity and practicability of the method, we compared its performance with that of the new top-hat transform algorithm (non-real-time) [16]. We use the signal-to-noise ratio (SNR) [22] to measure the improvement in image quality. Nonuniform background correction improves image quality: at the macro level, it eliminates the nonuniform background that contains no target information; at the micro level, it preserves the target signal while minimizing residual background noise. We therefore take star points in local areas as references: the higher the SNR of a star point after processing, the greater the improvement in image quality. Accordingly, we quantitatively compare the image quality improvement of different algorithms by calculating the SNR of stars.
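For reproducibility, one common local definition of star SNR is sketched below (Ref. [22] specifies the exact metric used in the paper; this version and its window sizes are our assumption for illustration): the signal is the star window's mean excess over a surrounding background ring, and the noise is that ring's standard deviation.

```python
# Local SNR of a small star: (mean(star) - mean(background)) / std(background).
# Assumes the star at (cy, cx) is far enough from the image edges.
import numpy as np

def local_snr(image, cy, cx, target=3, ring=15):
    t = target // 2
    star = image[cy - t:cy + t + 1, cx - t:cx + t + 1].astype(float)
    patch = image[cy - ring:cy + ring + 1, cx - ring:cx + ring + 1].astype(float)
    patch[ring - t:ring + t + 1, ring - t:ring + t + 1] = np.nan  # mask the star out
    return (star.mean() - np.nanmean(patch)) / np.nanstd(patch)
```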

5.1 Image quality analysis

The purpose of background correction is to eliminate the meaningless background while preserving space targets and stars. So as not to over-eliminate the nonuniform background, we use a weak star point of 3 × 3 pixels as the reference for selecting the structural element, keeping the reference star at the same energy after processing by the two algorithms. To understand the performance of our algorithm better, we compare the traditional new top-hat transform algorithm (non-real-time) with our proposed real-time correction algorithm. Because a full image from the wide-field imaging system with CMOS detector arrays is very large, we show only the images of two CMOS detectors so that more detail is visible. At the macro level, both our real-time correction algorithm and the traditional algorithm achieve good background estimation, but in detail our algorithm leaves less residual noise, as shown in Fig. 14: the image corrected by our algorithm has less residual background noise around the stars. Table 2 lists the SNR of the three stars. Compared with the traditional algorithm, the SNR of the stars processed by our real-time correction algorithm is greatly improved. In theory, this is because our algorithm provides the fine adjustment of background suppression, which achieves a more accurate background estimate, and because it also accounts for the noise in the dark frame, which effectively eliminates dark current. Our real-time correction therefore greatly improves the quality of the surveillance images.

Fig. 14. Background correction results. (a) Original surveillance image of Sensor-1, (b) Original surveillance image of Sensor-2, (c), (d) Processing results of the traditional algorithm, (e), (f) Processing results of the real-time correction algorithm.

Table 2. SNR of different algorithms

In general, compared with the traditional algorithm, the proposed algorithm not only achieves real-time processing based on the local pixels of a single row but also achieves more accurate background estimation, with less residual background noise and better image quality. This fully illustrates the feasibility of the real-time correction algorithm based on local pixel information.

Surveillance images are easily disturbed by stray light, which gives the background a certain degree of nonuniformity. We captured images affected by different levels of stray light to verify the algorithm's stray light suppression performance. From Fig. 15, we can see that our algorithm still performs well compared with the traditional algorithm under strong stray light interference. Table 3 shows that the SNR of the two targets corrected by our algorithm is greatly improved, indicating that the real-time correction algorithm still outperforms the traditional non-real-time algorithm in the case of strong stray light interference.

Fig. 15. Background correction results under the influence of strong stray light. (a) Original surveillance image of Sensor-1, (b) Another surveillance image of Sensor-1, (c), (d) Processing results of the traditional algorithm, (e), (f) Processing results of the real-time correction algorithm.

Table 3. SNR of different algorithms under the influence of strong stray light

In principle, compared with the traditional algorithm, our algorithm not only realizes ultra-low-latency real-time correction but also eliminates the nonuniform background more accurately. That is, it can remove nonuniform background noise, dark current, and stray light well, and it achieves better image quality improvement under the influence of strong stray light. This fully shows that the real-time correction algorithm based on the local pixels of a single row is reliable and robust.

Following the requirements of the on-orbit imaging experiment, we took 50 photos continuously under the influence of strong stray light and selected weak space targets of 3 × 3 pixels to calculate the signal-to-noise ratio, as shown in Fig. 16. The test results are shown in Fig. 17 and Fig. 18. Because the star points are moving, and the stray light and noise change accordingly, the signal-to-noise ratio of each star point also varies. Compared with the traditional algorithm, our real-time correction algorithm significantly improves the image quality, and it retains good performance under strong light interference. This further shows that the algorithm helps improve the imaging performance of the wide-field imaging system, which will greatly aid the detection and tracking of moving space targets.

Fig. 16. Surveillance image sequence.

Fig. 17. Signal-to-noise ratio of target 6.

Fig. 18. Signal-to-noise ratio of target 7.

6. Conclusion

To realize real-time on-orbit image quality improvement for the wide-field imaging system, this paper proposes a new real-time correction algorithm based on the local pixels of a single row. Building on the basic theory of mathematical morphology, we improve the traditional top-hat transform, which not only enhances background suppression but also suppresses noise more accurately. The method breaks the dependence of traditional algorithms on whole-image information and achieves more accurate background estimation using only the local pixels of a single row. When processing massive wide-field surveillance images, it achieves microsecond-level ultra-low latency, realizing real-time processing and transmission of massive images in the true sense. The proposed real-time correction algorithm was verified in field experiments: under the influence of strong stray light, and even compared with the non-real-time traditional background estimation algorithm, our algorithm still delivers better background correction and better image quality improvement. It will bring great help to real-time on-orbit moving target recognition and tracking.

Funding

Strategic Priority Research Program of Chinese Academy of Sciences (XDA17010205).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Inter-Agency Space Debris Coordination Committee, IADC Space Debris Mitigation Guidelines (2007).

2. National Aeronautics and Space Administration, “Monthly number of objects in Earth orbit by object type,” Orbital Debris Quarterly News 2 (2016).

3. European Space Agency, “How many space debris objects are currently in orbit?” (2016).

4. O. R. Fernandez, J. Utzmann, and U. Hugentobler, “SPOOK—A comprehensive Space Surveillance and Tracking analysis tool,” Acta Astronaut. 158, 178–184 (2019).

5. J. N. Pelton, Space Debris and Other Threats from Outer Space, SpringerBriefs in Space Development (Springer, 2013).

6. D. Messier, “The current state of space debris,” Parabolic Arc (Oct. 13, 2020).

7. “Guidelines for the long-term sustainability of outer space activities of the Committee on the Peaceful Uses of Outer Space adopted,” UNOOSA Press Release (June 22, 2019).

8. R. Venkateswarlu, “Max-mean and max-median filters for detection of small targets,” Proc. SPIE 3809, 74–83 (1999).

9. Y. Cao, R. M. Liu, and J. Yang, “Small target detection using two-dimensional least mean square (TDLMS) filter based on neighborhood analysis,” Int. J. Infrared Millimeter Waves 29(2), 188–200 (2008).

10. T.-W. Bae, Y.-C. Kim, S.-H. Ahn, and K.-I. Sohng, “A novel two-dimensional LMS (TDLMS) using sub-sampling mask and step-size index for small target detection,” IEICE Electron. Express 7(3), 112–117 (2010).

11. B. Zhang, T. Zhang, Z. Cao, and K. Zhang, “Fast new small-target detection algorithm based on a modified partial differential equation in infrared clutter,” Opt. Eng. 46(10), 106401 (2007).

12. O. E. Drummond, “Morphology-based algorithm for point target detection in infrared backgrounds,” in Signal and Data Processing of Small Targets (1993), pp. 2–11.

13. X. Bai, F. Zhou, and T. Jin, “Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter,” Signal Process. 90(5), 1643–1654 (2010).

14. Y. Lu, S. Huang, and W. Zhao, “Sparse representation based infrared small target detection via an online-learned double sparse background dictionary,” Infrared Phys. Technol. 99, 14–27 (2019).

15. X. Bai and F. Zhou, “Technical communication: Infrared small target enhancement and detection based on modified top-hat transformations,” Comput. Electr. Eng. 36(6), 1193–1201 (2010).

16. X. Bai and F. Zhou, “Analysis of new top-hat transformation and the application for infrared dim small target detection,” Pattern Recognit. 43(6), 2145–2156 (2010).

17. L. Deng, H. Zhu, Q. Zhou, and Y. Li, “Adaptive top-hat filter based on quantum genetic algorithm for infrared small target detection,” Multimed. Tools Appl. 6, 1–13 (2017).

18. Z. Xu, D. Liu, C. Yan, and C. Hu, “Stray light nonuniform background correction for a wide-field surveillance system,” Appl. Opt. 59(34), 10719–10728 (2020).

19. Z. Xu, D. Liu, C. Yan, and C. Hu, “Stray light elimination method based on recursion multi-scale gray-scale morphology for wide-field surveillance,” IEEE Access 9, 16928–16936 (2021).

20. M. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018).

21. J. Ding, D. Dai, W. Tan, X. Wang, and S. Qin, “Implementation of a real-time star centroid extraction algorithm with high speed and superior denoising ability,” Appl. Opt. 61(11), 3115–3122 (2022).

22. S. Liu, J. Zhang, G. Sun, G. Zhang, S. Chen, and J. Chen, “Research on evaluation index of stray light suppression ability of star sensor based on signal-to-noise ratio,” Opt. Commun. 530, 129175 (2023).
