
Study of the image motion compensation method for a vertical orbit dynamic scanning TDICCD space camera


Abstract

In this study, a collaborative compensation method combining low-dimensional attitude maneuvering and time delay integration charge-coupled device (TDICCD) line-frequency matching is proposed and validated against a coordinate system transformation model. The method addresses the mismatch between the TDI charge transfer velocity and the velocity of the target, which is caused by the inconsistency between the rotational scanning direction of the double-sided mirror used for dynamic vertical orbit scanning imaging in low-Earth-orbit satellites and the satellite's along-orbit direction. Compared with the uncompensated maneuver mode, the image motion per unit exposure time is decreased from 0.619 µm to 0.023 µm, and the image quality is noticeably improved.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-resolution remote sensing cameras are widely used in geological analysis, agricultural and forestry disaster monitoring, and military reconnaissance. In these fields, the main focus of satellite development is balancing resolution against the surveyed area [1–4]. To achieve this goal, flexible satellite control platforms and high-performance time delay integration charge-coupled devices (TDICCDs) have been developed, enabling remote sensing cameras to be utilized for a wide range of applications [5,6].

The use of TDICCD line-array push-scan imaging technology, which can significantly improve the signal-to-noise ratio of the imaging system and reduce the aperture requirements of the camera [7], is key for building miniaturized and lightweight satellites. To ensure imaging quality, it is necessary to accurately match the electronic signals of the multistage exposure in the integration mode [8]. In recent years, the image motion of high-resolution spatial TDI cameras has been the subject of several extensive theoretical and practical investigations [9]. Currently, there is an urgent need for large-area high-resolution imaging.

Traditional methods for realizing large-area imaging in space primarily include along-orbit multi-strip splicing, multi-camera splicing, and satellite network formation splicing. Multi-strip splicing imaging is currently the most frequently used technique to increase the vertical orbit observation distance [10]. However, to perform ultra-large-format wide imaging in a direction that is not along the orbital path, it is necessary to splice 10 or more images, which makes it difficult to complete ultra-large-format wide imaging within the time constraints. Multi-camera satellite technology involves equipping a satellite with multiple cameras to capture a broader view of the Earth's surface. Each camera maintains consistent imaging parameters, but their installation angles differ. During in-orbit imaging, multiple cameras simultaneously observe the Earth's surface from different angles to capture a multiple-strip image [11–13]. Satellite network formation splicing imaging is used when the revisit cycle is short, but this requires hundreds of satellites [14], which significantly raises launch and manufacturing costs. Multi-camera and satellite network formation splicing imaging are no longer applicable due to constraints on weight, volume, and cost. Recently, a new imaging method was proposed by Xibin Cao's team at the Harbin Institute of Technology (HIT) in Harbin, China: the dynamic circular scanning space camera, which attains a large imaging area via rotation [15]. The conical motion of the double camera has been characterized, and ultra-wide coverage of the imaging area has been realized by designing the single-camera constant-speed OCPSI and dual-camera variable-speed OCPSI modes [16]. The circular scanning camera uses a long-exposure array to improve the signal-to-noise ratio of the system. However, it produces an edge velocity field and exhibits poor imaging quality [17]. The dynamic circular scanning camera can only be used for array imaging, not for TDI imaging.

To address image motion during the imaging process, Boris M. Miller proposed applying the delay-integration concept to the CCD transverse charge transfer process, achieving optimal compensation control by selecting segmentation constants to match the target image motion in arbitrary modes [18]. Based on the working principle of the TDICCD, Wang et al. used a correction factor k to synchronize the optical encoder with the TDICCD synchronization signal, which substantially improved the modulation transfer function amplitude of the system [19]. Another method realized image motion compensation by estimating the perturbation in real time and offsetting it with a fast steering mirror [20]. In addition, an algorithm for optical axis and motion compensation was proposed and its effectiveness was verified on a gaze system containing two single-axis fast steering mirrors and a two-axis gimbal [21]. Tian et al. proposed a motion compensation method to correct the image rotation and image translation caused by the rotational motion of the camera and realized high-resolution, wide-coverage imaging [22]. Liu et al., based on nodal aberration theory (NAT), designed a secondary tilted-aplanatic two-mirror optical system (STATOS) that achieves real-time, rapid compensation of relative motion by quickly tilting the secondary mirror to ensure high-resolution images [23].

Regarding the study and application of pendulum scanning cameras, Sun et al. addressed edge blurring in side-swing imaging and proposed the SOIM recovery method to improve image quality [24]. In addition, Yang implemented vertical orbit area staring imaging for a vertical orbit area search model by reducing the overlap rate between frames so that the array camera expanded the imaging range of the vertical orbit region [25]. Furthermore, Du et al. proposed an agile imaging mode in which the satellite undergoes pendulum maneuvers while utilizing a pendulum mirror sweep to enhance imaging capabilities. The scanning direction was perpendicular to the orbital direction, and the satellite side-pendulum angle was adjusted during the scanning process to achieve curve detection in coastal areas. The image areas of the various spectral regions in the detector shifted as scanning progressed [26]. Additionally, there were deviations in the imaging regions of the different spectral regions of the detector. To address this issue, spectrally misaligned strip images were captured and aligned using point-line alignment techniques, ultimately facilitating hyperspectral image fusion [27]. The above-mentioned studies are based on agile satellite area array imaging or TDI imaging with the push-broom direction parallel to the flight direction, and they do not consider image quality degradation in multi-stage integration while vertical orbit dynamic scanning is performed.

To eliminate image motion blur in the multilevel integration mode of a TDI space camera while imaging vertical orbit targets with ultra-large-format width and high resolution, this study proposes a dynamic scanning imaging mode for the TDI camera and an image motion compensation method. The rest of this paper is organized as follows. Section 2 describes the characteristics of dynamic scanning imaging. In Section 3, image motion compensation methods are analyzed. In Section 4, a simulation of the imaging characteristics of a wide-area camera in vertical orbit scanning imaging mode is explained. Finally, Section 5 summarizes the study and draws the conclusions.

2. Characteristics of dynamic scanning imaging

2.1 Dynamic scanning imaging mode of the vertical orbit search camera

Current satellite maneuverability has significantly increased to meet a wider range of application requirements. However, several problems exist, such as strong nonlinear perturbations [28], large model uncertainties, multilevel saturation, and control system complexities [29]. To realize ultra-large-area coverage and high-resolution imaging, the satellite is required to have fast response, high precision, and stable pointing capabilities. The use of the agile satellite maneuvering mode may lead to system instability, and thus the double-sided mirror scanning mode is used to execute high-resolution and large-area detection and identification. The scanning double-sided mirror mechanism contains an encoder that can precisely monitor and control the position and speed of the mirror. When the double-sided mirror reaches a specific position, it activates the imaging command, which signals the camera to initiate vertical orbit scanning and imaging.

When the satellite passes over a target area, it starts detecting from one side of the target area by adjusting its attitude. As shown in Fig. 1, the imaging region is represented by a strip area perpendicular to the orbit of the sub-satellite point after the scanning is complete. During the overhead time, the satellite’s attitude keeps pitching to scan and image the target area. This allows the satellite's maneuvering attitude to be allocated during overhead time, which results in imagery with a wide field of view, high resolution, and high signal-to-noise ratio.

Fig. 1. Schematic diagram of wide-area ground imaging.

Scanners are important components of remote sensing systems. The double-sided mirror rotation scanning method has the advantages of a large scanning angle, superior linearity, stability, and dependability, which can significantly increase the scanning efficiency [30].

Figure 2 shows a schematic of the scanning function of the double-sided mirror in the wide-area camera’s optical system, where the X-direction is the direction of satellite flight (the double-sided mirror rotates around it), the Y-direction is the imaging push sweep direction, and the imaging area is the area of the vertical orbit strip.

Fig. 2. Schematic diagram of the optical system of the wide-area camera.

During scanning and imaging, the effective imaging area is a function of the scanning angle of the double-sided mirror. When the rotation angle of the double-sided mirror is -φ to +φ, the camera is in imaging mode, and the scanning angle from the optical imaging axis to the ground is -2φ to +2φ. This doubled angle increases the width in the vertical orbit direction, enabling a wide vertical orbit area to be imaged. The change in spatial resolution when the camera images at different scanning angles is shown in Fig. 3.

Fig. 3. Schematic diagram of the double-sided mirror expanding the width of the area.

As the double-sided mirror scans, the observation distance between the satellite and target region varies continuously, and the resolution of the target is correlated with the observation distance and scanning angle via

$$GS{D_y} = {R_e}\cdot \Delta \gamma $$

In the camera imaging mode, the scanning angle range of the double-sided mirror determines the vertical rail width. The instantaneous vertical rail width is expressed as

$$SW = {R_e}\cdot \gamma $$
where γ is the geocentric angle and Δγ is the geocentric angle subtended by a single pixel at different scanning angles.
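To illustrate Eqs. (1) and (2), the following minimal sketch evaluates the GSD and the instantaneous vertical rail width as functions of the mirror angle. The orbital height and single-pixel IFOV are assumed placeholder values, not the parameters of Table 1.

```python
import numpy as np

# Illustrative constants; H and the pixel IFOV are assumed values, since the
# paper's Table 1 parameters are not reproduced in this section.
R_E  = 6378.14e3     # Earth radius [m]
H    = 500e3         # orbital height above the sub-satellite point [m] (assumed)
IFOV = 2.0e-6        # instantaneous field of view of a single pixel [rad] (assumed)

def geocentric_angle(eps):
    """Geocentric angle gamma for an off-nadir look angle eps (spherical Earth)."""
    return np.arcsin(np.sin(eps) * (R_E + H) / R_E) - eps

def gsd_y(eps):
    """Cross-track GSD, Eq. (1): GSD_y = R_e * delta(gamma) over one pixel."""
    return R_E * (geocentric_angle(eps + IFOV) - geocentric_angle(eps))

def swath_width(eps):
    """Instantaneous vertical rail width, Eq. (2): SW = R_e * gamma."""
    return R_E * geocentric_angle(eps)

for mirror_deg in (0.0, 10.0, 22.5):
    eps = np.radians(2 * mirror_deg)   # the ground look angle is twice the mirror angle
    print(f"mirror {mirror_deg:4.1f} deg -> GSD {gsd_y(eps):6.2f} m, "
          f"SW {swath_width(eps) / 1e3:7.1f} km")
```

Under these assumed values, the printed GSD grows with the mirror angle, consistent with the trend described for Fig. 3.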

2.2 Principle of multi-velocity image motion compensation for vertical orbit dynamic scanning time-delayed integral imaging

TDICCD operates according to the principle of multiple exposures of the same target, and the electronic signals from each exposure are successively integrated to boost the signal-to-noise ratio of the system. For remote sensing cameras used in space, the same optical system can be paired with a TDICCD to obtain higher image quality in the same environment. However, a TDICCD requires the electron transfer rate of each row to be synchronized with the velocity of the target moving across the image plane. The ideal working mode of a TDICCD is shown in Fig. 4(a), where the satellite push-broom direction is forward and the target moves opposite to the push-broom direction. After the target is exposed, the integrated charge is transferred backward and added to the signal charge of the next exposure stage. During the integration period, the target projection velocity is in the direction of the TDICCD columns. Finally, the accumulated charge of the same target is transferred out and produced as the output.

Fig. 4. (a) Ideal working mode of a TDICCD; (b) TDICCD image motion mismatch caused by inconsistency between the scanning and flight directions.

If no compensation measures are adopted, the phenomenon shown in Fig. 4(b) occurs. When the satellite flight direction is perpendicular to the push-broom direction, the target projection velocity vector is inconsistent with the charge transfer vector. The projection of the target onto the image plane has a displacement in the direction of the TDICCD rows, resulting in inconsistent electronic signals for each integration. Consequently, the imaging information cannot be synchronized, thereby reducing the image quality.

To synchronize the charge transfer imaging information in each row, it is necessary to implement image motion compensation measures for the system. This action reduces the impact of image motion mismatches on the imaging quality. First, a cooperative control system for the satellite’s attitude and the camera’s line frequency is established. Additionally, the scanning period and scanning angular velocity of the double-sided mirror are determined based on the scanning width. Second, the flight orbit and initial attitude of the satellite are determined, and the double-sided mirror position is sent to the encoder. Based on these data, the image motions of different pixels at different moments are obtained. Finally, satellite attitude control and line frequency matching are used as compensation methods, and the image motion compensation amount is used as feedback to form a control compensation system, to realize the image motion compensation of large-area scanning imaging in vertical orbit.

2.3 Image motion velocity field modeling

The principle of vertical orbit dynamic scanning imaging is based on the rotation and scanning of a double-sided mirror, which achieves large-area imaging in the vertical orbit direction. The rotational angular velocity of the double-sided mirror is denoted as $\dot{\varphi }$, satellite pitch angular velocity is denoted as $\dot{\theta }$, orbital height of the remote sensing camera relative to the sub-satellite point is denoted as H, rotation angle of the satellite double-sided mirror is denoted as $\varphi $, and pitch angle is denoted as $\theta $. In the process of imaging the target area, a scanning imaging trajectory perpendicular to the trajectory of the sub-satellite point can be formed on the ground through the coordination between the rotational scanning of the double-sided mirror and the satellite pitch attitude.

There are four main types of parameters for the motion of a scanned target image: satellite flight along the orbit, Earth’s rotation, double-sided mirror scanning, and satellite attitude changes (pitch, yaw, and roll). When the satellite images the target, the velocity of the target point relative to the camera is denoted as V. Through the relationship of the relative motion between the camera and the target, the velocity V can be decomposed into the projected velocity ${V_S}$ of the satellite flight on the ground, rotational velocity ${V_\textrm{e}}$ of the Earth, projected velocity ${V_M}$ of the double-sided mirror’s scans on the ground, and projected velocity ${V_\theta }$ of the satellite’s pitch maneuver on the ground, as shown in Fig. 5.

Fig. 5. Schematic diagram of the satellite’s flight and imaging trajectory.

Therefore, the scanned target image velocity vector V can be synthesized using the vectors ${V_S}$, ${V_\textrm{e}}$, ${V_\textrm{M}}$, and ${V_\theta }$:

$$\vec{V} = {\vec{V}_S} + {\vec{V}_e} + {\vec{V}_M} + {\vec{V}_\theta }$$

According to the geographic location of the target and its position relative to the camera, the velocity vectors at the target point of the four parameters shown in Fig. 6 can be obtained, and the expression for each velocity vector is

$${\vec{V}_S} = {\omega _S} \cdot {R_e} \cdot {\vec{e}_S} = \sqrt {\frac{\mu }{{{{({R_e} + H)}^3}}}} \cdot {R_e} \cdot {\vec{e}_S}$$
$${\vec{V}_e} = {\omega _e} \cdot {R_e} \cdot \textrm{cos}{\lambda _t} \cdot {\vec{e}_e}$$
$${\vec{V}_\theta } = {\omega _\theta } \cdot L \cdot \cos \theta \cdot {\vec{e}_\theta }$$
$${\vec{V}_M} = {\omega _\varphi } \cdot L \cdot \cos 2\varphi \cdot {\vec{e}_\phi }$$
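For a sense of scale, the sketch below evaluates the magnitudes of the four components in Eqs. (4)–(7) under assumed geometry values (orbital height, slant range, angles, and angular rates); none of these are the paper's simulation parameters.

```python
import numpy as np

# Rough magnitudes of the four velocity components in Eqs. (4)-(7). All geometry
# values (H, L, the angles and angular rates) are assumed for illustration only.
MU      = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
R_E     = 6378.14e3        # Earth radius [m]
OMEGA_E = 7.2921159e-5     # Earth rotation rate [rad/s]
H       = 500e3            # orbital height [m] (assumed)
L       = 520e3            # slant range to the target [m] (assumed)

lam_t       = np.radians(0.7)    # target latitude (assumed)
theta       = np.radians(-8.0)   # pitch angle (assumed)
phi         = np.radians(10.0)   # double-sided mirror angle (assumed)
omega_theta = np.radians(0.8)    # pitch angular velocity [rad/s] (assumed)
omega_phi   = np.radians(2.25)   # mirror scan angular velocity [rad/s] (assumed)

V_S     = np.sqrt(MU / (R_E + H) ** 3) * R_E   # Eq. (4): sub-satellite ground speed
V_e     = OMEGA_E * R_E * np.cos(lam_t)        # Eq. (5): Earth-rotation speed at the target
V_theta = omega_theta * L * np.cos(theta)      # Eq. (6): pitch maneuver projected on the ground
V_M     = omega_phi * L * np.cos(2 * phi)      # Eq. (7): mirror scan projected on the ground
print(f"V_S = {V_S:.0f} m/s, V_e = {V_e:.0f} m/s, "
      f"V_theta = {V_theta:.0f} m/s, V_M = {V_M:.0f} m/s")
```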

Fig. 6. Schematic diagram of the image motion compensation attitude maneuver.

Because the projected velocities of the Earth's rotation, the satellite's attitude maneuver, and the double-sided mirror's scanning at the ground target point vary with the target's position, the target position is required to calculate the projected velocity of the target on the image plane. The latitude of the target can be obtained by combining the pixel, optical axis, and geocentric angles using the geometric relationship in spherical coordinates, where $\varepsilon $ is the angle, at the satellite, between the directions to the sub-satellite point and to the target, $\gamma $ is the geocentric angle between the sub-satellite point and the target, ${\lambda _t}$ is the latitude of the target point, and ${\lambda _S}$ is the latitude of the sub-satellite point. The equations describing these variables are

$$\varepsilon = \arccos (\cos (2\varphi ) \cdot \cos \theta )$$
$$\gamma = \arcsin (\frac{{\sin \varepsilon \cdot ({R_e} + H)}}{{{R_e}}}) - \varepsilon $$
$${\lambda _t} = \arcsin (\sin ({\lambda _{s1}}) \cdot \cos \gamma + \cos ({\lambda _{s1}}) \cdot \sin \gamma \cdot \cos i)$$
where L represents the distance from the satellite to the observation point. The variation in L with the scanning and pitch angles can be obtained by examining Fig. 5.
$$L = \frac{{{R_e} \cdot \textrm{sin}\gamma }}{{\sin \varepsilon }}$$
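As a worked illustration of Eqs. (8)–(11), the following sketch evaluates the line-of-sight geometry for assumed values of the mirror angle, pitch angle, sub-satellite latitude, and orbital height; these are placeholders rather than the paper's simulation parameters.

```python
import numpy as np

# Line-of-sight geometry, Eqs. (8)-(11); orbit values assumed as in the earlier sketch.
R_E, H = 6378.14e3, 500e3                 # [m] (H assumed)
inc    = np.radians(86.0)                 # orbit inclination (Section 4.2)

def los_geometry(phi, theta, lam_s):
    """Return look angle eps, geocentric angle gamma, target latitude lam_t, and slant range L."""
    eps   = np.arccos(np.cos(2 * phi) * np.cos(theta))                        # Eq. (8)
    gamma = np.arcsin(np.sin(eps) * (R_E + H) / R_E) - eps                    # Eq. (9)
    lam_t = np.arcsin(np.sin(lam_s) * np.cos(gamma)
                      + np.cos(lam_s) * np.sin(gamma) * np.cos(inc))          # Eq. (10)
    L = R_E * np.sin(gamma) / np.sin(eps) if eps > 0 else H                   # Eq. (11); L -> H at nadir
    return eps, gamma, lam_t, L

eps, gamma, lam_t, L = los_geometry(np.radians(10.0), np.radians(-8.0), np.radians(0.68))
print(f"eps = {np.degrees(eps):.2f} deg, gamma = {np.degrees(gamma):.3f} deg, "
      f"lam_t = {np.degrees(lam_t):.3f} deg, L = {L / 1e3:.1f} km")
```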

The latitude of the sub-satellite point changes linearly during the satellite's flight, and the latitude variation of the target point on the optical axis can be calculated once the time, initial position, single-strip period, and maximum lateral pendulum angle are determined.

3. Analysis of compensation methods

3.1 Line frequency compensation for image motion in the vertical orbit direction

The analysis in Section 2.3 clearly indicates that the double-sided mirror rotation scanning velocity and satellite flight velocity are key contributors to the target’s image motion during the vertical orbit scanning procedure. In addition, the rotation of the Earth is an important factor that causes the target’s projection to move on the image plane. According to the working principle of TDICCD presented in Section 2.2, the charge transfer velocity and projection velocity vector of the corresponding target point on the image plane should be the same in size and direction. By calculating the combined velocity vector of the TDICCD integration direction on the image plane and obtaining the line frequency of the scanning imaging at different moments, the charge transfer rate of each row of the TDICCD can be synchronized with the target’s column velocity on the image plane.

The scanning angular velocity of the double-sided mirror is calculated using the expected vertical orbit width and the imaging period [31] via

$${\omega _\varphi } = \frac{{\arctan (\frac{{SW}}{{2H}})}}{T}$$
where T is the total time required to image a single strip of the target.

When the scanning angle of the double-sided mirror is $\varphi $, the projection velocity corresponding to the target is

$${V^{\prime}_e} = {V_e} \times \cos (2\varphi (t)) \times \cos i$$
$${V_y} = V_M^{} + {V^{\prime}_e}$$
where ${V^{\prime}_e}$ is the Earth's relative velocity projected onto the object plane and ${V_y}$ is the velocity at the surface of the object.

The object-plane velocity is then converted to the image-plane velocity, and the line frequency is obtained according to

$${V^{\prime}_y}(t) = \frac{{{V_y}(t) \times f}}{{L(t)}}$$
$$F(t) = \frac{{{{V^{\prime}}_y}(t)}}{a}$$
where ${V^{\prime}_y}(t)$ is the velocity in the image plane, $F(t)$ is the TDICCD line frequency, f is the focal length of the camera, and a is the size of a single pixel. According to the above equation, the TDICCD line frequency varies nonlinearly with the double-sided mirror scanning angle.
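As a sketch of this line-frequency calculation (Eqs. (12)–(16)), the snippet below assumes illustrative values for the orbital height, strip width, focal length, and pixel size; these stand in for the Table 2 parameters, which are not reproduced here, and the linear mirror-angle profile is likewise an assumption.

```python
import numpy as np

# Line-frequency matching, Eqs. (12)-(16), with assumed camera and orbit values.
R_E, H  = 6378.14e3, 500e3        # [m] (H assumed)
OMEGA_E = 7.2921159e-5            # Earth rotation rate [rad/s]
inc     = np.radians(86.0)        # orbit inclination (Section 4.2)
f, a    = 1.2, 3.5e-6             # focal length [m] and pixel size [m] (assumed)
SW, T   = 800e3, 20.0             # strip width [m] (assumed) and single-strip period [s]

omega_phi = np.arctan(SW / (2 * H)) / T               # Eq. (12): mirror scan angular velocity

def line_frequency(t, lam_t, L):
    """TDICCD line frequency [Hz] at time t, Eqs. (13)-(16)."""
    phi    = omega_phi * (t - T / 2.0)                # mirror angle, zero at mid-strip (assumed)
    V_e    = OMEGA_E * R_E * np.cos(lam_t)            # Earth-rotation speed at the target
    V_e_p  = V_e * np.cos(2 * phi) * np.cos(inc)      # Eq. (13): projection on the object plane
    V_M    = omega_phi * L * np.cos(2 * phi)          # mirror-scan ground speed (cf. Eq. (7))
    V_y    = V_M + V_e_p                              # Eq. (14): cross-track object-plane speed
    V_y_im = V_y * f / L                              # Eq. (15): cross-track image-plane speed
    return V_y_im / a                                 # Eq. (16): line frequency

print(f"F(T/2) = {line_frequency(10.0, np.radians(0.68), H) / 1e3:.2f} kHz")
```

With these assumed numbers the mid-strip line frequency lands in the tens-of-kilohertz range quoted in Section 4.1, and varying t reproduces the nonlinear dependence on the scanning angle.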

3.2 Attitude compensation for image motion along the orbit direction

After line frequency matching is performed, the image motion mismatch in the vertical orbit direction is more efficiently compensated. However, it is also important to consider image motion mismatch in the along-orbit direction throughout the satellite’s flight. Because the image motion along the orbit is perpendicular to the TDICCD push-scan direction and the electronic method is unable to compensate for this motion, other techniques must be utilized.

In this imaging mode, the motion of the target due to Earth's rotation is inconsistent with the satellite’s imaging push-scan direction, which can be decomposed into ${V_{ex}}$ and ${V_{ey}}$:

$${V_{ex}} = {V_e} \cdot \cos (inc)$$
$${V_{ey}} = {V_e} \cdot \sin (inc)$$

Suppose the satellite's flight direction is from south to north, the initial sub-satellite point latitude is 0°, the single-strip imaging period is T = 20 s, and the double-sided mirror scanning angle ranges from -22.5° to +22.5°. At t = T/2 = 10 s, the double-sided mirror angle is defined as 0°. At this time, the sub-satellite point latitude is ${\lambda _S} = 0.68116^\circ $, and the target latitude corresponding to the optical axis can be obtained using Eq. (10).

The maneuvering of the satellite’s pitch attitude along the direction of the projection is perpendicular to the X-axis, as shown in Fig. 6.

To achieve high-resolution Earth observations, the satellite adjusts its pitch attitude to achieve a combined velocity ${V_x} = 0$ along the orbital direction (i.e., ${\vec{V}_S} + {\vec{V}_\theta } + {\vec{V}_{ex}} = 0$). ${\vec{e}_s}$ is opposite to the direction of ${\vec{e}_\theta }$ and ${\vec{e}_{ex}}$.

$$\sqrt {\frac{\mu }{{{{({R_e} + H)}^3}}}} \cdot {R_e}\textrm{ - (}{\omega _\theta }(t) \cdot L(t) \cdot \cos \theta (t) + {\omega _e} \cdot {R_e} \cdot \cos {\lambda _t}(t) \cdot \cos i) = 0$$
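A minimal sketch of solving Eq. (19) for the required pitch angular velocity is given below; the orbit values are assumed placeholders rather than the paper's parameters.

```python
import numpy as np

# Along-track compensation, Eq. (19): the pitch angular velocity is chosen so that
# V_S - (V_theta + V_ex) = 0. Orbit values assumed as in the earlier sketches.
MU, R_E, H = 3.986004418e14, 6378.14e3, 500e3
OMEGA_E    = 7.2921159e-5
inc        = np.radians(86.0)

def required_pitch_rate(theta, lam_t, L):
    """Pitch angular velocity [rad/s] that nulls the along-track image motion, from Eq. (19)."""
    V_S  = np.sqrt(MU / (R_E + H) ** 3) * R_E             # sub-satellite ground speed
    V_ex = OMEGA_E * R_E * np.cos(lam_t) * np.cos(inc)    # along-track part of Earth rotation, Eq. (17)
    return (V_S - V_ex) / (L * np.cos(theta))

# Mid-strip case: optical axis at the sub-satellite point (theta = 0, L = H)
rate = required_pitch_rate(theta=0.0, lam_t=np.radians(0.68), L=H)
print(f"required pitch rate at T/2: {np.degrees(rate):.3f} deg/s")
```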

The target point trajectory is not a traditional push sweep or unidirectional side pendulum imaging maneuver. Both the satellite pitch attitude maneuver and the double-sided mirror scan must be considered. That is, the ground target is dynamically imaged in both the X- and Y-directions, and the target latitude value is calculated according to

$$\sin {\lambda _B} = \sin {\lambda _S}\cos {\gamma _B} - \cos {\lambda _S}\sin {\gamma _B}\sin i$$
$$\sin {\lambda _t} = \sin {\lambda _B}\cos {\gamma _R} - \cos {\lambda _B}\sin {\gamma _R}\cos i$$
where ${\lambda _B}$ represents the latitude of the target point indicated by the backward-pendulum optical axis of the satellite and ${\lambda _R}$ represents the latitude of the target point indicated by the right-pendulum optical axis of the satellite. The expected target latitude ${\lambda _t}$ of the camera’s optical axis is
$$\lambda {(t)_t} = \arcsin (\sin ({\lambda _{s1}}(t)) \cdot \cos \gamma (t) + \cos ({\lambda _{s1}}(t)) \cdot \sin \gamma (t) \cdot \cos i)$$

The change in the satellite’s geocentric angle caused by the pitch attitude can be obtained from the target position calculation equation presented in Section 2.3:

$$\gamma (t) = \arcsin (\frac{{\sin \theta (t) \cdot ({R_e} + H)}}{{{R_e}}}) - \theta (t)$$

Finally, the geocentric angle traversed by the sub-satellite point is related to the satellite's velocity and the elapsed time by

$$\gamma (t) = \frac{{{V_s}t}}{{{R_e} + H}}$$

According to the above equation, the variation of the attitude angle during the imaging period satisfies ${\vec{V}_S} + {\vec{V}_\theta } + {\vec{V}_{ex}} = 0$.

4. Simulation

To better understand the behavior of the wide-area camera in the vertical orbit scanning imaging mode, its imaging characteristics are quantitatively analyzed. The camera simulation parameters are listed in Table 1.

Table 1. Parameters of the camera simulation

The target resolutions corresponding to different pixels at different side pendulums of the wide-area camera were calculated and analyzed, and the analysis results are shown in Fig. 7.

Fig. 7. GSD at different side pendulums and different pixels.

Figure 7 indicates that the GSD increases as the double-sided mirror scanning angle increases, and the GSDs corresponding to different pixel points on the image plane at the same moment also differ. The larger the field of view, the lower the resolution, and the resolution at the detector edge is lower than that at the center.

As shown in Fig. 8, the distance between the target point and sub-satellite point of the camera imaging increases as the lateral pendulum angle increases. The growth trends of the GSD and vertical orbit width are not linear with respect to the scanning angle, and a larger scanning angle results in a faster growth rate.

Fig. 8. Vertical orbit width for different scan angle ranges.

4.1 Results of line frequency compensation

Table 2 shows the imaging parameters used during the wide-area camera imaging process.

Table 2. Imaging parameters

Section 3.1 established a nonlinear relationship between the scanning angle of the double-sided mirror and the line frequency. The line frequency within the camera's imaging cycle is calculated by solving for the image motion on the image plane per unit exposure time. The real-time line frequency solution is presented in Fig. 9.

Fig. 9. Real-time line frequency compensation.

When imaging began, the scanning angle of the double-sided mirror was -22.5°, and the line frequency was 11.2246 kHz. When the scanning angle of the double-sided mirror was 0°, the optical axis of the camera pointed toward the sub-satellite point, and the corresponding line frequency was 11.2292 kHz. This difference arises primarily because the observation distance at the edge was larger than that at the center, causing the image motion velocity at the edge to be slower.

4.2 Attitude compensation results

The orbit inclination is 86°, which is less than 90°, and the satellite runs from west to east along the direction of Earth's rotation (a prograde orbit). The angle between the optical axis and the sub-satellite point is defined as negative when the satellite conducts a forward view and positive when it conducts a backward view.

The initial attitude between the satellite’s optical axis pointing and the satellite’s sub-satellite point is $- \theta $, and ${V_S}$ and ${V_{ex}}$ are compensated by adjusting the pitch attitude of the satellite. The relationship between the pitch attitude maneuver angle θ and time is shown in Fig. 10.

Fig. 10. Pitch angle variation.

The satellite reached the desired stable attitude of -8.021° before observing the target. When it reached the necessary location, the camera performed vertical orbit strip push-broom imaging. The satellite pitch angle varied as shown in Fig. 10: the satellite pointed to the sub-satellite point at T/2, where the pitch angle was 0°, and the pitch angle increased nonlinearly from the start to time T. Finally, as shown in Fig. 11, the pitch angular velocity increased monotonically from the start to T/2 and decreased monotonically from T/2 to T.

Fig. 11. Pitch angle velocity variation.

The amount of image motion is calculated according to

$${s_{im}} = {V_{im}} \times {t_{{\mathop{\rm int}} }} \times M$$
where ${V_{im}}$ is the image velocity, ${t_{{\mathop{\rm int}} }}$ is the unit exposure time, and M is the number of integration stages.
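The sketch below applies Eq. (25) to the per-exposure displacements reported in this section (0.619 µm uncompensated and 0.023 µm compensated); the 3.5 µm pixel size used for the pixel-unit conversion is an assumed value, since Table 1 is not reproduced here.

```python
# Accumulated image motion over M integration stages, Eq. (25): s_im = V_im * t_int * M.
# The per-exposure displacements (V_im * t_int) are the values reported in the text;
# the pixel size is an assumed value for converting to pixel units.
def accumulated_motion(per_exposure_um, stages):
    """Total image motion [um] after 'stages' integration stages."""
    return per_exposure_um * stages

PIXEL_UM = 3.5  # single-pixel size [um] (assumed)
for M in (16, 64):
    unc = accumulated_motion(0.619, M)    # without attitude compensation
    comp = accumulated_motion(0.023, M)   # with attitude compensation
    print(f"M = {M:2d}: uncompensated {unc:6.2f} um ({unc / PIXEL_UM:5.1f} px), "
          f"compensated {comp:5.2f} um ({comp / PIXEL_UM:4.2f} px)")
```

Under these assumptions, the uncompensated blur grows to several pixels at 64 stages, while the compensated blur stays well below one pixel, consistent with the simulated images discussed below.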

To verify the correctness of the theoretical model, the attitude change results obtained using the geometric analysis method are introduced into the coordinate system transformation model.

The target indicated by the optical axis at moment T/2 is selected as the fixed target point. If the satellite does not perform an attitude maneuver, the along-orbit projection position of the fixed target point on the image plane changes, as shown in Fig. 12. The velocity of the target point projected onto the image plane is shown in Fig. 13. The image velocity reaches a maximum at T/2.

Fig. 12. Position change of a fixed target point in the image plane along the orbital direction without attitude compensation.

Fig. 13. Velocity on the image plane without attitude compensation.

The trends in the displacement and image velocity of the target point projection without pitch compensation are shown in Figs. 12 and 13. The maximum image velocity along the orbital direction was $7056/F(t) = 0.619\mu m$ per unit exposure time. The image velocity after pitch compensation during the imaging period is shown in Fig. 14, which exhibits the following characteristics:

  • 1) For a given pixel, the image velocity at the starting moment is greater than the image velocity at T/2.
  • 2) For a given moment, the image velocity at the edge is greater than the image velocity at the center of the image.

Fig. 14. Image velocity at different pixels within the exposure time after compensation.

The maximum image velocity along the orbit at the edge of the image plane is 260.2 µm/s, and the maximum image motion within one integration time is $260.2/F(t) = 0.023\mu m$, which is approximately 0.006 pixels.

Simulations of the images before and after compensation are performed to visually observe the imaging effect with and without compensation. The moment of maximum image velocity during the imaging process (i.e., when the image motion per unit of integration was the largest) is selected for the simulation. The simulation results for the images at various integration stages are shown in Figs. 15 and 16.

Fig. 15. Original image.

Fig. 16. (a) and (b) Imaging results of the 16-stage and 64-stage integrations without attitude compensation, respectively. (c) and (d) Imaging results of the 16-stage and 64-stage integrations with attitude compensation, respectively.

Figure 15 shows the original image. Figures 16(a) and (b) show the image without attitude compensation. The uncompensated image is very fuzzy, and the details of the image are difficult to distinguish after 64-stage integration. Figures 16(c) and (d) show the compensated image. It is easy to distinguish the details of the image after 64-stage integration, which satisfies the requirements for high-quality imaging.

To objectively evaluate the image quality, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were calculated for the images shown in Fig. 16. The results are summarized in Table 3.
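As an illustration of this evaluation, the snippet below shows how PSNR and SSIM can be computed with scikit-image against the reference image of Fig. 15; this is an assumed tooling choice, since the authors' evaluation code is not provided.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Sketch of how PSNR and SSIM can be computed against a reference image.
def evaluate(reference, test):
    """Return (PSNR [dB], SSIM) of 'test' relative to 'reference' for 8-bit images."""
    psnr = peak_signal_noise_ratio(reference, test, data_range=255)
    ssim = structural_similarity(reference, test, data_range=255)
    return psnr, ssim

# Toy usage: a synthetic reference and a degraded copy standing in for an
# uncompensated multi-stage integration result.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
deg = np.clip(ref.astype(float) + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
print(evaluate(ref, deg))
```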

Table 3. PSNR and SSIM evaluation results

A comparison of the PSNR and SSIM values for the different images reveals that the quality of the compensated images is clearly improved, although the images become more blurred as the number of integration stages increases.

5. Summary

In this study, an image motion compensation method for a satellite camera in integral mode is proposed. The method is based on the double-sided mirror scanning imaging technique of a TDI camera used in space. First, geometric modeling and vector decomposition of the camera’s vertical orbit imaging are performed to obtain the velocity field distribution at the focal plane. Then, the image velocity is decomposed into the along-orbit image velocity and vertical orbit image velocity. The vertical orbit image motion mismatch is compensated via TDICCD line frequency matching, and the along-orbit image motion mismatch is compensated via satellite attitude adjustments. Finally, the compensated attitude maneuver angle is introduced into the model of the coordinate system for the image velocity. After attitude compensation, the image motion per unit exposure time is reduced from 0.619 µm to 0.023 µm compared with the no-attitude maneuver mode, which verifies the effectiveness of the compensation method. The imaging area of the remote sensing camera simulated in this study is a wide area perpendicular to the sub-satellite point. Therefore, for different imaging tasks, the satellite can model the velocity field of the desired imaging area and adopt the corresponding attitude compensation measures. This method provides a theoretical basis for satisfying the current large-width and high-resolution imaging requirements of remote sensing cameras.

Funding

National Natural Science Foundation of China (51827806).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. Scaduto, E. G. Carvalho, A. R. Santos, et al., “The advanced wide field imaging camera (AWFI) for the Amazonia 1 Brazilian satellite,” (2010).

2. W. Yang, Y. Zhao, M. Liu, et al., “Method of space object detection by wide field of view telescope based on its following error,” Opt. Express 29(22), 35348–35365 (2021). [CrossRef]  

3. K. Lv and M. Liu, “Wide swath range sweep SAR: gapless imaging of wide scenes that are not parallel to the satellite orbit,” Remote Sensing Letters 13(2), 126 (2022). [CrossRef]

4. Y.-m. Wang, W. Xu, G. Jin, et al., “An attitude algorithm based on the band seamless splicing imaging for agile satellite,” Optoelectron. Lett. 13(5), 376–380 (2017). [CrossRef]  

5. N. Du, S. Wu, Z. Chen, et al., “Attitude guidance algorithms for agile satellite dynamic imaging,” Guid. Navigat. Control 02(04), 2250022 (2022). [CrossRef]  

6. Y. Xu, H. Feng, Z. Xu, et al., “Analysis on stitching overlap pixel threshold of one-orbit multi-strip agile remote sensing imaging,” Opto-Electronic Engineering 44(11), 1066–1074 (2017). [CrossRef]  

7. B. X. Yang, “Study on the SNR of TDICCD camera,” Spacecraft Recovery & Remote Sensing (2005).

8. X. Zhou, H. Liu, Y. Li, et al., “Analysis of the influence of vibrations on the imaging quality of an integrated TDICCD aerial camera,” Opt. Express 29(12), 18108–18121 (2021). [CrossRef]  

9. C. Yufeng, W. Mi, J. Shuying, et al., “New on-orbit geometric interior parameters self-calibration approach based on three-view stereoscopic images from high-resolution multi-TDI-CCD optical satellites,” Opt. Express 26(6), 7475 (2018). [CrossRef]  

10. W. Xu, G. Jin, and J.-Q. Wang, “Optical imaging technology of JL-1 lightweight high resolution multispectral remote sensing satellite,” (2017).

11. R. Tang, F. Shen, Y. Pan, et al., “Multi-source high-resolution satellite products in Yangtze Estuary: cross-comparisons and impacts of signal-to-noise ratio and spatial resolution,” Opt. Express 27(5), 6426–6441 (2019). [CrossRef]

12. W. Taoyang, H. Wenchao, L. Siyue, et al., “Combined calibration method based on rational function model for the Chinese GF-1 wide-field-of-view imagery,” Photogrammetric Engineering & Remote Sensing: Journal of the American Society of Photogrammetry 82(4), 291–298 (2016). [CrossRef]  

13. Q.-b. Zhou, Q.-y. Yu, L. Jia, et al., “Perspective of Chinese GF-1 high-resolution satellite data in agricultural remote sensing monitoring,” J. Integr. Agric. 16(2), 242–251 (2017). [CrossRef]  

14. J. C. Mcdowell, “The low earth orbit satellite population and impacts of the SpaceX Starlink constellation,” (2020).

15. F. Wang, R. Xi, C. Yue, et al., “Conceptual Rotational Mode Design for Optical Conical Scanning Imaging Small Satellites,” Sci. China Technol. Sci. 63(8), 1383–1395 (2020). [CrossRef]  

16. Z. Zhi, H. Qu, S. Tao, et al., “The design of cone and pendulum scanning mode using dual-camera with multi-dimensional motion imaging micro-nanosatellite,” Remote Sens. 14(18), 4613 (2022). [CrossRef]  

17. T. Xu, X. Yang, S. Wang, et al., “Imaging velocity fields analysis of space camera for dynamic circular scanning,” IEEE Access 8, 191574–191585 (2020). [CrossRef]  

18. B. M. Miller and E. Y. Rubinovich, “Image motion compensation at charge-coupled device photographing in delay-integration mode,” Autom. Remote Control (Engl. Transl.) 68(3), 564–571 (2007). [CrossRef]  

19. D. Wang, W. Li, Y. Yao, et al., “A fine image motion compensation method for the panoramic TDI CCD camera in remote sensing applications,” Opt. Commun. 298-299, 79–82 (2013). [CrossRef]  

20. L. Wang, X. Liu, and C. Wang, “Modeling and design of fast steering mirror in image motion compensation for backscanning step and stare imaging systems,” Opt. Eng. 58(10), 1 (2019). [CrossRef]  

21. J. Xiu, P. Huang, J. Li, et al., “Line of sight and image motion compensation for step and stare imaging system,” Appl. Sci. 10(20), 7119 (2020). [CrossRef]  

22. D. Tian, Y. Wang, Z. Wang, et al., “Long integral time continuous panorama scanning imaging based on bilateral control with image motion compensation,” Remote Sens. 11(16), 1924 (2019). [CrossRef]  

23. X. Liu, D. Yuan, L. Song, et al., “Two-mirror aerial mapping camera design with a tilted-aplanatic secondary mirror for image motion compensation,” Opt. Express 31(3), 4108–4121 (2023). [CrossRef]  

24. T. Sun, H. Long, B.-C. Liu, et al., “Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images,” Opt. Express 24(6), 6665 (2016). [CrossRef]  

25. L. Jiang and X. Yang, “Study on enlarging the searching scope of staring area and tracking imaging of dynamic targets by optical satellites,” IEEE Sens. J. 21(4), 5349–5358 (2021). [CrossRef]  

26. J. Du, X. Yang, M. Wu, et al., “Design of matching imaging on agile satellite with wide-swath whiskbroom payloads along the coastal zone,” in Photonics, (MDPI, 2022), 930.

27. J. Du, X. Yang, M. Zhou, et al., “Fast multispectral fusion and high-precision interdetector image stitching of agile satellites based on velocity vector field,” IEEE Sens. J. 22(22), 22134–22147 (2022). [CrossRef]  

28. K. Zhang and B. Pan, “Control design of spacecraft autonomous rendezvous using nonlinear models with uncertainty,” Journal of Zhejiang University 56(4), 833 (2022).

29. P. Han, Y. Guo, and C. Li, “A relative imaging time coding-based genetic algorithm for agile imaging satellite task planning,” Journal of Astronautics 42, 1427–1438 (2021).

30. S. Sheng, C. Liu, and Z. Yuan, “Design of double-sided fast steering mirror based on piezoelectric actuating,” Optics and Precision Engineering 24, 2777–2782 (2016). [CrossRef]  

31. X. Yang, J. Wang, G. Jin, et al., “Row frequency calculation method of vertical orbit rotation swing scanning imaging for TDI camera,” (2019).
