
Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images

Open Access

Abstract

Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

© 2016 Optical Society of America

1. Introduction

Agile optical satellites with large side-oblique angles are constantly being improved to meet the increasing need for effective spatial data collection and rapid-response land observation. Agile optical satellites outperform ordinary satellites by offering greater attitude mobility and pointing accuracy for flexible multimode Earth observation missions. Since SPOT-5, which featured satellite maneuvering capability, was launched on May 4, 2002, agile satellites have become an active research topic, and more applications of agile images have been reported in the literature [1–5]. Image motion is the relative movement of image points in the focal plane caused by the Earth’s rotation, spacecraft orbital motion, and attitude changes. It is a vital factor affecting the spatial resolution of optical sensors [6–8] and significantly degrades large side-oblique images taken by agile satellites at high resolution [9–11]. Therefore, research on the precise calculation of image motion vectors and on blur correction is crucial to ensure that agile optical satellites obtain high-quality images.

The image motion of agile satellites can be divided into three components according to its source. First, the image motion caused by the Earth’s rotation is too subtle to affect the image quality appreciably and can be neglected. Second, forward image motion, which affects both ordinary and agile satellites, is a simple image motion caused by spacecraft orbital motion rather than attitude changes; it is uniform across all pixels and therefore easy to correct [12]. Third, the image motion caused by attitude changes of the satellite is the primary cause of blur in agile satellite images; it significantly degrades the radiometric quality and is difficult to correct. Attitude-induced motion has three effects: (1) Cross-track attitude changes introduce side-oblique image motion (SOIM), in which the image motions of all pixels share the same (longitudinal) direction but differ in magnitude [13]. (2) Combined cross- and along-track attitude changes cause image motion with both transverse and longitudinal components; a universal method of calculating both components can be found in [14]. (3) Very few satellites (e.g., Pleiades [15]) have rotational agility; for those satellites, rotational motion [16] must also be considered. In general, SOIM caused by cross-track attitude changes is a common type of image motion for agile satellites.

In addition to the above external factors, internal factors such as the time integration of the sensor and high-frequency attitude jitter caused by platform instability also subtly blur images taken by both ordinary and agile satellites. However, those components are relatively small, and our earlier works [18,19] proposed methods of detecting and correcting the resulting radiometric degradation. In other words, image pre-processing can be performed before SOIM blur correction to achieve higher image quality.

Kuaizhou-1 (KZ-1) is a small-scale Chinese agile mapping satellite developed by the Harbin Institute of Technology and launched on September 25, 2013. KZ-1 is equipped with a high-resolution panchromatic sensor with a ground sample distance (GSD) of 1.2 m, a swath width of 19 km, 0° to 45° cross-track agility, and subtle along-track agility for Earth observation. In an agile mission, the panchromatic sensor of KZ-1 does not start operating until the agile maneuver is completed and the platform is stable. In addition, KZ-1 provides a segmental time-integration adjustment device to guarantee that the spatial resolution of every pixel in a panchromatic image is always 1.2 m under any side-oblique condition. The detailed design parameters of the KZ-1 satellite and sensor are listed in Table 1. Because of these characteristics, the transverse component of image motion is subtle and controllable; the longitudinal component due to SOIM is the primary cause of KZ-1 image blur and is the main problem addressed in this study. This study fully exploits the image motion principle for SOIM detection and correction. As a research application for agile satellites, this study makes several contributions to the field: (1) A systematic process of SOIM detection, blur correction, and result assessment is developed, and the SOIM results for KZ-1 are analyzed and reported for the first time. (2) The accuracy of SOIM detection is improved, and a segmental correction method with a matrix form is proposed to complete the SOIM correction process and enhance the radiometric quality of the images. (3) A suitable objective evaluation index for evaluating the image quality with and without SOIM blur correction is investigated. We believe that this study makes exemplary contributions to the field of SOIM detection and correction for agile satellites.

Table 1. KZ-1 Design Parameters

2. Methods

2.1 Detection: side-oblique image motion described by point spread functions

When the satellite performs side-oblique scanning, the motion blur of the scanned images varies from the apogee to the perigee end of the CCD array. Figure 1 shows the satellite flight speed V and flight height H. The side-oblique angle, focal length, and half-field angle are expressed as (90° − δ), f, and θ, respectively. According to photography theory, the image motion speed at the apogee during side-oblique photography is

$$V_a = f\left(\frac{V}{H}\right)\frac{\sin(\delta-\theta)}{\cos\theta},\tag{1}$$

whereas that at the perigee is

$$V_p = f\left(\frac{V}{H}\right)\frac{\sin(\delta+\theta)}{\cos\theta}.\tag{2}$$
The ratio of the speeds at apogee and perigee is

Fig. 1 Side-oblique imaging mode.

$$\frac{V_a}{V_p} = \frac{\sin(\delta-\theta)}{\sin(\delta+\theta)}.\tag{3}$$

Therefore, once the half-field angle θ of the sensor is fixed, the speed difference between the apogee and perigee grows with increasing side-oblique angle (90° − δ). The image motion speed ratio cannot be neglected when the side-oblique angle becomes large. Building the side-oblique motion blur model is therefore vital for segmental correction of motion blur at different speeds.
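To make Eqs. (1)–(3) concrete, the following Python sketch (ours, not part of the original method) evaluates the apogee and perigee motion speeds; the orbital values are taken from the experimental scene in Section 3.1, while the focal length is a placeholder assumption, since the ratio in Eq. (3) cancels f:

```python
import math

def motion_speeds(f_mm, v_km_s, h_km, delta_deg, theta_deg):
    """Apogee and perigee image motion speeds, Eqs. (1)-(2)."""
    d = math.radians(delta_deg)
    t = math.radians(theta_deg)
    scale = f_mm * (v_km_s / h_km)
    v_a = scale * math.sin(d - t) / math.cos(t)  # apogee, Eq. (1)
    v_p = scale * math.sin(d + t) / math.cos(t)  # perigee, Eq. (2)
    return v_a, v_p

# V, H, and the side-oblique angle come from the Section 3.1 scene;
# the focal length is a placeholder (the ratio below is independent of f).
v_a, v_p = motion_speeds(f_mm=1000.0, v_km_s=7.801, h_km=294.595,
                         delta_deg=90.0 - 33.701, theta_deg=2.1)
print(f"Va/Vp = {v_a / v_p:.4f}")  # about 0.95 for these values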

We define the half-field angle of a generic pixel as ϕ ∈ [−θ, +θ] and calculate the image motion speed of a generic pixel to describe its motion:

$$V_g = f\left(\frac{V}{H}\right)\frac{\sin(\delta-\phi)}{\cos\phi}.\tag{4}$$

Therefore, the image motion value L can be calculated and expressed as follows by denoting T as the integration time of the CCD array:

$$L = f\left(\frac{V}{H}\right)\frac{\sin(\delta-\phi)}{\cos\phi}\left(\frac{1}{c}\right)T,\tag{5}$$
where c is the pixel size (unit: µm). Equation (5) shows the linear relationship between the image motion value and the half-field angle of a generic pixel. The maximum and minimum image motion values occur at the perigee and apogee pixels, respectively:

$$L_{Max} = L_{perigee} = f\left(\frac{V}{H}\right)\frac{\sin(\delta+\theta)}{\cos\theta}\left(\frac{1}{c}\right)T,\tag{6}$$
$$L_{Min} = L_{apogee} = f\left(\frac{V}{H}\right)\frac{\sin(\delta-\theta)}{\cos\theta}\left(\frac{1}{c}\right)T.\tag{7}$$
$$\Delta L = L_{Max} - L_{Min}.\tag{8}$$

Consequently, we quantify the image motion value as an integer as follows:

$$L = [L_{Min}] + i \quad (0 \le i \le \Delta L),\tag{9}$$
where i is an integer, and [·] is a rounding operator. Thus, we obtain segmental point spread functions (PSFs) (the total number of PSFs is [ΔL] + 1) to describe the SOIM:

$$\mathrm{PSF} = h_{(n,i)} = \begin{cases} \dfrac{1}{L}, & n = 0, 1, \dots, L-1, \\[4pt] 0, & \text{otherwise.} \end{cases}\tag{10}$$

For example, the PSFs for an L_Max value of 10 pixels and an L_Min value of 1 pixel are shown in Fig. 2.

Fig. 2 PSFs with L_Max = 10 pixels and L_Min = 1 pixel.
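A minimal sketch of Eq. (10), assuming the segmental PSFs are stored as uniform 1D kernels indexed by their integer motion value (the helper name is illustrative):

```python
import numpy as np

def segmental_psfs(l_min, l_max):
    """One uniform 1D motion-blur kernel per integer motion value L,
    following Eq. (10): h = 1/L for n = 0..L-1, and 0 elsewhere."""
    return {L: np.full(L, 1.0 / L) for L in range(l_min, l_max + 1)}

psfs = segmental_psfs(1, 10)  # the Fig. 2 case: L_Min = 1, L_Max = 10
print(len(psfs), psfs[10])    # 10 kernels; the longest has ten 0.1 taps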

2.2 Restoration: segmental restoration method with a matrix form

2.2.1 Individual segmentation width

Before using the segmental PSFs to correct for image motion, we must determine the segmentation width over which approximately one segmental PSF applies. We denote the segment width as S_wid.

Figure 3 shows that Swid is determined by the following equation:

Fig. 3 Diagram of the segment width.

$$S_{wid} = \frac{f}{c}\,(\tan\phi_1 - \tan\phi_2).\tag{11}$$

The discrete one-pixel motion range is expressed as follows:

$$\left(L+\frac{1}{2}\right) - \left(L-\frac{1}{2}\right) = 1 = \frac{f}{c}\left(\frac{V}{H}\right)T\cos\delta\,(\tan\phi_1 - \tan\phi_2).\tag{12}$$
Therefore,

$$S_{wid} = \frac{1}{(V/H)\,T\cos\delta}.\tag{13}$$

Equation (13) shows that the width of the segment region can be determined from the satellite speed, height, integration time, and side-oblique angle.
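As a rough numerical illustration of Eq. (13), the sketch below evaluates the segment width; the orbit values come from Section 3.1, while the integration time is an assumption of ours, since the TDI stage count is not stated in this paper:

```python
import math

def segment_width(v_km_s, h_km, t_s, delta_deg):
    """Segment width in pixels, Eq. (13): Swid = 1 / ((V/H) T cos(delta))."""
    return 1.0 / ((v_km_s / h_km) * t_s * math.cos(math.radians(delta_deg)))

# Orbit values from Section 3.1; the integration time below assumes,
# purely for illustration, a 96-stage TDI at the 1/2016 s line rate.
print(segment_width(v_km_s=7.801, h_km=294.595,
                    t_s=96.0 / 2016.0, delta_deg=90.0 - 33.701))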

2.2.2 SOIM correction

An imaging system in which the PSFs over the entire image field are approximately equal to the near-optical-axis PSF exhibits space-invariant degradation, and many methods that effectively correct space-invariant image degradation have been presented in the literature [17]. In the side-oblique imaging system, however, the degradation is space-variant, because the PSFs in the regions from the apogee to the perigee differ and cannot be replaced by the near-optical-axis PSF. Therefore, the image degradation caused by the SOIM of a time delay and integration (TDI) CCD can be expressed using the space-variant degradation model (SVDM), which combines two PSFs: a 1D rectangle, h_sca(x,y;u,v), that expresses the constant motion in the scanning direction, and a 2D rectangle, h_det(x,y;u,v), that expresses the integration time of the sensors. The coordinates of an object in the degraded (captured) and restored images are (x,y) and (u,v), respectively; (x,y;u,v) denotes the input and output pairs of coordinates. With respect to the SOIM, Eq. (10) gives h_sca(x,y;u,v), and h_det(x,y;u,v) is defined as

$$h_{det}(x,y;u,v) = \mathrm{rect}\left(\frac{x}{a},\frac{y}{a}\right) * \mathrm{rect}\left(\frac{x}{a}\right),\tag{14}$$
where rect denotes the rectangle function.

At every discrete image motion speed related to the corresponding PSF h(x,y;u,v), let G(m,n) denote the quantized discrete digital image obtained by multistage integration sampling of the TDI CCD, and let r(u,v) be the restored signal. The degradation caused by image motion can be expressed by applying the linear space-variant degradation system E(x,y) to the restored signal r(u,v). In other words, the image G(m,n) captured by the multistage integration sampling TDI CCD can be considered the ideal quantized sampling of mixed signals combining contributions from different points of the restored image r(u,v), according to the rule of the space-variant degradation system E. Let the corresponding PSF convolution kernel be denoted by K(x,y;u,v); the relationship between the PSF and the convolution kernel is then

$$h(x,y;u,v) = K(x+u,\,y+v;\,x,\,y), \qquad K(x,y;u,v) = h(u,v;\,x-u,\,y-v).\tag{15}$$

The convolution kernel associated with the PSF model is used to facilitate the following derivation; conversion between the PSF model and the convolution kernel is performed where necessary. The discrete image signals with space-variant motion can then be described as

$$g(x,y) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} r(u,v)\,K(x,y;u,v)\,du\,dv.\tag{16}$$

Equation (16) can be discretely sampled with the sample interval d as follows:

$$G(m,n) = g(md - 0.5d,\; nd - 0.5d).\tag{17}$$

In one integration time, K(x,y;u,v) can be considered to be close to the average of the different impulse functions of the integration series when the difference between the minimum and maximum speeds is less than 10%. Thus,

$$K(x,y;u,v) = \frac{1}{N}\sum_{i=1}^{N} \delta(u - x_i,\; v - y_i),\tag{18}$$
where N is the number of integration stages, and x_i and y_i are obtained by calculating the motion using Eq. (5). Therefore, Eq. (16) can be written in another form as

$$g(x,y) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} r(x-u,\,y-v)\,K(x,y;\,x-u,\,y-v)\,du\,dv.\tag{19}$$

Let m_0, n_0, m_1, n_1, Δm, and Δn denote integers; the discrete image digital number G(m_0,n_0) is then equal to the image signal g(m_0,n_0). For the discrete sampled pixels of the image, denoted by (x_0+m_1 d, y_0+n_1 d), near the central pixel (x_0,y_0) of the segment region [near K(x,y;u,v)], g(x_0+m_1 d, y_0+n_1 d) is equal to

$$g(x_0+m_1 d,\; y_0+n_1 d) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} r(x-u,\,y-v)\,K(x_0+m_1 d,\; y_0+n_1 d;\; x_0-u,\; y_0-v)\,du\,dv.\tag{20}$$

The Taylor expansion of r(x_0−u, y_0−v) at the point (x_0,y_0) in Eq. (20) generates a linear polynomial formula with (u,v) as the independent variable. Simplifying the result yields

$$g(x_0+m_1 d,\; y_0+n_1 d) = \sum_{n=0}^{N}(-1)^n \sum_{i=0}^{M} r^{(n-i,\,i)}(x_0,y_0)\, h^{(n-i,\,i)}(x_0,y_0;\; m_1 d,\; n_1 d)\,/\,(n!\,i!),\tag{21}$$
where h^{(n−i,i)}(x_0,y_0; m_1 d, n_1 d) is the PSF model calculated by Eq. (10) with the derivative orders (n−i, i), and (x_0,y_0; m_1 d, n_1 d) represents the input and output pairs of coordinates. In addition, h^{(n−i,i)}(x_0,y_0; m_1 d, n_1 d) can be expressed using the convolution kernel as

$$h^{(n-i,\,i)}(x_0,y_0;\; m_1 d,\; n_1 d) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} u^{\,n-i}\,v^{\,i}\,K(x_0+m_1 d,\; y_0+n_1 d;\; x_0-u,\; y_0-v)\,du\,dv.\tag{22}$$

By replacing (x_0,y_0) in Eq. (22) with the central pixel of every segment region, denoted by (x_0+Δm d, y_0+Δn d), the general linear polynomial formula with (u,v) as the independent variable is expressed as follows:

$$g(x_0+m_1 d+\Delta m\,d,\; y_0+n_1 d+\Delta n\,d) = \sum_{n=0}^{N}(-1)^n \sum_{i=0}^{M} r^{(n-i,\,i)}(x_0+\Delta m\,d,\; y_0+\Delta n\,d)\, h^{(n-i,\,i)}(x_0,\,y_0;\; m_1 d+\Delta m\,d,\; n_1 d+\Delta n\,d)\,/\,(n!\,i!).\tag{23}$$

Consequently, the Taylor expansion of r^{(n−i,i)}(x_0+Δm d, y_0+Δn d) at the point (x_0,y_0) generates a linear polynomial formula with (Δm d, Δn d) as the independent variable. Substituting this Taylor expansion into Eq. (23) and simplifying yields

$$g(x_0+m_1 d+\Delta m\,d,\; y_0+n_1 d+\Delta n\,d) = \sum_{n=0}^{M}(-1)^n \sum_{i=0}^{M}(n!\,i!)^{-1} \sum_{m=0}^{M-n}\sum_{j=0}^{M-i} (\Delta m\,d)^{\,m-j} (\Delta n\,d)^{\,j}\, r^{(n-i+m-j,\; i+j)}(x_0,y_0)\, h^{(n-i,\,i)}(x_0+m_1 d+\Delta m\,d,\; y_0+n_1 d+\Delta n\,d)\,/\,(m!\,i!).\tag{24}$$

Therefore, the links between the discrete image signals with space-variant motion g(x,y) and the PSF model h^{(n,i)} [Eq. (10)] are built using the convolution kernel K(x,y;u,v). When g(x,y) and h^{(n,i)} are known, the unknown restored image r(u,v) can be reconstructed step by step according to Eqs. (23) and (24).

Additional information, in the form of the above Taylor expansions of the restored signal r(u,v) and the convolution kernel of the PSF, is necessary to derive the linear system that solves this space-variant image degradation in the side-oblique imaging system. Without it, the following matrix form for restoration cannot be built from the nonlinear system.

2.2.3 Matrix form for restoration

By substituting the (M+1)² points around the segmented region with the central point (m_0+m_1, n_0+n_1) into Eqs. (23) and (24), we obtain (M+1)² equations in the space defined by R = (M+1)², as follows:

$$\begin{bmatrix} G(m_0+m_1-[M/2],\; n_0+n_1-[M/2]) \\ G(m_0+m_1-[M/2],\; n_0+n_1-[M/2]+1) \\ \vdots \\ G(m_0+m_1,\; n_0+n_1) \\ \vdots \\ G(m_0+m_1+M-[M/2],\; n_0+n_1+M-[M/2]) \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} & \cdots & S_{1R} \\ S_{21} & S_{22} & \cdots & S_{2R} \\ \vdots & \vdots & \ddots & \vdots \\ S_{R1} & S_{R2} & \cdots & S_{RR} \end{bmatrix} \begin{bmatrix} r^{(0,0)} \\ r^{(0,1)} \\ \vdots \\ r^{(0,M)} \\ \vdots \\ r^{(M,M)} \end{bmatrix},\tag{25}$$
where the values of G are obtained from the images by discrete sampling of the signals g(x,y), and the entries of S are the coefficients related to the function h^{(n,i)} [see Eqs. (23) and (24)] of the independent variable r. Therefore, the space-invariant signals at (m_0+m_1, n_0+n_1), denoted by r^{(p,q)}(x_0,y_0) (p,q = 0,…,M), can be obtained by solving these (M+1)² equations written in matrix form:
$$G = Sr.\tag{26}$$
Therefore,

$$r = (S^{T} S)^{-1} S^{T} G.\tag{27}$$

Given that the size of the coefficient matrix S increases dramatically with M, the parameter M should be set to a small value to reduce the computational cost.

The processing procedure is as follows. First, the corresponding parameters are entered to calculate the PSF [h_{(n,i)}(x,y;u,v)] and the width d. Second, the SVDM formula is constructed to form the parameter matrix S and the observation matrix G. Finally, the space-invariant signals (the restored image), denoted by r, are calculated using Eq. (27).
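For the final solving step, Eq. (27) is an ordinary linear least-squares problem. A minimal numpy sketch follows (ours; the matrices below are random placeholders rather than actual SVDM coefficients):

```python
import numpy as np

def restore_segment(S, G):
    """Solve G = S r (Eq. (26)) in the least-squares sense.
    Equivalent to r = (S^T S)^{-1} S^T G (Eq. (27)) when S^T S is
    invertible, but numerically better conditioned."""
    r, *_ = np.linalg.lstsq(S, G, rcond=None)
    return r

# Toy demonstration with random placeholders of size R = (M+1)^2;
# in the actual method, S holds the SVDM coefficients and G the
# sampled digital numbers of one segment region.
M = 2
R = (M + 1) ** 2
rng = np.random.default_rng(0)
S = rng.normal(size=(R, R))
G = rng.normal(size=R)
print(restore_segment(S, G))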

2.3 Assessment of correction

An objective evaluation method is used in this study to evaluate the effect of side-oblique motion correction on the radiometric quality. The following radiometric objective evaluation indices (OEIs) are selected as assessment criteria: clarity (CLA), detail energy (DET), edge energy (EDG), and contrast (CON). The indices are described as follows, and an illustrative implementation sketch is given after the list.

  • (1) CLA suitably reflects the image quality; an image with a higher CLA value is objectively clearer than one with a lower CLA value.
    $$\mathrm{CLA} = \int_a^b \left(\frac{d\,\mathrm{Gray}}{dp}\right)^2 dp \Big/ \left|\mathrm{Gray}(b)-\mathrm{Gray}(a)\right|,\tag{28}$$

    where dGray/dp and Gray(b) − Gray(a) are the gray rate and contrast degree in the direction perpendicular to the edge, respectively.

  • (2) DET and EDG describe the detail and edge characteristics of the image, respectively, in the high-frequency region of the frequency domain. An image with higher DET and EDG values contains more information.
    $$\mathrm{DET} = \frac{1}{n}\sum \sigma^2(x,y),\tag{29}$$
    $$\sigma^2(x,y) = \frac{1}{(2M+1)^2}\sum_{i=-M}^{M}\sum_{j=-M}^{M}\left[G(x+i,\,y+j)-\mathrm{Mean}(x,y)\right]^2,\tag{30}$$

    where σ²(x,y) is the regional variance, Mean(x,y) is the regional mean, G(x,y) is the digital image, and M is generally set to one.

    $$\mathrm{EDG} = \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n}\beta^2(x,y),\tag{31}$$
    $$\beta(x,y) = OP_1(f(x,y)) + OP_2(f(x,y)),\tag{32}$$

    where OP denotes the following convolution operators applied to the image:

    $$OP_1 = \begin{bmatrix} -1/6 & -1/6 & -1/6 \\ -1/6 & 4/6 & -1/6 \\ -1/6 & -1/6 & -1/6 \end{bmatrix},\qquad OP_2 = \begin{bmatrix} -1/6 & -1/6 & -1/6 \\ -1/6 & 4/6 & -1/6 \\ -1/6 & -1/6 & -1/6 \end{bmatrix}.\tag{33}$$

  • (3) CON is an important OEI for the image’s radiometric quality; its value reflects the ability to distinguish the target from the background. The information and texture of an image become more evident at a higher CON.
    $$\mathrm{CON} = \sum_{|i-j|^2=0}^{Y-1} |i-j|^2 \left\{ \sum_{i=0}^{Y-1}\sum_{j=0}^{Y-1} \hat{p}(i,j) \right\},\tag{34}$$

    where Y is the maximum gray-level value, and p̂(i,j) is the normalized gray-level co-occurrence matrix.
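As an illustration of how these indices might be computed, the following Python sketch implements DET and EDG (function names are ours; the negative off-center kernel entries are our reading of Eq. (33), whose printed version shows magnitudes only; SciPy is assumed to be available):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def det_index(img, M=1):
    """DET, Eqs. (29)-(30): mean of the local variance computed over
    (2M+1) x (2M+1) windows (variance = E[x^2] - E[x]^2)."""
    img = img.astype(float)
    size = 2 * M + 1
    local_mean = uniform_filter(img, size)
    local_mean_sq = uniform_filter(img ** 2, size)
    return np.mean(local_mean_sq - local_mean ** 2)

def edg_index(img):
    """EDG, Eqs. (31)-(33); OP1 = OP2 as printed in Eq. (33)."""
    img = img.astype(float)
    op = np.full((3, 3), -1.0 / 6.0)
    op[1, 1] = 4.0 / 6.0
    beta = convolve(img, op) + convolve(img, op)
    return np.mean(beta ** 2)

# Usage on a random placeholder image:
img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(det_index(img), edg_index(img))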

3. Experiment and data analysis

3.1 Data overview and data processing operation

The experimental data in this study consist of complete KZ-1 data covering the Beijing region of China, within the scope of a complete image of the original panchromatic data. Beijing has many artificial features (buildings), which makes it convenient to assess the correction results. Figure 4 shows an experimental image with a flight height H of 294.595 km, a flight speed of 7.801 km/s, a side-oblique angle (90° − δ) of 33.701°, and a GSD of 1.2 m. These parameters were obtained by extracting the satellite auxiliary data; the flight height, flight speed, and side-oblique angle are truncated to three decimal places. Using Eqs. (6) and (7), we can calculate the resulting rounding error of the image motion value in the apogee and perigee regions:

$$\varepsilon_{L_{Max}} = f\left(\frac{V^*}{H^*}\right)\frac{\sin(\delta^*+\theta)}{\cos\theta}\left(\frac{1}{c}\right)T - f\left(\frac{V}{H}\right)\frac{\sin(\delta+\theta)}{\cos\theta}\left(\frac{1}{c}\right)T = 7.93\times10^{-4}\ \text{(pixel)},\tag{35}$$
$$\varepsilon_{L_{min}} = f\left(\frac{V^*}{H^*}\right)\frac{\sin(\delta^*-\theta)}{\cos\theta}\left(\frac{1}{c}\right)T - f\left(\frac{V}{H}\right)\frac{\sin(\delta-\theta)}{\cos\theta}\left(\frac{1}{c}\right)T = 7.83\times10^{-4}\ \text{(pixel)},\tag{36}$$
where V*, H*, and δ* are the three-decimal-place values, θ is 2.1°, c is 9 µm, and T is 1/2016 s. Therefore, the error caused by using the values truncated to three decimal places can be neglected.

Fig. 4 Experimental data.

The data are processed as follows:

  • 1) Parameters such as the satellite speed, height, integration time, and side-oblique angle are used to determine the individual width of segmentation d.
  • 2) Segmental PSFs for SOIM correction are calculated using Eq. (10).
  • 3) SOIM analysis of the image quality is performed.
  • 4) Image pre-processing is performed to correct image radiometric degradation due to attitude jitter and time integration.
  • 5) The restoration matrix is formed to correct for SOIM using the segmental PSFs.
  • 6) The image quality is assessed.

3.2 SOIM analysis of imaging quality

The modulation transfer function (MTF) is an important index for describing the image quality. The MTF degradation is related to the image motion [16] as follows:

$$\mathrm{MTF}_{de} = \frac{\sin(\pi L P)}{\pi L P},\tag{37}$$
where L is the image motion value, and P is the lens resolution (units: lp/mm).
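A small sketch of Eq. (37), assuming L must be converted from pixels to millimeters (via the 9 µm pixel size of Section 3.1) so that the product LP is dimensionless:

```python
import numpy as np

def mtf_degradation(l_pixels, p_lp_mm, pixel_um=9.0):
    """MTF degradation from Eq. (37); np.sinc(x) = sin(pi x)/(pi x).
    The motion value is converted from pixels to mm via the pixel size."""
    l_mm = l_pixels * pixel_um * 1e-3
    return np.sinc(l_mm * p_lp_mm)

# KZ-1 lens resolution from Section 3.2; the motion value is illustrative.
print(mtf_degradation(l_pixels=1.0, p_lp_mm=57.140))  # ~0.62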

Figure 5 shows the effect of various image motions on the MTF, with three curves for lens resolutions of 30 lp/mm, 40 lp/mm, and 80 lp/mm, as well as one curve for the KZ-1 resolution of 57.140 lp/mm. The figure illustrates that a higher lens resolution leads to more significant degradation of the information. The MTF degradation caused by SOIM at a side-oblique angle of 33.701° in the apogee and perigee areas of the experimental data is also marked in Fig. 5. Both areas fall below 0.7 (i.e., the y coordinate in Fig. 5, which equals 1 − MTF_de); thus, SOIM correction is vital to minimize this degradation.

Fig. 5 Impact on MTF of various image motions with various lens resolutions.

3.3 Results of SOIM correction and assessment

Image motion was corrected by the method discussed in Section 2.2. Several typical terrains from the apogee and perigee regions, such as urban regions, farmland, a castle, and the Bird’s Nest stadium, were selected to show the effect of the correction. Visual inspection shows that the restored image is clearer than the original in both the apogee and perigee regions (Fig. 6); Fig. 4 also shows the locations of these regions in the experimental data. In addition, comparison experiments were performed using a high-pass filter (HPF) method and the high-quality motion deblurring (HMD) method [20]. The processing programs for the HPF and HMD methods were downloaded from the CxImage library [21] and Qi Shan’s project website [22], respectively. The default parameters [27, 27, 0.01, 0.2, 1, 0, 0, 0, 0, 3.5], as defined on that website, were used for HMD. The comparison results are shown in Fig. 6 and Table 2.

Fig. 6 Visual impression of images: (a–b) apogee region image; (c–d) middle region image; (e–f) perigee region image; image without restoration (first column), after the HPF method (second column), after the HMD method (third column), and after our method (last column).

Table 2. Objective Evaluation Indices of Images in Each Region with or without Restoration

The index values of the images of each region were subsequently obtained using the radiometric OEIs described in Section 2.3 and are listed in Table 2. The OEIs are statistical values and carry multiple digits after the decimal point because of the divisors in the corresponding formulas. However, the OEI values in Tables 2 and 3 differ widely from one another, so it is unnecessary to retain all of the decimal digits; in this paper, the OEIs were rounded to a precision sufficient for comparison.

Table 3. Objective Evaluation Indices of Images in Each Region after Pre-processing Steps

The results listed in Table 2 show that the OEIs of the restored images of the typical terrains in the apogee, middle, and perigee regions were greatly improved. The comparison experiments show that our method and the HMD method clearly outperform the HPF method in both the apogee and perigee regions. The CLA, DET, and CON indices show that the image processed by our method has the best clarity, contrast, and detail texture among the three methods; however, the EDG index shows that the image processed by the HMD method has the best edge texture characteristics, so the HMD method is preferred for edge extraction. These results are consistent with the visual impressions.

In addition, Table 3 shows the OEIs of the images after the pre-processing steps for correcting attitude jitter and time-integration degradation. The pre-processing results show only limited improvement in the radiometric quality; thus, the radiometric degradation due to attitude jitter and time integration is comparatively small. We therefore consider SOIM to be the main cause of image blur, and it has been corrected well.

4. Discussion

This study uses a discrete rounding operation to determine the image motion of a segmented region, which inevitably introduces a discretization error. However, the image motion difference between the side and center regions of a segment is at most 0.5 pixels under our rounding operation, so the established PSF ensures a restoration precision within one pixel. This study presents an image-processing-based restoration method for multiple types of image motion blur in side-oblique mode. The method provides a needed post-capture image motion restoration algorithm (i.e., restoration is performed after image capture) and has advantages over synchronous restoration methods such as CCD electron transfer [23] and device body restoration [24,25]. Regarding practicality, we recommend parallel computation to accelerate the proposed method and achieve real-time processing.

5. Conclusion

This study presented an approach to SOIM detection, restoration, and result assessment for an agile satellite (KZ-1). SOIM detection based on PSFs was adopted because of the image motion principle. A new mathematical method that uses segmental PSFs to compensate for the different image motion blurs from the apogee to the perigee regions was proposed. This method plays an important role in improving the correction of each pixel of side-oblique images captured by agile satellites. Radiometric objective evaluation was then used to evaluate the effect of image motion restoration on the radiometric level. The experimental data demonstrate that our framework for SOIM detection and restoration effectively addresses the SOIM of the KZ-1 agile satellite. This research is an application of agile-satellite image motion processing technology and will contribute to the technical foundation of new Chinese agile satellite launches. We believe that this framework can be considered a type of pre-processing before quantitative studies and should be useful for other agile satellites.

Acknowledgments

The authors thank the editors and reviewers for their constructive and helpful comments, which substantially improved the paper. This research was financially supported by the State Key Program of the National Natural Science Foundation of China (NSFC) (Grant No. 61331017).

References and links

1. P. Tangpattanakul, N. Jozefowiez, and P. A. Lopez, “Multi-objective local search heuristic for scheduling Earth observations taken by an agile satellite,” Eur. J. Oper. Res. 245(2), 542–554 (2015).

2. C. Peiling, H. Jingxian, C. Jian, and L. Haitao, “Improved path planning and attitude control method for agile maneuver satellite with double-gimbal control moment gyros,” Math. Probl. Eng. 2015, 878724 (2015).

3. M. Marisaldi, F. Fuschino, C. Labanti, M. Galli, F. Longo, E. Del Monte, G. Barbiellini, M. Tavani, A. Giuliani, E. Moretti, S. Vercellone, E. Costa, S. Cutini, I. Donnarumma, Y. Evangelista, M. Feroci, I. Lapshov, F. Lazzarotto, P. Lipari, S. Mereghetti, L. Pacciani, M. Rapisarda, P. Soffitta, M. Trifoglio, A. Argan, F. Boffelli, A. Bulgarelli, P. Caraveo, P. W. Cattaneo, A. Chen, V. Cocco, F. D’Ammando, G. De Paris, G. Di Cocco, G. Di Persio, A. Ferrari, M. Fiorini, T. Froysland, F. Gianotti, A. Morselli, A. Pellizzoni, F. Perotti, P. Picozza, G. Piano, M. Pilia, M. Prest, G. Pucella, A. Rappoldi, A. Rubini, S. Sabatini, E. Striani, A. Trois, E. Vallazza, V. Vittorini, A. Zambra, D. Zanello, L. A. Antonelli, S. Colafrancesco, D. Gasparrini, P. Giommi, C. Pittori, B. Preger, P. Santolamazza, F. Verrecchia, and L. Salotti, “Detection of terrestrial gamma ray flashes up to 40 MeV by the AGILE satellite,” J. Geophys. Res. 115(A3), A00E13 (2010).

4. C. Labanti, M. Marisaldi, F. Fuschino, M. Galli, A. Argan, A. Bulgarelli, G. Di Cocco, F. Gianotti, M. Tavani, and M. Trifoglio, “Design and construction of the mini-calorimeter of the agile satellite,” Nucl. Instrum. Methods Phys. Res. A 598(2), 470–479 (2009).

5. B. C. Han, S. Q. Zheng, X. Wang, and Q. Yuan, “Integral design and analysis of passive magnetic bearing and active radial magnetic bearing for agile satellite application,” IEEE Trans. Magn. 48(6), 1959–1966 (2012).

6. O. Hadar, M. Fisher, and N. S. Kopeika, “Image resolution limits resulting from mechanical vibrations, part III: numerical calculation of modulation transfer function,” Opt. Eng. 31(3), 581–589 (1992).

7. A. Stern and N. S. Kopeika, “Analytical method to calculate optical transfer functions for image motion and vibrations using moments,” J. Opt. Soc. Am. A 14(2), 388–396 (1997).

8. B. M. Miller and E. Y. Rubinovich, “Image motion compensation at charge-coupled device photographing in delay-integration mode,” Automat. Rem. Contr. 68(3), 564–571 (2007).

9. M. C. Algrain and M. K. Woehrer, “Determination of attitude jitter in small satellite,” Proc. SPIE 2739, 215–228 (1996).

10. V. C. Chen and W. J. Miceli, “The effect of roll, pitch and yaw motions on ISAR imaging,” Proc. SPIE 3810, 149–158 (1999).

11. A. J. Smirnov, “Image stabilization in small satellite optoelectronic remote sensing systems,” Proc. SPIE 3119, 36–45 (1997).

12. M. Liu, “Research on detection and compensation technology of forward image motion in aerial photography based on image restoration,” Ph.D. dissertation, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, China (2005).

13. S. Li, “Real-time restoration algorithms research for aerial images with different rates of image motion,” Ph.D. dissertation, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, China (2010).

14. V. L. Kistlerov, P. I. Kitsul, and B. M. Miller, “Computer-aided design of the optical devices control systems based on the language of algebraic computation FLAC,” Math. Comput. Simul. 33(4), 303–307 (1991).

15. D. Poli, F. Remondino, E. Angiuli, and G. Agugiaro, “Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction,” ISPRS J. Photogramm. 100, 35–47 (2015).

16. Q. Shan, W. Xiong, and J. Jia, “Rotational motion deblurring of a rigid object from a single image,” in Proceedings of IEEE 11th International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

17. H. C. Andrews and B. R. Hunt, Digital Image Restoration (Prentice-Hall, 1977).

18. T. Sun, H. Long, B. C. Liu, and Y. Li, “Application of attitude jitter detection based on short-time asynchronous images and compensation methods for Chinese mapping satellite-1,” Opt. Express 23(2), 1395–1410 (2015).

19. S. Tao, L. Hui, Z. Dong, and L. Ying, “Detection and compensation of satellite flutter based on image from multispectral camera with five spectral combinations,” Acta Opt. Sin. 34(7), 0728005 (2014).

20. Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Trans. Graph. 27(3), 73 (2008).

21. D. Pizzolato, “CxImage,” http://www.codeproject.com/KB/graphics/cximage.aspx.

22. Q. Shan, “High-quality motion deblurring programs,” http://www.cse.cuhk.edu.hk/~leojia/programs/deblurring/deblurring.htm.

23. A. G. Lareau, “Advancements in E-O framing,” Proc. SPIE 3431, 96–107 (1998).

24. Y. S. Xu, Y. L. Ding, H. Y. Tian, and B. Dong, “Calculation and compensation for image motion of aerial remote sensor in oblique situation,” Opt. Prec. Eng. 15(11), 1779–1783 (2007).

25. B. A. Gorin, “Side oblique real-time orthophotography with the 9Kx9K digital framing camera,” Proc. SPIE 5109, 86–97 (2003).
