
Using sub-resolution features for self-compensation of the modulation transfer function in remote sensing

Open Access

Abstract

Space smart optical orbiting payloads integrated with attitude and position (SSPIAP) are emerging as an essential tool that is extensively used in microsatellites. The on-orbit imaging link of SSPIAPs is subject to atmospheric disturbance, defocus, relative motion, and other noise sources, resulting in a low modulation transfer function (MTF) and poor image quality. MTF compensation (MTFC) has pushed the limits of optical imaging, enabling high-resolution on-orbit dynamic imaging. However, MTFC based on external targets is limited in space and time because suitable external targets are rarely available or accessible while a remote sensor is working on-orbit. Here, a new and robust MTF self-compensation method for a SSPIAP is proposed. In contrast to conventional methods that rely on external targets, this method uses multiple natural sub-resolution features (SRFs), each occupying only a few pixels on a uniform background, as observation targets, which makes MTFC more flexible, robust, and faithful. A mathematical morphology algorithm is used to extract the SRFs, and the MTF is inverted through a regularized total variation energy function within a sparse prior framework. Experimental measurements confirm that the proposed method is effective and convenient to implement. Because it does not rely on specific external targets, the technique is potentially suitable for on-orbit dynamic long-range imaging.

© 2017 Optical Society of America

1. Introduction

Remote sensing optical cameras (RSOCs) have emerged as an essential tool in various fields, such as deep space exploration, celestial navigation, and Earth observation. Achieving higher ground resolution has become the critical issue for remote sensing cameras [1]. However, the imaging conditions of on-orbit RSOCs are influenced by atmospheric disturbances, relative motion between ground targets and remote sensors, aberration and defocus in optical systems, and noise in image sensors and electronic systems [2]. These factors leave RSOCs with a very low on-orbit modulation transfer function (MTF) [3], which manifests as degraded quality in the captured images. To improve imaging performance, the original on-orbit MTF must be compensated.

Most recent satellites, such as IKONOS [4, 5], OrbView-3 [6], Landsat 7 [7], QuickBird [8], and SPOT5 [9], adopt filter-based MTF compensation (MTFC) methods to enhance the imaging performance of a remote sensor. A filter-based MTFC method mainly includes two steps: the MTF is first measured by edge [10–12], pulse [13–15], or relative target [16–18] methods; a filter is then applied, based on the measured MTF, to obtain the compensated MTF. Mu et al. [19] measured the MTF directly in the laboratory and then used a constrained least squares filter to compensate the MTF. Some filter-based MTFC methods also consider the characteristics of the remote sensor itself to optimize the compensation. Oh et al. [20] adopted a pulse source to measure the MTF and used a Wiener filter (WF), which treats the MTF value as a geometric performance characteristic of the Geostationary Ocean Color Imager (GOCI), to compensate the MTF and enhance image quality. X. Li et al. [21] used the measured MTF and a modified inverse filter to compensate the total MTF after removing the atmospheric MTF. Other filter-based MTFC methods compensate only part of the total MTF: Yitzhaky et al. [22] used a predicted rather than measured atmospheric MTF and an atmospheric WF to compensate the turbulence MTF. Still others integrate advanced techniques into the filter framework: Guo et al. [23] performed MTF compensation in the wavelet domain combined with a series of denoising steps to improve MTFC performance.

We note that filter-based MTF compensation algorithms either employ artificial testing targets, which must be deployed in advance, or expend effort collecting information on real features in remote sensing images to measure the MTF. However, suitable external targets are rarely available or accessible while a remote sensor is working on-orbit. Existing MTFC methods are therefore limited in space and time and are unsuitable for on-orbit MTFC, because they share the common requirement of external targets. To address this, this study proposes a new and robust MTF self-compensation method for remote sensing optical cameras. In contrast to conventional methods with external targets, the method is based on multiple natural sub-resolution features (SRFs), each occupying only a few pixels on a uniform background, and accounts for the dependence between the MTF and the testing targets when conducting the compensation. The method comprises four steps. In the first and second steps, mathematical morphology and a correlation filter algorithm are employed, respectively, to extract multiple SRFs. In the third step, a regularized total variation energy function model is established to compensate the MTF. In the fourth step, a partial derivative iteration algorithm is applied to the multiple points to obtain the compensated MTF. Consequently, we directly obtain a two-dimensional compensated MTF. Unlike conventional approaches that involve artificial targets, this method exploits the cross-relations among SRFs, so we neither have to lay out artificial targets ahead of time nor put effort into collecting information about real features in an image. The method uses multiple natural, opportunistic SRFs to invert a compensated MTF. In the presence of noise, the diversity gain resulting from the inherent diversity of natural SRFs significantly improves the accuracy of MTF compensation. The method therefore offers beneficial compensation effects, on-orbit applicability without spatial or temporal limitations, high stability, easy operation, and a need for few testing targets. It has been applied to the space smart optical orbiting payload integrated with attitude and position (SSPIAP) [24], the first optical camera integrated with attitude and position determination for mapping satellites. In our SSPIAP [25], the designed MTF of the high-accuracy SSPIAP is 0.0856–0.0951, while the MTFC requirement of the SSPIAP, obtained from simulations of its full imaging link, is 0.14 at the Nyquist frequency. The proposed method meets the MTF compensation requirement of SSPIAPs and has the potential to become a compensation standard in the future.

2. MTF compensation method of optical remote sensors

The imaging link of optical remote sensors includes the atmosphere, satellite platform, optical system, image sensor, and imaging circuits. The imaging link model of optical remote sensors is shown in Fig. 1. The electromagnetic radiation energy of a ground object is received and transformed into continuous time-varying optical signals by an optical remote sensor. A charge-coupled device (CCD) converts the optical signals into electrical signals, which are amplified by the electronic circuits. In an analog-to-digital (A/D) converter, the processed electrical signals are sampled at a fixed time interval and quantized into the digital number (DN) values of image pixels. The image-capturing process can be modeled as a product operation in the Fourier transform domain as follows:

$$G(\xi,\eta)=F(\xi,\eta)H_{total}(\xi,\eta)+N(\xi,\eta),$$
$$H_{total}(\xi,\eta)=H_{optics}(\xi,\eta)H_{defocus}(\xi,\eta)H_{detector}(\xi,\eta)H_{image\_motion}(\xi,\eta)H_{electronics}(\xi,\eta),$$
where $G(\xi,\eta)$ is the frequency spectrum of the image, $F(\xi,\eta)$ is the frequency spectrum of the object, $H_{total}(\xi,\eta)$ is the total optical transfer function (OTF), $H_{optics}(\xi,\eta)$ is the OTF of the optical system, $H_{defocus}(\xi,\eta)$ is the OTF of the optical system defocus, $H_{detector}(\xi,\eta)$ is the OTF of the CCD, $H_{electronics}(\xi,\eta)$ is the OTF of the electronic circuits, $H_{image\_motion}(\xi,\eta)$ is the OTF of the image motion, and $N(\xi,\eta)$ is the frequency spectrum of the noise.
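To make the model concrete, here is a minimal numerical sketch of Eq. (1), assuming illustrative Gaussian component MTFs; the actual component transfer functions come from the sensor design, and the function names and sigma values below are hypothetical:

```python
# Minimal sketch of Eq. (1): multiply component OTFs in frequency, add noise.
import numpy as np

def gaussian_mtf(shape, sigma):
    """Illustrative component MTF: a Gaussian roll-off in spatial frequency."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))

def degrade(f, component_sigmas, noise_std=0.01, rng=None):
    """Apply H_total = product of component OTFs, then add sensor noise."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fft2(f)
    H_total = np.prod([gaussian_mtf(f.shape, s) for s in component_sigmas], axis=0)
    g = np.real(np.fft.ifft2(F * H_total))
    return g + rng.normal(0.0, noise_std, f.shape), H_total

# Example: one bright sub-resolution point on a uniform background.
f = np.full((64, 64), 0.2)
f[32, 32] = 1.0
g, H = degrade(f, component_sigmas=[0.2, 0.3, 0.25])
```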

Fig. 1 Imaging link model of the optical remote sensors.

The ideal OTF can be calculated from the design parameters of optical remote sensors [26–29]. $MTF_{total}(\xi,\eta)$ is the amplitude of the OTF. We define an optical transfer sensing function (OTSF), denoted by $H^*(\xi,\eta)$. Equation (1) can then be expressed as follows:

$$G^*(\xi,\eta)=G(\xi,\eta)H^*(\xi,\eta)=F(\xi,\eta)H(\xi,\eta)H^*(\xi,\eta)+N(\xi,\eta)H^*(\xi,\eta).$$
Ideally, the amplitude of the OTSF equals the inverse of that of the OTF. Since $|H(\xi,\eta)|<1$, the compensated MTF, that is, $|H(\xi,\eta)H^*(\xi,\eta)|$, can be improved. In the spatial domain, the filtering in Eq. (2) is a de-convolution operation, so MTF compensation can be considered a reverse process of imaging. MTF compensation methods therefore usually include two stages, namely, the OTSF measurement and the de-convolution operation. The OTSF measurement is a prerequisite for MTF compensation; however, obtaining an OTSF with external targets is difficult when a remote sensor is working on-orbit. This study instead directly inverts $H(\xi,\eta)H^*(\xi,\eta)$ from multiple natural sub-resolution features.
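As an illustration of Eq. (2), the sketch below builds an OTSF as a Wiener-type filter from a known OTF. This is one common filter-based choice, not the inversion method proposed in this paper, and the constant `k` is an assumed noise-to-signal level:

```python
# Hedged illustration of Eq. (2): a Wiener-type OTSF raises |H * H_star|
# toward 1 where the signal dominates the noise.
import numpy as np

def wiener_otsf(H, k=1e-2):
    """One common OTSF choice: H* = conj(H) / (|H|^2 + k)."""
    return np.conj(H) / (np.abs(H)**2 + k)

def compensate(g, H, k=1e-2):
    """Apply G* = G * H* in the frequency domain and return the image."""
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(G * wiener_otsf(H, k)))
```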

3. MTF compensation model of sub-resolution features

We use the multiple SRFs extracted from the obtained images to compensate the MTF and improve robustness against noise. If certain information in one SRF is lost as a result of degradation, the other, independent SRFs can compensate for it. Here, an SRF is defined with respect to the uniform background space around an object in a sub-image: an SRF occupies only several pixels on a uniform background. We use $f_\omega\ (\omega=1,2,\ldots,\Omega)$ to represent the ideal undistorted original sub-images of the sensor, where the image of the sensor is its digital output. The original sub-image $f_\omega$ is degraded to $g_\omega$ through the imaging link of the remote sensor; $n_\omega(x,y)$ denotes the random noise and $h$ denotes the total point spread function (TPSF) of the remote sensor. Moreover, $h$ is identical for all sub-images because they are extracted from the same image obtained by the sensor, which is considered globally shift-invariant. In the spatial domain, the sub-image acquisition can be modeled using the convolution:

$$g_\omega(x,y)=f_\omega(x,y)\otimes h(x,y)+n_\omega(x,y).$$
Equation (3) in the frequency domain is expressed as follows:
$$G_\omega(\xi,\eta)=F_\omega(\xi,\eta)H(\xi,\eta)+N_\omega(\xi,\eta).$$
The corresponding matrix form of Eq. (4) is as follows:
$$G_\omega=F_\omega H+N_\omega.$$
We attempt to use multiple sub-images to invert the MTF based on Eq. (5). However, the problem of inverting $H$ from $G_\omega$ is ill-posed because the observed sub-images $G_\omega$ are the only known variables while $H$, $F_\omega$, and $N_\omega$ are all unknowns in Eq. (5). The total variation (TV) energy model, which belongs to the category of sparse priors, can solve this problem [30]. This model can be expressed as follows:
$$E(f)=T(f)+\frac{\lambda}{2}\int_U (g-f\otimes h)^2\,dU,$$
where $U$ is the image domain, $f$ is the ideal undistorted original image of the sensor, $g$ is the image distorted by the imaging link of the optical remote sensor, and $\lambda$ is a positive regularization parameter. $T(f)$ is the total variation of $f$, which can be expressed as follows:
$$T(f)=\int_U |\nabla f|\,dU=\int_U \sqrt{\left(\frac{\partial f}{\partial y}\right)^2+\left(\frac{\partial f}{\partial x}\right)^2}\,dU.$$
The TPSF $h$ can be inverted by seeking the minimum of $E(f)$.
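A small numerical sketch of Eqs. (6) and (7), using forward differences for the gradient and `scipy.signal.fftconvolve` for $f\otimes h$; the function names and the smoothing constant `eps` are ours, not the paper's:

```python
# Evaluate the TV energy E(f) of Eqs. (6)-(7) for a candidate image f.
import numpy as np
from scipy.signal import fftconvolve

def tv(f, eps=1e-8):
    """Discrete isotropic total variation via forward differences."""
    fy = np.diff(f, axis=0, append=f[-1:, :])   # replicate last row
    fx = np.diff(f, axis=1, append=f[:, -1:])   # replicate last column
    return np.sum(np.sqrt(fx**2 + fy**2 + eps))

def tv_energy(f, g, h, lam):
    """E(f) = T(f) + (lambda / 2) * sum of (g - f conv h)^2, Eq. (6)."""
    residual = g - fftconvolve(f, h, mode="same")
    return tv(f) + 0.5 * lam * np.sum(residual**2)
```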

We define a sub-image $f_\omega$ on the finite point set $Z_{f_\omega}=\{(i,j)\in z^2,\ 0\le i\le I-1,\ 0\le j\le J-1\}$, where $z^2$ is the set of pixel points on a 2D sub-image plane. We use $\Gamma$ to denote the definition domain of the sub-image in $Z_{f_\omega}$, which can be normalized so that $\Gamma\in[0,1]$. We use $D$ to represent a dictionary comprising the $\Omega$ sub-images. When $\Omega=1$, Eq. (5) in the frequency domain can determine an approximation $\hat{H}$ of $H$ by minimizing the convex TV functional:

$$E_\omega(F_\omega)=\min_{F_\omega}\left(\|G_\omega-HF_\omega\|_p^2+\lambda_1 T(F_\omega)+\lambda_2\|F_\omega H-G_\omega\|_2^2\right),$$
where $\lambda_1$ and $\lambda_2$ are the regularization parameters, $\|\cdot\|_p$ with $p=2$ denotes the Frobenius norm, and $T(\cdot)$ is the total variation of $F_\omega$. The anisotropic total variation can be expressed as follows:
$$T(F_\omega)=|\nabla_x F_\omega|+|\nabla_y F_\omega|=\sum_{i=1}^{M-1}|f(i,N)-f(i+1,N)|+\sum_{j=1}^{N-1}|f(M,j)-f(M,j+1)|+\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{|f_\omega(i,j)-f_\omega(i+1,j)|^2+|f_\omega(i,j)-f_\omega(i,j+1)|^2}.$$
When Ω>1, we establish a regularization energy function model, which can be expressed as follows:
$$E(H,F_1,F_2,\ldots,F_\Omega)=Q(H,F_1,\ldots,F_\Omega,G_1,\ldots,G_\Omega)+\lambda_1 T(H)+\lambda_2 W(F_1,\ldots,F_\Omega)+\lambda_3 R(F_1,\ldots,F_\Omega,G_1,\ldots,G_\Omega).$$
The first term is the noise residual that controls the errors between the blurred image HFω and the observed image Gω, which can be expressed as follows:
$$Q(H,F_1,F_2,\ldots,F_\Omega)=\sum_{\omega=1}^{\Omega}\|HF_\omega-G_\omega\|_2^2.$$
The second term is the regularization of the point spread function, which is used to achieve an energy balance; such regularization controls the iterated image energy to prevent significant deviation from the original image. The third term is the regularization of every original sub-image, which plays a similar role to the second term and can be expressed as follows:
$$W(F_1,F_2,\ldots,F_\Omega)=\sum_{\omega=1}^{\Omega}\|F_\omega\|_2^2.$$
$\lambda_1$, $\lambda_2$, and $\lambda_3$ are regularization factors that balance the constrained terms. The fourth term, $R(F_1,\ldots,F_\Omega,G_1,\ldots,G_\Omega)$, is the residual between the blurred sub-image sequences, which describes the blurring error between two sub-images. We simplify the notation by using two sub-images to describe the specific form of $R(\cdot)$. Figure 2 shows a sketch map of the noiseless and noisy cases to provide an intuitive display.

Fig. 2 Sketch map of the regularization term R.

In the noiseless ideal case, the original sub-images $F_1$ and $F_2$ are degraded into $G_1$ and $G_2$, respectively, by $H$. If $F_1$ and $F_2$ are SRFs (similar to those in Fig. 2), they can also be regarded as point spread functions because of their similar characteristics. The degraded sub-images $G_1$ and $G_2$ are degraded again into $G'_1$ and $G'_2$ by $F_2$ and $F_1$, respectively. The residual of the two final sub-images is zero because of the following:

$$G_1 F_2=(F_1 H)F_2=(F_1 F_2)H,\qquad G_2 F_1=(F_2 H)F_1=(F_1 F_2)H.$$
In a practical system, $G_1F_2\neq G_2F_1$ because noise exists. To bring a practical system as near to the ideal system as possible, the built regularization energy function model needs the term $R(F_1,\ldots,F_\Omega,G_1,\ldots,G_\Omega)$ to suppress the practical system noise. From the preceding considerations, we build $R(\cdot)$ as follows:
$$R(F_1,F_2,\ldots,F_\Omega,G_1,G_2,\ldots,G_\Omega)=\frac{1}{2}\sum_{1\le i\ne j\le\Omega}\|G_iF_j-G_jF_i\|_2^2.$$
To enhance the convergence of solutions, an additional convex constraint is imposed as follows [31, 32]:
$$\sum_{j=1}^{N-1}\sum_{i=1}^{M-1}H(i,j)=1.$$
Therefore, the final regularization energy function model can be expressed as follows:
$$E(H,F_1,F_2,\ldots,F_\Omega)=\sum_{\omega=1}^{\Omega}\|HF_\omega-G_\omega\|_2^2+\lambda_1 T(H)+\lambda_2\sum_{\omega=1}^{\Omega}\|F_\omega\|_2^2+\frac{\lambda_3}{2}\sum_{1\le i\ne j\le\Omega}\|G_iF_j-G_jF_i\|_2^2.$$
We use the built regularization energy function model to determine the optimal TPSF $H$ and sub-images $F_\omega$ that minimize the energy functional and the distortion trade-off in the definition domain $\Gamma$.
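The full energy of Eq. (16) can be evaluated numerically as sketched below; this assumes all quantities are frequency-domain arrays combined element-wise, uses pairs $i<j$ to absorb the $1/2$ factor, and the regularization weights are illustrative:

```python
# Numerical evaluation of the energy functional of Eq. (16).
import numpy as np
from itertools import combinations

def tv(A):
    """Isotropic TV of the magnitude of a (possibly complex) array."""
    m = np.abs(A)
    ax = np.diff(m, axis=1)[:-1, :]
    ay = np.diff(m, axis=0)[:, :-1]
    return np.sum(np.sqrt(ax**2 + ay**2))

def energy(H, Fs, Gs, lam1=1e-3, lam2=1e-4, lam3=1e-2):
    """E(H, F_1..F_Omega): data term + TV(H) + ||F||^2 + cross residual."""
    data = sum(np.sum(np.abs(H * F - G)**2) for F, G in zip(Fs, Gs))
    reg_f = sum(np.sum(np.abs(F)**2) for F in Fs)
    # combinations gives i < j, which equals (1/2) * sum over i != j
    cross = sum(np.sum(np.abs(Gi * Fj - Gj * Fi)**2)
                for (Gi, Fi), (Gj, Fj) in combinations(list(zip(Gs, Fs)), 2))
    return data + lam1 * tv(H) + lam2 * reg_f + lam3 * cross
```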

Considering the regularization $R(\cdot)$, the unblurred sub-image is an SRF. An SRF occupies only several pixels, and the background is ideally black. However, no pixel with a DN of 0 exists in a real remote sensing image; for example, the DN range of a remote sensing image obtained by a 12-bit imager is 0 to 4095, and the minimal DN value is approximately 170. In practice, it is much easier to find sub-resolution features on a uniform or slowly varying background. Figure 3 shows an example of sub-resolution features in two remote sensing images of 512 × 512 pixels: the left image has 86 SRFs and the right image has 293 SRFs. Each SRF is 7 × 7 pixels and the DN variation of the background is less than 5, so natural SRFs are easily extracted from remote sensing images. The restricted sub-image can be expressed as follows:

$$g_\omega=\begin{bmatrix} g_{\omega b} & \cdots & g_{\omega b}\\ \vdots & g_{\omega p} & \vdots\\ g_{\omega b} & \cdots & g_{\omega b}\end{bmatrix}=g_{\omega p}+g_{\omega b}=\begin{bmatrix} 0 & \cdots & 0\\ \vdots & g_{\omega p}-g_{\omega b} & \vdots\\ 0 & \cdots & 0\end{bmatrix}+\begin{bmatrix} g_{\omega b} & \cdots & g_{\omega b}\\ \vdots & g_{\omega b} & \vdots\\ g_{\omega b} & \cdots & g_{\omega b}\end{bmatrix},$$
where $g_{\omega b}$ is the block matrix with the uniform intensity value $f_{\omega b}$, and $g_{\omega p}$ is the SRF matrix. Therefore, $g_{\omega p}=g_\omega-g_{\omega b}$ satisfies the requirement of Eq. (5), where $g_{\omega b}$ can be considered the brightness of the locally consistent background of the remote sensing image. When $g_{\omega p}=g_\omega-g_{\omega b}$ (see Eq. (5)), the following equation holds:
$$G_\omega-G_{\omega b}=F_\omega H+N-G_{\omega b}.$$
Because the background is uniform, $G_{\omega b}=G_{\omega b}H$ based on Eq. (1); therefore,
$$G_\omega-G_{\omega b}=F_\omega H+N-G_{\omega b}H=(F_\omega-G_{\omega b})H+N.$$
Therefore, Eq. (19) can be used as a new imaging model to invert $H$ when the background-subtracted sub-image $G_\omega-G_{\omega b}$ corresponds to an SRF.
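A minimal sketch of the background subtraction behind Eqs. (18) and (19); estimating the uniform level $g_{\omega b}$ from the sub-image border is our simplification here, whereas the paper evaluates the background morphologically (Section 4.1):

```python
# Remove the uniform background level so the residual matches Eq. (19).
import numpy as np

def background_subtract(g):
    """Estimate g_wb as the median of the border pixels (an assumption),
    then return the background-free sub-image and the level itself."""
    border = np.concatenate([g[0, :], g[-1, :], g[1:-1, 0], g[1:-1, -1]])
    g_wb = np.median(border)
    return g - g_wb, g_wb
```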

Fig. 3 Sub-resolution features in two remote sensing images.

4. The proposed MTF compensation

The entire MTF compensation procedure is summarized in the diagram in Fig. 4 and comprises three steps. The first step denoises the original image using the correlation filter algorithm to effectively suppress the noise.

Fig. 4 The entire MTF compensation procedure.

The second step is to extract the SRFs based on mathematical morphology. The dilation and erosion operators are used to extract the background matrix.

The third step is to compensate the MTF based on the regularization total variation energy function model. Lastly, the compensated MTF is calculated.

4.1 Extraction of sub-resolution features

Prior to using SRFs to compensate the MTF, multiple sub-images $f_\omega\ (\omega=1,2,\ldots,\Omega)$ must be automatically extracted from a remote sensing image. In this study, mathematical morphology is adopted for the SRF extraction. The constrained open operator of mathematical morphology is employed to evaluate the background and set the extraction threshold. Equation (16) requires that sub-images have a flat background around an SRF, so the background is crucial for extracting sub-images. However, gradual variations in the background of a remote sensing image make it difficult to assess a global or simple threshold and to extract SRFs. To solve this problem, we employ the open operator of mathematical morphology to evaluate the background matrix $F_{\omega b}$ of a sub-image. The open operator can be built from two basic morphological operations [33]: erosion, denoted by $a$, and dilation, denoted by $b$. Let $\tau(x,y)$ be the gray value of the sub-image at $(x,y)$, $\delta(x,y)$ be the structural element, $D_\delta$ be the domain of definition of $\delta(x,y)$, and $D_\tau$ be the domain of definition of $\tau(x,y)$. A structural element in mathematical morphology is a shape used to probe a given image, with the purpose of drawing conclusions on how this shape fits the shapes in the image. The erosion of $\tau(x,y)$ by $\delta(x,y)$, denoted by $\tau\ominus\delta$, is defined as follows:

$$a(x,y)=(\tau\ominus\delta)(x,y)=\min_{(i,j)\in D_\delta,\,(x+i,y+j)\in D_\tau}\left[\tau(x+i,y+j)-\delta(i,j)\right].$$
The dilation of $\tau(x,y)$ by $\delta(x,y)$, denoted by $\tau\oplus\delta$, is defined by:
$$b(x,y)=(\tau\oplus\delta)(x,y)=\max_{(i,j)\in D_\delta,\,(x-i,y-j)\in D_\tau}\left[\tau(x-i,y-j)+\delta(i,j)\right].$$
Combining dilation and erosion yields the open operator, denoted by $c$, and the close operator, denoted by $d$, which can be expressed through the erosion and dilation operators as follows:
$$c=\tau\circ\delta=(\tau\ominus\delta)\oplus\delta,$$
$$d=\tau\bullet\delta=(\tau\oplus\delta)\ominus\delta.$$
The open operator is the dilation of the erosion of the set $\tau$ by the structuring element $\delta$, whereas the close operator acts in reverse. The open operator removes weak connections between objects and small details. When the radius of the structural element $\delta$ is larger than the radius of the SRF, the open operation conveniently acquires the initial background matrix $F_{\omega b}$ of the SRF sub-image. Figure 5 shows an example with a small-radius SRF: let the pixel value of the SRF be 150, and let the radius of the structural element $\delta$ be 3, which is larger than the spot radius in $\tau$. The open operator of mathematical morphology then extracts the background matrix $F_{\omega b}$ without the spot information.
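As a sketch of this background estimation step, `scipy.ndimage.grey_opening` can stand in for the open operator; the 7 × 7 flat structuring element is an assumption chosen to exceed the SRF radius, not the paper's dual-structure element:

```python
# Background estimation by grayscale opening: bright spots smaller than
# the structuring element are removed, leaving the slowly varying background.
import numpy as np
from scipy.ndimage import grey_opening

def estimate_background(sub_image, se_size=7):
    """Grayscale opening with a flat se_size x se_size structuring element."""
    return grey_opening(sub_image, size=(se_size, se_size))

# Example: a single bright point on a near-uniform background survives only
# in the residual sub_image - background.
img = np.full((32, 32), 170.0)
img[16, 16] = 900.0
background = estimate_background(img)
srf = img - background
```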

Fig. 5 Gray values before and after mathematical morphology are shown in (a) and (d), respectively; (b), (c), (e), and (f) are the gray scale, color scale, and 3D plots of the same detailed view of one star point.

This study proposes a dual-structure element in the open operator of mathematical morphology, chosen for the degraded characteristics of the SRF under imaging link conditions. In the erosion stage of the open operation, the structural element $\delta_1$ is set to a line of 9 pixels at 0 degrees, and the value of $\delta_1(x,y)$ is set to 1 for convenient calculation. In the dilation stage of the open operation, the structural element $\delta_2(x,y)$ is set to a pair with offset vector $n=[1,0]$, also for convenient calculation. The employed dual-structure elements are larger than the radius of the SRF, which is determined by the theoretical point spread function model $h_{total}(x,y)$ of the remote sensor. For the SSPIAP, $f$ = 8000 mm, $D$ = 600 mm, and $\lambda$ = 0.5–0.8 µm; therefore, the minimum radius is $r = 1.22(\lambda f/D)$ = 8.13 µm, where $D$ is the entrance pupil diameter, $f$ is the focal length, and $\lambda$ is the wavelength. The minimum size of an SRF is 2 × 2 pixels because the pixel size of the SSPIAP imager is 8.75 µm × 8.75 µm. In performing the open operation, the gradually changing background matrix $F_{\omega b}$ is assessed without being affected by the SRFs.
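The sizing arithmetic quoted above can be reproduced directly:

```python
# Diffraction radius of the SSPIAP versus its pixel pitch (values from the text).
f_mm, D_mm, lam_um = 8000.0, 600.0, 0.5   # focal length, aperture, wavelength
r_um = 1.22 * lam_um * f_mm / D_mm        # = 8.13 um, as quoted
pixels = r_um / 8.75                      # < 1 px radius, hence a ~2x2 px SRF
print(f"r = {r_um:.2f} um, about {pixels:.2f} px")
```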

After obtaining the gradually changing background matrix $F_{\omega b}$, we must remove the pulse point noise. The open and close operators can efficiently remove all pulse points or areas smaller than the structural element $\delta$. The designed operators for suppressing positive and negative impulses can be expressed as follows:

  • Suppressing positive impulses: $PP=(\tau\circ\delta)\bullet\delta$;
  • Suppressing negative impulses: $NP=(\tau\bullet\delta)\circ\delta$;
  • Suppressing positive and negative impulses: $PNP=\frac{1}{2}\left[(\tau\circ\delta)\bullet\delta+(\tau\bullet\delta)\circ\delta\right]$.

Let $Q_\omega$ denote the image filtered by the PNP operation. The objects can then be expressed as follows:

$$g_\omega=Q_\omega-F_{\omega b}.$$
To extract the SRFs, we use a threshold to determine the position centers of these spots. The local background $B(x,y)$, computed as the mean of $o(x,y)$ over a window, can be expressed as follows:
$$B(x,y)=\frac{1}{2(K-1)+1}\times\frac{1}{2(L-1)+1}\times\sum_{i=-(K-1)}^{K-1}\sum_{j=-(L-1)}^{L-1}o(x+i,y+j).$$
We use the difference $T(x,y)$ between $f(x,y)$ and $B(x,y)$ in the SRF extraction process to suppress the background noise. This difference and the subsequent thresholding can be expressed as follows:

$$T(x,y)=f(x,y)-B(x,y),$$
$$g(x,y)=\begin{cases}0, & |g(x,y)|\le T\\ g(x,y), & |g(x,y)|>T.\end{cases}$$
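A sketch of this local-mean background and thresholding of Eqs. (25)–(27), using `scipy.ndimage.uniform_filter` for the window average; the window size and threshold are illustrative values:

```python
# Background-difference thresholding: pixels within T of the local mean
# background are zeroed; the rest are kept as SRF candidates.
import numpy as np
from scipy.ndimage import uniform_filter

def extract_srf_candidates(f, window=9, T=5.0):
    B = uniform_filter(f.astype(float), size=window)  # local mean, Eq. (25)
    d = f - B                                         # difference, Eq. (26)
    return np.where(np.abs(d) > T, d, 0.0)            # threshold, Eq. (27)
```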

4.2 MTF inversion calculation

We use the gradient descent approach on the built regularization energy function model to solve Eq. (16) and obtain the optimal MTF. To find the minimum of Eq. (16), the Euler–Lagrange equations are first derived. Based on the built regularization energy function model, the partial derivatives with respect to $F_\omega$ and $H$ can be expressed as follows:

$$\begin{cases}\dfrac{\partial E}{\partial H}=\displaystyle\sum_\omega F_\omega(HF_\omega-G_\omega)+\lambda_1\nabla\cdot\left(\dfrac{\nabla H}{|\nabla H|}\right)\\[2ex]\dfrac{\partial E}{\partial F_\omega}=\displaystyle\sum_\omega H(HF_\omega-G_\omega)+\lambda_2\sum_{\omega=1}^{\Omega}F_\omega+\lambda_3\sum_{1\le i\ne j\le\Omega}G_i(G_iF_j-G_jF_i),\end{cases}$$
where $\nabla=(\partial_x,\partial_y)$ denotes the gradient operator and $\nabla\cdot(\cdot)$ denotes the divergence operator. Based on the gradient descent approach, the $n$th iteration can be expressed as follows:
$$H^{(n)}=H^{(n-1)}-\alpha\frac{\partial E^{(n-1)}}{\partial H^{(n-1)}},$$
$$F_\omega^{(n)}=F_\omega^{(n-1)}-\beta\frac{\partial E^{(n-1)}}{\partial F_\omega^{(n-1)}},$$
where $\alpha$ and $\beta$ are the iteration steps for the point spread function and the sub-images, respectively. By alternately solving Eqs. (29) and (30), the final optimal stable solution of the optical transfer function $H$ is obtained. The final compensated MTF is the amplitude of $H$.
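A simplified sketch of the alternating updates of Eqs. (28)–(30), treating all arrays as frequency-domain spectra multiplied element-wise. The TV gradient on $H$ is replaced here by a Tikhonov surrogate (`lam1 * H`) to keep the code short, and the step sizes and weights are illustrative, so this is not a faithful reimplementation of the paper's solver:

```python
# Alternating gradient descent on H and F_w, frequency-domain version.
import numpy as np

def invert_mtf(Gs, n_iter=200, alpha=1e-3, beta=1e-3,
               lam1=1e-3, lam2=1e-4, lam3=1e-2):
    H = np.ones_like(Gs[0])                 # start from an all-pass OTF
    Fs = [G.copy() for G in Gs]             # initialize F_w with observations
    for _ in range(n_iter):
        # dE/dH, Eq. (28); lam1 * H stands in for the TV divergence term
        dH = sum(np.conj(F) * (H * F - G) for F, G in zip(Fs, Gs)) + lam1 * H
        H = H - alpha * dH                  # Eq. (29)
        for w in range(len(Fs)):
            F, G = Fs[w], Gs[w]
            # gradient of the cross residual R with respect to F_w
            cross = sum(np.conj(Gs[j]) * (Gs[j] * F - G * Fs[j])
                        for j in range(len(Gs)) if j != w)
            dF = np.conj(H) * (H * F - G) + lam2 * F + lam3 * cross
            Fs[w] = F - beta * dF           # Eq. (30)
    return np.abs(H), Fs                    # the compensated MTF is |H|
```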

5. Experimental results

Simulations and experiments are performed to verify the MTF compensation method using multiple SRFs. The MTF and the SRF energy are analyzed and compared before and after compensation.

5.1 Simulation experiments for verification

We use a self-developed camera as a remote sensor to perform the simulation and analysis. First, we build a theoretical MTF model based on Eq. (1). The aperture diameter is 600 mm, the focal length is 8000 mm, and the F-number of the optical system is 13.3. The wavelength range is 500–800 nm. A time-delay integration charge-coupled device (TDICCD) serves as the image sensor; its pixel size is 8.75 µm × 8.75 µm, the spatial sampling frequency is 114.29 line pairs per millimeter (lp/mm), and the Nyquist frequency is 57.143 lp/mm. The normalized frequency relative to the optical system of the camera is 0.46. The error of the focusing part is 0.01 mm and the defocus caused by the non-coplanar TDICCDs is 0.01 mm, so the total defocus is 0.02 mm. The theoretical MTF of the camera is shown in Fig. 6, and the 2D MTF is shown in Fig. 7. We use the built theoretical MTF to perform the imaging simulation and verify the feasibility of the proposed method.
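For reference, the sketch below computes two standard components of such a theoretical MTF model: the diffraction-limited MTF of a circular aperture and the detector (pixel-aperture) MTF. The formulas are the textbook ones, not taken from the paper, and the mid-band wavelength of 0.65 µm is an assumption:

```python
# Two textbook MTF components evaluated with the camera values quoted above.
import numpy as np

F_NUM, LAM_MM, PIX_MM = 13.3, 0.65e-3, 8.75e-3  # F/#, wavelength, pixel pitch
f_c = 1.0 / (LAM_MM * F_NUM)                    # diffraction cutoff, lp/mm

def mtf_diffraction(f):
    """Diffraction-limited MTF of a circular aperture."""
    x = np.clip(f / f_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def mtf_detector(f):
    """Pixel-aperture MTF; np.sinc(x) is sin(pi x) / (pi x)."""
    return np.abs(np.sinc(f * PIX_MM))

f = np.linspace(0.0, 57.143, 200)               # up to the Nyquist frequency
mtf_total = mtf_diffraction(f) * mtf_detector(f)
```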

Fig. 6 Theoretical MTF in the X direction.

Fig. 7 Theoretical 2D MTF and recovered PSF: (a) and (b) are the 3D plot and 2D color scale, respectively, of the same detailed view of the MTF; (c), (d), and (e) are the gray scale, color scale, and 3D plots, respectively, of the same detailed view of the PSF.

Second, we simulate an image with multiple SRFs to verify the proposed method (see Fig. 8). We use seven SRFs as ideal imaging objects (see Fig. 8(c)). Based on Eq. (1), the frequency spectrum of the ideal image is $F(\xi,\eta)$. To simulate the on-orbit imaging process, the ideal image is degraded by the theoretical on-orbit MTF model of the actual optical camera as in Eq. (1), where the theoretical on-orbit MTF model is $H_{total}(\xi,\eta)$. The degraded image $G(\xi,\eta)$ obtained from this model (Fig. 8(d)) serves as the image acquired by a remote sensor, on which the feasibility of the proposed method is verified. The SRFs are first extracted by the proposed algorithm based on mathematical morphology; in the frequency domain, the extracted SRFs are $G_1, G_2, G_3, G_4, G_5, G_6$, and $G_7$. The extracted SRFs are then employed to compensate the MTF using the proposed method.

Fig. 8 Simulated SRF images: (a) original image with seven SRFs, (b) degraded image by the MTF model, (c) magnified original SRFs, and (d) magnified degraded SRFs.

We use seven SRFs to perform the inversion of the compensated MTF. In the inversion process, we define the mean squared error of MTF as the iterative relative convergence criterion as follows:

$$R_{mse}(H_t)=\frac{\|H_t-H_{t-1}\|}{\|H_t\|},$$
where $\|\cdot\|$ is the L2 norm. The $R_{mse}$ at successive steps (Fig. 9) tends toward stability. The reconstructed image based on the compensated MTF is shown in Fig. 10. Compared with the degraded SRF images in Fig. 8, the compensated MTF is higher than the original one and the compensated images are clearer than the uncompensated ones.
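Equation (31) transcribes directly:

```python
# Relative L2 change between successive MTF iterates, Eq. (31).
import numpy as np

def rel_change(H_t, H_prev):
    return np.linalg.norm(H_t - H_prev) / np.linalg.norm(H_t)

# Typical use: iterate until rel_change(H, H_prev) drops below a tolerance,
# e.g. 1e-4, which corresponds to the flattening seen in Fig. 9.
```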

Fig. 9 Mean squared error of MTF at different steps.

Fig. 10 Reconstructed PSF and SRFs: (a), (b), and (c) are the gray scale, color scale, and 3D plots, respectively, of the same detailed view of the reconstructed PSF; (d) is the reconstructed SRFs.

5.2 Laboratory experiments for validation

We set up an experimental system to verify the proposed method (see Fig. 11). The system includes the SSPIAP prototype, a camera controller, a simulator of the remote sensing targets, an optical theodolite, a high-accuracy turntable, and ground auxiliary testing devices. The simulator is composed of a light illuminator, targets, and a collimator. In our experiment, the imaging conditions of the SSPIAP simulate the on-orbit environment: the simulator provides an infinite-distance target that stands in for the ground targets viewed by an on-orbit SSPIAP. We also use auxiliary equipment, such as a light radiometer, a temperature controller, and vibration isolation mechanisms, to keep the imaging conditions consistent with on-orbit operation. The SSPIAP prototype integrates an optical camera with an attitude sensor. The SSPIAP optical system is a co-axial Schmidt–Cassegrain design; the aperture diameter is 203.2 mm, the focal length is 2032 mm, and the F-number of the optical system is 10. The spectral bands are 500–900 nm. The image sensor is a complementary metal-oxide-semiconductor (CMOS) detector with a spectral range of 400–1000 nm and an image resolution of 1280 × 1024 pixels.

Fig. 11 Experimental system: (a) schematic drawing of the experiment setup; (b) prototype of the SSPIAP optical camera integrated with an attitude sensor; (c) the experiment setup.

The three-axis turntable was leveled using the optical theodolite, and the collimator was leveled likewise. By adjusting the support tooling of the camera and using the benchmark prisms of the camera's optical axis, the camera's visual axis and the collimator were aligned to share a common axis. We used a surface-micromachining (SM) process to fabricate SRF and slant edge targets, which were installed on the focal plane of the collimator. The SRF target is a point-source mask in which the etched apertures transmit light while the remaining areas, covered by metal layers, do not [34]. In the slant edge target, the etched pattern is a slanted edge whose left side transmits light while the right side does not. The collimator produces parallel light that enters the optical system of the camera; light exiting the optical system is incident on the focal-plane detector. The SRFs or the slant edge of the collimator were thus imaged on the CMOS sensor. The camera controller sets the imaging mode. Figure 12 shows the slant edge and SRFs.

Fig. 12 Experimental testing targets: (a) slant edge target, (b) profile of the edge, (c) point-star-like image, and (d) profile of the three points.

We use a standard slant edge target (Fig. 12(a)) to test the proposed method. The standard slant edge image is acquired by our camera in the laboratory, and the MTF before compensation is measured by the ISO 12233 edge method. The extracted MTFs before and after compensation using the various methods are shown in Fig. 13, and Table 1 lists the MTF values at the Nyquist frequency. Comparing the MTF results before and after compensation shows that the proposed method significantly improves the assessed value at the Nyquist frequency.

Fig. 13 MTF before and after compensation; the MTF before compensation is measured by the edge-based method, and the MTF after compensation is inverted by the proposed method.

Table 1. MTF values at the Nyquist frequency

Figure 14 shows the images and energy distributions before and after compensation using the proposed method. From the comparisons, we observe that the image is visually more appealing after the compensation operation. In addition to subjective visual effects, we assess the experimental results quantitatively. Remote sensing image interpretation is performed by human vision or machine perception, so this study adopts several evaluation parameters that characterize image features and are suitable for evaluating remote sensing images in both settings [35–37]: the average gradient (AG), mean value (MV), standard deviation (SD), contrast ratio (CR), entropy, and edge intensity (EI). AG represents image clarity and describes both the improvement in image quality and the rate of change of fine-detail contrast; the larger the AG, the more tonal levels an image has and the clearer it becomes. Image entropy reflects the abundance of information in an image, so the quality of the compensated image can be judged by comparing the change in information content. MV, SD, and CR reflect the luminance and contrast of a remote sensing image; the contrast level of the compensated image is reflected by the changes in dynamic range and SD. Table 2 shows the calculated results: the proposed method increases MV, SD, entropy, AG, CR, and EI, indicating an increased dynamic range of illumination, enhanced contrast, and improved clarity.
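Two of the quoted metrics, sketched under common textbook definitions (the paper cites [35–37] but does not spell out its exact formulas, so these forms are assumptions):

```python
# Average gradient (AG) and image entropy under common definitions.
import numpy as np

def average_gradient(img):
    """Mean root-mean-square of the horizontal and vertical gradients."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def entropy(img, levels=4096):
    """Shannon entropy of the DN histogram, in bits (12-bit range assumed)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```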

Fig. 14 Compensated images: (a), (b), and (c) are the gray scale, color scale, and 3D plots, respectively, of the same detailed view of the captured image before compensation; (d), (e), and (f) are the same views after compensation.

Table 2. Evaluation of the original and restored images

Lastly, we use a standard resolution testing board (SRTB) to perform an imaging experiment. To simulate the on-orbit imaging environment, we placed the SRTB on the focal plane of the collimator, thus providing an infinite-distance target. Furthermore, the experimental turntable used a gas-floating vibration isolation platform to avoid vibration disturbance, the luminance of the target was kept consistent with on-orbit conditions using radiation calibration equipment, and the experiment was performed in a laboratory at constant temperature. We use two methods to obtain the compensated images. First, we apply the MTF extracted by the ISO 12233 edge method to perform image restoration: the MTF is extracted from the remote sensing image using the ISO 12233 method, and the PSF is then restored from it. The restored PSF can be considered the degradation transfer function of the entire remote sensor in the image recording process, since the MTF degradation accumulates over the entire optical imaging link, including the optical system, detector, satellite platform motion, data transmission, and atmospheric effects. The obtained PSF is used with the Wiener filter algorithm to produce the restored image. Second, the proposed method compensates the image directly. Figure 15 shows the compensated images of the two methods. We perform a statistical analysis of the restored images in terms of the evaluation parameters above; Table 3 shows the results. AG, MV, SD, CR, entropy, and EI all reach their highest values with the proposed method. The experiments therefore indicate that the proposed method outperforms the ISO 12233-based restoration and the uncompensated images in all six aspects, demonstrating an evident advantage in MTF compensation and showing that the proposed method recovers substantially clearer and more richly textured images.

Fig. 15 Recovered images using different methods: (a) original image of the camera output, (b) recovered image using the edge method, and (c) compensated image using our method.

Table 3. Evaluation of the original and restored images

6. Conclusion

Filter-based MTFC has allowed significant advances in remote sensing optical cameras. However, MTFC methods with external targets are limited in space and time when a remote sensor is working on-orbit. This study uses multiple natural SRFs to compensate the MTF of remote sensing cameras without spatial or temporal limitations. First, mathematical morphology and correlation filter algorithms extract multiple SRFs from remote sensing images. Second, a regularized total variation energy function model is established and solved with a partial derivative iteration algorithm to compensate the MTF. The compensated MTF can be used to improve the imaging performance of on-board remote sensors, as well as to recover and restore images to improve imaging quality. The proposed method can provide a reference for the on-board MTF compensation of space CCD cameras in the future.

Funding

National Natural Science Foundation of China (NSFC) (Grant No. 61505093, 61505190); National Key Research and Development Plan (Grant No. 2016YFC0103600).

Acknowledgments

The authors would like to express their sincere thanks to Z. Liu and F. Liu for the preparation of experiments.

References and links

1. C. Gaudin-Delrieu, J. Lamard, P. Cheroutre, B. Bailly, P. Dhuicq, and O. Puig, "The High Resolution Optical Instruments for the Pleiades HR Earth Observation Satellites," in Proceedings of the 7th ICSO (International Conference on Space Optics), pp. 14–17 (2008).

2. D. Leger, F. Viallefont, E. Hillairet, and A. Meygret, "In-flight refocusing and MTF assessment of SPOT5 HRG and HRS cameras," Proc. SPIE 4881, 224–231 (2003). [CrossRef]

3. P. Damilano, "Pleiades high resolution satellite: a solution for military and civilian needs in metric-class optical observation," in AIAA/USU Conference on Small Satellites, pp. 1–9 (2001).

4. M. K. Cook, B. A. Peterson, G. Dial, L. Gibson, F. W. Gerlach, K. S. Hutchins, R. Kudola, and H. S. Bowen, “IKONOS technical performance assessment,” Proc. SPIE 4381, 94–108 (2001). [CrossRef]  

5. R. Ryan, B. Baldridge, R. A. Schowengerdt, T. Choi, D. L. Helder, and S. Blonski, “IKONOS spatial resolution and image interpretability characterization,” Remote Sens. Environ. 88(1–2), 37–52 (2003). [CrossRef]  

6. K. Kohm, “Modulation transfer function measurement method and results for the Orbview-3 high resolution imaging satellite,” in Proc. Geo-Imagery Bridging Continents XXth ISPRS Congr., pp. 7–12 (2004).

7. J. C. Storey, “Landsat 7 on-orbit modulation transfer function estimation,” Proc. SPIE 4540, 50–61 (2001). [CrossRef]  

8. D. Helder, J. Choi, and C. Anderson, "On-orbit modulation transfer function (MTF) measurements for IKONOS and QuickBird," in Proceedings of the JACIE 2006 Civil Commercial Imagery Evaluation Workshop, Brookings, SD, USA, pp. 14–16 (2006).

9. F. Viallefont-Robinet and D. Léger, “Improvement of the edge method for on-orbit MTF measurement,” Opt. Express 18(4), 3531–3545 (2010). [CrossRef]   [PubMed]  

10. F. Viallefont-Robinet, “Edge method for on-orbit defocus assessment,” Opt. Express 18(20), 20845–20851 (2010). [CrossRef]   [PubMed]  

11. H. Hwang, Y. Choi, S. Kwak, M. Kim, and W. Park, “MTF assessment of high resolution satellite images using ISO 12233 slanted-edge method,” Proc. SPIE 7109, 710905 (2008). [CrossRef]  

12. K. Masaoka, T. Yamashita, Y. Nishida, and M. Sugawara, “Modified slanted-edge method and multidirectional modulation transfer function estimation,” Opt. Express 22(5), 6040–6046 (2014). [CrossRef]   [PubMed]  

13. X. Chen, N. George, G. Agranov, C. Liu, and B. Gravelle, “Sensor modulation transfer function measurement using band-limited laser speckle,” Opt. Express 16(24), 20047–20059 (2008). [CrossRef]   [PubMed]  

14. A. M. Pozo, A. Ferrero, M. Rubiño, J. Campos, and A. Pons, “Improvements for determining the modulation transfer function of charge-coupled devices by the speckle method,” Opt. Express 14(13), 5928–5936 (2006). [CrossRef]   [PubMed]  

15. D. Leger, J. Duffaut, and F. Robinet, "MTF measurement using spotlight," in Proceedings of IEEE International Geoscience and Remote Sensing Symposium, pp. 2010–2012 (1994).

16. T. Choi, X. Xiong, and Z. Wang, “On-Orbit lunar modulation transfer function measurements for the moderate resolution imaging spectroradiometer,” IEEE Trans. Geosci. Remote Sens. 52(1), 270–277 (2014). [CrossRef]  

17. Z. Wang and X. Xiong, “VIIRS on-orbit spatial characterization using the Moon,” IEEE Geosci. Remote Sens. Lett. 11(6), 1116–1120 (2014). [CrossRef]  

18. E. Oh, S.-W. Kim, S.-I. Cho, J.-H. Ryu, and Y.-H. Ahn, “Initial on-orbit modulation transfer function performance analysis for Geostationary Ocean Color Imager,” J. Astron. Space Sci. 29(2), 199–208 (2012). [CrossRef]  

19. X. Mu, S. Xu, G. Li, and J. Hu, “Remote sensing image restoration with modulation transfer function compensation technology in-orbit,” Proc. SPIE 8768, 87681K (2012).

20. E. Oh and J. K. Choi, “GOCI image enhancement using an MTF compensation technique for coastal water applications,” Opt. Express 22(22), 26908–26918 (2014). [CrossRef]   [PubMed]  

21. X. Li, X. Gu, Q. Fu, T. Yu, H. Gao, J. Li, and L. Liu, "Removing atmospheric MTF and establishing an MTF compensation filter for the HJ-1A CCD camera," Int. J. Remote Sens. 34(4), 1413–1427 (2013). [CrossRef]

22. Y. Yitzhaky, I. Dror, and N. S. Kopeika, “Restoration of atmospherically blurred images according to weather-predicted atmospheric modulation transfer functions,” Opt. Eng. 36(11), 3064–3072 (1997). [CrossRef]  

23. X. J. Guo, X. L. Liu, C. Ni, B. Liu, S. M. Huang, and M. Gu, “Improving image quality of x-ray in-line phase contrast imaging using an image restoration method,” Opt. Express 19(23), 23460–23468 (2011). [CrossRef]   [PubMed]  

24. J. Li, F. Xing, D. Chu, and Z. Liu, “High-Accuracy Self-Calibration for Smart, Optical Orbiting Payloads Integrated with Attitude and Position Determination,” Sensors (Basel) 16(8), 1176–1195 (2016). [CrossRef]   [PubMed]  

25. J. Li, F. Xing, T. Sun, Z. Liu, and Z. You, “Space high-accuracy intelligence payload system with integrated attitude and position determination,” Instrumentation 2(1), 3–15 (2015).

26. S. E. Reichenbach, D. E. Koehler, and D. W. Strelow, “Restoration and reconstruction of AVHRR images,” IEEE Trans. Geosci. Remote Sens. 33(4), 997–1007 (1995). [CrossRef]  

27. C. Latry, V. Despringre, and C. Valorge, “Automatic MTF measurement through a least square method,” Proc. SPIE 5570, 233–244 (2004). [CrossRef]  

28. W. H. Steel, “The defocused image of sinusoidal gratings,” J. Mod. Opt. 3, 65–74 (1956).

29. H.-S. Wong, Y. L. Yao, and E. S. Schlig, “TDI charge-coupled devices: design and application,” IBM J. Res. Develop. 36(1), 83–106 (1992). [CrossRef]  

30. L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60(1-4), 259–268 (1992). [CrossRef]  

31. T. F. Chan and C.-K. Wong, “Total variation blind deconvolution,” IEEE Trans. Image Process. 7(3), 370–375 (1998). [CrossRef]   [PubMed]  

32. Y.-L. You and M. Kaveh, "Anisotropic blind image restoration," in International Conference on Image Processing, pp. 461–464 (1996).

33. Z. Deng, Z. Yin, and Y. Xiong, “High probability impulse noise-removing algorithm based on mathematical morphology,” IEEE Signal Process. Lett. 14(1), 31–34 (2007). [CrossRef]  

34. J. Li, Y. Zhang, S. Liu, and Z. Wang, “Self-Calibration Method Based on Surface Micromaching of Light Transceiver Focal Plane for Optical Camera,” Remote Sens. 8(12), 893–913 (2016). [CrossRef]  

35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

36. D. Y. Tsai, Y. Lee, and E. Matsuyama, “Information entropy measure for evaluation of image quality,” J. Digit. Imaging 21(3), 338–347 (2008). [CrossRef]   [PubMed]  

37. T. Yuan, X. Zheng, X. Hu, W. Zhou, and W. Wang, “A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm,” PLoS One 9(1), e86528 (2014). [CrossRef]   [PubMed]  
