Abstract

The HaiYang-1C (HY-1C) ultraviolet imager (UVI) consists of five independent cameras with a designed total image swath of approximately 3000 km. To obtain a complete, seamless image from the five sub-images, a feasible geometric stitching method for the HY-1C UVI based on a distorted virtual camera is proposed. First, we perform the absolute geometric calibration of camera 3 and the relative geometric calibration of cameras 1, 2, 4, and 5. Then, a distorted virtual camera is assigned. Finally, the five sub-images are stitched together with the distorted virtual camera. Three HY-1C UVI images were tested. The experimental results showed that the georeferencing accuracy of the stitched images was better than 1 pixel. Compared with the conventional stitching method with an undistorted virtual camera, the ground sample distance differences of the five cameras obtained by the proposed method were reduced from 23%, 37%, 53%, 37%, and 25% to 6%, 6%, 1%, 7%, and 8%, respectively.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The HaiYang-1C (HY-1C) satellite, launched on 7 September 2018, is a Chinese ocean remote sensing satellite. It is equipped with a Chinese ocean color and temperature scanner (COCTS), a coastal zone imager (CZI), an ultraviolet imager (UVI), a satellite calibration spectrometer (SCS), and an automatic identification system (AIS). The primary role of the HY-1C satellite is to collect global quantitative data on ocean color, sea surface temperature, and coastal zones. These data are widely applied in oceanic and coastal research, such as the exploration of ocean biology and coastal-zone resources and the monitoring and mitigation of ocean pollution.

The HY-1C UVI consists of five independent ultraviolet cameras, each with a different viewing angle, as shown in Fig. 1. The field angle and the focal length of each camera are approximately 23.8° and 36.9 mm, respectively. Each camera employs two complementary metal oxide semiconductor (CMOS) arrays to collect band 1 and band 2 images. Each CMOS array has 640×8 detectors, of which 610×8 form the valid imaging array. For both CMOS arrays, the upper four lines of detectors collect images in a high dynamic range, and the lower four lines collect images in a low dynamic range. The UVI collects images in a push-broom mode and has two transmission modes: full transmission mode and merged transmission mode. At each imaging instant, four line images are collected for each dynamic range in each band. In the full transmission mode, the four line images (640×4 pixels) are all transmitted to the ground and are merged into a single line image (640×1 pixels) during ground processing. The designed ground sample distance (GSD) at the sub-satellite point of camera 3 is approximately 550 m in this mode. In the merged transmission mode, each image block (2×2 pixels) in the four line images is merged into one pixel. Two line images (320×2 pixels) are then transmitted to the ground and further merged into a single line image (320×1 pixels) during ground processing. The designed GSD at the sub-satellite point of camera 3 is approximately 1100 m in this mode. In both transmission modes, the five UVI cameras form a total image swath of approximately 3000 km.

Fig. 1. The sketch map of the HY-1C UVI imaging and the virtual camera assignment.

In order to obtain a complete, seamless UVI image with a large image swath of approximately 3000 km, the sub-images collected by the five cameras must be geometrically stitched together. Generally, geometric stitching methods for multiple satellite cameras or linear arrays fall into two categories: the image-space-oriented stitching method and the object-space-oriented stitching method [1,2]. The former method is performed in three steps: matching tie points, establishing the stitching model, and registering the images [3–7]. First, a suitable image matching method, such as normalized cross correlation, is employed to match tie points in the overlapping area between adjacent sub-images. Then, a mathematical model, such as a shift model, affine transformation model, or piecewise polynomial model, is employed as the stitching model, and the stitching model parameters are solved with the matched tie points. Finally, all the sub-images are registered together based on the stitching model. This method is independent of the satellite camera's geometric imaging model and is not theoretically rigorous. Moreover, the stitching accuracy is limited by the matched tie points: when the overlapping area between adjacent sub-images lacks texture or is covered by forest or cloud, it is difficult to obtain sufficient and robust tie points. Therefore, this method is not suitable for highly precise image processing and application.

The object-space-oriented stitching method establishes the geometric relationship between the stitched image and the original sub-images according to the satellite camera's geometric imaging model, and then recollects the stitched image of the same ground area covered by the original cameras or linear arrays [8,9]. Specifically, an ideal undistorted virtual camera is first assigned on a normalized focal plane according to the interior parameters of each camera. Then, a geometric imaging model, such as the physical sensor model or the rational polynomial model, is established for both the virtual camera and each original camera or linear array. Finally, according to the geometric imaging model of the virtual camera, each image point in the stitched image is projected onto the ground. According to the geometric imaging models of the original cameras or linear arrays, all the obtained ground points are subsequently projected onto the original sub-images. The resampled gray value of each projected point in the original sub-images is then assigned to the corresponding image point in the stitched image. In this method, the geometric relationship between the stitched image and the original sub-images is theoretically more rigorous. Hence, this method can often achieve sub-pixel stitching accuracy and has been successfully used in the geometric stitching of many Chinese high-resolution satellites, such as the ZiYuan-3, ZiYuan-1 02C, GaoFen-1, and GaoFen-2 satellites [2,10–12].

Previous studies concerning the geometric stitching of multiple satellite cameras or linear arrays primarily focused on the stitching accuracy [8–11]. However, the GSD difference between the stitched image and the original sub-images is another very important indicator that deserves ample attention. When the GSD of the stitched image is larger than that of the original sub-images, a large amount of object information in the original sub-images is lost; when it is smaller, the stitched image is blurred. In either case, the GSD difference seriously affects the quality of the satellite images. In previous object-space-oriented methods, the virtual camera was often assigned to be undistorted, which means that all imaging detectors of the virtual camera have the identical physical size [8–11]. For high-resolution satellite cameras, the largest imaging-detector look angle is only several degrees; for example, the largest imaging-detector look angle of the ZiYuan-3 satellite cameras is approximately 3° [11], so the GSD difference between the stitched image and the original sub-images may be negligible. However, for the five HY-1C UVI cameras, the largest designed imaging-detector look angle reaches approximately 56°. The imaging detectors dp and dq in Fig. 1, although of identical physical size, have different look angles, different imaging distances from the detector to the ground, and different influences caused by the earth curvature. The GSDp of the imaging detector dp thereby differs from the GSDq of the detector dq, which means that the GSD varies between sub-images and within different regions of each sub-image. In fact, the GSDs of the original UVI sub-images range from 530 m to 3400 m. Without taking the GSD difference into account, the largest GSD difference obtained by the conventional object-space-oriented stitching method reaches 55% in the across-track direction, which is clearly not negligible. Note that geometric stitching does not introduce a GSD difference in the along-track direction; hence, the GSD difference and the normalized detector size difference in the remainder of this study refer only to the across-track direction.

In order to reduce the GSD difference, we propose a geometric stitching method for the HY-1C UVI based on a distorted virtual camera. In this method, the absolute geometric calibration of camera 3 and the relative geometric calibration of cameras 1, 2, 4, and 5 are first performed. Then, a distorted virtual camera is assigned based on the calibrated interior parameters of each camera. Finally, the original sub-images collected by the five UVI cameras are stitched with the distorted virtual camera. Compared with the conventional undistorted virtual camera, the proposed method noticeably reduces the GSD difference between the stitched image and the original sub-images. Therefore, the blur of the stitched image is reduced and the object information in the original sub-images is retained.

The remainder of this paper is organized as follows. Section 2 details the proposed geometric stitching method, including the in-orbit geometric calibration, the distorted virtual camera assignment, and the geometric stitching. Section 3 describes the use of three HY-1C UVI images to analyze the feasibility and effectiveness of the proposed method. Section 4 presents the conclusions.

2. Methodology

2.1 In-orbit geometric calibration

The in-orbit geometric calibration aims to obtain the precise imaging parameters of satellite cameras. With the help of the geometric calibration, the interior and exterior orientation accuracy of satellite images can be significantly improved [13–19]. In the geometric stitching of multiple satellite cameras or linear arrays, the geometric calibration is also an indispensable step. Here, we first introduce the geometric calibration of the HY-1C UVI cameras.

At each imaging instant, the HY-1C UVI actually collects only a line image. The UVI imaging procedure can thus be likened to a camera with a linear CMOS array on the focal plane collecting images in a push-broom mode. Therefore, the geometric imaging model of the UVI cameras can be expressed as follows [13,20]:

$${\left[ {\begin{array}{c} X\\ Y\\ Z \end{array}} \right]_{\textrm{WGS}\;84}} = {\left[ {\begin{array}{c} {{X_S}}\\ {{Y_S}}\\ {{Z_S}} \end{array}} \right]_{\textrm{WGS}\;\textrm{84}}} + \lambda {\textbf R}_{\textrm{J2000}}^{\textrm{WGS}\;\textrm{84}}{\textbf R}_{\textrm{Body}}^{\textrm{J2000}}{\textbf R}_{\textrm{Camera}}^{\textrm{Body}}\;\left[ {\begin{array}{c} {\tan ({\psi_x})}\\ {\tan ({\psi_y})}\\ 1 \end{array}} \right]$$
where $(X,Y,Z)_{\textrm{WGS}\;\textrm{84}}^\textrm{T}$ and $({X_S},{Y_S},{Z_S})_{\textrm{WGS}\;\textrm{84}}^\textrm{T}$ are the coordinates of the ground point and the satellite position, respectively, in the WGS 84 geocentric coordinate system; ${\textbf R}_{\textrm{J2000}}^{\textrm{WGS84}}$ is the J2000-to-WGS 84 rotation matrix; ${\textbf R}_{\textrm{Body}}^{\textrm{J2000}}$ is the rotation matrix from the satellite-body coordinate system to the J2000 celestial coordinate system; ${\textbf R}_{\textrm{Camera}}^{\textrm{Body}}$ is the rotation matrix from the camera coordinate system to the satellite-body coordinate system, and the three rotation angles constructing ${\textbf R}_{\textrm{Camera}}^{\textrm{Body}}$ are the exterior parameters; $({\tan ({{\psi_x}} )\textrm{, tan}({{\psi_y}} )} )$ are the normalized coordinates of the imaging detector in the camera coordinate system, and a look-angle model is often used to model the normalized detector coordinates as follows [13,14,21,22]:
$$\left\{ {\begin{array}{c} {\tan ({\psi_x}) = {a_0} + {a_1}n + {a_2}{n^2} + {a_3}{n^3}}\\ {\tan ({\psi_y}) = {b_0} + {b_1}n + {b_2}{n^2} + {b_3}{n^3}\;} \end{array}} \right.$$
where n is the detector number; and $({a_0},\;{a_1},\;{a_2},\;{a_3},\;{b_0},\;{b_1},\;{b_2},\;{b_3})$ are the interior parameters.
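To make the model concrete, the following minimal Python sketch evaluates the look-angle model of Eq. (2) for a given detector number and forms the ray direction vector used in Eq. (1). The interior parameter values are placeholders chosen for illustration only, not the calibrated HY-1C parameters.

```python
import numpy as np

def look_angle_ray(n, a, b):
    """Evaluate the look-angle model of Eq. (2) at detector number n and
    return the camera-frame ray direction [tan(psi_x), tan(psi_y), 1]^T
    that enters the geometric imaging model of Eq. (1)."""
    powers = np.array([1.0, n, n ** 2, n ** 3])
    return np.array([np.dot(a, powers), np.dot(b, powers), 1.0])

# Placeholder interior parameters (a0..a3, b0..b3); illustrative only.
a = np.array([1.0e-4, 0.0, 0.0, 0.0])
b = np.array([-0.21, 6.9e-4, 1.0e-9, 1.0e-12])
print(look_angle_ray(305, a, b))  # ray of a mid-array detector
```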

The geometric imaging model expressed by Eq. (1) and Eq. (2) can theoretically describe the rigorous imaging procedure of a satellite camera with a linear array. It has been successfully used in the in-orbit geometric calibration of many Chinese high-resolution satellites, such as the ZiYuan-3, ZiYuan-1 02C, GaoFen-1, GaoFen-2, and YaoGan-26 [10,13,23,24].

In order to geometrically stitch the UVI sub-images, the relative geometric relationship between the five UVI cameras must be accurately known. Since the five UVI cameras were installed symmetrically around camera 3 on the satellite, camera 3 was taken as the reference camera, and cameras 1, 2, 4, and 5 were taken as the nonreference cameras in this study. Accordingly, we performed the geometric calibration of the five cameras in two steps: absolute geometric calibration and relative geometric calibration. During absolute geometric calibration, both the exterior and interior parameters of the reference camera in Eq. (1) were calibrated. During relative geometric calibration, the exterior parameters of the reference camera were applied to the nonreference cameras, and only the interior parameters were then calibrated.

2.2 Distorted virtual camera assignment

The virtual camera assignment is the key step in the geometric stitching of the HY-1C UVI. The detector size difference between the virtual camera and the original cameras directly determines the GSD difference between the stitched image and the original sub-images. In this section, we first theoretically analyze the detector size difference obtained by the conventional geometric stitching method with an undistorted virtual camera. Then, we introduce the distorted virtual camera assignment of the proposed method.

1) Theoretical analysis of the conventional geometric stitching method: After the geometric calibration, we can obtain the normalized coordinates of each imaging detector in the camera coordinate system of camera 3. As shown in Fig. 1, the coordinate ${y_{{t_1}}}$ of point t1 is the normalized coordinate of the first detector of camera 1. The coordinate ${y_{{t_2}}}$ of point t2 is the normalized coordinate of the last detector of camera 5. In the conventional object-space-oriented stitching method, an undistorted virtual camera is often assigned as follows [9,11]:

$$\left\{ {\begin{array}{c} {\tan ({{\tilde{\psi }}_x}) = {{\tilde{a}}_0}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;}\\ {\tan ({{\tilde{\psi }}_y}) = {y_{{t_1}}} + \frac{{{y_{{t_2}}} - {y_{{t_1}}}}}{N}\tilde{n}\;} \end{array}} \right.$$
where $({\tan ({{{\tilde{\psi }}_x}} )\textrm{, tan}({{{\tilde{\psi }}_y}} )} )$ are the normalized detector coordinates of the virtual camera; ${\tilde{a}_0}$ is the mean value of the interior parameters $({a_0})_i,\;i = 1,2,\ldots,5$, of the five cameras; N is the total number of imaging detectors of the virtual camera; and $\tilde{n}$ is the detector number of the virtual camera.
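A minimal sketch of the undistorted assignment of Eq. (3) is given below. The boundary coordinates y_t1 and y_t2 and the virtual detector count N are hypothetical values chosen only to approximate the outermost look angles quoted in this paper.

```python
import numpy as np

def undistorted_virtual_detectors(y_t1, y_t2, N, a0_mean):
    """Undistorted virtual camera of Eq. (3): the normalized across-track
    coordinates are spaced uniformly between y_t1 and y_t2."""
    n = np.arange(N + 1)
    tan_psi_y = y_t1 + (y_t2 - y_t1) / N * n
    tan_psi_x = np.full_like(tan_psi_y, a0_mean)
    return tan_psi_x, tan_psi_y

# Hypothetical boundaries approximating look angles of about +-56.2 deg.
y_t1, y_t2 = -np.tan(np.deg2rad(56.2)), np.tan(np.deg2rad(56.2))
_, tan_psi_y = undistorted_virtual_detectors(y_t1, y_t2, N=3050, a0_mean=0.0)
print(np.unique(np.round(np.diff(tan_psi_y), 12)).size)  # 1: all detector sizes identical
```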

In the conventional object-space-oriented stitching method, Eq. (3) demonstrates that the normalized detector sizes of the undistorted virtual camera are identical. However, owing to the look-angle difference, the imaging detectors dp and dq in Fig. 1, although of identical physical size, have different normalized detector sizes p1p2 and q1q2. In fact, the maximum and minimum normalized detector sizes of the original five UVI cameras differ considerably because the largest look angle reaches approximately 56°. In the across-track direction, the theoretical normalized detector size s of the five cameras can be calculated as follows:

$$s = \tan (\alpha + \beta + \theta ) - \tan (\alpha + \beta ) = \frac{{(1 + {{\tan }^2}(\alpha + \beta ))\tan \theta }}{{1 - \tan (\alpha + \beta )\tan \theta }}$$
where α is the angle between the optical axis of camera i (i = 1, 2, 3, 4, 5) and that of camera 3; β is the angle between the optical axis of the camera and the imaging ray of the detector within this camera; and θ is the field angle of a single detector.

In Eq. (4), β and θ can be obtained from the physical size of the imaging detectors and the focal length of the camera. The overlapping angle between adjacent cameras can be obtained from the number of overlapping detectors, and α can then be obtained from the field angle of the cameras and the overlapping angle.

According to Eq. (4), we can conclude that the detector size s of cameras 1, 2, 4, and 5 increases as the look angle α+β increases. Hence, the leftmost detector dp of camera 1 and the rightmost detector of camera 5 in Fig. 1 have the maximum normalized detector size, which can be calculated as follows:

$${s_{\max }} = \tan (44.533552^\circ + 11.638026^\circ + 0.037223^\circ ) - \tan (44.533552^\circ + 11.638026^\circ ) = 0.002098$$

The middle detector dq of camera 3 in Fig. 1 has the minimum normalized detector size, which can be calculated as follows:

$${s_{\min }} = \tan (0^\circ{+} 0^\circ{+} 0.038818^\circ ) - \tan (0^\circ{+} 0^\circ ) = 0.000678$$
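These two values can be checked numerically with a short Python implementation of Eq. (4):

```python
import numpy as np

def detector_size(alpha_deg, beta_deg, theta_deg):
    """Normalized across-track detector size s of Eq. (4)."""
    ab = np.deg2rad(alpha_deg + beta_deg)
    th = np.deg2rad(theta_deg)
    return (1 + np.tan(ab) ** 2) * np.tan(th) / (1 - np.tan(ab) * np.tan(th))

s_max = detector_size(44.533552, 11.638026, 0.037223)  # Eq. (5)
s_min = detector_size(0.0, 0.0, 0.038818)              # Eq. (6)
print(round(s_max, 6), round(s_min, 6))                # 0.002098 0.000678
```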

From Eqs. (5) and (6), it is evident that the maximum normalized detector size is approximately three times the minimum one. Hence, if the conventional object-space-oriented stitching method is used, the normalized detector sizes of the undistorted virtual camera will differ considerably from those of the original five cameras, as shown in Fig. 2. Figure 2 depicts the difference ds between the normalized detector size sv of the undistorted virtual camera and the normalized detector size so of the original camera. The difference ds is calculated as follows:

$$ds = \frac{{{s_v} - {s_o}}}{{{s_o}}} \times 100{\%}$$

The conventional geometric stitching method takes an undistorted camera as the virtual camera and ignores the normalized detector size differences of the original cameras. As the results in Fig. 2 show, the detector sizes of the undistorted virtual camera therefore differ significantly from those of the original cameras, with the largest difference for each camera reaching approximately 55%. This sizeable difference inevitably results in a large GSD difference between the stitched image and the original sub-images.

2) Camera assignment of the proposed geometric stitching method: In order to reduce the GSD difference, the normalized detector size differences of the original cameras should be retained as much as possible. In this study, we propose to replace the conventional undistorted virtual camera with a distorted virtual camera for geometrically stitching the five UVI sub-images. The normalized detector coordinates of the original five cameras are all expressed by the look-angle model in Eq. (2); therefore, to reduce the GSD difference to the utmost extent, the best option is to continue using the look-angle model to express the normalized detector coordinates of the virtual camera. The proposed assignment procedure for the distorted virtual camera is as follows (a numerical sketch is given after the list):

  • (1) The normalized detector coordinates of the distorted virtual camera in the along-track direction is assigned as follows:
    $$\tan ({\tilde{\psi }_x}) = {\tilde{a}_0}$$
  • (2) The look-angle model in Eq. (2) is employed to express the normalized detector coordinates of the distorted virtual camera in the across-track direction as follows:
    $$\tan ({\tilde{\psi }_y}) = {\tilde{b}_0} + {\tilde{b}_1}\tilde{n} + {\tilde{b}_2}{\tilde{n}^2} + {\tilde{b}_3}{\tilde{n}^3}\;$$
    where $({\tilde{b}_0},\;{\tilde{b}_1},\;{\tilde{b}_2},\;{\tilde{b}_3})$ are calculated by using Eq. (9) to fit the normalized detector coordinates of the original five cameras.
  • (3) Owing to fitting errors, the normalized coordinate ${\tilde{b}_0}$ of the first detector of the distorted virtual camera may not coincide with the normalized coordinate ${t_1}$ of the first detector of camera 1. In order for the stitched image to cover the same ground area as the original sub-images, Eq. (9) is modified as follows:
    $$\tan ({\tilde{\psi }_y}) = {t_1} + {\tilde{b}_1}\tilde{n} + {\tilde{b}_2}{\tilde{n}^2} + {\tilde{b}_3}{\tilde{n}^3}\;$$
  • (4) Again, owing to fitting errors, the normalized coordinate ${t_1} + {\tilde{b}_1}N + {\tilde{b}_2}{N^2} + {\tilde{b}_3}{N^3}\;$ of the last detector of the distorted virtual camera may not be coincident with the normalized coordinate ${t_2}$ of the last detector of camera 5. Accordingly, Eq. (10) is further modified as follows:
    $$\tan ({\tilde{\psi }_y}) = {t_1} + \frac{{{t_2} - {t_1} - {{\tilde{b}}_2}{N^2} - {{\tilde{b}}_3}{N^3}}}{N}\tilde{n} + {\tilde{b}_2}{\tilde{n}^2} + {\tilde{b}_3}{\tilde{n}^3}\;$$
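The following Python sketch illustrates steps (2) to (4): a cubic is fitted to the combined normalized detector coordinates per Eq. (9), and the endpoint corrections of Eqs. (10) and (11) are applied so that the first and last virtual detectors land exactly on t1 and t2. The input coordinate distribution and the virtual detector count are hypothetical stand-ins, not the calibrated HY-1C values.

```python
import numpy as np

def distorted_virtual_model(n_all, tan_psi_y_all, N):
    """Fit the cubic look-angle model of Eq. (9) to the combined
    normalized detector coordinates, then apply the endpoint corrections
    of Eqs. (10) and (11) so detectors 0 and N land exactly on t1 and t2."""
    b3, b2, b1, b0 = np.polyfit(n_all, tan_psi_y_all, 3)  # Eq. (9)
    t1, t2 = tan_psi_y_all[0], tan_psi_y_all[-1]
    b0 = t1                                               # Eq. (10)
    b1 = (t2 - t1 - b2 * N ** 2 - b3 * N ** 3) / N        # Eq. (11)
    return b0, b1, b2, b3

# Hypothetical stand-in for the calibrated coordinates of the five cameras:
# a tangent-like distribution spanning the outermost look angles.
N = 3050
n_all = np.arange(N + 1)
tan_psi_y_all = np.tan(np.deg2rad(np.linspace(-56.2, 56.2, N + 1)))

b0, b1, b2, b3 = distorted_virtual_model(n_all, tan_psi_y_all, N)
fitted = b0 + b1 * n_all + b2 * n_all ** 2 + b3 * n_all ** 3
print(fitted[0] - tan_psi_y_all[0], fitted[N] - tan_psi_y_all[N])  # both ~0 by construction
```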

Fig. 2. The normalized detector size difference obtained by the undistorted virtual camera.

In the conventional stitching method, Eq. (3) demonstrates that the normalized detector sizes of the virtual camera are identical, indicating that the virtual camera is ideal and undistorted. In contrast, Eq. (11) shows that the normalized detector sizes of the virtual camera are different from each other in the across-track direction. The virtual camera is thereby distorted in this study.

According to Eq. (11), the theoretical normalized detector coordinates of the distorted virtual camera, and hence its normalized detector sizes, can be obtained. The normalized detector size difference between the distorted virtual camera and the original cameras is shown in Fig. 3.

Fig. 3. The normalized detector size difference obtained by the distorted virtual camera.

Comparing Fig. 3 with Fig. 2, it is evident that using the distorted virtual camera to geometrically stitch the UVI sub-images could noticeably reduce the normalized detector size difference between the virtual camera and the original cameras. The largest difference of cameras 1 and 5 was reduced from approximately 55% to 19%, and the largest difference of cameras 2, 3, and 4 was reduced from approximately 55% to 8%. The reason is that the proposed stitching method took into full account the normalized detector size difference of the original cameras and retained them as much as possible. Therefore, the GSD difference between the stitched image and the original sub-images will be noticeably reduced in theory.

2.3 Geometric stitching

The major procedures of the proposed geometric stitching method for the HY-1C UVI with a distorted virtual camera are as follows (a simplified code sketch is given after the list):

  • (1) The absolute geometric calibration of camera 3 and the relative geometric calibration of cameras 1, 2, 4, and 5 are performed, as described in Section 2.1.
  • (2) A distorted virtual camera is assigned based on the normalized detector coordinates of the five original cameras, as introduced in Section 2.2.
  • (3) The start imaging time, the end imaging time, and the imaging time interval of the original cameras are assigned to the distorted virtual camera.
  • (4) According to the start imaging time and the imaging time interval, the imaging time of each image line in the stitched image is obtained.
  • (5) According to the satellite positions and attitudes obtained by the GPS, gyros, and star trackers, the satellite orbit model and the satellite attitude model are established, respectively.
  • (6) According to the satellite orbit model, the satellite attitude model, the imaging time, and the interior and exterior calibration parameters, the geometric imaging model of each original camera is established according to Eqs. (1) and (2). Similarly, the geometric imaging model of the distorted virtual camera is established according to Eqs. (1), (8), and (11).
  • (7) According to the geometric imaging model of the distorted virtual camera, each image point (x, y) in the stitched image is projected onto the digital elevation model (DEM), and the corresponding ground point (X, Y, Z) is obtained.
  • (8) According to the geometric imaging model of the original camera, the ground point (X, Y, Z) is projected onto the original sub-image, and the corresponding image point (x’, y’) is obtained.
  • (9) Gray resampling is employed to obtain the gray value of the image point (x’, y’) in the original sub-image, and the obtained gray value is finally assigned to the image point (x, y) in the stitched image.
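The following simplified Python sketch illustrates the indirect resampling of steps (7) to (9) with a toy flat-terrain, single-camera geometry. The ToyCamera class, its coefficients, and the assumed orbit height are illustrative stand-ins for the full models of Eqs. (1), (2), (8), and (11), not the authors' implementation; in particular, camera selection among the five sub-images and the DEM intersection are omitted.

```python
import numpy as np

class ToyCamera:
    """Toy push-broom camera over flat terrain: the across-track ground
    coordinate of detector n is Y = H * tan(psi_y(n)). Illustrative only."""

    def __init__(self, coeffs, H, image=None):
        self.coeffs, self.H, self.image = coeffs, H, image

    def tan_psi_y(self, n):
        b0, b1, b2, b3 = self.coeffs
        return b0 + b1 * n + b2 * n ** 2 + b3 * n ** 3

    def image_to_ground(self, x, n):
        # step (7): along-track coordinate from the line number,
        # across-track coordinate from the look angle
        return float(x), self.H * self.tan_psi_y(n)

    def ground_to_image(self, X, Y):
        # step (8): invert tan_psi_y numerically (monotone in n)
        n = np.arange(self.image.shape[1])
        return X, float(np.interp(Y / self.H, self.tan_psi_y(n), n))

    def resample_gray(self, x, y):
        # step (9): linear resampling along the detector line
        y0 = int(np.clip(np.floor(y), 0, self.image.shape[1] - 2))
        w = y - y0
        line = self.image[int(x) % self.image.shape[0]]
        return (1.0 - w) * line[y0] + w * line[y0 + 1]

H = 782e3  # assumed orbit height [m]
orig = ToyCamera([-0.2, 6.9e-4, 0.0, 0.0], H, np.random.rand(4, 610))
virt = ToyCamera([-0.2, 6.8e-4, 2.0e-8, 0.0], H)

stitched = np.zeros((4, 600))
for x in range(stitched.shape[0]):
    for n in range(stitched.shape[1]):
        X, Y = virt.image_to_ground(x, n)            # step (7)
        xs, ys = orig.ground_to_image(X, Y)          # step (8)
        stitched[x, n] = orig.resample_gray(xs, ys)  # step (9)
```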

3. Results and discussion

3.1 Experimental datasets

In this study, three HY-1C UVI images were tested. The general characteristics of the three images are listed in Table 1. The UVI cameras have two bands, each of which has two dynamic ranges. The GSDs of both dynamic ranges in each band are the same, so only band 2 with the low dynamic range was used to analyze the GSD difference. Additionally, the UVI cameras can collect images in either full transmission mode or merged transmission mode. Although the GSD of the images collected in the former mode is half of that in the latter mode, the GSD differences of the two modes are equivalent. Hence, only images collected in the full transmission mode were tested. The five sub-images of each image are shown in Fig. 4.

Fig. 4. Sub-images of (a) image 1, (b) image 2, and (c) image 3.

Table 1. General characteristics of the HY-1C UVI images

3.2 Calibration accuracy analysis

Considering that the smallest GSD of the five UVI cameras is only approximately 550 m, the publicly available 100 m resolution Landsat digital orthophoto map (DOM) and 1000 m resolution shuttle radar topography mission (SRTM) DEM were used as the control data in the geometric calibration. Because the total image swath of the five UVI cameras is approximately 3000 km, it is very difficult for all five sub-images of a single image to be fully covered by land in the across-track direction. In this study, sub-images 2, 3, 4, and 5 of image 1, sub-images 1, 2, 3, and 4 of image 2, and sub-images 2, 3, and 4 of image 3 are fully covered by land. Therefore, sub-images 2, 3, 4, and 5 of image 1 were used to calibrate cameras 2, 3, 4, and 5, and sub-image 1 of image 2 was used to calibrate camera 1.

The HY-1C satellite positions and attitudes are determined by the Global Positioning System (GPS), gyros, and star trackers. The determination accuracies reach approximately 10 cm and 0.003°, respectively. The exterior orientation error caused by the determination error of the satellite positions and attitudes is smaller than 0.1 pixel and is therefore negligible in the geometric calibration. Hence, using two different images (images 1 and 2) to calibrate the five cameras has no effect on the calibration results.
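A back-of-the-envelope check of this error budget is sketched below; the orbit height of roughly 782 km is an assumption, as it is not quoted in this paper.

```python
import numpy as np

H = 782e3                    # assumed orbit height [m]
gsd = 550.0                  # designed nadir GSD of camera 3 [m]
pos_err = 0.10               # position determination accuracy [m]
att_err = np.deg2rad(0.003)  # attitude determination accuracy [rad]

ground_err = pos_err + H * att_err   # worst-case ground displacement
print(ground_err, ground_err / gsd)  # ~41 m, i.e. ~0.07 pixel < 0.1 pixel
```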

In the geometric calibration, numerous ground control points (GCPs) in sub-images 2, 3, 4, and 5 of image 1 and sub-image 1 of image 2 were automatically extracted and matched from the reference DOM and DEM. The distribution of the GCPs is shown in Fig. 4. Then, the absolute geometric calibration of camera 3 and the relative geometric calibration of cameras 1, 2, 4, and 5 were performed. According to the calibrated exterior and interior parameters, the maximum error and the root mean square error (RMSE) of the residual errors of the GCPs after the geometric calibration are listed in Table 2. For ease of comparison, the maximum error and the RMSE of the GCPs before the geometric calibration, which were obtained via laboratory-calibrated parameters, are also listed in Table 2. The distribution of the residual errors before and after the geometric calibration is shown in Figs. 5 and 6.

Fig. 5. The distribution of the residual errors of the GCPs in sub-images (a, b) 1, (c, d) 2, (e, f) 3, (g, h) 4, and (i, j) 5 before the geometric calibration.

Fig. 6. The distribution of the residual errors of the GCPs in sub-images (a, b) 1, (c, d) 2, (e, f) 3, (g, h) 4, and (i, j) 5 after the geometric calibration.

Table 2. The georeferencing accuracy of the five UVI sub-images before and after the geometric calibration

Based on the results in Table 2 and Figs. 5 and 6, we could draw the following conclusions:

  • (1) Before the geometric calibration, the georeferencing accuracy of the five sub-images ranged from approximately 6 to 10 pixels. The residual errors of the GCPs showed an evident distorted pattern in both the along-track and across-track directions. The most likely reason was that the laboratory-calibrated parameters of the five UVI cameras changed significantly during the satellite launch. Therefore, it is imperative to perform an in-orbit geometric calibration for the five UVI cameras.
  • (2) The precise exterior and interior parameters of each camera were obtainable after the geometric calibration. Thus, the georeferencing accuracy of the five sub-images was improved to better than 1 pixel. Moreover, the residual errors of the GCPs no longer showed a distorted pattern in either the along-track or across-track directions. These results demonstrated that it is feasible to use the geometric imaging model in Eq. (1) and the look-angle model in Eq. (2) to describe the imaging geometry of the UVI cameras. With the help of the geometric calibration, the distortions of each camera could be effectively eliminated and the georeferencing accuracy could be noticeably improved.

3.3 Evaluation of geometric stitching

After the geometric calibration of the five UVI cameras, a distorted virtual camera was assigned according to the method described in Section 2.2. Next, the five sub-images of image 1 were geometrically stitched together according to the procedure described in Section 2.3. The process was repeated for images 2 and 3. Three partial enlarged examples from image 3 are shown in Fig. 7. The results in Fig. 7 show that the UVI sub-images were stitched together seamlessly in visual terms.

Fig. 7. The partial enlarged images from image 3 (a) before and (b) after geometric stitching.

In order to quantitatively evaluate the feasibility of the proposed geometric stitching method, the Landsat DOM and the SRTM DEM were also used as reference data. Numerous check points in each stitched image were automatically extracted and matched from the reference DOM and DEM. Direct georeferencing of each stitched image was performed using the exterior and interior parameters of the assigned distorted virtual camera. The maximum error and the RMSE of the residual errors of the check points are listed in Table 3.

Table 3. The georeferencing accuracy of the stitched UVI images

The distorted virtual camera was assigned based on the calibrated interior parameters of the five UVI cameras; thus, the interior parameters of the assigned distorted virtual camera were also theoretically precise. This notion was validated by the results in Table 3, which show that the georeferencing accuracy of the three stitched images was better than 1 pixel, consistent with that of the original sub-images listed in Table 2. These results demonstrate that it is feasible to use a distorted virtual camera to perform the geometric stitching of the UVI, as no georeferencing accuracy is lost when the sub-images are stitched together.

3.4 GSD difference analysis

Because it is very difficult to visually analyze the GSD difference between the stitched image and the original sub-images, the top left and top right points of an image pixel p were projected onto the ground based on the exterior and interior parameters, and the GSD of pixel p was determined as the distance between the two projected ground points. In this study, the GSDs of all the image pixels in the middle line of an image g were designated as the GSDs of image g. In order to eliminate the influence of terrain relief on the GSD calculation, the ground heights of all the image pixels were set to 0.
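Under a flat-terrain approximation, this GSD measure reduces to projecting the detector boundaries through the look angles, as in the sketch below. The orbit height and the boundary distribution are assumed illustrative values, and earth curvature, which further enlarges the GSD of the outermost detectors, is ignored.

```python
import numpy as np

def gsd_across_track(tan_psi_y_boundaries, H):
    """Flat-terrain sketch of the Section 3.4 GSD measure: project the left
    and right boundaries of each pixel to the ground (height 0) and take
    the distance between the two footprints."""
    return np.diff(H * np.asarray(tan_psi_y_boundaries))

H = 782e3  # assumed orbit height [m]
boundaries = np.tan(np.deg2rad(np.linspace(-56.2, 56.2, 3051)))  # illustrative
gsd = gsd_across_track(boundaries, H)
print(gsd.min(), gsd.max())  # ~503 m at nadir, ~1.6 km at the edges (toy distribution)
```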

In order to comparatively evaluate the performance of the proposed geometric stitching method in reducing the GSD difference, we designed three experiments as follows:

  • 1) Experiment E1: No geometric stitching method was used, and the GSDs of the five original sub-images were taken as a reference and calculated according to Eqs. (1) and (2).
  • 2) Experiment E2: The conventional geometric stitching method with an undistorted virtual camera was used to stitch the original sub-images, and the GSDs of the stitched image were calculated according to Eqs. (1) and (3).
  • 3) Experiment E3: The proposed geometric stitching method with a distorted virtual camera was used to stitch the original sub-images, and the GSDs of the stitched image were calculated according to Eqs. (1), (8) and (11).

After the GSDs of the stitched image were calculated, the stitched image was logically divided into five sub-images based on the number of overlapping detectors between adjacent cameras; for ease of comparison, the divided sub-images of the stitched image share the same detector numbers as the original sub-images. The GSDs of the five sub-images of image 1 are used as an example and shown in Fig. 8. The GSD differences between the divided sub-images and the original sub-images are shown in Fig. 9. The maximum difference and the root mean square difference (RMSD) of the GSD differences are listed in Table 4. Here, the GSD difference dg is calculated as follows:

$$dg = \frac{{{g_v} - {g_o}}}{{{g_o}}} \times 100{\%}$$
where go is the GSD of the original sub-image and gv is the GSD of the divided sub-image.

Fig. 8. The GSDs of sub-images (a) 1, (b) 2, (c) 3, (d) 4, and (e) 5 from image 1.

Fig. 9. The GSD differences of sub-images (a) 1, (b) 2, (c) 3, (d) 4, and (e) 5 of image 1.

Table 4. The GSD differences between the divided sub-images and the original sub-images

The RMSD of the GSD differences is calculated as follows:

$$RMSD = \sqrt {\frac{{\sum\limits_{i = 0}^n {d{g_i} \cdot d{g_i}} }}{{n + 1}}} $$
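Equations (12) and (13) translate directly into code; the GSD values below are illustrative only, not taken from Table 4.

```python
import numpy as np

def gsd_difference(g_divided, g_original):
    """Per-pixel GSD difference dg of Eq. (12), in percent."""
    return (g_divided - g_original) / g_original * 100.0

def rmsd(dg):
    """RMSD of Eq. (13); the sum runs over the n + 1 pixels i = 0..n."""
    dg = np.asarray(dg)
    return np.sqrt(np.sum(dg * dg) / dg.size)

g_o = np.array([530.0, 600.0, 700.0])  # illustrative original GSDs [m]
g_v = np.array([560.0, 620.0, 690.0])  # illustrative divided GSDs [m]
dg = gsd_difference(g_v, g_o)
print(dg, rmsd(dg))
```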

Based on Figs. 8 and 9 and Table 4, we could draw the following conclusions:

  • (1) In experiment E1, the GSDs of the five original sub-images differed considerably owing to the large difference between the imaging-detector look angles. The smallest and largest GSDs of the original sub-images were approximately 530 m and 3400 m, respectively.
  • (2) In experiment E2, the conventional geometric stitching method with an undistorted virtual camera was used to stitch the original sub-images. Because this method ignores the normalized detector size differences of the original cameras, the detector sizes of the undistorted virtual camera differed considerably from those of the original cameras, as shown in Fig. 2. As a result, the smallest and largest GSDs of the divided sub-images were approximately 820 m and 1700 m, which differed significantly from those of the original sub-images. For cameras 1 and 5, the majority of the GSDs of the divided sub-images in experiment E2 were smaller than those of the original sub-images in experiment E1, meaning that one pixel at the left edge of original sub-image 1 and at the right edge of original sub-image 5 was stitched into two pixels in the divided sub-images; the divided sub-images were thus blurred. For cameras 2, 3, and 4, the GSDs of the divided sub-images in experiment E2 were larger than those of the original sub-images in experiment E1, meaning that three pixels at the right edge of original sub-image 2, across the full range of original sub-image 3, and at the left edge of original sub-image 4 were stitched into two pixels in the divided sub-images; a large amount of object information in the original sub-images was thus lost. Whether the divided sub-images were blurred or object information was lost, the GSD difference would seriously affect the quality of the stitched images.
  • (3) In experiment E3, the proposed geometric stitching method with a distorted virtual camera was used to stitch the original sub-images. In the proposed method, the normalized detector size differences of the original cameras were taken into full account and retained as much as possible, and the normalized detector size differences between the distorted virtual camera and the original cameras were noticeably reduced, as shown in Fig. 3. Consequently, the GSD differences between the divided and original sub-images were also reduced: the smallest and largest GSDs of the divided sub-images were approximately 520 m and 2900 m, respectively. Comparing the results achieved by the proposed and conventional methods, for cameras 1 and 5 the GSD differences were reduced from 23% and 25% in experiment E2 to 6% and 8% in experiment E3, respectively, so the blur of the divided sub-images was reduced to a negligible level. For cameras 2, 3, and 4, the GSD differences were reduced from 37%, 53%, and 37% in experiment E2 to 6%, 1%, and 7% in experiment E3, respectively, so the object information in the original sub-images was retained. We therefore conclude that the proposed method is feasible and effective.
  • (4) The GSD differences of the five sub-images in Fig. 9 were consistent with the normalized detector size differences of the five cameras in Figs. 2 and 3. In fact, when the images are collected in a vertical push-broom mode and the earth curvature is not considered, the normalized detector size s and the GSD g satisfy the following equation:
    $$g = s \cdot H$$
    where H is the orbit height of the satellite.

Equation (14) demonstrates that the normalized detector size difference is the fundamental cause of GSD differences. Hence, using an undistorted virtual camera to geometrically stitch the sub-images will result in large GSD differences. In order to reduce the GSD difference, a distorted virtual camera is recommended to geometrically stitch the UVI sub-images.
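As a quick consistency check of Eq. (14) at nadir, the minimum normalized detector size of Eq. (6) multiplied by an assumed orbit height of roughly 782 km (not quoted in this paper) reproduces the smallest GSD reported in Section 3.4:

```python
s_min = 0.000678  # minimum normalized detector size, Eq. (6)
H = 782e3         # assumed orbit height [m]
print(s_min * H)  # ~530 m, consistent with the smallest observed GSD
```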

4. Conclusion

Because the HY-1C UVI cameras have large imaging-detector look angles, the stitched images obtained by the conventional geometric stitching method with an undistorted virtual camera have large GSD differences from the original sub-images. As a result, the image pixels at the left and right edges of the stitched images are blurry, and a lot of object information in the original sub-images is lost. In this study, a feasible and effective geometric stitching method for the HY-1C UVI is proposed. In the proposed method, a distorted virtual camera is assigned to replace the conventional undistorted virtual camera. The normalized detector size differences of the original cameras are taken into full account and retained as much as possible. The GSD difference between the stitched image and the original sub-images can be significantly reduced.

The proposed geometric stitching method was tested on three HY-1C UVI images. For the conventional geometric stitching method with an undistorted virtual camera, the normalized detector sizes of the undistorted virtual camera differed significantly from those of the original cameras, and the resulting largest GSD difference between the stitched image and the original sub-images reached 55%. For the proposed method, a distorted virtual camera was assigned, and the normalized detector sizes of the distorted virtual camera were almost the same as those of the original cameras. Compared with the conventional method, the GSD difference achieved by the proposed method was thereby noticeably reduced. Moreover, the georeferencing accuracy of the stitched images obtained by the proposed method was consistent with that of the original sub-images; no georeferencing accuracy was lost in the geometric stitching. As such, the experimental results demonstrated the feasibility and effectiveness of the proposed method.

Funding

National Natural Science Foundation of China (61801331, 61901307, 91738302).

Acknowledgments

The authors would like to thank the anonymous reviewers and members of the editorial team for their comments and contributions, and the National Satellite Ocean Application Service for providing the test datasets.

Disclosures

The authors declare no conflicts of interest.

References

1. F. Hu, “Research on inner FOV stitching theories and algorithms for sub-images of three non-collinear TDI CCD chips,” Ph.D. dissertation (Wuhan University, 2010).

2. X. Tang, F. Hu, M. Wang, J. Pan, S. Jin, and G. Lu, “Inner FoV stitching of spaceborne TDI CCD images based on sensor geometry and projection plane in object space,” Remote Sens. 6(7), 6386–6406 (2014).

3. S. Ait-Aoudia, R. Mahiou, H. Djebli, and E. H. Guerrout, “Satellite and aerial image mosaicing - a comparative insight,” 16th International Conference on Information Visualisation, Montpellier, France, 652–657 (2012).

4. L. Hu, T. Sun, T. Zhang, and H. You, “Application of DIROEF algorithm for noncollinear multiple CCD array stitching of the Chinese mapping satellite 1-02,” IEEE Trans. Geosci. Remote Sens. 14(4), 519–523 (2017).

5. S. Li, T. Liu, and H. Wang, “Image Mosaic for TDICCD push-broom camera image based on image matching,” Remote Sens. Technol. Appl. 24(3), 374–378 (2009).

6. X. Long, X. Wang, and H. Zhong, “Analysis of image quality and processing method of a space-borne focal plane view splicing TDI CCD camera,” Sci. China Inf. Sci. 41, 19–31 (2011).

7. Y. Wang, G. Hu, H. Long, and T. Zhang, “CCD image seamless mosaic on characteristic and dislocation fitting,” J. Remote Sens. 16(z1), 98–101 (2012).

8. J. Pan, F. Hu, M. Wang, S. Jin, and G. Li, “An inner FOV stitching method for non-collinear TDI CCD images,” Acta Geod. Cartogr. Sin. 43(11), 1165–1173 (2014).

9. J. Pan, F. Hu, M. Wang, and S. Jin, “Inner FOV stitching of ZY-1 02C HR camera based on virtual CCD line,” Geomatics Inf. Sci. Wuhan Univ. 40(4), 436–443 (2015).

10. Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “A new image mosaicking approach for the multiple camera system of the optical remote sensing satellite GaoFen1,” Remote Sens. Lett. 8(11), 1042–1051 (2017).

11. Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “Image mosaicking approach for a double-camera system in the GaoFen2 optical remote sensing satellite based on the big virtual camera,” Sensors 17(6), 1441 (2017).

12. M. Wang, Y. Zhu, S. Jin, J. Pan, and Q. Zhu, “Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data,” ISPRS J. Photogramm. Remote Sens. 119, 108–123 (2016).

13. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014).

14. J. Cao, X. Yuan, and J. Gong, “In-orbit geometric calibration and validation of ZY-3 three-line cameras based on CCD-detector look angles,” Photogramm. Rec. 30(150), 211–226 (2015).

15. D. Mulawa, “On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35(B1), 1–6 (2004).

16. P. V. Radhadevi and S. S. Solanki, “In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model,” Photogramm. Rec. 23(121), 69–89 (2008).

17. S. Leprince, P. Musé, and J. P. Avouac, “In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation,” IEEE Trans. Geosci. Remote Sens. 46(9), 2675–2683 (2008).

18. J. Takaku and T. Tadono, “PRISM on-orbit geometric calibration and DSM performance,” IEEE Trans. Geosci. Remote Sens. 47(12), 4060–4073 (2009).

19. S. Kocaman and A. Gruen, “Orientation and self-calibration of ALOS PRISM imagery,” Photogramm. Rec. 23(123), 323–340 (2008).

20. B. Yang, M. Wang, W. Xu, D. Li, J. Gong, and Y. Pi, “Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images,” ISPRS J. Photogramm. Remote Sens. 134, 1–14 (2017).

21. R. Gachet, “SPOT5 in-flight commission: inner orientation of HRG and HRS instruments,” Int. Arch. Photogramm. Remote Sens. 35(B1), 535–539 (2004).

22. S. Liu, C. S. Fraser, C. Zhang, M. Ravanbakhsh, and X. Tong, “Georeferencing performance of THEOS satellite imagery,” Photogramm. Rec. 26(134), 250–262 (2011).

23. Y. Cheng, M. Wang, S. Jin, L. He, and T. Tian, “New on-orbit geometric interior parameters self-calibration approach based on three-view stereoscopic images from high-resolution multi-TDI-CCD optical satellites,” Opt. Express 26(6), 7475–7493 (2018).

24. M. Wang, C. Fan, J. Pan, S. Jin, and X. Chang, “Image jitter detection and compensation using a high-frequency angular displacement method for Yaogan-26 remote sensing satellite,” ISPRS J. Photogramm. Remote Sens. 130, 32–43 (2017).

References

  • View by:

  1. F. Hu, “Research on inner FOV stitching theories and algorithms for sub-images of three non-collinear TDI CCD chips,” Ph.D. Dissertation, (Wuhan University, 2010).
  2. X. Tang, F. Hu, M. Wang, J. Pan, S. Jin, and G. Lu, “Inner FoV stitching of spaceborne TDI CCD images based on sensor geometry and projection plane in object space,” Remote Sens. 6(7), 6386–6406 (2014).
    [Crossref]
  3. S. Ait-Aoudia, R. Mahiou, H. Djebli, and E. H. Guerrout, “Satellite and aerial image mosaicing - a comparative insight,” 16th International Conference on Information Visualisation, Montpellier, France, 652–657 (2012).
  4. L. Hu, T. Sun, T. Zhang, and H. You, “Application of DIROEF algorithm for noncollinear multiple CCD array stitching of the Chinese mapping satellite 1-02,” IEEE Trans. Geosci. Remote Sens. 14(4), 519–523 (2017).
    [Crossref]
  5. S. Li, T. Liu, and H. Wang, “Image Mosaic for TDICCD push-broom camera image based on image matching,” Remote Sens. Technol. Appl. 24(3), 374–378 (2009).
    [Crossref]
  6. X. Long, X. Wang, and H. Zhong, “Analysis of image quality and processing method of a space-borne focal plane view splicing TDI CCD camera,” Sci. China Inf. Sci. 41, 19–31 (2011).
    [Crossref]
  7. Y. Wang, G. Hu, H. Long, and T. Zhang, “CCD image seamless mosaic on characteristic and dislocation fitting,” J. Remote Sens. 16(z1), 98–101 (2012).
    [Crossref]
  8. J. Pan, F. Hu, M. Wang, S. Jin, and G. Li, “An inner FOV stitching method for non-collinear TDI CCD images,” Acta Geod. Cartogr. Sin. 43(11), 1165–1173 (2014).
    [Crossref]
  9. J. Pan, F. Hu, M. Wang, and S. Jin, “Inner FOV stitching of ZY-1 02C HR camera based on virtual CCD line,” Geomatics Inf. Sci. Wuhan Univ. 40(4), 436–443 (2015).
    [Crossref]
  10. Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “A new image mosaicking approach for the multiple camera system of the optical remote sensing satellite GaoFen1,” Remote Sens. Lett. 8(11), 1042–1051 (2017).
    [Crossref]
  11. Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “Image mosaicking approach for a double-camera system in the GaoFen2 optical remote sensing satellite based on the big virtual camera,” Sensors 17(6), 1441 (2017).
    [Crossref]
  12. M. Wang, Y. Zhu, S. Jin, J. Pan, and Q. Zhu, “Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data,” ISPRS J. Photogramm. Remote Sens. 119, 108–123 (2016).
    [Crossref]
  13. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014).
    [Crossref]
  14. J. Cao, X. Yuan, and J. Gong, “In-orbit geometric calibration and validation of ZY-3 three-line cameras based on CCD-detector look angles,” Photogramm. Rec. 30(150), 211–226 (2015).
    [Crossref]
  15. D. Mulawa, “On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35(B1), 1–6 (2004).
  16. P. V. Radhadevi and S. S. Solanki, “In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model,” Photogramm. Rec. 23(121), 69–89 (2008).
    [Crossref]
  17. S. Leprince, P. Musé, and J. P. Avouac, “In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation,” IEEE Trans. Geosci. Remote Sens. 46(9), 2675–2683 (2008).
    [Crossref]
  18. J. Takaku and T. Tadono, “PRISM on-orbit geometric calibration and DSM performance,” IEEE Trans. Geosci. Remote Sens. 47(12), 4060–4073 (2009).
    [Crossref]
  19. S. Kocaman and A. Gruen, “Orientation and self-calibration of ALOS PRISM imagery,” Photogramm. Rec. 23(123), 323–340 (2008).
    [Crossref]
  20. B. Yang, M. Wang, W. Xu, D. Li, J. Gong, and Y. Pi, “Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images,” ISPRS J. Photogramm. Remote Sens. 134, 1–14 (2017).
    [Crossref]
  21. R. Gachet, “SPOT5 in-flight commission: inner orientation of HRG and HRS instruments,” Int. Arch. Photogramm. Remote Sens. 35(B1), 535–539 (2004).
  22. S. Liu, C. S. Fraser, C. Zhang, M. Ravanbakhsh, and X. Tong, “Georeferencing performance of THEOS satellite imagery,” Photogramm. Rec. 26(134), 250–262 (2011).
    [Crossref]
  23. Y. Cheng, M. Wang, S. Jin, L. He, and T. Tian, “New on-orbit geometric interior parameters self-calibration approach based on three-view stereoscopic images from high-resolution multi-TDI-CCD optical satellites,” Opt. Express 26(6), 7475–7493 (2018).
    [Crossref]
  24. M. Wang, C. Fan, J. Pan, S. Jin, and X. Chang, “Image jitter detection and compensation using a high-frequency angular displacement method for Yaogan-26 remote sensing satellite,” ISPRS J. Photogramm. Remote Sens. 130, 32–43 (2017).
    [Crossref]

2018 (1)

2017 (5)

M. Wang, C. Fan, J. Pan, S. Jin, and X. Chang, “Image jitter detection and compensation using a high-frequency angular displacement method for Yaogan-26 remote sensing satellite,” ISPRS J. Photogramm. Remote Sens. 130, 32–43 (2017).
[Crossref]

B. Yang, M. Wang, W. Xu, D. Li, J. Gong, and Y. Pi, “Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images,” ISPRS J. Photogramm. Remote Sens. 134, 1–14 (2017).
[Crossref]

L. Hu, T. Sun, T. Zhang, and H. You, “Application of DIROEF algorithm for noncollinear multiple CCD array stitching of the Chinese mapping satellite 1-02,” IEEE Trans. Geosci. Remote Sens. 14(4), 519–523 (2017).
[Crossref]

Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “A new image mosaicking approach for the multiple camera system of the optical remote sensing satellite GaoFen1,” Remote Sens. Lett. 8(11), 1042–1051 (2017).
[Crossref]

Y. Cheng, S. Jin, M. Wang, Y. Zhu, and Z. Dong, “Image mosaicking approach for a double-camera system in the GaoFen2 optical remote sensing satellite based on the big virtual camera,” Sensors 17(6), 1441 (2017).
[Crossref]

2016 (1)

M. Wang, Y. Zhu, S. Jin, J. Pan, and Q. Zhu, “Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data,” ISPRS J. Photogramm. Remote Sens. 119, 108–123 (2016).
[Crossref]

2015 (2)

J. Cao, X. Yuan, and J. Gong, “In-orbit geometric calibration and validation of ZY-3 three-line cameras based on CCD-detector look angles,” Photogramm. Rec. 30(150), 211–226 (2015).
[Crossref]

J. Pan, F. Hu, M. Wang, and S. Jin, “Inner FOV stitching of ZY-1 02C HR camera based on virtual CCD line,” Geomatics Inf. Sci. Wuhan Univ. 40(4), 436–443 (2015).
[Crossref]

2014 (3)

J. Pan, F. Hu, M. Wang, S. Jin, and G. Li, “An inner FOV stitching method for non-collinear TDI CCD images,” Acta Geod. Cartogr. Sin. 43(11), 1165–1173 (2014).
[Crossref]

M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014).
[Crossref]

X. Tang, F. Hu, M. Wang, J. Pan, S. Jin, and G. Lu, “Inner FoV stitching of spaceborne TDI CCD images based on sensor geometry and projection plane in object space,” Remote Sens. 6(7), 6386–6406 (2014).
[Crossref]

2012 (1)

Y. Wang, G. Hu, H. Long, and T. Zhang, “CCD image seamless mosaic on characteristic and dislocation fitting,” J. Remote Sens. 16(z1), 98–101 (2012).
[Crossref]

2011 (2)

X. Long, X. Wang, and H. Zhong, “Analysis of image quality and processing method of a space-borne focal plane view splicing TDI CCD camera,” Sci. China Inf. Sci. 41, 19–31 (2011).
[Crossref]

S. Liu, C. S. Fraser, C. Zhang, M. Ravanbakhsh, and X. Tong, “Georeferencing performance of THEOS satellite imagery,” Photogramm. Rec. 26(134), 250–262 (2011).
[Crossref]

2009 (2)

S. Li, T. Liu, and H. Wang, “Image Mosaic for TDICCD push-broom camera image based on image matching,” Remote Sens. Technol. Appl. 24(3), 374–378 (2009).
[Crossref]

J. Takaku and T. Tadono, “PRISM on-orbit geometric calibration and DSM performance,” IEEE Trans. Geosci. Remote Sens. 47(12), 4060–4073 (2009).
[Crossref]

2008 (3)

S. Kocaman and A. Gruen, “Orientation and self-calibration of ALOS PRISM imagery,” Photogramm. Rec. 23(123), 323–340 (2008).
[Crossref]

P. V. Radhadevi and S. S. Solanki, “In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model,” Photogramm. Rec. 23(121), 69–89 (2008).
[Crossref]

S. Leprince, P. Musé, and J. P. Avouac, “In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation,” IEEE Trans. Geosci. Remote Sens. 46(9), 2675–2683 (2008).
[Crossref]

2004 (2)

D. Mulawa, “On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35(B1), 1–6 (2004).

R. Gachet, “SPOT5 in-flight commission: inner orientation of HRG and HRS instruments,” Int. Arch. Photogramm. Remote Sens. 35(B1), 535–539 (2004).

Figures (9)

Fig. 1. The sketch map of the HY-1C UVI imaging and the virtual camera assignment.
Fig. 2. The normalized detector size difference obtained by the undistorted virtual camera.
Fig. 3. The normalized detector size difference obtained by the distorted virtual camera.
Fig. 4. Sub-images of (a) image 1, (b) image 2, and (c) image 3.
Fig. 5. The distribution of the residual errors of the GCPs in sub-images (a, b) 1, (c, d) 2, (e, f) 3, (g, h) 4, and (i, j) 5 before the geometric calibration.
Fig. 6. The distribution of the residual errors of the GCPs in sub-images (a, b) 1, (c, d) 2, (e, f) 3, (g, h) 4, and (i, j) 5 after the geometric calibration.
Fig. 7. The partial enlarged images from image 3 (a) before and (b) after geometric stitching.
Fig. 8. The GSDs of sub-images (a) 1, (b) 2, (c) 3, (d) 4, and (e) 5 from image 1.
Fig. 9. The GSD differences of sub-images (a) 1, (b) 2, (c) 3, (d) 4, and (e) 5 of image 1.

Tables (4)

Table 1. General characteristics of the HY-1C UVI images
Table 2. The georeferencing accuracy of the five UVI sub-images before and after the geometric calibration
Table 3. The georeferencing accuracy of the stitched UVI images
Table 4. The GSD differences between the divided sub-images and the original sub-images

Equations (14)

(1) $\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{\mathrm{WGS84}} = \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}_{\mathrm{WGS84}} + \lambda\, R_{\mathrm{J2000}}^{\mathrm{WGS84}}\, R_{\mathrm{Body}}^{\mathrm{J2000}}\, R_{\mathrm{Camera}}^{\mathrm{Body}} \begin{bmatrix} \tan(\psi_x) \\ \tan(\psi_y) \\ 1 \end{bmatrix}$

(2) $\begin{cases} \tan(\psi_x) = a_0 + a_1 n + a_2 n^2 + a_3 n^3 \\ \tan(\psi_y) = b_0 + b_1 n + b_2 n^2 + b_3 n^3 \end{cases}$

(3) $\begin{cases} \tan(\tilde{\psi}_x) = \tilde{a}_0 \\ \tan(\tilde{\psi}_y) = y_{t_1} + \dfrac{y_{t_2} - y_{t_1}}{N}\,\tilde{n} \end{cases}$

(4) $s = \tan(\alpha + \beta + \theta) - \tan(\alpha + \beta) = \dfrac{\left(1 + \tan^2(\alpha + \beta)\right)\tan\theta}{1 - \tan(\alpha + \beta)\tan\theta}$

(5) $s_{\max} = \tan(44.533552^{\circ} + 11.638026^{\circ} + 0.037223^{\circ}) - \tan(44.533552^{\circ} + 11.638026^{\circ}) = 0.002098$

(6) $s_{\min} = \tan(0^{\circ} + 0^{\circ} + 0.038818^{\circ}) - \tan(0^{\circ} + 0^{\circ}) = 0.000678$

(7) $d_s = \dfrac{s_v - s_o}{s_o} \times 100\%$

(8) $\tan(\tilde{\psi}_x) = \tilde{a}_0$

(9) $\tan(\tilde{\psi}_y) = \tilde{b}_0 + \tilde{b}_1 \tilde{n} + \tilde{b}_2 \tilde{n}^2 + \tilde{b}_3 \tilde{n}^3$

(10) $\tan(\tilde{\psi}_y) = t_1 + \tilde{b}_1 \tilde{n} + \tilde{b}_2 \tilde{n}^2 + \tilde{b}_3 \tilde{n}^3$

(11) $\tan(\tilde{\psi}_y) = t_1 + \dfrac{t_2 - t_1 - \tilde{b}_2 N^2 - \tilde{b}_3 N^3}{N}\,\tilde{n} + \tilde{b}_2 \tilde{n}^2 + \tilde{b}_3 \tilde{n}^3$

(12) $d_g = \dfrac{g_v - g_o}{g_o} \times 100\%$

(13) $\mathrm{RMSD} = \sqrt{\dfrac{\sum_{i=0}^{n} \left( dg_i - \overline{dg} \right)^2}{n + 1}}$

(14) $g = s \cdot H$
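
As a quick numerical cross-check of Eqs. (4)–(7) and (14), the following minimal Python sketch reproduces the tangent differences quoted in Eqs. (5) and (6) and converts the subastral value into a ground distance via Eq. (14). This is an illustration only: the function and variable names are ours, and the orbit altitude H is an assumed nominal value rather than a figure given in this paper.

import math

def tangent_difference(alpha_deg, beta_deg, theta_deg):
    # Eq. (4): s = tan(alpha + beta + theta) - tan(alpha + beta)
    return (math.tan(math.radians(alpha_deg + beta_deg + theta_deg))
            - math.tan(math.radians(alpha_deg + beta_deg)))

def normalized_difference(value, reference):
    # Same form as Eqs. (7) and (12): percentage difference against a reference
    return (value - reference) / reference * 100.0

# Eq. (5): angles quoted in the paper for the outermost detector
s_max = tangent_difference(44.533552, 11.638026, 0.037223)  # ~0.002098
# Eq. (6): the subastral (nadir) detector
s_min = tangent_difference(0.0, 0.0, 0.038818)              # ~0.000678

H = 782e3  # assumed nominal orbit altitude in metres (not stated in this excerpt)
g_min = s_min * H  # Eq. (14): g = s * H

print(f"s_max = {s_max:.6f}, s_min = {s_min:.6f}")
print(f"subastral detector ground size ~ {g_min:.0f} m")
print(f"outermost vs. subastral size difference: {normalized_difference(s_max, s_min):.0f}%")

Running this reproduces s_max ≈ 0.002098 and s_min ≈ 0.000678, matching Eqs. (5) and (6), and yields a subastral ground size on the order of 530 m under the assumed altitude. The detector ground projection at the swath edge is thus roughly three times larger than at nadir, which is the across-swath size variation the virtual camera design has to absorb.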
