
Distortion measurement of optical system using phase diffractive beam splitter

Open Access

Abstract

Traditional methods for distortion measurement of large-aperture optical systems are time-consuming and inefficient because each field of view must be individually measured using a high-precision rotating platform. In this study, a new method that uses a phase diffractive beam splitter (DBS) is proposed to measure the distortion of optical systems, with great potential for application to large-aperture optical systems. The proposed method is highly accurate and extremely economical. A high-precision calibration method is proposed to measure the angular distribution of the DBS. An uncertainty analysis of the factors involved in the measurement process is performed to demonstrate the low level of errors in the measurement methodology. Results show that high-precision measurements of the focal length and distortion were achieved with high efficiency. The proposed method can be used for large-aperture wide-angle optical systems such as those used in aerial mapping applications.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In aerial mapping applications, it is important to obtain geographical photographs with minimal distortion [1–4]. Errors introduced during the design, manufacture, and assembly of the imaging system produce different degrees of non-linear distortion in the aerial optical system, which degrades the geometric position accuracy that can be obtained from these images [5–7]. Thus, it is imperative to measure the distortion of the optical system and correct the images based on the measured distortion. This is especially critical for large-aperture wide-angle optical systems, such as those used for space-borne or air-borne aerial mapping.

The classical method for large-aperture distortion measurement consists of performing a precision length measurement based on the star point method [8]. This technique utilizes a large and high-precision rotating platform to shift parallel light beams for different fields of view (FOVs), after which the distortion is evaluated by calculating the differences in the positions between the theoretical image points and the star points in an image plane.

Diffraction gratings have been proposed for optical distortion measurement, wherein one-dimensional (1D) diffraction gratings are used to produce light beams with different FOVs [9]. Since the energy of the higher diffraction orders is too weak to be useful, the distortion measurement is performed using only the first two orders. However, the limitation of the intrinsic diffraction angle requires the grating to be rotated several times; otherwise, a set of gratings with different periods needs to be used to estimate the distortion at different FOVs. This method is inefficient and time-consuming, as the positions of the image spots at different FOVs must be separately recorded and analyzed.

To achieve distortion measurement over the entire FOV, an amplitude diffractive optical element (DOE) has been proposed that splits an incoming collimated laser beam into a number of two-dimensional (2D) beams of different diffraction orders [10]. Because the DOEs used are binary amplitude gratings, the zero-order beam carries 25% of the power, which is high and therefore limits the applicability of this approach for calibration purposes.

To address these drawbacks, we propose a high-precision measurement of distortion using a phase diffractive beam splitter (DBS). Compared to an amplitude DOE, the phase DBS can generate multiple beams with a much more uniform intensity distribution. This improves the signal-to-noise ratio (SNR) of the obtained images, resulting in high-accuracy centroid extraction of the spots on the charge-coupled device (CCD) array of the optical system.

The angular accuracy of the FOV is one of the key factors that affects the accuracy of distortion measurement. In existing methods, the angular accuracy is entirely dependent on the positioning accuracy of the high-precision rotating platform. In our proposed method, the angular accuracy of the FOV depends on the accuracy with which the DBS splits the laser beam, which requires accurate calibration of the splitting angles of the DBS. Because such a calibration is part of the proposed method, manufacturing defects in the DBS do not impact the final measurement accuracy of the system. It should be noted that the angular calibration is necessary, especially for DBSs with large fan-out angles, as both grating design and fabrication errors are present in such splitters, resulting in large angular errors that affect the distortion measurement [11].

A high-precision centroid extraction algorithm for the spots in the images obtained from the CCD array has also been developed. The sources of error and an uncertainty analysis of the measurement process are presented. A simultaneous high-precision measurement of both the focal length and the distortion is demonstrated using the proposed technique. The developed method will be useful for aerial mapping applications that use large-aperture wide-angle optical systems by enabling high-precision measurements.

2. Basic concept and method

As shown in Fig. 1, the optical distortion of the aerial mapping system can be calculated by relating the actual image point position y′z to the predicted image point position f′ tan w using Eq. (1) [5]:

$$\delta {y^{\prime}_z} = {y^{\prime}_z} - f^{\prime}\tan w.$$
where y′z represents the actual image point position, f′ is the focal length of the system, and w is the FOV.
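As a minimal numeric illustration of Eq. (1) (the focal length, field angle, and image height below are hypothetical values, not measurements from this work):

```python
import math

def optical_distortion(y_actual_mm: float, f_mm: float, w_deg: float):
    """Absolute and relative distortion per Eq. (1): delta = y' - f' * tan(w)."""
    y_ideal = f_mm * math.tan(math.radians(w_deg))   # predicted image height f' tan(w)
    delta = y_actual_mm - y_ideal                    # absolute distortion
    return delta, delta / y_ideal                    # relative distortion

# Hypothetical example: nominal 35 mm lens, 10 deg field, measured image height 6.18 mm.
print(optical_distortion(6.18, 35.0, 10.0))
```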

Fig. 1. Schematic of optical distortion.

The measurement of optical distortion with a DBS using a 2D array arrangement is shown in Fig. 2. The laser is first homogenized using frosted glass and then directed through a collimator, resulting in a collimated beam. The 2D DBS divides this parallel light beam into 2D parallel beams at specific angles that are directed into the optical system. The 2D spots form an image on the CCD array. The system distortion can be obtained by extracting the centroids of these spots from the image and analyzing them in conjunction with the angular calibration of the DBS beams.

Fig. 2. Schematic of optical distortion measurement with DBS.

The complex amplitude transmission function of the 2D DBS can be expressed as follows:

$$t({x,y} )= \exp [{i\phi (x,y)} ].$$
where ϕ(x, y) is the phase structure of the 2D DBS. If a parallel beam with unit amplitude is incident on the DBS, then according to scalar diffraction theory, the output light intensity in the image plane can be described as [12]:
$$I({{x_0},{y_0}} )= {|{FT\{{t(x,y)} \}} |^2}.$$
where FT{t(x, y)} denotes the Fourier transform of t(x, y). To achieve the desired distribution of output light intensity in actual applications, ϕ(x, y) can be calculated using optimization algorithms [13].
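Under the scalar model of Eqs. (2) and (3), the far-field intensity of a phase-only element can be evaluated numerically with a discrete Fourier transform. The sketch below is only illustrative: the grid size and the checkerboard test phase are arbitrary assumptions, not the actual DBS profile.

```python
import numpy as np

def far_field_intensity(phase: np.ndarray) -> np.ndarray:
    """I(x0, y0) = |FT{exp(i*phi(x, y))}|^2 for a phase-only element, Eqs. (2)-(3)."""
    t = np.exp(1j * phase)                    # complex transmittance with unit-amplitude input
    field = np.fft.fftshift(np.fft.fft2(t))   # scalar (Fraunhofer) far-field amplitude
    return np.abs(field) ** 2                 # output intensity distribution

# Toy phase: a 0/pi checkerboard, which splits light into symmetric off-axis orders.
n = 256
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
phi = np.pi * ((xx // 8 + yy // 8) % 2)       # hypothetical test pattern, not the 9 x 9 DBS
I = far_field_intensity(phi)
print(I.shape, float(I.max()))
```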

The calibration procedure of the DBS is shown in Fig. 3. The multiple beams generated by the DBS are directed into the self-collimator, an accurate device for angular measurement. As the FOV of the self-collimator tends to be smaller than the full diffraction angle of the DBS, it becomes necessary to perform a scanning measurement of the spot positions by using a rotating platform. This allows for a full calibration of the entire range of the DBS diffraction angles. The procedure for this calibration is outlined in the flowchart shown in Fig. 4.

Fig. 3. Schematic of angular calibration of the DBS.

Fig. 4. Flow chart showing the angular measurement process of the DBS.

3. Experiments and analysis

3.1 Angular calibration of the DBS

3.1.1 DBS design and fabrication

A ‘9 × 9’ DBS has been designed using the iterative Fourier transform algorithm (IFTA) based on scalar diffraction theory. The DBS has been designed for a wavelength of 633 nm, a diffractive angle of 2.4°, and an angular spacing of 0.3°. It has a diffraction efficiency of 73%, and its intensity non-uniformity is less than 13%. Figure 5 shows a photograph of the phase DBS and micrographs of the fabricated pattern. The height of the features is about 660 nm. The micrographs show that the pattern edges have been cleanly fabricated.
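For context, a bare-bones sketch of an IFTA of the Gerchberg–Saxton type for a phase-only fan-out splitter is given below. The grid size, spot spacing, and iteration count are arbitrary assumptions, and no fabrication constraints (e.g., quantization of the phase to etch levels) are included, so this illustrates only the optimization loop rather than the actual design procedure used for this DBS.

```python
import numpy as np

def ifta_fanout(n=512, spots=9, spacing=16, iters=100, seed=0):
    """Iterative Fourier transform algorithm for a phase-only spots x spots fan-out grating."""
    rng = np.random.default_rng(seed)
    # Target far-field amplitude: a centered spots x spots grid of equal-intensity orders.
    target = np.zeros((n, n))
    c = n // 2
    offsets = (np.arange(spots) - spots // 2) * spacing
    for dy in offsets:
        for dx in offsets:
            target[c + dy, c + dx] = 1.0
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))        # random initial DOE phase
    for _ in range(iters):
        far = np.fft.fftshift(np.fft.fft2(np.exp(1j * phase)))
        far = target * np.exp(1j * np.angle(far))        # impose target amplitude, keep phase
        near = np.fft.ifft2(np.fft.ifftshift(far))
        phase = np.angle(near)                           # phase-only constraint in the DOE plane
    return phase

phi = ifta_fanout()
I = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * phi)))) ** 2
print(I.shape)  # the bright orders form a 9 x 9 grid in the far field
```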

Fig. 5. (a) the photograph of the DBS, and (b) the micrograph of the pattern.

A DWL4000 laser scanning lithography system (Heidelberg Instruments, Germany) was used to generate the DBS patterns on a photoresist layer [14]. The lithography system allows a minimum structure size smaller than 600 nm and can handle substrates of up to 400 mm × 400 mm, with a positioning accuracy of 10 nm. The photoresist patterns were etched into the fused-silica substrate using an Ar-ion beam etching machine (IBF1500, Neue Technologien, Germany) [15]. The etching depth uniformity could be significantly improved by scanning the ion generator along a raster path. The IBF machine is capable of scanning samples of up to 1500 mm × 1500 mm.

3.1.2 Angular calibration

Figure 6 shows the experimental setup, which consists of a 632.8 nm He–Ne laser source, a collimator with a focal length of 180 mm to generate parallel light, the fabricated DBS for generating the 9 × 9 spot pattern, and a self-collimator. The laser, the collimator, and the DBS are mounted on a hexapod 6-axis positioning system (PI: M-850.11), while the self-collimator (AcroBeam: Collapex AF300) is mounted on an adjustment frame. The self-collimator is limited to a full FOV of 1.3° × 1.0° and has an angle measurement accuracy of better than 0.5″.

Fig. 6. Experimental setup for angular measurement of the DBS.

Initially, the optical axes are aligned. Figure 7(a) shows that the optical axis of the collimator is not aligned with that of the self-collimator, as the spot is not centered on the crosshairs. The optical axes of the collimator and self-collimator are aligned by adjusting the PI stage until the spot is centered on the crosshairs, as shown in Fig. 7(b). The collimator is then adjusted so that the center of the array produced by the DBS coincides with the center of the FOV (crosshairs), as shown in Fig. 7(c). The 5 × 3 spot array in the image plane shown in Fig. 7(c), aligned with the crosshairs, confirms the alignment of the FOV.

Fig. 7. Alignments: (a) Optical axis is not aligned (b) Optical axis is aligned (c) Array center of the DBS is aligned.

Next, a scanning measurement of the splitting angles was performed. The initial position and the scanning path are shown in Fig. 8(a), and 8 images similar to Fig. 8(b) are recorded. In Fig. 8(b), one of the spots in the image is chosen as the reference spot and marked ‘R’. The relative angles (in both the X and Y directions) of the other 14 spots with respect to the reference spot are then recorded by the self-collimator.

Fig. 8. (a) Angular measurement initial position and scanning path (b) Angle detection path.

Finally, after recording and processing the data at all 8 positions, the high-accuracy angular distribution of the DBS is completely acquired, as shown in Table 1. The angular values are relative to the 0th-order spot, i.e., the center of the spot array.

Table 1. Measurement results of DBS angular distribution.

3.1.3 Standard uncertainty in angular calibration

The angular distribution of the DBS is measured by the self-collimator. To estimate the uncertainty in this measurement, we need to establish the components that contribute to it: the self-collimator measurement uncertainty and the measurement repeatability. Both of these must be accounted for in the final measurement.

  • (1) Self-collimator measurement uncertainty

    The angular measurement accuracy of the self-collimator is better than 0.5″ over its full FOV (i.e., a half-width of ±0.25″). Assuming a uniform distribution, the angular standard uncertainty caused by the self-collimator measurement error, uw1, can be described by a type B uncertainty:

    $${u_{w1}} = \frac{{0.25^{\prime\prime}}}{{\sqrt 3 }} \approx 0.14^{\prime\prime}.$$

  • (2) Measurement repeatability

    During the angular measurement of the DBS, the angular values were recorded 5 times at each position and averaged to obtain the final results. The maximum standard deviation σ was 0.2″; thus, the angular standard uncertainty caused by measurement repeatability, uw2, can be described by a type A uncertainty:

    $${u_{w2}} = \frac{\sigma }{{\sqrt n }} = \frac{{0.2^{\prime\prime}}}{{\sqrt 5 }} = 0.09^{\prime\prime}.$$
    Therefore, the angular measurement standard uncertainty uw can be expressed as the root sum square of the two components (a short numeric check is sketched below):
    $${u_w} = \sqrt {{u^2}_{w1} + {u^2}_{w2}} \approx 0.17^{\prime\prime},$$
    which is sufficiently small to enable a high-accuracy measurement of optical distortion.
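As a quick numeric check of Eqs. (4)–(6), the two contributions and their root-sum-square combination can be reproduced in a few lines; the arcsecond values below are taken directly from the text.

```python
import math

u_w1 = 0.25 / math.sqrt(3)    # type B: uniform distribution with half-width 0.25 arcsec, Eq. (4)
u_w2 = 0.2 / math.sqrt(5)     # type A: repeatability, sigma = 0.2 arcsec over n = 5 readings, Eq. (5)
u_w = math.hypot(u_w1, u_w2)  # root sum square, Eq. (6)
print(round(u_w1, 2), round(u_w2, 2), round(u_w, 2))  # -> 0.14 0.09 0.17 (arcsec)
```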

3.2 Distortion measurement of the optical system

3.2.1 Actual image point position

After the completion of the angular calibration, the DBS is utilized to measure the optical distortion of a commercial camera (NIKON: AF1.4D), which replaces the self-collimator in the setup shown in Fig. 6. Figure 9 shows the new setup comprising the laser source, collimator, DBS, and the Nikon camera. A CCD (Point Grey: FL2-20S4M) connected to a data acquisition card was mounted behind the system. As shown in Fig. 10, a 9 × 9 spot array was obtained in the center of the image plane after adjusting the zero-order beam of the DBS to enter the tested optical system perpendicularly. Table 2 lists the pixel coordinates of the spots obtained using the centroid extraction algorithm.

Fig. 9. Experimental setup of optical distortion measurement using the DBS.

Fig. 10. Image point distribution in optical distortion measurement.

Table 2. Centroid coordinates of image points.

The position of each image spot was measured by the CCD. The accuracy of these data is influenced by the measurement error of the centroid extraction algorithm and by the parallelism error of the collimator.

  • (1) Centroid extraction algorithm
The centroid extraction was used to obtain the centroid coordinates of the imaging spots and includes the following three steps: image thresholding, template matching, and grayscale weighting [16–18].

First, the images acquired from the CCD were converted to grayscale images using MATLAB. To suppress interference from noise, such as stray-light noise in the image, an appropriate threshold value should be chosen according to the characteristics of the actual energy distribution. The processed result I(x, y) can be expressed as:

$$I(x,y) = \begin{cases} 0, & P(x,y) < T(x,y)\\ P(x,y) - T(x,y), & P(x,y) \ge T(x,y) \end{cases}.$$
where P(x, y) is the intensity of each pixel in the images, and T(x, y) is the threshold.

Subsequently, template matching was performed. In this study, a window with a size of 1.5 times the spot size was used to scan the 2D spot array in the grayscale image, and the window positions were obtained. Finally, within each window, the grayscale weighting formula shown in Eq. (8) was used to calculate the accurate centroid of each spot (a code sketch of this procedure is given at the end of this subsection).

$$\left\{ {\begin{array}{{c}} {{x_c} = \frac{{\sum\limits_{j = {y_0} - {W_y}/2}^{{y_0} + {W_y}/2} {\sum\limits_{i = {x_0} - {W_x}/2}^{{x_0} + {W_x}/2} {({I_{ij}}^w \cdot {x_i})} } }}{{\sum\limits_{j = {y_0} - {W_y}/2}^{{y_0} + {W_y}/2} {\sum\limits_{i = {x_0} - {W_x}/2}^{{x_0} + {W_x}/2} {({I_{ij}}^w)} } }}}\\ {{y_c} = \frac{{\sum\limits_{j = {y_0} - {W_y}/2}^{{y_0} + {W_y}/2} {\sum\limits_{i = {x_0} - {W_x}/2}^{{x_0} + {W_x}/2} {({I_{ij}}^w \cdot {y_i})} } }}{{\sum\limits_{j = {y_0} - {W_y}/2}^{{y_0} + {W_y}/2} {\sum\limits_{i = {x_0} - {W_x}/2}^{{x_0} + {W_x}/2} {({I_{ij}}^w)} } }}} \end{array}} \right..$$
where xc and yc are the centroid coordinates, Wx and Wy are the window sizes in the x and y directions, respectively, Iij is the intensity of each pixel, and w is the weighting exponent applied to Iij.

Simulation results showed that the algorithm has a high positioning precision of 1/50 of a pixel (1 pixel = 4.4 µm). The standard uncertainty of the point position y′z caused by the algorithm, ${u_{{{y^{\prime}}_{z1}}}}$, can be described by a type B uncertainty:

$${u_{{{y^{\prime}}_{z1}}}} = \frac{{4.4{\mathrm{\mu}} \textrm{m}}}{{50 \times \sqrt 3 }} \approx 0.05{\mathrm{\mu}} \textrm{m}.$$
  • (2) Parallelism of collimator
As shown in Fig. 11, the two beams 1′ and 2′ with a divergence angle of θ are imaged at points A′ and B′, respectively. The difference between A′B′ and AB is used to characterize the influence of the parallelism of the incident light from the collimator on the distortion measurement, which can be expressed as:
$$\Delta = A^{\prime}B^{\prime} - AB = d\tan w.$$
where w is the FOV and d represents the deviation caused by defocus, as shown in Fig. 12.

Fig. 11. The influence of beam parallelism on distortion measurement.

Fig. 12. The defocus caused by non-parallel light on axis.

According to wave aberration theory, d can be expressed as:

$$d = \frac{{ - f^{\prime 2}}}{{n{r^2}\rho }} \cdot \frac{{\partial \Delta W}}{{\partial \rho }}.$$
where ΔW is the wave aberration at the exit pupil of the tested system and can be derived as $\Delta W = - \frac{{\theta \cdot r}}{2}.$

In addition, the aberration caused by defocus can be expressed as $\Delta W = {W_{020}}{\rho ^2}$; thus:

$$d = \frac{{\theta \cdot f^{\prime 2}}}{{r{\rho ^2}}}.$$
Therefore, when ρ = 1, substituting Eq. (12) into Eq. (10) gives
$$\Delta = \frac{{\theta \cdot f^{\prime 2}\tan w}}{r}.$$
where θ is the divergence angle of the collimator, f′ is the focal length of the system, r is the exit pupil radius of the system, and w is the FOV.

Thus, the standard uncertainty of the point position y′z caused by the parallelism of the collimator, ${u_{{{y^{\prime}}_{z2}}}}$, can be expressed as:

$${u_{{{y^{\prime}}_{z2}}}} = \left|{\frac{{\partial y_z^{\prime}}}{{\partial w}}} \right|{u_w} = \frac{{\theta \cdot f^{\prime 2}}}{{r{{\cos }^2}w}}{u_w}.$$
After measurements and calculations, ${u_{{{y^{\prime}}_{z2}}}} = 4.9 \times {10^{ - 6}}$ µm, which is several orders of magnitude smaller than the centroid extraction uncertainty calculated in Eq. (9); therefore, it can be neglected.

According to the above analysis of the centroid extraction algorithm and the parallelism of the collimator, the standard uncertainty ${u_{{{y^{\prime}}_z}}}$ of the point position y′z can be expressed as:

$${u_{{{y^{\prime}}_z}}} = \sqrt {{u^2}_{{{y^{\prime}}_{z1}}} + {u^2}_{{{y^{\prime}}_{z2}}}} \approx {u_{{{y^{\prime}}_{z1}}}} = 0.05{\mathrm{\mu}} \textrm{m}.$$
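Before moving on, the three-step centroid procedure described above (thresholding per Eq. (7), coarse localization by template matching, and grayscale weighting per Eq. (8)) can be sketched as follows. This is a simplified illustration: the coarse spot positions are assumed to be supplied by a template-matching step that is not shown, and the weighting exponent w is an assumed value.

```python
import numpy as np

def weighted_centroid(img, x0, y0, win, thresh, w=2.0):
    """Grayscale-weighted centroid of one spot in a (2*win+1)^2 window around (x0, y0)."""
    sub = img[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1].astype(float)
    sub = np.where(sub >= thresh, sub - thresh, 0.0)            # thresholding, Eq. (7)
    yy, xx = np.mgrid[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1]
    wgt = sub ** w                                              # pixel intensities raised to w
    s = wgt.sum()
    return (xx * wgt).sum() / s, (yy * wgt).sum() / s           # (xc, yc), Eq. (8)

# Hypothetical usage with a synthetic 3 x 3 pixel spot centered at (41, 31).
img = np.zeros((64, 64))
img[30:33, 40:43] = 100.0
print(weighted_centroid(img, x0=41, y0=31, win=8, thresh=10.0))  # -> (41.0, 31.0)
```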

3.2.2 Theoretical focal length

To evaluate the optical system distortion, the theoretical focal length should be calculated first. According to Eq. (1), when the measured FOV is small and paraxial, the optical distortion can be ignored, and the focal length of the optical system can be determined by least-squares analysis. The theoretical focal length is calculated as follows [19]:

$$f^{\prime} = \frac{{\sum {({{y^{\prime}}_{zi}} \cdot \tan {w_i})} }}{{\sum {{{\tan }^2}{w_i}} }}.$$
where wi is the measured paraxial FOV and y′zi is the corresponding image height. Using the experimental data in Table 1 and Table 2, the theoretical focal length was calculated to be 35.006 mm, whereas the nominal value is 35 mm. Consequently, the theoretical image height can be calculated. Figure 13(a) shows the actual image height, theoretical image height, and the relative distortion in the X direction; Fig. 13(b) shows the counterparts in the Y direction.

Fig. 13. (a) Fitted theoretical image height, actual image height, and the relative distortion in X direction. (b) Fitted theoretical image height, actual image height, and the relative distortion in Y direction.
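A minimal sketch of the least-squares focal-length estimate of Eq. (16) is given below; the field angles and image heights are placeholder values, not the data of Tables 1 and 2.

```python
import numpy as np

def fit_focal_length(w_rad: np.ndarray, y_mm: np.ndarray) -> float:
    """f' = sum(y_i * tan(w_i)) / sum(tan(w_i)^2), the least-squares estimate of Eq. (16)."""
    t = np.tan(w_rad)
    return float(np.sum(y_mm * t) / np.sum(t ** 2))

# Placeholder paraxial data: small field angles (deg) and the corresponding image heights (mm).
w = np.radians(np.array([-0.6, -0.3, 0.3, 0.6]))
y = np.array([-0.3667, -0.1833, 0.1833, 0.3667])
print(fit_focal_length(w, y))   # ~35 mm for these synthetic values
```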

According to Eq. (16), the standard uncertainty of the theoretical focal length can be calculated as:

$$\left\{ \begin{array}{l} {u_{f^{\prime}}} = \sqrt {{{\sum {(\frac{{\partial f^{\prime}}}{{\partial {{y^{\prime}}_{zi}}}}{u_{{{y^{\prime}}_z}}})} }^2} + \sum {{{(\frac{{\partial f^{\prime}}}{{\partial {w_i}}}{u_w})}^2}} } \\ \frac{{\partial f^{\prime}}}{{\partial {{y^{\prime}}_{zi}}}} = \frac{{\tan {w_i}}}{{\sum {{{\tan }^2}{w_i}} }}\\ \frac{{\partial f^{\prime}}}{{\partial {w_i}}} = \frac{{{{y^{\prime}}_{zi}}\sum {{{\tan }^2}{w_i}} - 2\tan {w_i}\sum {({{y^{\prime}}_{zi}} \cdot \tan {w_i})} }}{{{{(\cos {w_i}\sum {{{\tan }^2}{w_i}} )}^2}}} \end{array} \right..$$
By substituting Eq. (6) and Eq. (15) into Eq. (17), the uncertainty of the theoretical focal length is obtained as:
$${u_{f^{\prime}}} = 6.82{\mathrm{\mu}} \textrm{m}.$$
Thus, the relative standard uncertainty of focal length can be calculated as:
$${U_{f^{\prime}}} = \frac{{{u_{f^{\prime}}}}}{{f^{\prime}}} \times 100\%= 0.019\%.$$

3.2.3 Distortion coefficient

Considering the internal distortion of the camera lens, a radial distortion model is used to estimate the distortion at any position within the tested FOV. The nonlinear distortion model is expressed as [6]:

$$\left\{ \begin{array}{l} AP = \overline X - X\\ A = \left( {\begin{array}{{cc}} {x({x^2} + {y^2})}&0\\ 0&{y({x^2} + {y^2})} \end{array}} \right)\\ P = {(\begin{array}{{cc}} {{k_1}}&{{k_2}} \end{array})^T} \end{array} \right..$$
where P is the distortion coefficient vector, and $\overline X = (\overline {{x_i}} ,\overline {{y_i}} )$ and $X = ({x_i},{y_i})$ are the actual and theoretical image point coordinates, respectively.
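A minimal sketch of solving Eq. (20) for P = (k1, k2)^T by least squares over many points is shown below. The per-point 2 × 2 diagonal design matrices of Eq. (20) are stacked into one overdetermined system; the coordinates used here are synthetic placeholders, not the data of Tables 3 and 4.

```python
import numpy as np

def fit_distortion_coeffs(X_theory: np.ndarray, X_meas: np.ndarray) -> np.ndarray:
    """Least-squares estimate of P = (k1, k2)^T from A @ P = X_meas - X_theory, Eq. (20)."""
    x, y = X_theory[:, 0], X_theory[:, 1]
    r2 = x ** 2 + y ** 2
    A = np.zeros((2 * len(x), 2))       # stacked design matrix: two rows per image point
    A[0::2, 0] = x * r2                 # x-row couples to k1
    A[1::2, 1] = y * r2                 # y-row couples to k2
    b = (X_meas - X_theory).reshape(-1)
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P

# Synthetic check: points distorted with k1 = 5e-8, k2 = 1e-8 (pixel units) are recovered.
Xt = np.array([[200.0, 0.0], [400.0, 300.0], [-600.0, 450.0], [0.0, -500.0]])
r2 = (Xt ** 2).sum(axis=1)
Xm = Xt + np.column_stack([5e-8 * Xt[:, 0] * r2, 1e-8 * Xt[:, 1] * r2])
print(fit_distortion_coeffs(Xt, Xm))    # -> approximately [5e-08, 1e-08]
```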

To calculate the distortion coefficient over the tested FOV, the image points on the X and Y axes shown in Fig. 14 are utilized. The corresponding experimental data are listed in Table 3 and Table 4.

Fig. 14. Utilized image points on X and Y axis in the fitting.

Table 3. X-direction image heights for fitting the distortion coefficient (Unit: pixel).

Table 4. Y-direction image heights for fitting the distortion coefficient (Unit: pixel).

Based on the multiple sets of experimental data above and the least-squares algorithm, the distortion coefficient is calculated according to Eq. (20):

$$P = {(\begin{array}{cc} {4.96 \times {{10}^{ - 8}}}&{1.02 \times {{10}^{ - 8}}} \end{array})^T}.$$
Thus, the distortion at all points within the tested FOV can be calculated. Figure 15 shows the distortion distribution of the tested lens; the type of distortion shown is generally referred to as pincushion distortion. The calculated result shows that the maximum relative distortion in the diagonal direction is 0.26%.

Fig. 15. Distortion distribution of the tested lens.

3.2.4 Uncertainty of the distortion

Based on the above uncertainty analysis and Eq. (1), the uncertainty of the absolute distortion is related to the angular measurement uncertainty uw, the point-position uncertainty uy′z, and the theoretical focal length uncertainty uf′; thus, the standard uncertainty of the distortion is expressed as follows:

$${u_\delta }_{{{y^{\prime}}_z}} = \sqrt {{u^2}_{{{y^{\prime}}_z}} + {{\left( {\frac{{\partial \delta {{y^{\prime}}_z}}}{{\partial f^{\prime}}}} \right)}^2}{u^2}_{f^{\prime}} + {{\left( {\frac{{\partial \delta {{y^{\prime}}_z}}}{{\partial w}}} \right)}^2}{u^2}_w} .$$
Substituting Eq. (6), Eq. (15), and Eq. (18) into Eq. (22), the standard uncertainty of the absolute distortion is obtained as:
$${u_{\delta {{y^{\prime}}_z}}} \approx 0.28{\mathrm{\mu}} \textrm{m}.$$
As a result, the distortion measurement result is 0.26% ± 0.02%.
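The combination in Eq. (22) can also be checked numerically. In the sketch below, the uncertainty values are taken from Eqs. (6), (15), and (18), while the sensitivity coefficients use an assumed nominal focal length of 35 mm and an assumed field angle of 2.4° (the full fan-out angle of the DBS); the printed value is therefore only close to, not identical with, the 0.28 µm quoted above.

```python
import math

u_y = 0.05                              # µm, point-position uncertainty, Eq. (15)
u_f = 6.82                              # µm, focal-length uncertainty, Eq. (18)
u_w = 0.17 * math.pi / (180 * 3600)     # rad, angular uncertainty of 0.17 arcsec, Eq. (6)

f_prime = 35_000.0                      # µm, assumed nominal focal length of 35 mm
w = math.radians(2.4)                   # rad, assumed field angle

# Sensitivity coefficients from delta_y' = y' - f' * tan(w):
d_df = -math.tan(w)                     # d(delta_y')/d(f')
d_dw = -f_prime / math.cos(w) ** 2      # d(delta_y')/d(w)

u_delta = math.sqrt(u_y ** 2 + (d_df * u_f) ** 2 + (d_dw * u_w) ** 2)
print(round(u_delta, 2), "µm")          # close to the ~0.28 µm reported above
```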

4. Conclusions

This study presented a compact, accurate, fast, and economical method for measuring the distortion of optical systems using a phase DBS. A high-accuracy self-collimator allows accurate calibration of the beam-splitting angles of the phase DBS. The angular distribution of the DBS is determined with a standard uncertainty of less than 0.2″ by means of a scanning measurement. This significantly improves the measurement accuracy without being affected by the DBS manufacturing accuracy. Using the calibrated DBS, the focal length and distortion were measured simultaneously, and the measurement uncertainty of the experimental result was estimated. The results show that the relative standard uncertainties of the focal length and distortion measurements by this method are 0.019% and 0.02%, respectively. This technique is applicable to the high-precision measurement of large-aperture wide-angle optical systems, such as those used in aerial mapping applications.

It should be noted that the beam splitter designed in this study has a small diffractive angle of 2.4°. However, the proposed calibration method can also be applied to DBSs with larger diffractive angles, as long as the angular spacing of neighboring beams does not exceed the FOV of the self-collimator used. This limitation should be considered when designing a DBS for distortion measurement of optical systems. Further investigation is planned to measure the distortion of a large-aperture wide-angle aerial mapping system using the developed DBS method.

Funding

National Natural Science Foundation of China (51505185, 51775531, 61605202); National Basic Research Program of China (973 Program) (2016YFB0500100); Advanced Science Key Research Project of the Chinese Academy of Sciences (QYZDJ-SSW-JSC038).

References

1. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012).

2. T. A. Clarke and J. F. Fryer, “The development of camera calibration methods and models,” Photogramm. Rec. 16(91), 51–66 (1998).

3. A. Harmat, M. Trentini, and I. S. Michael, “Multi-camera tracking and mapping for unmanned aerial vehicles in unstructured environments,” J. Intell. Robot. Syst. 78(2), 291–317 (2015).

4. X. Zhang, B. Chen, F. He, K. Song, L. He, S. Liu, Q. Guo, J. Li, X. Wang, H. Zhang, H. Wang, Z. Han, L. Sun, P. Zhang, S. Dai, G. Ding, L. Chen, Z. Wang, G. Shi, X. Zhang, C. Yu, Z. Yang, P. Zhang, and J. Wang, “Wide-field auroral imager onboard the Fengyun satellite,” Light: Sci. Appl. 8(1), 47 (2019).

5. W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2000), Chap. 3.

6. P. Sun, N. G. Lu, and M. L. Dong, “Modelling and calibration of depth-dependent distortion for large depth visual measurement cameras,” Opt. Express 25(9), 9834–9847 (2017).

7. A. Miks and J. Novak, “Dependence of camera lens induced radial distortion and circle of confusion on object position,” Opt. Laser Technol. 44(4), 1043–1049 (2012).

8. T. Kouyama, A. Yamazaki, M. Yamada, and T. Imamura, “A method to estimate optical distortion using planetary images,” Planet. Space Sci. 86(15), 86–90 (2013).

9. A. Miks and P. Pokorny, “Use of diffraction grating for measuring the focal length and distortion of optical systems,” Appl. Opt. 54(34), 10200–10206 (2015).

10. M. Bauer, D. Grießbach, A. Hermerschmidt, S. Krüger, M. Scheele, and A. Schischmanow, “Geometrical camera calibration with diffractive optical elements,” Opt. Express 16(25), 20241–20248 (2008).

11. S. Thibault, A. Arfaoui, and P. Desaulniers, “Cross-diffractive optical elements for wide angle geometric camera calibration,” Opt. Lett. 36(24), 4770–4772 (2011).

12. K. Fuse, T. Hirai, T. Ushiro, T. Okada, K. Kurisu, and K. Ebata, “Design and performance of multilevel phase fan-out diffractive optical elements for laser materials processing,” J. Laser Appl. 15(4), 246–254 (2003).

13. A. Hermerschmidt, S. Krüger, and G. Wernicke, “Binary diffractive beam splitters with arbitrary diffraction angles,” Opt. Lett. 32(5), 448–450 (2007).

14. C. Guo, Z. Zhang, D. Xue, L. Li, R. Wang, X. Zhou, F. Zhang, and X. Zhang, “High-performance etching of multilevel phase-type Fresnel zone plates with large apertures,” Opt. Commun. 407, 227–233 (2018).

15. Z. Zhang, C. Guo, R. Wang, H. Hu, X. Zhou, T. Liu, D. Xue, X. Zhang, F. Zhang, and X. Zhang, “Hybrid-level Fresnel zone plate for diffraction efficiency enhancement,” Opt. Express 25(26), 33676–33687 (2017).

16. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” J. Electron. Imaging 13(1), 146–165 (2004).

17. X. Yin, X. Li, L. Zhao, and Z. Fang, “Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor,” Appl. Opt. 48(32), 6088–6098 (2009).

18. S. H. Baik, S. K. Park, C. J. Kim, Y. S. Seo, and Y. J. Kang, “New centroid detection algorithm for the Shack-Hartmann wavefront sensor,” Proc. SPIE 4926, 251–260 (2002).

19. G. Yang, L. Miao, X. Zhang, C. Sun, and Y. Qiao, “High-accuracy measurement of the focal length and distortion of optical systems based on interferometry,” Appl. Opt. 57(18), 5217–5223 (2018).
