Optica Publishing Group

Extending vision ray calibration by determination of focus distances

Open Access

Abstract

The application of cameras as sensors in optical metrology techniques for three-dimensional topography measurement, such as fringe projection profilometry and deflectometry, presumes knowledge regarding the metric relationship between image space and object space. This relation is established by camera calibration, and a variety of techniques are available. Vision ray calibration achieves highly precise camera calibration by employing a display as calibration target, enabling the use of active patterns in the form of series of phase-shifted sinusoidal fringes. Besides the required spatial coding of the display surface, this procedure yields additional full-field contrast information. Exploiting the relation between full-field contrast and defocus, we present an extension of vision ray calibration that additionally provides the focus distances of the calibrated camera. In our experiments we achieve a reproducibility of the focus distances on the order of millimeters. Using a modified-Laplacian-based focus determination method, we confirm our focus distance results to within a few millimeters.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High precision calibration of cameras plays a major role in many full-field optical metrology techniques based on incoherent light. Fringe projection profilometry, where the distortion of projected patterns on the surface under test is observed by cameras, is used in a variety of applications such as quality control in car manufacturing [1], the creation of digital molds in dental medicine [2], or the digitalization of art and cultural goods [3], to name a few. Similarly, phase measuring deflectometry utilizes the distortion of patterns observed in reflection [4] or in transmission [5]. Some examples of applications are quality control of varnished surfaces [6] and of optical components such as lenses and mirrors [7–9]. The determination of surface topographies with such techniques is based on the tracing of light paths. Hence, the accuracy of results depends directly on the calibration accuracy of the cameras, and many studies on the topic of camera calibration can be found in the literature [10–13].

Typically, camera calibration utilizes a marker plate, such as a checkerboard pattern, which is observed from multiple perspectives [14]. Several works on camera calibration in the presence of defocus exist that determine focus distances either explicitly [15], or implicitly by estimating the size of a blur kernel in space [16,17]. However, their emphasis on traditional calibration targets and feature detection does not permit a focus determination for individual pixels. Instead of a marker plate, a display can be employed, enabling the usage of non-static patterns in the calibration procedure. Defocus has been considered in the case of active targets [18,19], although not as a source of additional information about the imaging system. In Vision Ray Calibration (VRC) [20], an active target is utilized to apply Phase Shifting Technique (PST) [8] by displaying series of phase-shifted sinusoidal fringe patterns. In contrast to static patterns, this procedure provides not just specific reference points but effectively a continuous and full-field spatial coding of the surface of the calibration target. As a consequence, imaging properties of calibrated cameras can be described by independent vision rays of sensor pixels, which does not require the use of model assumptions regarding image distortion.

Besides continuous spatial coding, PST provides additional information in the form of contrast of displayed fringe patterns. This information is not immediately useful in the pursuit of improved calibration accuracy. However, the magnitude of the contrast is driven by defocus. It has been shown that the change of contrast in space can be used to determine surface topography without using an analytical model or camera calibration [21]. We demonstrate that the correlation between contrast and defocus can be used to determine the focus distance for each individual sensor pixel. Our method extends the generic camera model used in [22] by incorporating how the light-sampling cross section changes along the rays. The main goal of this work is the demonstration of the focus determination method from VRC measurements without additional measurement effort. A possible application in the future may be the focus alignment of camera systems, which is suitable because VRC has the ability to calibrate multiple cameras simultaneously. The method may also be used to investigate defocus-related variations of optical distortions.

This paper is structured as follows: In section 2, the theoretical framework for our method is laid out. VRC is explained briefly, which requires phase shifting for reference marker retrieval. A side product of PST is the fringe contrast, which degrades when the display is out of focus. Using a parametric model for the camera Point Spread Function (PSF), determining the focus distances is formulated as an optimization problem. Section 3 shows the experimental setup and results. A PSF model is chosen based on experimental data. Then, the focus distances are calculated for a measurement with constant fringe period. The experiment is repeated with a horizontally tilted camera to investigate the robustness of the method against systematic misalignments. A measurement is performed where the fringe period is adjusted in such a way that the contrast is approximately constant. This enables testing the feasibility of focus determination when a distinctive contrast maximum does not exist. The focus distances are compared against the result from an alternative method based on the modified Laplacian focus measure. Section 4 discusses the main conclusions of our work.

2. Theory and model

2.1 Vision ray calibration

The method for the determination of focus distance proposed in this work is an extension of the Vision Ray Calibration (VRC) technique for the calibration of imaging systems. This section outlines the basic principles of VRC. An in-depth description of VRC can be found in [20].

A common model for the geometric relation between object space and image space is the distorted pinhole, which describes the imaging process by a projection matrix under the assumption of an ideal projection center, supplemented by distortion terms [14]. Generic camera models constitute an alternative description of the properties of an imaging system. Here, an individual ray of vision, pointing towards the source of received light, is determined for each sensor pixel [22,23]. The vision ray of an arbitrary sensor pixel $i$ can be described by four parameters given by start point vector $\vec {p_i}=(p_i^x,p_i^y,0)$ and a direction vector $\vec {v_i}=(v_i^x,v_i^y,1)$ [24]. The result of such a calibration is given by a lookup table containing these parameters for each pixel. The relation between sensor pixels and object space is established via ray-tracing.

VRC is a method for finding the parameters of the generic camera model [20]. A Liquid Crystal Display (LCD) serves as a calibration target. Using PST, phase angles $\phi (x,y)$ of displayed sinusoidal fringe patterns can be determined. These constitute feature points of known geometric relationship on the LCD surface and can be related with two-dimensional surface coordinates $(x,y)$. In the first step of VRC, this information is used to determine the transformations $T_j$ describing the relative positions and orientations of LCD and imaging device during calibration for each measurement $j$ by employing a pinhole model. This enables the transformation of the known coordinates of feature points on the LCD surface from all calibration measurements into a common coordinate system. Since PST provides continuous and full-field spatial coding, each measurement $j$ yields a reference point $\vec {r}_{i,j}$ for each sensor pixel $i$ that has observed the LCD. The reference points of a sensor pixel are subject to a collinearity constraint. Straight line regression through the points $\vec {r}_{i,j}$ for each pixel $i$ yields the corresponding vision ray parameters $\vec {p_i}$ and $\vec {v_i}$. These results are then refined by numerical optimization of the position and orientation information $T_j$ of all measurements $j$, identifying those that yield the minimum remaining error of all vision rays with respect to the corresponding reference points $\vec {r}_{i,j}$.
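The straight line regression step can be sketched as follows. This is a minimal illustration, not the authors' implementation: with the ray parameterization $\vec{p}=(p^x,p^y,0)$, $\vec{v}=(v^x,v^y,1)$ from above, the fit decouples into two linear least squares problems, $x = p^x + z\,v^x$ and $y = p^y + z\,v^y$. The function name and the sample reference points are hypothetical.

```python
import numpy as np

def fit_vision_ray(points):
    """Fit a vision ray p = (px, py, 0), v = (vx, vy, 1) to 3D reference
    points by linear least squares: x = px + z*vx and y = py + z*vy."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    A = np.column_stack([np.ones_like(z), z])   # design matrix [1, z]
    (px, vx), *_ = np.linalg.lstsq(A, points[:, 0], rcond=None)
    (py, vy), *_ = np.linalg.lstsq(A, points[:, 1], rcond=None)
    return np.array([px, py, 0.0]), np.array([vx, vy, 1.0])

# Reference points lying exactly on a ray, for illustration
pts = [(1.0 + 0.2 * z, -0.5 + 0.1 * z, z) for z in (100.0, 200.0, 300.0)]
p, v = fit_vision_ray(pts)
# p ≈ (1.0, -0.5, 0.0), v ≈ (0.2, 0.1, 1.0)
```

Note that this parameterization assumes rays that are not parallel to the $z=0$ plane, which holds for the camera geometries considered here.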

2.2 Standard N-step phase shifting algorithm

The standard N-step phase shifting algorithm outlined in this section is a robust procedure for spatial coding of surfaces by structured illumination. A detailed outline of the algorithm is given by Zuo et al. [25]. In VRC, this procedure is used to render an LCD into a versatile calibration target.

A phase measurement establishes the unique spatial coding by phase angles and requires phase shifting to be applied for at least two, typically perpendicular, fringe directions. Usually, an $x$-$y$-coordinate system of the LCD surface is established based on the lines and columns of its pixel grid. Sinusoidal 2D-patterns displayed on the LCD along the $x$-direction are given by

$$I_n^\mathrm{d}(x,y) = \bar{I}^\mathrm{d}(x,y) \bigg[1 + \gamma^\mathrm{d} (x,y)\cos\bigg(kx - \dfrac{2\pi n}{N}\bigg) \bigg],$$
where $\bar {I}^\mathrm {d}$ is the average intensity, $\gamma ^\mathrm {d}$ the fringe contrast of the displayed fringe image, $k=2\pi /\lambda$ is the angular wave number of the fringe period $\lambda$, $N$ the total number of phase shifting steps and $n = 0,1,\dots, N-1$ is the index for the phase shift. The distorted images captured by the camera are given by
$$I_n(x,y) = \bar{I}(x,y) \bigg[1 + \gamma (x,y)\cos\bigg(\phi(x,y) - \dfrac{2\pi n}{N}\bigg) \bigg],$$
where $\phi (x,y)$ is the unknown phase. Since there are three unknowns $\bar {I}(x,y)$, $\gamma (x,y)$ and $\phi (x,y)$, at least three phase shifts are required. The phase $\phi (x,y)$ is given by
$$\phi(x,y) = \tan^{{-}1} \dfrac{\sum\limits_{n \, = \, 0}^{N-1} I_n(x,y) \, \sin\left(\dfrac{2\pi n}{N}\right)}{\sum\limits_{n \, = \, 0}^{N-1} I_n(x,y) \, \cos\left(\dfrac{2\pi n}{N}\right)}.$$

Note that calculated phase angles are only unique within each fringe period $\lambda$ and phase unwrapping must be applied to solve this ambiguity. The fringe contrast, which we use as the measure of defocus, can be calculated using

$$\gamma(x,y) = 2\, \dfrac{\sqrt{\left[\sum\limits_{n \,=\, 0}^{N-1} I_n(x,y) \,\sin\left(\dfrac{2\pi n}{N}\right)\right]^2 + \left[\sum\limits_{n \,=\, 0}^{N-1} I_n(x,y) \,\cos\left(\dfrac{2\pi n}{N}\right)\right]^2}}{\sum\limits_{n \,=\, 0}^{N-1} I_n(x,y)}.$$
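The evaluation of Eq. (3) and Eq. (4) can be sketched with a minimal NumPy implementation operating on a stack of $N$ phase-shifted camera images. The function name and the synthetic test data are illustrative, not part of the original work.

```python
import numpy as np

def phase_and_contrast(I):
    """Standard N-step evaluation. I has shape (N, H, W): a stack of N
    phase-shifted camera images. Returns the wrapped phase per Eq. (3)
    and the fringe contrast per Eq. (4), each of shape (H, W)."""
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    S = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    C = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    phi = np.arctan2(S, C)                      # wrapped phase in (-pi, pi]
    gamma = 2 * np.hypot(S, C) / np.sum(I, axis=0)
    return phi, gamma

# Synthetic 4-step example with known phase and contrast
N, phi0, g0 = 4, 0.7, 0.6
n = np.arange(N).reshape(-1, 1, 1)
I = 100.0 * (1 + g0 * np.cos(phi0 - 2 * np.pi * n / N)) * np.ones((1, 2, 2))
phi, gamma = phase_and_contrast(I)
# phi ≈ 0.7 and gamma ≈ 0.6 everywhere
```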

2.3 Fringe contrast degradation model

In the following, we omit the phase shift index $n$. The measured intensity $I$ in Eq. (2) can be described as the convolution of the displayed intensity $I^\mathrm {d}$ in Eq. (1) with the Point Spread Function (PSF) of the camera [17,26]:

$$I = I^\mathrm{d} * \mathrm{PSF}.$$

Using the convolution theorem [27], this can be written as

$$I = \mathcal{F}^{{-}1}[\mathcal{F}[I^\mathrm{d}]\cdot\mathcal{F}[\mathrm{PSF}]],$$
where $\mathcal{F}$ is the two-dimensional Fourier transform and $\mathcal{F} ^{-1}$ is its inverse. The Fourier transform of the PSF is the Optical Transfer Function (OTF) [26,28]. Assuming $\bar {I}^\mathrm {d}$ and $\gamma ^\mathrm {d}$ are spatially constant or slowly varying, $I^\mathrm {d}$ is sinusoidal and its Fourier transform will result in a sum of Dirac deltas which makes it trivial to perform the inverse Fourier transform. This is used in Appendix 5.1 to show that, under the assumption of a symmetric PSF, the measured intensity is of the form
$$I=\bar{I}(1+\gamma^\mathrm{d} B \cos(kx-\psi))$$
with $\psi =2\pi n/N$. A less general derivation for a Gaussian PSF has been given in [29]. The factor
$$B=\frac{\textrm{OTF}(k,0)}{\textrm{OTF}(0,0)}$$
describes the effect of defocus. Generally, the OTF can be complex-valued because it is the Fourier transform of the PSF. In our case, the assumed symmetry of the PSF ensures that the OTF and thus $B$ are real-valued. The factor $B$ can be negative, depending on the shape of the PSF and the wave number $k$. The sign of $B$ can be incorporated into the argument of the cosine using $-\cos (t)=\cos (t+\pi )$. Equation (7) may therefore be written as
$$I=\bar{I}(1+\gamma^\mathrm{d} |B| \cos(kx+\eta-\psi)),$$
where $\eta =0$ for $B>0$ and $\eta =\pi$ for $B< 0$. This allows us to compare Eq. (2) and Eq. (9) to obtain
$$\gamma=\gamma^\mathrm{d}\cdot |B|,$$
which will serve as the observation equation that relates the measured fringe contrast $\gamma$ and the blur model parameters to be determined. Note that if $\eta =\pi$, an additional shift of $\pi$ will be introduced to the phase $\phi$ obtained from Eq. (3). Therefore we need to ensure $B>0$ while performing phase measurements for VRC. From the definition of $B$ we have $B(k=0)=1$. Thus, under the assumption of continuity there exists an interval around $k=0$ where $B$ is positive. As the fringe period $\lambda$ determines $k$ via the relation $k=2\pi /\lambda$, we can always ensure $B>0$ in the experiment by choosing a sufficiently large fringe period.
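For the pillbox PSF adopted later in section 3.2, the positivity interval of $B$ can be made concrete: $B(s)=2J_1(s)/s$ stays positive up to the first nonzero root of $J_1$. The following sketch locates that root numerically and derives the corresponding minimum fringe period; the blur radius value is an assumption chosen for illustration only.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

# First zero s0 of the pillbox factor B(s) = 2*J1(s)/s, i.e. the first
# nonzero root of J1 (known to lie near 3.83). To keep B > 0, the fringe
# period must satisfy lambda > 2*pi*R / s0 for the largest expected blur
# radius R.
s0 = brentq(j1, 1.0, 5.0)          # first positive zero of J1
R_max = 0.5                        # assumed largest blur radius in mm
lambda_min = 2 * np.pi * R_max / s0
# s0 ≈ 3.83, so lambda_min ≈ 0.82 mm in this example
```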

2.4 Defocus model

We assume a parametric model for the PSF. Typical examples in the literature are the Gaussian and the pillbox function [21,30,31]. Parametric models typically have a parameter $R$ that is proportional to the width of the PSF. In the following, $R$ is called the blur radius.

To describe the dependency of the blur radius as a function of the distance from the focus, we assume a linear model:

$$R(d)=|d-d_f|\cdot a.$$

Here, $d$ is the distance from the pinhole along the ray of vision, $d_f$ is the focus distance along the ray of vision and $a$ is the rate of increase of the blur radius with distance. The focus distance may be different for every camera pixel. The parameter $a$, however, is assumed in the following to be equal for all camera pixels. The parameters $d_f$ and $a$ are unknown and will be the result of the focus determination procedure.

Figure 1 shows the geometry used to estimate $a$ from the F-number $N$ of the optical system. From similar triangles we see that $a=D/(2d_f)$, where $D$ is the diameter of the entrance pupil. Let $f$ be the focal length. By definition of the F-number we have $N=f/D$. From this it follows that the estimate of $a$ is

$$a=f/(2Nd_f).$$
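As a worked numerical example, Eq. (12) can be evaluated with the lens data of the experimental setup in section 3.1 ($f = 16$ mm, F-number 8); the focus distance of roughly 435 mm is an assumed value for illustration.

```python
# Estimate of the blur-growth rate a from Eq. (12)
f = 16.0       # focal length in mm (Ricoh FL-CC1416-2M lens)
N = 8.0        # F-number used in the experiments
d_f = 435.0    # assumed focus distance in mm
a = f / (2 * N * d_f)
# a ≈ 2.3e-3, consistent with the estimate quoted in section 3.3
```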

Because the blur radius is proportional to the width of the PSF and the fringe period $\lambda$ describes the width of the fringe pattern it is convolved with, the contrast can only be a function of their ratio. The factor $B$, which describes the contrast degradation due to defocus, must therefore have the functional form $B(R/\lambda )$. For convenience, we define $B$ such that $B=B(2\pi R/\lambda )=B(kR)$. According to Appendix 5.2, the factor $B$ evaluates to

$$B_\mathrm{Gauss}(s)=\exp\!\left(-\frac{s^2}{2}\right),\qquad B_\mathrm{pillbox}(s)=\frac{J_1(s)}{s/2}$$
for a Gaussian and a pillbox PSF, where $J_1$ stands for the first order Bessel function of first kind.
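A small sketch of the two factors in Eq. (13), with the removable singularity of the pillbox factor at $s=0$ handled explicitly; the function names are illustrative.

```python
import numpy as np
from scipy.special import j1

def B_gauss(s):
    """Gaussian-PSF contrast factor exp(-s^2/2) from Eq. (13)."""
    return np.exp(-np.asarray(s, dtype=float) ** 2 / 2)

def B_pillbox(s):
    """Pillbox-PSF contrast factor 2*J1(s)/s from Eq. (13); B(0) = 1."""
    s = np.atleast_1d(np.asarray(s, dtype=float))
    out = np.ones_like(s)
    nz = s != 0
    out[nz] = 2 * j1(s[nz]) / s[nz]
    return out

# Both factors equal 1 in focus (s = 0). The pillbox factor changes sign
# beyond the first zero of J1 (s ≈ 3.83), producing the side lobes and the
# phase flip discussed in section 2.3; the Gaussian factor stays positive.
```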


Fig. 1. Geometry for estimating parameter $a$ of the defocus model. For simplicity only the vision ray of the optical axis is considered. The cone of rays entering the entrance pupil converges at focus distance $d_f$ to a single point. The cone has slope $a$. From similar triangles we see that $a=D/(2d_f)$, where $D$ is the diameter of the entrance pupil.


2.5 Optimization procedure

To find the distances $(d_f)_i$ for every vision ray $i$ as well as the unknown parameters $\gamma ^\mathrm {d}$ and $a$, we start from the observation Eq. (10) to formulate a least squares error function $F$. During a phase measurement, both $x$ and $y$ phases are determined, so the error function contains one term for the $x$ contrast and one for the $y$ contrast, for each vision ray $i$ and each phase measurement $j$. We therefore have measured contrast values $\gamma _{ij}^x$ and $\gamma _{ij}^y$. Generally, the fringe period will be different in the $x$ and $y$ directions because of different pixel pitches or by choice of the experimenter. The experimenter may also choose the fringe period differently for each phase measurement. Additionally, most vision rays will strike the display surface at non-normal incidence.

Figure 2 shows the geometry of this setting, which results in an effective fringe period $\lambda ^\mathrm {eff}=\lambda \cos \alpha$, where $\alpha$ is the angle between the display normal and the projection of the vision ray into the $xz$ and $yz$ plane, respectively. Generally, $\alpha$ is different for every vision ray and phase measurement. The error function thus takes the form

$$F(\gamma^\mathrm{d},a,\lbrace(d_f)_i\rbrace)=\sum_{ij}\left[(f_{ij}^x)^2+(f_{ij}^y)^2\right]$$
with $x$ and $y$ contrast residuals
$$f_{ij}^x=\gamma_{ij}^x-\gamma^\mathrm{d} \left|B\!\left(k_{ij}^{\mathrm{eff},x}R(d_{ij},(d_f)_i,a)\right)\right|,\qquad f_{ij}^y=\gamma_{ij}^y-\gamma^\mathrm{d} \left|B\!\left(k_{ij}^{\mathrm{eff},y}R(d_{ij},(d_f)_i,a)\right)\right|$$
and effective angular wave numbers
$$k_{ij}^{\mathrm{eff},x}=\frac{2\pi}{\lambda_j^x\cos\alpha_{ij}^x},\qquad k_{ij}^{\mathrm{eff},y}=\frac{2\pi}{\lambda_j^y\cos\alpha_{ij}^y}.$$

The distances $d_{ij}$ and angles $\alpha _{ij}^x,\alpha _{ij}^y$ can be calculated from the VRC. The unknowns $(d_f)_i$, $a$ and $\gamma ^\mathrm {d}$ can now be determined by regarding $F$ as a cost function and minimizing it. For numerical optimization, we use the MATLAB function lsqnonlin [32]. The Jacobian of the least squares residuals is calculated analytically to improve speed and accuracy. We only use the data of every fourth camera pixel in both sensor chip directions to reduce the size of the problem.
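The structure of the optimization (the paper uses MATLAB's lsqnonlin) can be sketched in Python with SciPy. The sketch below is a simplified stand-in, not the authors' implementation: it uses only the $x$-direction residuals of Eq. (15), a pillbox $B$, and synthetic noise-free data with three vision rays.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import j1

def B_pillbox(s):
    s = np.asarray(s, dtype=float)
    safe = np.where(s == 0, 1.0, s)
    return np.where(s == 0, 1.0, 2 * j1(safe) / safe)

def residuals(params, d, k_eff, gamma_meas):
    """Residuals f_ij of Eq. (15), x direction only for brevity.
    params = [gamma_d, a, (d_f)_1, ..., (d_f)_n]; the data arrays
    d, k_eff and gamma_meas have shape (n_rays, n_measurements)."""
    gamma_d, a = params[0], params[1]
    d_f = params[2:].reshape(-1, 1)
    R = np.abs(d - d_f) * a                       # blur radius, Eq. (11)
    return (gamma_meas - gamma_d * np.abs(B_pillbox(k_eff * R))).ravel()

# Synthetic data: 3 vision rays observed at 9 camera positions (mm)
d = np.tile(np.linspace(250.0, 650.0, 9), (3, 1))
k_eff = np.full_like(d, 2 * np.pi / 3.0)          # fringe period 3 mm
truth = np.array([0.9, 2.4e-3, 430.0, 440.0, 450.0])
gamma_meas = -residuals(truth, d, k_eff, np.zeros_like(d)).reshape(3, 9)

x0 = np.array([0.8, 2.0e-3, 410.0, 410.0, 410.0])
fit = least_squares(residuals, x0, args=(d, k_eff, gamma_meas))
# fit.x should recover the ground truth values up to numerical tolerance
```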


Fig. 2. An oblique viewing direction makes the fringe period effectively smaller: $\lambda ^\mathrm {eff}=\lambda \cos \alpha$. The angle $\alpha$ is between the normal direction of the display and the projection of the ray direction vector into the $xz$ plane (in case of $x$ fringes).


After the focus distances $(d_f)_i$ along the vision rays and thus the foci have been found, the focus distances along the optical axis can be obtained from the $z$ coordinates of the foci. The focus distances along the optical axis are of particular interest when investigating the field curvature.
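The conversion from ray-wise focus distance to the focus distance along the optical axis can be sketched as follows; the ray parameters are illustrative, and the focus distance is assumed to be measured from the ray start point on the $z=0$ plane.

```python
import numpy as np

# The focus point of a vision ray is obtained by walking the focus
# distance d_f along the normalized ray direction from its start point;
# the z coordinate of that point is the focus distance along the optical
# axis.
p = np.array([1.0, -0.5, 0.0])       # ray start point on the z = 0 plane
v = np.array([0.2, 0.1, 1.0])        # ray direction vector
d_f = 450.0                          # focus distance along the ray, in mm
focus = p + d_f * v / np.linalg.norm(v)
z_focus = focus[2]                   # focus distance along the optical axis
```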

3. Experiments and discussion

3.1 Experimental setup

Figure 3(a) shows the experimental setup. It consists of a Samsung U28E590D 28" monitor and a Manta G-145B camera equipped with a 16 mm Ricoh FL-CC1416-2M lens. The F-number of the lens is set to 8 and the focus setting is between 0.3 m and 0.5 m. The camera is mounted on a linear translation stage and can also be rotated in the horizontal plane. The translation and rotation of the camera do not need to be precisely controlled and can be set, e.g., with a ruler, because the relative orientations of camera and display are determined later during calibration. The camera is moved instead of the display to change their relative position, because moving the camera is much easier, and mechanically moving the display would probably change its shape. We precalibrate the intensity characteristic of the display with an active method similar to [33]: We show images of all 8 bit gray values on the display and each time measure the mean intensity of the camera image. The functional relationship between intensity and gray value is then established via curve fitting with 14 fit coefficients.
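The precalibration of the display intensity characteristic can be sketched as below. The measured mean intensities are replaced by a synthetic gamma-like response (an assumption for illustration), and a polynomial with 14 coefficients, i.e. degree 13 as in the text, serves as the fitted curve; the paper does not specify the exact fit function.

```python
import numpy as np

# Sketch of the display intensity precalibration: the camera mean image
# intensity is recorded for every 8-bit gray value, then a polynomial
# with 14 coefficients maps gray value to intensity.
g = np.arange(256) / 255.0                  # normalized gray values
measured_mean = g ** 2.2                    # assumed display response
coeffs = np.polyfit(g, measured_mean, deg=13)
model = np.polyval(coeffs, g)
max_err = np.max(np.abs(model - measured_mean))
# max_err stays far below the 8-bit quantization step of 1/255
```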


Fig. 3. (a) Experimental setup. The camera is mounted on a linear translation stage and can be rotated horizontally. The precise relative position of camera and display is determined using Vision Ray Calibration (VRC). (b) Measurement procedure from the perspective of the camera coordinate system.


The measurement procedure from the perspective of the camera coordinate system is shown in Fig. 3(b). The camera is translated to different positions and each time a phase measurement is recorded. Additionally, a few phase measurements are recorded with the camera tilted, which is necessary for the VRC algorithm to converge. From each phase measurement we obtain the absolute phase and the fringe contrast. The absolute phases are used to perform a VRC, from which we obtain the vision rays, the display poses relative to the camera and the display shape [20]. From the display poses we calculate the intersection points of the vision rays with the display for all non-tilted measurements. Their distances from the camera origin and the measured fringe contrast constitute the data used in the focus determination procedure.

3.2 Comparison of Gaussian and pillbox blur

First, we investigate whether to use a Gaussian or a pillbox to model the PSF. We measure the fringe contrast over a large range of distances $d$ while keeping the fringe period on the display constant at 6 pixels.

Figure 4 shows measured contrast against distance towards the LCD surface for a single pixel in the center of the camera chip. Contrast models based on a Gaussian and a pillbox PSF were fitted to the data. Highest contrast is obtained near the focus at around $d = 450$ mm. Here, both models fit the measurement data well. However, further away from the focus at $d=200$ mm or $d=750$ mm, the contrast almost drops to zero before increasing again, forming side lobes. The side lobes correspond to the sign change of $B$ discussed in section 2.3. The corresponding phase shift of $\pi$ can be observed directly by comparing fringe images captured e.g. at $d=220$ mm and at $d=180$ mm. Only the pillbox model can reproduce this behavior; we therefore choose it for the focus determination procedure. Still, the pillbox PSF deviates from the experimental data for larger distances. This suggests that an improved PSF model might be worth investigating in the future. In the following we perform the experiments in such a way that measuring at the side lobes is avoided, so that $B$ is always positive.
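The model comparison can be reproduced in spirit with synthetic data: a contrast curve with side lobes (generated from a pillbox ground truth, with parameter values chosen for illustration, not taken from the experiment) is fitted with both models. Only the pillbox model can follow the side lobes, so its fit residual is markedly smaller.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j1

K = 2 * np.pi / 3.0   # assumed effective angular wave number (period 3 mm)

def gamma_gauss(d, g0, a, d_f):
    s = K * a * np.abs(d - d_f)
    return g0 * np.exp(-s ** 2 / 2)

def gamma_pillbox(d, g0, a, d_f):
    s = K * a * np.abs(d - d_f)
    safe = np.where(s == 0, 1.0, s)
    return g0 * np.abs(np.where(s == 0, 1.0, 2 * j1(safe) / safe))

# Synthetic single-pixel contrast curve with side lobes
d = np.linspace(150.0, 750.0, 61)
gamma = gamma_pillbox(d, 0.9, 8e-3, 450.0)

p_pill, _ = curve_fit(gamma_pillbox, d, gamma, p0=[0.8, 6e-3, 430.0])
p_gauss, _ = curve_fit(gamma_gauss, d, gamma, p0=[0.8, 6e-3, 430.0])
rms_pill = np.sqrt(np.mean((gamma_pillbox(d, *p_pill) - gamma) ** 2))
rms_gauss = np.sqrt(np.mean((gamma_gauss(d, *p_gauss) - gamma) ** 2))
# The pillbox model reproduces the side lobes; the Gaussian cannot.
```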


Fig. 4. Measured contrast as a function of distance $d$ for a single pixel in the center of the camera image. The phase measurements were performed with a constant fringe period of 6 pixels. The fitted contrast models stemming from a Gaussian and a pillbox PSF are shown as well.


3.3 Focus distances

To obtain the focus distances, we move the camera to several positions relative to the display and each time perform a phase measurement. We translate the camera over a range of 60 cm in steps of 5 cm, and in steps of 2.5 cm when the display is close to the focus, which yields 17 measurements. Then we perform 3 additional measurements at different positions with the camera tilted to different angles, as explained in section 3.1. This yields 20 phase measurements overall. The fringe period is chosen to be a constant value of 12 pixels, which is much larger than for Fig. 4, to avoid the appearance of side lobes in the whole translation range. We then apply the VRC to all phase measurements and the focus determination to all phase measurements where the camera was not tilted.

The parameter $a$ is found to be $2.375\cdot 10^{-3}$, which is close to the value $2.3\cdot 10^{-3}$ estimated from Eq. (12). The display contrast $\gamma ^\mathrm {d}$ is found to be 0.849, which is unrealistically low. Figure 5(a) shows the focus distances. The camera pixels used in the focus determination procedure form a regular grid, on which the focus distances are displayed as a color-coded image. In the following, this form of visualization is used for other quantities that are defined for all used camera pixels as well. The focus distance is maximal in the center of the image and is approximately rotationally symmetric, as one would expect from a rotationally symmetric lens.


Fig. 5. Results obtained from measurement with constant fringe period of 12 pixels. (a) Focus Distances along camera optical axis, displayed as color-coded image on the grid of camera pixels that were used in the focus determination procedure. The markers labeled 1–4 indicate pixels investigated in (b). (b) Measured contrast as a function of distance for a few camera pixels. Curves labeled 1 and 4 show small "dents" close to the maximum. (c) Pixel-wise error of an exemplary phase measurement: RMS of deviation between measured and fitted intensities. (d) Pixel-wise RMS of residuals $f_{ij}^x$ at the end of the focus determination procedure.


Because the fringe period is constant, the fringe contrast as a function of distance is expected to have a clear maximum defining the focus. Figure 5(b) shows the curves of measured fringe contrast as a function of distance for a few pixels. Note that the contrast values in Fig. 5(b) do not drop to zero within the distance range like they do in Fig. 4, because a larger fringe period was used. Some of the curves have small "dents" at the maximum. By inspection of the fringe images we found the dents to be caused by moire patterns [34] that originate from the sampling of the display pixel grid by the camera. The corresponding data points have therefore been omitted in the focus determination procedure.

Surprisingly, the maximum contrast of the curves strongly depends on the position on the camera sensor and even exceeds 1 in some cases. The reason for this can be seen in Fig. 5(c), which shows, for an exemplary phase measurement, the pixel-wise root mean square (RMS) of $I_n-\bar {I}(1+\gamma \cos (\phi {-}2\pi n/N)),\, n=0,\ldots,N-1$, i.e., the deviation of the measured intensity from a sinusoidal shape. The RMS is especially large at the top and bottom of the image. From detailed inspection of the intensities we find that these deviations cause the contrast to be measured systematically too large at the top and too small at the bottom. Because Eq. (4) mathematically permits contrast values up to 2 if the intensities $I_n$ significantly deviate from a sinusoidal curve, the contrast can exceed 1 at the image top.

The non-sinusoidal intensity is most likely caused by a nonlinear display intensity characteristic, stemming from a change of the intensity characteristic with the viewing angle. This explains the error distribution in Fig. 5: The image regions of largest error are at the top and bottom. These regions correspond to large vertical viewing angles on the display, which, for consumer twisted-nematic LCDs, are known to have considerably different intensity characteristics compared to a normal viewing angle [35].

Figure 5(d) exhibits the pixel-wise RMS of the residuals $f_{ij}^x$ resulting from the optimization procedure. It is similar to Fig. 5(c), because systematic errors of the measured contrast also increase its deviation from the fitted contrast model. The regression error of the focus determination procedure is thus to a large extent caused by the intensity nonlinearity of the display. For the phase measurements used for focus determination, the camera is translated but not rotated. Thus, a particular camera pixel always observes the display from a similar angle. Without defocus, the contrast would therefore be the same for all measurements. Hence, contrast degradation can solely be attributed to defocus. This means that the position of the contrast maximum and therefore the calculated focus distance are not affected by a nonlinear intensity characteristic. This holds, of course, only if the intensity nonlinearity is not severe enough to break the contrast model fitting procedure.

3.4 Tilting the camera

To test the stability of our focus determination method against horizontal tilt, we repeat the measurements and evaluations from the previous section with a camera that is tilted in the horizontal plane by approximately $5^\circ$. All results are close to the results with the non-tilted camera. We obtain $a=2.368\cdot 10^{-3}$ and $\gamma ^\mathrm {d}=0.875$. The focus distances are depicted in Fig. 6(a). An estimate of the reproducibility of the focus determination can be made by looking at the difference between Fig. 5(a) and Fig. 6(a), which is depicted in Fig. 6(b). The RMS of Fig. 6(b) is 1.4 mm, indicating that the reproducibility of the focus distance measurement is of the same order of magnitude.


Fig. 6. (a) Focus distances along the camera optical axis, obtained from measurement with tilted camera and constant fringe period of 12 pixels. (b) Difference of the results of (a) and Fig. 5(a).


3.5 Constant fringe contrast

We now test the feasibility of the focus determination method when the fringe period is not held constant. Instead, the fringe period is adjusted for each camera position to keep the fringe contrast approximately constant. Such a choice may be desirable for camera calibration, because the fringe contrast is directly related to the statistical phase measurement error [36]. Contrast curves as illustrated in Fig. 5(b) will, however, no longer show maxima that visibly designate the focus position. We translate the camera over a range of 46 cm in steps of 2 cm, omitting distances close to the focus where the fringe period would be too small for the display to render a reasonably smooth sine. Together with 4 measurements with tilted camera, this constitutes 18 phase measurements. The contrast is kept at approximately 0.5 in the center of the camera image. Figure 7 shows the adjusted fringe periods, which already hint at the approximate focus distance of 350 mm, as the focus is the position where the contrast is high even for a small fringe period.


Fig. 7. Fringe period $\lambda$ as a function of distance $d$, for the measurement where the fringe period was adjusted to keep the contrast approximately constant. Only fringe periods of the measurements used for focus determination (non-tilted camera) are shown.


After recording the measurements we perform a VRC and the focus determination procedure. We obtain $a=2.827\cdot 10^{-3}$ and $\gamma ^\mathrm {d}=0.821$. Figure 8(a) exhibits the focus distances along the optical axis. They are again maximal in the image center and rotationally symmetric, as one would expect. The focus distances are not directly comparable to the results of the previous sections because the camera parameters were adjusted in between.


Fig. 8. (a) Focus distances along the camera optical axis, obtained from the measurement where the fringe period was adjusted for each camera position to keep the fringe contrast approximately constant. (b) Modified Laplacian focus measure (ML) as a function of distance for a single pixel. To find the maximum, a parabola is being fitted to the points close to the maximum data point. (c) Focus distances along camera optical axis obtained from the ML method. (d) Difference between (c) and (a).


In the literature, focus is often determined using a "focus measure" that can be calculated directly from an image [30]. Here, we utilize the modified Laplacian focus measure (ML) [37,38] to obtain an independent focus distance measurement for comparison with our method. We translate the camera to 115 positions around the distance where the display is in focus, in steps of 1 mm. Each time, the display shows a grid structure which the camera records. The grid consists of $2\times 2$ bright pixel blocks that are two pixels apart on a dark background. At each camera position we perform a phase measurement. The fringe period is 20 pixels. Using the calibration result of the previous VRC, we can find the relative position of camera and display and thus the distances at which the vision rays strike the display. From the camera image of the grid, the ML can be calculated for all camera pixels, except for the ones on the image border, which are ignored. Because the ML value of a specific camera pixel depends on whether it was looking at a bright or a dark display pixel or at the region in between, the ML will differ strongly even between neighboring camera pixels. To mitigate this effect as well as effects from the moire pattern while still recovering the slowly varying field curvature, the ML image is filtered with a Gaussian averaging filter with a standard deviation of 30 pixels. Thus we obtain for every camera pixel the ML at multiple distances. The maximum of the ML as a function of distance yields the focus distance.
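The pixel-wise ML computation and the subsequent Gaussian smoothing can be sketched as follows; the image data are synthetic, and the filter width is chosen arbitrarily for the small test image rather than matching the 30-pixel value used in the experiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modified_laplacian(img):
    """Modified Laplacian focus measure:
    |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|.
    Border pixels are dropped, as in the text."""
    c = img[1:-1, 1:-1]
    return (np.abs(2 * c - img[1:-1, :-2] - img[1:-1, 2:]) +
            np.abs(2 * c - img[:-2, 1:-1] - img[2:, 1:-1]))

# A sharp structure yields a larger smoothed ML response than a defocused
# (here: Gaussian-blurred) version of the same structure.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
defocused = gaussian_filter(sharp, sigma=2.0)
ml_sharp = gaussian_filter(modified_laplacian(sharp), sigma=5.0)
ml_defocused = gaussian_filter(modified_laplacian(defocused), sigma=5.0)
# ml_sharp.mean() > ml_defocused.mean()
```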

Figure 8(b) shows the ML of a single pixel as a function of distance. The maximum of the curve is located by fitting a parabola to the data points closest to the approximate position of the maximum. Around $d=400$ mm and, to a smaller extent, around $d=325$ mm, deviations from a smooth curve are visible. These deviations occur when the moiré pattern, formed by sampling the display pixel grid with the camera, has such a large wavelength that the Gaussian filter cannot suppress it. Fortunately, for this specific camera the ML deviations caused by the filtered moiré pattern never appeared directly at the maxima of the ML curves. By finding the maximum of the ML curve for all camera pixels we obtain the focus distances depicted in Fig. 8(c). For comparison with the focus distances obtained from our method, Fig. 8(d) shows the difference between Fig. 8(a) and Fig. 8(c); its RMS is 1.3 mm. Thus we verified the accuracy of our focus distances to be on the order of mm.
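The sub-step localization of the ML maximum by a parabola fit can be sketched as follows (a hypothetical helper of our own, assuming NumPy; the number of fitted neighbors is our choice, not taken from the paper):

```python
import numpy as np

def parabola_peak(d, ml, n_neighbors=3):
    """Estimate the focus distance as the vertex of a parabola fitted
    to the ML samples around the maximum data point."""
    i = int(np.argmax(ml))
    lo = max(i - n_neighbors, 0)
    hi = min(i + n_neighbors + 1, len(d))
    # Center the abscissa for numerical stability of the fit.
    dd = d[lo:hi] - d[i]
    a, b, c = np.polyfit(dd, ml[lo:hi], 2)
    return d[i] - b / (2 * a)  # vertex of a*dd^2 + b*dd + c
```

Since the translation stage samples the ML curve in 1 mm steps, this fit is what makes sub-millimeter focus distance estimates possible.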

4. Summary and conclusion

In this work we demonstrated that the fringe contrast, acquired as a by-product of the phase measurements for vision ray calibration, allows the focus distances of the camera to be calculated without any additional measurements. Our method determines the focus distances for individual pixels, thus also yielding the object-side field curvature. We derived a mathematical model for the degradation of phase measurement contrast due to defocus and implemented an optimization procedure to obtain the focus distances from measurement data. Using empirical data we found that a pillbox function models the point spread function of our camera better than a Gaussian. We performed the focus determination procedure on experimental data and repeated the measurement with a horizontally tilted camera, which gave a reproducibility estimate of the focus distances of a few mm, or approximately 1%.

To show the feasibility of the focus determination procedure for different fringe period selection schemes, we determined the focus distances both from phase measurements where the fringe period was held constant for all camera positions and from measurements where the fringe period was adjusted to keep the fringe contrast approximately constant. The obtained focus distances agree to within a few mm with the results from an independent comparison method based on the modified Laplacian focus measure. This accuracy may enable the future use of our method for focus surface alignment of camera systems. In our setup, the intensity nonlinearity of the display at oblique vertical viewing angles significantly influences the measured contrast; our model currently does not account for this type of intensity nonlinearity.

In conclusion, we demonstrated the feasibility of focus determination from phase measurements used in vision ray calibration. The contrast variations due to intensity nonlinearity of the display at vertical viewing angles may be further reduced by using a high quality display.

APPENDIX

I. Convolution of fringe image with PSF

The Fourier transform of a function $f(x)$ with respect to $x$ is

$$\mathcal{F}_x[f](\nu)=\int_{-\infty}^\infty f(x)\,\text{e}^{-\textrm{i}\nu x}\,\text{d} x.$$

The inverse transform is

$$\mathcal{F}_\nu^{{-}1}[g](x)=\frac{1}{2\pi}\int_{-\infty}^\infty g(\nu)\,\text{e}^{\textrm{i}\nu x}\,\text{d} \nu.$$

This can be generalized to two dimensions. Then, $\mathcal{F} _{xy}$ symbolizes the two-dimensional Fourier transform with respect to variables $x$ and $y$.

In the following, we evaluate the expression

$$I = \mathcal{F}^{{-}1}[\mathcal{F}[I^\mathrm{d}]\cdot\mathcal{F}[\textrm{PSF}]].$$

As in Section 2.3, we suppress the phase shift index $n$ and define $\psi =2\pi n/N$. In Eq. (1), let $\bar {I}^\mathrm {d}$ and $\gamma ^\mathrm {d}$ be constant. Then the Fourier transform of $I^\mathrm {d}$ is

$$\mathcal{F}[I^\mathrm{d}](\nu_x,\nu_y)=\mathcal{F}_{xy}[\bar{I}^\mathrm{d}(1+\gamma^\mathrm{d}\cos(kx-\psi))](\nu_x,\nu_y).$$

Since in this case $I^\mathrm {d}$ describes $x$ fringes, $I^\mathrm {d}$ is not dependent on $y$ and we obtain

$$\mathcal{F}[I^\mathrm{d}](\nu_x,\nu_y)=2\pi\delta(\nu_y)\mathcal{F}_x[\bar{I}^\mathrm{d}(1+\gamma^\mathrm{d}\cos(kx-\psi))](\nu_x).$$

Due to the linearity of the Fourier transform, we can transform 1 and $\cos (kx-\psi )$ individually. The transform of 1 is $2\pi \delta (\nu _x)$. Using the shift property of the Fourier transform, we have

$$\mathcal{F}_x[\cos(kx-\psi)](\nu_x)=\mathcal{F}_x[\cos(k(x-\psi/k))](\nu_x)=\text{e}^{-\textrm{i} \psi/k\cdot \nu_x}\mathcal{F}_x[\cos(kx)](\nu_x).$$

The transformation of $\cos (kx)$ is $\pi (\delta (\nu _x-k)+\delta (\nu _x+k))$. Thus we obtain

$$\mathcal{F}[I^\mathrm{d}](\nu_x,\nu_y)=(2\pi)^2\delta(\nu_y)\bar{I}^\mathrm{d}\text{e}^{-\textrm{i} \psi/k\cdot\nu_x}\left(\delta(\nu_x)+\gamma^\mathrm{d}\frac{\delta(\nu_x-k)+\delta(\nu_x+k)}{2}\right).$$

Using the sampling property of the shifted Dirac deltas, $\nu _x$ in $\text{e} ^{-\textrm{i} \psi /k\cdot \nu _x}$ can be replaced by 0, $k$ and $-k$ respectively, which yields

$$\mathcal{F}[I^\mathrm{d}](\nu_x,\nu_y) =(2\pi)^2\delta(\nu_y)\bar{I}^\mathrm{d}\left(\delta(\nu_x)+\gamma^\mathrm{d}\frac{\text{e}^{-\textrm{i} \psi}\delta(\nu_x-k)+\text{e}^{\textrm{i} \psi}\delta(\nu_x+k)}{2}\right).$$

The Fourier transform of the PSF is the OTF [28]. We use Eq. (6) to calculate the measured intensity:

$$I(x,y) = \mathcal{F}_{\nu_x,\nu_y}^{{-}1}\Big[\mathcal{F}[I^\mathrm{d}](\nu_x,\nu_y)\cdot \textrm{OTF}(\nu_x,\nu_y)\Big](x,y).$$

Because of the factor $\delta (\nu _y)$ in Eq. (24), the Fourier transform with respect to $\nu _y$ can be performed simply by evaluating the integrand at $\nu _y=0$ and dividing by $2\pi$. Thus we are left only with the transformation with respect to $\nu _x$, which we write explicitly:

$$I(x,y)=\frac{1}{2\pi}\int_{-\infty}^\infty 2\pi \bar{I}^\mathrm{d}\left(\delta(\nu_x)+\gamma^\mathrm{d}\frac{\text{e}^{-\textrm{i} \psi}\delta(\nu_x-k)+\text{e}^{\textrm{i}\psi}\delta(\nu_x+k)}{2}\right)\textrm{OTF}(\nu_x,0)\,\text{e}^{\textrm{i} x\nu_x}\,\text{d} \nu_x.$$

Again using the sampling property of the Dirac delta, we obtain

$$I(x,y)=\bar{I}^\mathrm{d}\left(\textrm{OTF}(0,0)+\gamma^\mathrm{d}\frac{\text{e}^{-\textrm{i} \psi}\textrm{OTF}(k,0)\,\text{e}^{\textrm{i} kx}+\text{e}^{\textrm{i}\psi}\textrm{OTF}({-}k,0)\,\text{e}^{-\textrm{i} kx}}{2}\right).$$

The PSF is real-valued. Under the assumption that it is also symmetric, its Fourier transform – the OTF – is a real-valued even function: $\operatorname {OTF}(\nu _x,\nu _y)=\operatorname {OTF}(-\nu _x,-\nu _y)$. Equation (27) therefore simplifies to

$$I(x,y) = \bar{I}^\mathrm{d}\textrm{OTF}(0,0)\left(1+\gamma^\mathrm{d}\frac{\textrm{OTF}(k,0)}{\textrm{OTF}(0,0)}\frac{\text{e}^{\textrm{i} (kx-\psi)}+\text{e}^{-\textrm{i} (kx-\psi)}}{2}\right).$$

Using the identity $\cos (x)=(\text{e} ^{\textrm{i} x}+\text{e} ^{-\textrm{i} x})/2$ and the definitions

$$\bar{I}=\bar{I}^\mathrm{d}\,\textrm{OTF}(0,0),$$
$$B=\frac{\textrm{OTF}(k,0)}{\textrm{OTF}(0,0)},$$
we finally obtain
$$I(x,y)=\bar{I}(1+\gamma^\mathrm{d} B\cos(kx-\psi)).$$
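This attenuation result can be checked numerically: blurring a synthetic fringe with a Gaussian PSF should reduce the measured contrast from $\gamma^\mathrm{d}$ to $\gamma^\mathrm{d}\exp(-(kR)^2/2)$. A minimal Python sketch of our own (not part of the original derivation); SciPy's `gaussian_filter1d` stands in for the 2-D convolution, which separates for $x$ fringes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

lam = 40.0           # fringe period in samples
k = 2 * np.pi / lam  # angular fringe frequency
R = 5.0              # Gaussian PSF width in samples
gamma_d = 0.8        # displayed contrast

x = np.arange(400)
fringe = 1.0 + gamma_d * np.cos(k * x)

# Convolution with the PSF; 'wrap' keeps the fringe periodic.
blurred = gaussian_filter1d(fringe, sigma=R, mode="wrap")

# Measured contrast of the blurred fringe vs. the model prediction.
gamma_meas = (blurred.max() - blurred.min()) / (blurred.max() + blurred.min())
gamma_model = gamma_d * np.exp(-(k * R) ** 2 / 2)  # gamma_d * B(kR), Gaussian PSF
```

The two values agree to about three decimal places, confirming that the convolution only rescales the fringe contrast by $B(kR)$ while leaving the mean intensity and the phase untouched.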

II. Focus degradation for Gaussian and pillbox PSF

With Eq. (30) and the fact that the OTF is the Fourier transform of the PSF, the functional form of $B=B(kR)$ for specific PSFs can be obtained from tabulated Fourier transformations [27,39]. For a Gaussian PSF

$$\textrm{PSF}(x,y) = \frac{1}{2\pi R^2}\exp\!\left(-\frac{x^2+y^2}{2R^2}\right)$$
we get
$$\textrm{OTF}(\nu_x,\nu_y)=2\pi R^2 \exp\!\left(-\frac{R^2}{2}(\nu_x^2+\nu_y^2)\right)$$
and thus
$$B(kR) = \frac{2\pi R^2 \exp\!\left(-\frac{R^2}{2}(k^2+0^2)\right)}{2\pi R^2 \exp\!\left(-\frac{R^2}{2}(0^2+0^2)\right)}=\exp\!\left(-\frac{(kR)^2}{2}\right).$$

For a pillbox PSF

$$\textrm{PSF}(x,y)=\frac{1}{\pi R^2}\textrm{circ}\!\left(\frac{\sqrt{x^2+y^2}}{R}\right)$$
we get
$$\textrm{OTF}(\nu_x,\nu_y)=\frac{1}{\pi }\cdot \frac{J_1\!\Big(R\sqrt{\nu_x^2+\nu_y^2}\Big)}{R\sqrt{\nu_x^2+\nu_y^2}},$$
where $J_1$ is the Bessel function of the first kind of order one. $\operatorname {OTF}(0,0)$ is defined as the limit $(\nu _x,\nu _y)\to 0$. Since $J_1(x)=x/2+\mathcal {O}(x^3)$, we have
$$\textrm{OTF}(0,0)=\lim_{\nu\to 0}\frac{1}{\pi}\frac{J_1(R\nu)}{R\nu}=\frac{1}{2\pi}$$
and thus
$$B(kR)=\frac{\frac{1}{\pi }\cdot \frac{J_1(Rk)}{Rk}}{\frac{1}{2\pi}}=\frac{J_1(kR)}{kR/2}.$$
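Both attenuation models can be evaluated numerically; the sketch below (our own naming, assuming SciPy for $J_1$) also handles the removable singularity of the pillbox model at $s=0$:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def B_gauss(s):
    """Contrast attenuation B(kR) for a Gaussian PSF."""
    return np.exp(-np.asarray(s, dtype=float) ** 2 / 2)

def B_pillbox(s):
    """Contrast attenuation B(kR) = J1(s)/(s/2) for a pillbox PSF,
    with B(0) = 1 taken as the limit s -> 0."""
    s = np.asarray(s, dtype=float)
    out = np.ones_like(s)
    nz = s != 0
    out[nz] = j1(s[nz]) / (s[nz] / 2)
    return out
```

Unlike the Gaussian model, $B_\mathrm{pillbox}$ changes sign at the first zero of $J_1$ ($s\approx 3.83$), consistent with the use of $|B|$ together with a phase offset $\eta$ in Eq. (9).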

Funding

Deutsche Forschungsgemeinschaft (418992697).

Acknowledgments

The authors thank C. Kapitza for the provided technical assistance. S. Gauchan acknowledges valuable discussions with C. Falldorf.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Hödel, L. Hoegner, and U. Stilla, “Review on photogrammetric surface inspection in automotive production,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLIII-B2-2021, 511–518 (2021). [CrossRef]  

2. L. Bohner, D. Habor, K. Radermacher, S. Wolfart, and J. Marotti, “Scanning of a dental implant with a high-frequency ultrasound scanner: A pilot study,” Appl. Sci. 11(12), 5494 (2021). [CrossRef]  

3. I. Vasiljević, R. Obradović, I. Ðurić, B. Popkonstantinović, I. Budak, L. Kulić, and Z. Milojević, “Copyright protection of 3d digitized artistic sculptures by adding unique local inconspicuous errors by sculptors,” Appl. Sci. 11(16), 7481 (2021). [CrossRef]  

4. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

5. H. S. Lee, S. Shin, H. Lee, and Y. Yu, “Determining the micro-optical element surfaces profiles using transmission deflectometry with liquids,” Curr. Appl. Phys. 15(3), 302–306 (2015). [CrossRef]  

6. K. K. Kieselbach, M. Nöthen, and H. Heuer, “Development of a visual inspection system and the corresponding algorithm for the detection and subsequent classification of paint defects on car bodies in the automotive industry,” J. Coatings Technol. Res. 16(4), 1033–1042 (2019). [CrossRef]  

7. P. Kiefel and P. Nitz, “Quality control of fresnel lens molds using deflectometry,” in AIP Conference Proceedings, vol. 1766 (AIP Publishing LLC, 2016), p. 050004.

8. M. C. Knauer, J. Kaminski, and G. Häusler, “Phase measuring deflectometry: a new approach to measure specular free-form surfaces,” in Optical Metrology in Production Engineering, vol. 5457 (SPIE, 2004), pp. 366–376.

9. R. Huang, P. Su, and J. H. Burge, “Deflectometry measurement of Daniel K. Inouye solar telescope primary mirror,” in Optical Manufacturing and Testing XI, vol. 9575 (SPIE, 2015), pp. 195–209.

10. A.-S. Poulin-Girard, S. Thibault, and D. Laurendeau, “Influence of camera calibration conditions on the accuracy of 3d reconstruction,” Opt. Express 24(3), 2678–2686 (2016). [CrossRef]  

11. W. Li, S. Shan, and H. Liu, “High-precision method of binocular camera calibration with a distortion model,” Appl. Opt. 56(8), 2368–2377 (2017). [CrossRef]  

12. Z. Liu, Q. Wu, S. Wu, and X. Pan, “Flexible and accurate camera calibration using grid spherical images,” Opt. Express 25(13), 15269–15285 (2017). [CrossRef]  

13. L. R. Ramírez-Hernández, J. C. Rodríguez-Quiñonez, M. J. Castro-Toscano, D. Hernández-Balbuena, W. Flores-Fuentes, R. Rascón-Carmona, L. Lindner, and O. Sergiyenko, “Improve three-dimensional point localization accuracy in stereo vision systems using a novel camera calibration method,” Int. J. Adv. Robotic Syst. 17(1), 172988141989671 (2020). [CrossRef]  

14. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

15. M. Baba, M. Mukunoki, and N. Asada, “A unified camera calibration using geometry and blur of feature points,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 1 (IEEE, 2006), pp. 816–819.

16. H. Ha, Y. Bok, K. Joo, J. Jung, and I. S. Kweon, “Accurate camera calibration robust to defocus using a smartphone,” in Proceedings of the IEEE International conference on computer vision (2015), pp. 828–836.

17. W. Ding, X. Liu, D. Xu, D. Zhang, and Z. Zhang, “A robust detection method of control points for calibration and measurement with defocused images,” IEEE Trans. Instrum. Meas. 66(10), 2725–2735 (2017). [CrossRef]  

18. T. Bell, J. Xu, and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. 55(9), 2346–2352 (2016). [CrossRef]  

19. C. Schmalz, F. Forster, and E. Angelopoulou, “Camera calibration: active versus passive targets,” Opt. Eng. 50(11), 113601 (2011). [CrossRef]  

20. J. Bartsch, Y. Sperling, and R. B. Bergmann, “Efficient vision ray calibration of multi-camera systems,” Opt. Express 29(11), 17125–17139 (2021). [CrossRef]  

21. Y. Dou and X. Su, “A flexible 3d profilometry based on fringe contrast analysis,” Opt. Laser Technol. 44(4), 844–849 (2012). [CrossRef]  

22. S. Ramalingam and P. Sturm, “A unifying model for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 39(7), 1309–1319 (2017). [CrossRef]  

23. M. Grossberg and S. Nayar, “A general imaging model and a method for finding its parameters,” in Proceedings 8th IEEE International Conference on Computer Vision, vol. 2 (2001), pp. 108–115.

24. T. Bothe, W. Li, M. Schulte, C. von Kopylow, R. B. Bergmann, and W. P. Jüptner, “Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology,” Appl. Opt. 49(30), 5851–5860 (2010). [CrossRef]  

25. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

26. R. Szeliski, Computer Vision - Algorithms and Applications (Springer, 2011), p. 70.

27. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005), pp. 8–15, 3rd ed.

28. J. Kim, T. Li, Y. Wang, and X. Zhang, “Vectorial point spread function and optical transfer function in oblique plane imaging,” Opt. Express 22(9), 11140–11151 (2014). [CrossRef]  

29. Y. Dou, X. Su, Y. Chen, and Y. Wang, “A flexible fast 3d profilometry based on modulation measurement,” Opt. Lasers Eng. 49(3), 376–383 (2011). [CrossRef]  

30. M. Subbarao, T.-S. Choi, and A. Nikzad, “Focusing techniques,” Opt. Eng. 32(11), 2824–2836 (1993). [CrossRef]  

31. F. Mannan and M. S. Langer, “Blur calibration for depth from defocus,” in 2016 13th Conference on Computer and Robot Vision (CRV) (2016), pp. 281–288.

32. The MathWorks, Inc., “lsqnonlin,” https://uk.mathworks.com/help/optim/ug/lsqnonlin.html, Accessed: 29.08.2022.

33. S. Zhang, “Comparative study on passive and active projector nonlinear gamma calibration,” Appl. Opt. 54(13), 3834–3841 (2015). [CrossRef]  

34. V. Saveljev, S.-K. Kim, and J. Kim, “Moiré effect in displays: a tutorial,” Opt. Eng. 57(03), 1 (2018). [CrossRef]  

35. M. Fischer, M. Petz, and R. Tutsch, “Evaluation of LCD monitors for deflectometric measurement systems,” in Optical Sensing and Detection, vol. 7726 F. Berghmans, A. G. Mignani, and C. A. van Hoof, eds., International Society for Optics and Photonics (SPIE, 2010), pp. 260–269.

36. T. Bothe, Grundlegende Untersuchungen zur Formerfassung mit einem neuartigen Prinzip der Streifenprojektion und Realisierung in einer kompakten 3D-Kamera (BIAS, 2008), pp. 47–60, no. 32 in Strahltechnik.

37. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16(8), 824–831 (1994). [CrossRef]  

38. M. Riaz, S. Park, M. B. Ahmad, W. Rasheed, and J. Park, “Generalized laplacian as focus measure,” in International Conference on Computational Science (Springer, 2008), pp. 1013–1021.

39. R. E. Blahut, Theory of Remote Image Formation (Cambridge University Press, 2004), pp. 72–82.

Figures (8)

Fig. 1. Geometry for estimating parameter $a$ of the defocus model. For simplicity only the vision ray of the optical axis is considered. The cone of rays entering the entrance pupil converges at focus distance $d_f$ to a single point. The cone has slope $a$. From similar triangles we see that $a=D/(2d_f)$, where $D$ is the diameter of the entrance pupil.

Fig. 2. An oblique viewing direction makes the fringe period effectively smaller: $\lambda ^\mathrm {eff}=\lambda \cos \alpha$. The angle $\alpha$ is between the normal direction of the display and the projection of the ray direction vector into the $xz$ plane (in case of $x$ fringes).

Fig. 3. (a) Experimental setup. The camera is mounted on a linear translation stage and can be rotated horizontally. The precise relative position of camera and display is determined using Vision Ray Calibration (VRC). (b) Measurement procedure from the perspective of the camera coordinate system.

Fig. 4. Measured contrast as a function of distance $d$ for a single pixel in the center of the camera image. The phase measurements were performed with a constant fringe period of 6 pixels. The fitted contrast models stemming from a Gaussian and a pillbox PSF are shown as well.

Fig. 5. Results obtained from measurement with constant fringe period of 12 pixels. (a) Focus distances along camera optical axis, displayed as color-coded image on the grid of camera pixels that were used in the focus determination procedure. The markers labeled 1–4 indicate pixels investigated in (b). (b) Measured contrast as a function of distance for a few camera pixels. Curves labeled 1 and 4 show small "dents" close to the maximum. (c) Pixel-wise error of an exemplary phase measurement: RMS of deviation between measured and fitted intensities. (d) Pixel-wise RMS of residuals $f_{ij}^x$ at the end of the focus determination procedure.

Fig. 6. (a) Focus distances along the camera optical axis, obtained from measurement with tilted camera and constant fringe period of 12 pixels. (b) Difference of the results of (a) and Fig. 5(a).

Fig. 7. Fringe period $\lambda$ as a function of distance $d$, for the measurement where the fringe period was adjusted to keep the contrast approximately constant. Only fringe periods of the measurements used for focus determination (non-tilted camera) are shown.

Fig. 8. (a) Focus distances along the camera optical axis, obtained from the measurement where the fringe period was adjusted for each camera position to keep the fringe contrast approximately constant. (b) Modified Laplacian focus measure (ML) as a function of distance for a single pixel. To find the maximum, a parabola is fitted to the points close to the maximum data point. (c) Focus distances along the camera optical axis obtained from the ML method. (d) Difference between (c) and (a).

Equations (38)

$$I_n^\mathrm{d}(x,y)=\bar{I}^\mathrm{d}(x,y)\left[1+\gamma^\mathrm{d}(x,y)\cos\!\left(kx-\frac{2\pi n}{N}\right)\right],$$
$$I_n(x,y)=\bar{I}(x,y)\left[1+\gamma(x,y)\cos\!\left(\phi(x,y)-\frac{2\pi n}{N}\right)\right],$$
$$\phi(x,y)=\tan^{-1}\frac{\sum_{n=0}^{N-1} I_n(x,y)\sin\!\left(\frac{2\pi n}{N}\right)}{\sum_{n=0}^{N-1} I_n(x,y)\cos\!\left(\frac{2\pi n}{N}\right)}.$$
$$\gamma(x,y)=\frac{2\sqrt{\left[\sum_{n=0}^{N-1} I_n(x,y)\sin\!\left(\frac{2\pi n}{N}\right)\right]^2+\left[\sum_{n=0}^{N-1} I_n(x,y)\cos\!\left(\frac{2\pi n}{N}\right)\right]^2}}{\sum_{n=0}^{N-1} I_n(x,y)}.$$
$$I=I^\mathrm{d}\ast\textrm{PSF}.$$
$$I=\mathcal{F}^{-1}[\mathcal{F}[I^\mathrm{d}]\cdot\mathcal{F}[\textrm{PSF}]],$$
$$I=\bar{I}(1+\gamma^\mathrm{d}B\cos(kx-\psi))$$
$$B=\frac{\textrm{OTF}(k,0)}{\textrm{OTF}(0,0)}$$
$$I=\bar{I}(1+\gamma^\mathrm{d}|B|\cos(kx+\eta-\psi)),$$
$$\gamma=\gamma^\mathrm{d}|B|,$$
$$R(d)=|d-d_f|\,a.$$
$$a=f/(2Nd_f).$$
$$B_\mathrm{Gauss}(s)=\exp\!\left(-\frac{s^2}{2}\right),\quad B_\mathrm{pillbox}(s)=\frac{J_1(s)}{s/2}$$
$$F\big(\gamma^\mathrm{d},a,\{(d_f)_i\}\big)=\sum_i\sum_j\left[(f_{ij}^x)^2+(f_{ij}^y)^2\right]$$
$$f_{ij}^x=\gamma_{ij}^x-\gamma^\mathrm{d}\left|B\!\left(k_{ij}^{\mathrm{eff},x}R(d_{ij},(d_f)_i,a)\right)\right|,\quad f_{ij}^y=\gamma_{ij}^y-\gamma^\mathrm{d}\left|B\!\left(k_{ij}^{\mathrm{eff},y}R(d_{ij},(d_f)_i,a)\right)\right|$$
$$k_{ij}^{\mathrm{eff},x}=\frac{2\pi}{\lambda_j^x}\cos\alpha_{ij}^x,\quad k_{ij}^{\mathrm{eff},y}=\frac{2\pi}{\lambda_j^y}\cos\alpha_{ij}^y.$$