
High-speed real-time 3D shape measurement based on adaptive depth constraint

Open Access

Abstract

Stereo phase unwrapping (SPU) has been increasingly applied to high-speed real-time fringe projection profilometry (FPP) because it can retrieve the absolute phase or matching points in a stereo FPP system without projecting or acquiring additional fringe patterns. Based on a pre-defined measurement volume, artificial maximum/minimum phase maps can be created solely from the geometric constraints of the FPP system, permitting phase unwrapping on a pixel-by-pixel basis. However, when high-frequency fringes are used, the phase ambiguities increase, which makes SPU unreliable. Several auxiliary techniques have been proposed to enhance the robustness of SPU, but their flexibility still needs to be improved. In this paper, we propose an adaptive depth constraint (ADC) approach for high-speed real-time 3D shape measurement, where the measurement depth volume for geometric constraints is adaptively updated according to the currently reconstructed geometry. By utilizing the spatio-temporal correlation of moving objects under measurement, a customized and tighter depth constraint can be defined, which helps enhance the robustness of SPU over a large measurement volume. In addition, two complementary techniques, a simplified left-right consistency check and a feedback mechanism based on the valid area, are introduced to further increase the robustness and flexibility of the ADC. Experimental results demonstrate the success of the proposed SPU approach in recovering the absolute 3D geometries of both simple and complicated objects with only three phase-shifted fringe images.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP), as a non-contact three-dimensional (3D) shape measurement technique, has become increasingly prevalent in a variety of applications, including manufacturing, medical imaging, computer vision, education, bio-medicine, and virtual/augmented reality (VR/AR) [1,2]. Many mature instruments have been developed to measure static objects based on FPP. Recently, with the rapid development of image sensors and digital projection technology, it has become possible to realize high-speed, real-time 3D shape measurement of dynamic objects using FPP [3]. Different from techniques for static objects, the primary problem in this field is to recover the 3D information of moving objects or dynamic scenes with high speed, accuracy, and reliability. To this end, much research has targeted real-time, high-performance 3D shape measurement, and considerable progress has been made during the past few years [4,6–8].

In high-speed, real-time 3D shape measurement based on FPP, the sinusoidal fringe pattern is the most frequently used projection pattern, because phase information offers robustness to sensor noise and surface reflectivity variations and enables high spatial and/or temporal resolutions. The phase of the fringe is usually retrieved by the Fourier transform algorithm [4,9,10] or the phase-shifting algorithm [5,6,11]. The Fourier transform algorithm extracts the phase from a single fringe image by applying a properly designed bandpass filter in the frequency domain, while the phase-shifting algorithm uses a minimum of three fringe images and offers much higher measurement accuracy. 3D shape measurement methods using these two algorithms are referred to as Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP), respectively. Both methods are capable of high-speed real-time measurement, and this capability can be improved in two aspects: (1) increasing the speed of the hardware (projector and camera); (2) increasing the efficiency of absolute phase retrieval (or matching point retrieval). The first aspect focuses on using the projector defocusing technique to generate sinusoidal fringe patterns from binary ones, so that the projection speed can be increased to the maximum frame rate of the digital micromirror device (DMD), e.g. kHz [10,12] or even tens of kHz [13,14]. Then, with the assistance of a high-speed camera, it is not difficult to reduce the motion artifacts between two adjacent captured frames. The major concern in this aspect is to explore what kind of binary pattern is suitable for achieving high-accuracy phase measurement with a slightly defocused projector [15–19]. The second aspect aims to retrieve the absolute phase or search for the matching points using as few fringe patterns as possible [14,20–22,30–42]; how to ensure the robustness of phase unwrapping with reduced patterns is the essential problem here [5]. Besides these two aspects, removing motion artifacts and reducing the motion vulnerability of multi-shot PSP with error compensation algorithms have also attracted increasing attention recently [22–26]. In practical applications, the three aspects mentioned above can be combined to improve the accuracy of dynamic measurement of moving parts. In this work, we focus on the second aspect and aim to enhance the robustness of absolute phase retrieval without requiring additional image acquisition.

Temporal phase unwrapping (TPU) is the most popular technique to retrieve an absolute phase map which may contain large discontinuities and isolated surfaces [27–29]. However, TPU generally requires extra fringe/Gray-code patterns, which decreases the efficiency of 3D measurement in high-speed, time-critical scenarios. To maximize the measurement efficiency of PSP, Weise et al. [22] introduced a novel phase unwrapping method where the geometric constraint between different views (two cameras and a projector) is used to retrieve the absolute phase. This method is termed stereo phase unwrapping (SPU) in our paper. SPU imposes no restrictions on the shape of the measured objects and requires no extra projections, which can hardly be realized by other conventional techniques such as spatial/temporal phase unwrapping. It is well known that low-frequency sinusoidal fringes have fewer phase ambiguities and thus tend to make phase unwrapping more reliable; on the other hand, increasing the frequency of the fringe pattern is essential to achieve high-precision 3D shape reconstruction. Conventional SPU alone is not enough to robustly eliminate phase ambiguities when high-frequency fringes are used. In the works of Weise et al. [22] and Garcia et al. [30], graph cut and loopy belief propagation (LBP) were employed as auxiliary means to further correct fringe order errors after SPU when high-frequency fringes are used. Digital image correlation (DIC) is another spatial algorithm for enhancing the robustness of SPU, owing to the high spatial distinguishability of speckle patterns [31,32]. Notni et al. [33] took the limited measurement volume of the FPP system into account and set a depth volume to preclude some phase ambiguities before the process of SPU; however, because of the high density of the fringes used, a graph cut algorithm was still indispensable to obtain an error-free result. A similar idea was employed by Li et al. [34], where the fringe density is properly selected to make sure that only a few candidates fall within the pre-defined measurement volume. According to Li’s work, a narrow depth range should be set to ensure the reliability of SPU when high-frequency fringes are used. The idea in Notni’s and Li’s work is the so-called depth constraint. Based on the depth constraint, Tao et al. [35] integrated a composite phase-shifting scheme into SPU; reliable real-time 3D measurement results were obtained over a large measurement volume at the cost of reducing the amplitude of the fringe. Liu and Kofman [36] also developed a 3D shape measurement approach that uses high-frequency background-modulation fringe patterns generated based on the depth constraint. Song et al. [37] used passive stereo matching to generate a coarse depth map of the measured scene, which then serves as the depth volume for SPU to reconstruct a high-resolution depth map. Besides, the depth constraint has also been applied to the conventional monocular FPP system (consisting of only one camera and one projector) to restrict the search range for possible fringe orders and rule out false candidates in TPU [14,38–40]. Beyond the depth constraint, Notni’s work [41] and Tao’s recent work [42] suggest that SPU can also be further refined by optimizing the relative positions between the projector and cameras.

From the above discussion, we know that SPU is usually implemented together with complicated and time-consuming spatial-domain processing algorithms, such as graph cut [22,30–33,41] and DIC [31,32]. In practical applications, these algorithms still need to be simplified in order to achieve better real-time performance. On the other hand, the depth constraint is an effective approach to improve the performance of SPU and has been extensively used in combination with other phase retrieval approaches in many recent works [14,33–42]. The key problem in the depth constraint is how to set a suitable depth range. When a fixed depth range is used, there is an inherent trade-off between the measurement range and the robustness of phase unwrapping. To guarantee the robustness of phase unwrapping, the fringe patterns are usually designed with a relatively low frequency, resulting in low measurement precision. Conversely, if high-frequency fringe patterns are used, the measurement range will be significantly compromised, as the object has to be placed within a very limited depth range.

To overcome the above-mentioned limitations, we propose an adaptive depth constraint (ADC) approach for high-speed real-time 3D shape measurement, where the measurement depth volume for geometric constraints is adaptively updated according to the currently reconstructed geometry. During the real-time measurement process, we first analyze the statistical characteristics of the raw depth map and use the analysis results to update the global depth volume; meanwhile, some outliers can be filtered out to obtain a refined depth map. Based on this refined depth map, we then examine the depth within a neighborhood of each pixel and create a pixel-wise depth volume map, which serves as the input of the depth constraint in the next cycle of 3D measurement. Compared to conventional approaches based on a fixed depth volume, the pixel-wise depth volume in our method is more compact, adapts to the object shape, and is updated over time. This means our method can not only ensure the robustness of SPU but also retain a wide measurement volume. To make the adaptive depth constraint more flexible and robust, we develop two auxiliary techniques: a simplified left-right consistency check and a feedback mechanism based on the valid depth area. The simplified left-right consistency check helps ensure the reliability of the raw depth map, while the feedback mechanism deals with abrupt depth changes, such as a new object entering the field of view. All these techniques constitute a complete computational framework for enhancing the robustness of SPU. The effectiveness and real-time performance of this method are validated by several experiments.

2. Principle

In this section, we focus on the basic principle of the proposed method. In Section 2.1, we first introduce the framework of SPU and its inadequacy, to explain the principle as well as the motivation of this work. We then present the core idea of our method, the ADC, in Section 2.2. In Sections 2.3 and 2.4, the simplified left-right consistency check and the feedback mechanism based on the valid area are detailed as two complementary algorithms for the ADC.

2.1. Basic principle of SPU

A typical FPP system using SPU is composed of two cameras and a projector. The fringe patterns are projected onto the measured object by the projector, deformed by the object, and finally captured by the two cameras. A phase map is extracted from the deformed fringe patterns to search for the sub-pixel matching points and retrieve the depth information. Taking three-step phase-shifting fringe patterns as an example, the fringe patterns captured by the cameras can be expressed as the following formulae:

$$I_1^c(u^c,v^c)=A^c(u^c,v^c)+B^c(u^c,v^c)\cos\left(\Phi^c(u^c,v^c)\right),\tag{1}$$
$$I_2^c(u^c,v^c)=A^c(u^c,v^c)+B^c(u^c,v^c)\cos\left(\Phi^c(u^c,v^c)+\frac{2\pi}{3}\right),\tag{2}$$
$$I_3^c(u^c,v^c)=A^c(u^c,v^c)+B^c(u^c,v^c)\cos\left(\Phi^c(u^c,v^c)+\frac{4\pi}{3}\right).\tag{3}$$
where superscript c denotes the camera, $(u^c, v^c)$ is an arbitrary point in camera c, $I_1^c$, $I_2^c$ and $I_3^c$ are the captured intensity maps, $A^c$ is the average intensity map, $B^c$ is the amplitude map, and $\Phi^c$ is the deformed absolute phase map. If the fringes are perpendicular to the horizontal axis of the projector, the horizontal coordinate of the matching point of $(u^c, v^c)$ can be determined by
$$u^p(u^c,v^c)=\frac{\Phi^c(u^c,v^c)\,R}{2N\pi},\tag{4}$$
where R is the horizontal resolution of the projector and N is the number of fringe periods. Then the 3D coordinates $(X_w, Y_w, Z_w)$ of $(u^c, v^c)$ can be retrieved by
$$Z_w(u^c,v^c)=D^{cp}(u^c,v^c)+\frac{E^{cp}(u^c,v^c)}{F^{cp}(u^c,v^c)\,u^p(u^c,v^c)+1},\tag{5}$$
$$X_w(u^c,v^c)=G^{cp}(u^c,v^c)\,Z_w(u^c,v^c)+J^{cp}(u^c,v^c),\tag{6}$$
$$Y_w(u^c,v^c)=L^{cp}(u^c,v^c)\,Z_w(u^c,v^c)+M^{cp}(u^c,v^c),\tag{7}$$
where $D^{cp}$, $E^{cp}$, $F^{cp}$, $G^{cp}$, $J^{cp}$, $L^{cp}$ and $M^{cp}$ are the parameter matrices derived from the calibration parameters between camera c and projector p [20]. The process of Eqs. (5)–(7) can also be implemented between two cameras, provided the matching point in the other camera is obtained. However, due to the inherent limitation of the arctangent function, only the wrapped phase $\phi^c(u^c, v^c)$ can be obtained from Eqs. (1)–(3):
$$\phi^c(u^c,v^c)=\tan^{-1}\frac{\sqrt{3}\left[I_1^c(u^c,v^c)-I_3^c(u^c,v^c)\right]}{2I_2^c(u^c,v^c)-I_1^c(u^c,v^c)-I_3^c(u^c,v^c)}.\tag{8}$$
The relationship between $\phi^c(u^c, v^c)$ and $\Phi^c(u^c, v^c)$ satisfies
$$\Phi^c(u^c,v^c)=\phi^c(u^c,v^c)+2K^c(u^c,v^c)\pi,\quad K^c(u^c,v^c)\in[0,N-1],\tag{9}$$
where $K^c$ is the fringe order. The essential problem of FPP is to find the correct $K^c$; this process of determining $K^c$ is the so-called phase unwrapping, or elimination of phase ambiguities.
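For concreteness, Eq. (8) maps directly onto a few lines of NumPy; the following is a minimal sketch with our own (hypothetical) function and variable names:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Three-step phase shifting: wrapped phase of Eq. (8).

    I1, I2, I3 are the captured fringe images as float arrays of equal shape.
    np.arctan2 resolves the quadrant, so the result lies in (-pi, pi]
    rather than the (-pi/2, pi/2) range of a plain arctangent.
    """
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```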

In SPU, $K^c$ is retrieved using geometric constraints. To the best of our knowledge, this idea was first used in Weise’s work [22]. Figure 1 displays the basic principle of SPU. $o^{c1}$ is an arbitrary point in the first camera (c1) with the coordinate $(u^{c1}, v^{c1})$ and the wrapped phase $\phi^{c1}(u^{c1}, v^{c1})$. For the sake of brevity, $o^{c1}$ is used to substitute for $(u^{c1}, v^{c1})$ in some expressions. We first sequentially assign the integers within the interval $[0, N-1]$ to $K^{c1}(o^{c1})$, and denote $K^{c1}(o^{c1})$ assigned with different values as $K_0^{c1}(o^{c1}), K_1^{c1}(o^{c1}), \ldots, K_n^{c1}(o^{c1}), \ldots, K_{N-1}^{c1}(o^{c1})$. Each $K_n^{c1}(o^{c1})$ corresponds to a $\Phi^{c1}(o^{c1}, K_n^{c1}(o^{c1}))$ according to Eq. (9) ($K_n^{c1}(o^{c1})$ will be abbreviated as $K_n^{c1}$). Then a total of $N$ 3D points can be derived from Eqs. (4)–(7), and the $n$-th 3D point is denoted as $o^w(o^{c1}, K_n^{c1})$ with the coordinate $(X_w(o^{c1}, K_n^{c1}), Y_w(o^{c1}, K_n^{c1}), Z_w(o^{c1}, K_n^{c1}))$. All these 3D points are called the 3D candidates of $o^{c1}$, and the true 3D matching point is among them. The blue lines from the projector in Fig. 1 denote part of the rays with the same wrapped phase $\phi^{c1}(o^{c1})$ but different absolute phases $\Phi^{c1}(o^{c1}, K_n^{c1})$. These blue lines from the projector and camera c1 intersect at the different $o^w(o^{c1}, K_n^{c1})$. The 3D candidates can be projected into the second camera c2 to obtain the corresponding 2D candidates $o^{c2}(o^{c1}, K_n^{c1})$ with the coordinates $(u^{c2}(o^{c1}, K_n^{c1}), v^{c2}(o^{c1}, K_n^{c1}))$. The true 2D matching point is also among these 2D candidates. $o^{c1}$ and its 2D matching point should have similar properties, such as the phase, the texture, and so on. Keeping this in mind, the 2D candidate with the wrapped phase closest to $\phi^{c1}(o^{c1})$ is chosen as the matching point. Once the 2D matching point is determined, the fringe order of $o^{c1}$ is known. This is the basic principle of SPU.

Fig. 1 Diagram of SPU and conventional depth constraint (CDC).

However, we must consider the effects of the discrete sampling of the camera, imperfect system calibration, and noise in an actual experiment. All these effects introduce errors into the system and decrease the similarity between $o^{c1}$ and its 2D matching point. Moreover, there may exist more than one 2D candidate having a phase similar to $\phi^{c1}(u^{c1}, v^{c1})$ when high-frequency fringes are used. In this case, matching errors will emerge during the process of SPU. We can use a scheme called the phase consistency check [42] to reject the 2D candidates whose phase difference from $\phi^{c1}(u^{c1}, v^{c1})$ is larger than a threshold. As shown in Fig. 1, the 3D candidates corresponding to the red lines from c2 are rejected, while the candidates corresponding to the blue lines are reserved. Additional techniques are then required to distinguish the matching point from the reserved candidates. Several kinds of techniques have been proposed and are reviewed in the introduction; here, only the depth constraint is described in detail, to explain both the basic principles and the initial motivation of our method.
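The candidate enumeration and phase consistency check described above can be sketched per pixel as follows. The helpers `project_to_3d` (standing in for Eqs. (4)–(7)) and `project_to_cam2` are hypothetical placeholders for the calibrated system model, and the threshold value is illustrative:

```python
import numpy as np

def spu_candidates(phi_c1, u, v, N, phi_c2, project_to_3d, project_to_cam2,
                   phase_tol=0.5):
    """For pixel (u, v) in camera 1, enumerate the N fringe-order candidates
    of Eq. (9) and keep those passing the phase consistency check in camera 2.

    phi_c1, phi_c2  -- wrapped phase maps of the two cameras
    project_to_3d   -- maps (u, v, absolute phase) to a 3D point (Eqs. (4)-(7))
    project_to_cam2 -- projects a 3D point into camera 2's pixel coordinates
    (both helpers are placeholders for the calibrated system model)
    """
    reserved = []
    for k in range(N):                      # try every possible fringe order
        Phi = phi_c1[v, u] + 2.0 * np.pi * k
        Xw, Yw, Zw = project_to_3d(u, v, Phi)
        u2, v2 = project_to_cam2(Xw, Yw, Zw)
        if not (0 <= v2 < phi_c2.shape[0] and 0 <= u2 < phi_c2.shape[1]):
            continue                        # candidate falls outside camera 2
        # phase consistency check: the wrapped phases should nearly agree
        d = abs(phi_c2[int(v2), int(u2)] - phi_c1[v, u])
        d = min(d, 2.0 * np.pi - d)         # account for phase wrap-around
        if d < phase_tol:
            reserved.append((k, Zw, (u2, v2)))
    return reserved
```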

2.2. Adaptive depth constraint

At the beginning of this section, we introduce the principle of CDC. Then, to resolve the inherent contradiction of CDC, we present the proposed methods: the global adaptive depth constraint (GADC) and the pixel-wise adaptive depth constraint (PWADC).

2.2.1. Global adaptive depth constraint

Considering the limitations of the depth of focus and the common field of view, the measurement volume of an FPP system must be restricted to a finite range. Different systems have different measurement volumes, but the theoretical measurement volume of a specific system is difficult to determine, so it is usually replaced by a larger experimental volume. Zmin and Zmax in Fig. 2 represent the minimum and maximum depth boundaries in the world coordinate system (in mm), respectively, and the volume between them is the measurement volume. In CDC, the measurement volume usually serves as the depth volume. If we can confirm that the measured object is located within the depth volume, we can use

$$Z_{min}\leq Z_w(o^{c1},K_n^{c1})\leq Z_{max}\tag{10}$$
to remove quite a number of candidates. As shown in Fig. 2, all the 3D candidates located outside the depth volume are rejected, including $o^w(o^{c1}, k+2)$, whose wrapped phase is similar to $\phi^{c1}(u^{c1}, v^{c1})$. That is the basic principle of CDC. However, the fringe density is usually large, so there may still exist more than one candidate with a wrapped phase similar to $\phi^{c1}(u^{c1}, v^{c1})$, such as $o^w(o^{c1}, k-2)$ and $o^w(o^{c1}, k)$. That means the robustness of SPU is still not ensured. We could further narrow the depth volume until $o^w(o^{c1}, k-2)$ is rejected (assuming $o^w(o^{c1}, k)$ is the matching point), but in this way we can hardly guarantee that all the measurable areas of the object are located within this narrow volume, especially when the object is moving.

Fig. 2 Diagram of the cross-section of Fig. 1.

In order to resolve the contradiction between the measurement volume and robustness, we propose the GADC technique. The main idea of this technique is to analyze the statistical properties of the current depth map, and to update the depth volume for the next cycle of 3D measurement. Note that in GADC, the measurement volume [Zmin, Zmax] is independent of the depth volume and remains constant. We first evenly divide the measurement volume (i.e., the initial depth volume) [Zmin, Zmax] into Q intervals; the q-th interval is denoted as [Zmin + qΔZ, Zmin + (q+1)ΔZ], where q is an integer belonging to [0, Q−1] and ΔZ = (Zmax − Zmin)/Q. H(q) denotes the number of points in [Zmin + qΔZ, Zmin + (q+1)ΔZ]. We can easily obtain the histogram of the current depth map by

$$H\left(\mathrm{floor}\left(\frac{Z_w(o^{c1})-Z_{min}}{\Delta Z}\right)\right)\leftarrow H\left(\mathrm{floor}\left(\frac{Z_w(o^{c1})-Z_{min}}{\Delta Z}\right)\right)+1,\tag{11}$$
where floor() represents rounding towards minus infinity, and $0 \leq \mathrm{floor}\left(\frac{Z_w(o^{c1})-Z_{min}}{\Delta Z}\right) \leq Q-1$. As shown in Fig. 3, the green boundary of the object is the measurable surface, and the yellow area is the depth histogram obtained according to Eq. (11). It can be found that the maximum valid depth appears in H(q+i) while the minimum valid depth appears in H(q−j). Because of the temporal continuity in real-time measurement, the depth difference between two adjacent depth maps is small. That means the current depth distribution can serve as a reference for the next cycle of 3D measurement. The new depth volume is set as $[Z_{min}^{global}, Z_{max}^{global}]$, where $Z_{min}^{global}=Z_{min}+(q-j)\Delta Z-\Delta Z_{motion}$ and $Z_{max}^{global}=Z_{min}+(q+i+1)\Delta Z+\Delta Z_{motion}$. $\Delta Z_{motion}$ is an allowance (also in mm) for the depth change caused by object motion; its value depends on the capturing speed of the cameras, and in most cases $\Delta Z_{motion}$ = 10 mm is large enough. Considering the effect of outliers, a threshold $H_{min}$ is necessary to set all depth intervals with $H(q) < H_{min}$ inactive. The value of $H_{min}$ depends on the resolution of the cameras; for the 640 × 480 resolution, $H_{min}$ = 500 is acceptable. As shown in the histogram of Fig. 3, the purple dotted line represents $H_{min}$; the inactive depth intervals are marked in red, while the active depth intervals are represented by green rectangles. The final boundaries of the depth volume in this example are then revised as $Z_{min}^{global}=Z_{min}+(q-2)\Delta Z-\Delta Z_{motion}$ and $Z_{max}^{global}=Z_{min}+(q+2)\Delta Z+\Delta Z_{motion}$. The blue area in Fig. 3 displays the new depth volume. Based on this depth volume, the outliers in the inactive depth intervals H(q+i) and H(q−j) will be correctly reconstructed in the next cycle. This is the process of GADC. Compared to CDC, the proposed technique updates the initial depth volume from [Zmin, Zmax] to a more compact depth volume $[Z_{min}^{global}, Z_{max}^{global}]$, which increases the robustness of SPU. Besides, $[Z_{min}^{global}, Z_{max}^{global}]$ is a dynamic depth volume that is updated every cycle of measurement. That means the measurement is not restricted to a small depth volume, so this technique has no problem measuring a moving object.
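As an illustration, the GADC update can be sketched as below, assuming the current depth map stores invalid pixels as NaN; the function name and fallback behavior are our own choices, not prescribed by the paper:

```python
import numpy as np

def update_global_volume(Z, Zmin, Zmax, Q, H_min=500, dZ_motion=10.0):
    """GADC: histogram the current depth map (Eq. (11)), ignore sparse
    intervals as outliers, and return the new global depth volume."""
    dZ = (Zmax - Zmin) / Q
    z = Z[np.isfinite(Z)]                      # valid depths only
    H, _ = np.histogram(z, bins=Q, range=(Zmin, Zmax))
    active = np.nonzero(H >= H_min)[0]         # intervals with enough points
    if active.size == 0:
        return Zmin, Zmax                      # fall back to the full volume
    q_lo, q_hi = active[0], active[-1]         # outermost active intervals
    Zmin_g = Zmin + q_lo * dZ - dZ_motion      # allowance for object motion
    Zmax_g = Zmin + (q_hi + 1) * dZ + dZ_motion
    return Zmin_g, Zmax_g
```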

Fig. 3 Diagram of GADC for measuring a single object.

2.2.2. Pixel-wise adaptive depth constraint

The effectiveness of GADC depends on the updated depth volume, which is in turn determined by the object. Consider the case of measuring a large-size object or several isolated objects. In this case, the dynamic depth volume $[Z_{min}^{global}, Z_{max}^{global}]$ may not be much superior to the initial depth volume [Zmin, Zmax], as shown in Fig. 4. Although the reconstruction errors in H(q−j) can still be corrected, more errors cannot be removed, and the effectiveness of GADC decreases obviously. This is a result of only two depth boundaries being used. A more flexible depth volume should be created to handle this case, which is exactly the aim of the second technique, PWADC, introduced in the next paragraph.

Fig. 4 Diagram of GADC for measuring two isolated objects.

Just as its name implies, PWADC allocates to each pixel its own independent depth volume. We describe its principle based on Fig. 5. The green point denotes an arbitrary point $o^{c1}$ in the camera, and the red dotted rectangle is the 5 × 5 neighbourhood around $o^{c1}$. Each small grey rectangle represents a pixel in this neighbourhood, and the green rectangle is $o^{c1}$. The numbers in the grey rectangles represent the depth of each pixel. The maximum and minimum depths are used as the boundaries of the depth volume $[Z_{min}^{pixel}(o^{c1}), Z_{max}^{pixel}(o^{c1})]$, where $Z_{min}^{pixel}(o^{c1}) = -20 - \Delta Z_{motion}$ and $Z_{max}^{pixel}(o^{c1}) = 10 + \Delta Z_{motion}$. The depth change caused by motion is addressed from two aspects: (1) the pixel-wise depth volume is created based on a neighbourhood instead of a single pixel; (2) the depth allowance $\Delta Z_{motion}$ is added. Note that there is an important step that should be implemented at the very start: we should use the global depth volume $[Z_{min}^{global}+\Delta Z_{motion}, Z_{max}^{global}-\Delta Z_{motion}]$ to set outliers inactive. Assuming the red rectangle in Fig. 5 is such an inactive pixel, the actual minimum boundary of the depth volume of $o^{c1}$ becomes $Z_{min}^{pixel}(o^{c1}) = 1 - \Delta Z_{motion}$, so the final depth volume of $o^{c1}$ in Fig. 5 is $[1 - \Delta Z_{motion}, 10 + \Delta Z_{motion}]$. The whole process can be summarized by the following formulae

$$Z_w(o^{c1})=\mathrm{nan},\quad\text{if }\ Z_w(o^{c1})<Z_{min}^{global}+\Delta Z_{motion}\ \text{ or }\ Z_w(o^{c1})>Z_{max}^{global}-\Delta Z_{motion},\tag{12}$$
$$\begin{cases}Z_{min}^{pixel}(o^{c1})=\min\limits_{i,j=-r}^{\,i,j=r}\left(Z_w(u^{c1}-i,\,v^{c1}-j)\right)-\Delta Z_{motion}\\Z_{max}^{pixel}(o^{c1})=\max\limits_{i,j=-r}^{\,i,j=r}\left(Z_w(u^{c1}-i,\,v^{c1}-j)\right)+\Delta Z_{motion},\end{cases}\tag{13}$$
where nan represents an invalid value, r is the window size, and min() and max() calculate the minimum and maximum values, respectively. If the measured object moves at a relatively high speed, or the capturing speed of the camera is not high enough, a larger r is necessary; in most real-time measurement conditions, r = 5 is large enough. Implementing Eqs. (12) and (13) for each pixel yields the pixel-wise depth volume displayed on the right of Fig. 5. Because the pixel-wise depth volume only depends on the depth information of its neighbourhood, it can accurately envelop the measured surface. Compared to GADC, PWADC has not only the temporal adaptive property but also the spatial adaptive property. However, it should be noted that PWADC is not independent of GADC; it needs the global depth volume to eliminate the effect of outliers, as reflected in Eq. (12).
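Under the same NaN convention, Eqs. (12) and (13) map naturally onto per-pixel minimum/maximum filters; the following is a minimal sketch using SciPy (an implementation choice of ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def pixelwise_volume(Z, Zmin_g, Zmax_g, r=5, dZ_motion=10.0):
    """PWADC: build per-pixel depth bounds (Eq. (13)) from the previous
    depth map Z (NaN = invalid), after suppressing outliers with the
    global depth volume (Eq. (12))."""
    Z = Z.copy()
    # Eq. (12): depths outside the shrunken global volume are outliers
    bad = (Z < Zmin_g + dZ_motion) | (Z > Zmax_g - dZ_motion)
    Z[bad] = np.nan
    # min/max over an r x r neighbourhood; NaNs must not win, so they are
    # replaced by +/- infinity before filtering
    Zmin_p = minimum_filter(np.where(np.isnan(Z), np.inf, Z), size=r) - dZ_motion
    Zmax_p = maximum_filter(np.where(np.isnan(Z), -np.inf, Z), size=r) + dZ_motion
    # pixels whose whole neighbourhood is invalid end up with infinite
    # bounds and can be treated as unconstrained by the caller
    return Zmin_p, Zmax_p
```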

Fig. 5 Diagram of PWADC for measuring two isolated objects.

2.3. Simplified left-right consistency check

The proposed ADC technique provides a tight but accurate depth volume that enhances the robustness of SPU. However, there may still exist a few unreliably reconstructed points within the pixel-wise depth volume, especially around the contours of a moving object. Besides, it is possible that the number of outliers in the current depth map is larger than Hmin, in which case the outliers will not be set inactive. To prevent the occurrence of these cases, a simplified left-right consistency check is proposed.

The left-right consistency check is a frequently used technique in stereo vision to detect and remove matching errors. For SPU, we would have to independently calculate the fringe orders of all valid points in c1 and c2, and then check the consistency between the fringe orders of points in c1 and those of their matching points in c2. The points without consistent fringe orders are finally set invalid and removed from subsequent processing. Though the left-right consistency check often works quite well, it doubles the computation time of SPU, which imposes a heavy burden on real-time measurement. In this paper, we propose a more efficient left-right consistency check to achieve better real-time performance without compromising the accuracy.

Almost all the errors of SPU are induced by two or more candidates remaining reserved after the depth constraint and phase consistency check, as explained in detail in Section 2.1. There do exist cases where only false matching points are reserved after the phase consistency check, but these cases are very rare, especially in a multi-view system, and we do not consider them in this paper. Figure 6(a) displays the distribution of the number of candidates: in the orange area there is only one candidate, while in the red area the number of candidates is more than one. The points located in the orange area of Fig. 6(a) are first projected into c2. The orange area in Fig. 6(b) denotes the matching points of those in Fig. 6(a), and the white area in Fig. 6(b) is the blank area without any matching points for the points in the orange area of Fig. 6(a). The green point $o^{c1}$ is an arbitrary point in the red area. The two green points $o^{c2}(o^{c1}, k)$ and $o^{c2}(o^{c1}, k-2)$ in Fig. 6(b) are the 2D reserved candidates of $o^{c1}$. There are two basic principles for the simplified left-right consistency check: (1) there is a one-to-one correspondence between the points in c1 and the points in c2; (2) the area whose points have only one candidate is the reliable area, as shown in Fig. 6(a). Point $o^{c2}(o^{c1}, k-2)$ is obviously not consistent with principle (1), and we can confirm that the correct match of $o^{c2}(o^{c1}, k-2)$ is located in the orange area of Fig. 6(a) rather than at $o^{c1}$. On the other hand, $o^{c2}(o^{c1}, k)$, located in the blank area, satisfies the one-to-one relationship with $o^{c1}$; as a result, $o^{c2}(o^{c1}, k)$ is chosen as the matching point. There exist special cases where all the candidates of $o^{c1}$ satisfy principle (1), or none of them do; in these cases, $o^{c1}$ is set invalid. Let us assign the value 0 to the reliable area and 1 to the blank area in Fig. 6(b), and let $N_r$ denote the number of reserved candidates. Then, for an arbitrary point $o^{c1}$ with $N_r \geq 2$, if it satisfies

$$\Omega_{o^{c1}}\left(o^{c1},K_n^{c1}(o^{c1})\right)=1,\tag{14}$$
we can confirm the existence of its matching point and find it; otherwise, we set $o^{c1}$ invalid. Here $\Omega$ represents the set of reserved candidates. This simplified left-right consistency check inherits the basic principle of the conventional left-right consistency check but does not require calculating the fringe orders in c2. Therefore, the computational cost is greatly decreased, and it is more suitable for real-time measurement.
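The decision rule of Eq. (14) then reduces to one mask lookup per reserved candidate. A sketch, reusing the candidate tuples from the earlier snippet and assuming the blank-area mask of camera 2 has already been built:

```python
def simplified_lr_check(candidates, blank_mask_c2):
    """Simplified left-right consistency check (Eq. (14)).

    candidates    -- reserved (k, Zw, (u2, v2)) tuples for one pixel, Nr >= 2
    blank_mask_c2 -- boolean map of camera 2: True in the blank area, i.e.
                     where no single-candidate point of camera 1 has already
                     claimed a match
    Returns the unique consistent candidate, or None (pixel set invalid).
    """
    hits = [c for c in candidates
            if blank_mask_c2[int(c[2][1]), int(c[2][0])]]
    # exactly one candidate may satisfy the one-to-one principle
    return hits[0] if len(hits) == 1 else None
```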

Fig. 6 Diagram of the simplified left-right consistency check.

2.4. Feedback mechanism based on the valid area

Now let us consider another special case: a new object enters the scene outside the adaptive depth volume $[Z_{min}^{pixel}(u^{c1},v^{c1}), Z_{max}^{pixel}(u^{c1},v^{c1})]$ but within the initial depth volume [Zmin, Zmax]. Obviously, this new object will not be correctly measured by the above procedures, so an additional feedback mechanism is required to detect and handle this case. In this paper, we calculate the valid area within $[Z_{min}^{pixel}(u^{c1},v^{c1}), Z_{max}^{pixel}(u^{c1},v^{c1})]$ and within [Zmin, Zmax], respectively. The so-called valid area is the region having at least one candidate after the depth constraint and the phase consistency check. The numbers of points in these two valid areas are denoted S1 and S2. Then we can detect the appearance of a new object according to whether the following inequality is satisfied:

$$\left|S_1-S_2\right|\geq S_{min},\tag{15}$$
where $S_{min}$ is a predefined threshold; $S_{min}$ = 500 is a reasonable setting for a camera with 640 × 480 resolution. No new object has appeared if Eq. (15) is not satisfied, and in this case we use $[Z_{min}^{pixel}(u^{c1},v^{c1}), Z_{max}^{pixel}(u^{c1},v^{c1})]$ as the depth volume. Otherwise, [Zmin, Zmax] is set as the depth volume. SPU methods that employ a tight depth volume are recommended to implement this feedback mechanism based on the valid area.
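The feedback test of Eq. (15) amounts to comparing two pixel counts per measurement cycle; a minimal sketch (the use of the absolute difference follows our reading of Eq. (15)):

```python
import numpy as np

def choose_volume(valid_pixel_mask, valid_full_mask, S_min=500):
    """Feedback mechanism (Eq. (15)): compare the valid areas obtained with
    the pixel-wise and the initial depth volumes; a large discrepancy
    signals a newly appeared object, so fall back to the full volume."""
    S1 = int(np.count_nonzero(valid_pixel_mask))  # valid area, pixel-wise volume
    S2 = int(np.count_nonzero(valid_full_mask))   # valid area, initial volume
    return "full" if abs(S1 - S2) >= S_min else "pixelwise"
```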

In order to clearly show the process of the proposed method, a flowchart is given in Fig. 7. The green modules denote the outputs, while the red modules denote the main algorithms.

  • Step 1: the wrapped phase maps of different views (cameras) are calculated from the captured fringe patterns.
  • Step 2: the 3D candidates of the points in camera 1 as well as the 2D candidates in camera 2 are calculated. Only the candidates within the conventional depth volume are reserved.
  • Step 3: phase consistency check is implemented to reject some unqualified candidates.
  • Step 4: GADC and PWADC are carried out to remove unqualified candidates from those reserved after CDC and the phase consistency check.
  • Step 5: the difference between two valid areas is calculated and the feedback is sent according to the result of Eq. (15).
  • Step 6: the simplified left-right consistency check is used to get the final absolute phase or matching points.
  • Step 7: the 3D shape is reconstructed based on the final absolute phase or matching points.
  • Step 8: the global depth volume and pixel-wise depth volume are updated according to the current 3D map. Since each pixel needs to search for its matching point among all fringe orders, the proposed algorithm has a time complexity of O(n²). But owing to the independence of each pixel, the final time complexity can be reduced to O(n) if parallel computing is implemented.

Fig. 7 Flowchart of the proposed method.

3. Experimental results

Several experiments were designed to verify the validity of the proposed techniques, including ADC, the simplified left-right consistency check, and the feedback mechanism based on the valid area. We also use quad-camera and dual-camera systems separately to implement real-time measurement and demonstrate the compatibility of our method. All the cameras used in our system are Basler acA640–750um, with a maximum frame rate of 750 fps at the full resolution of 640 × 480, outfitted with 12 mm Computar lenses. The projector is a LightCrafter 4500Pro with a resolution of 912 × 1140 and a projection speed of 120 Hz. In our experiments, the projection speed is 100 Hz, and all the cameras are synchronized by the trigger signal from the projector. The related parameters and thresholds are set as Zmin = −200 mm, Zmax = 200 mm, ΔZmotion = 10 mm, Hmin = 500, r = 5, and Smin = 500.

3.1. Measurement of single object

In the first experiment, a David statue with a complex surface was measured to verify the robustness of SPU. A quad-camera system was used with the projection of 48-period phase-shifting fringe patterns. The retrieved absolute phase maps and the reconstruction results are shown in Fig. 8. In the measuring process, the statue was first moved toward the measuring system and then left static for a few seconds. During the static stage, the absolute phase map acquired by multi-frequency PSP in Fig. 8(a) serves as the ground truth to detect the errors in Figs. 8(b)–8(d). The numbers and ratios of falsely unwrapped points are counted and displayed at the bottom right of Figs. 8(b)–8(d); the missing points beyond the common field of view are set inactive and excluded from the statistics. The erroneous regions are marked by red dotted circles. We can easily find that: (1) compared to CDC, the robustness of ADC increases obviously; (2) there is no difference between the performance of GADC and PWADC when measuring such a small single object. Besides, the comparison between Fig. 8(f) and Fig. 8(e) suggests that the falsely reconstructed points are far from their true positions. All the experimental results are consistent with our analysis in Section 2.2. More detailed measurement results of the moving statue are shown in Visualization 1.

Fig. 8 Measured results of the David statue (see Visualization 1 for the whole results). (a)–(d) The absolute phase maps acquired by conventional multi-frequency PSP, CDC, GADC, and PWADC. (e)–(h) The 3D reconstructions corresponding to (a)–(d).

3.2. Measurement of two isolated objects

In Section 2.2.2, we pointed out the problem of GADC in the case of measuring a large-size object or two isolated objects. The second experiment is designed to illustrate this problem and verify the superiority of PWADC. Two isolated objects, a ping-pong ball and a geometry model, were measured in this experiment, as shown in Fig. 9(i). The ping-pong ball was placed far from the camera and remained static, while the model was shifted from a position near the camera to a position far from it. Since the valid depth volume (about 300 mm) is much larger than that in the first experiment (about 150 mm), the robustness of GADC is obviously inferior to that of PWADC, although it is enhanced compared to that of CDC. From Figs. 9(j)–9(l), it is found that GADC has removed some redundancies present in CDC, but more redundancies still exist compared to PWADC. More detailed results are displayed in Visualization 2. This experiment provides strong evidence for the principles in Sections 2.2.1 and 2.2.2. Note that the X and Y coordinates in Figs. 9(j)–9(l) are pixel coordinates, and the invalid area of Fig. 9(l) is hidden to highlight the valid area.

Fig. 9 Measured results of a plastic ball and a geometry model (see Visualization 2 for the whole results). (a)–(d) The absolute phase maps acquired by conventional multi-frequency PSP, CDC, GADC, and PWADC. (e)–(h) The 3D reconstructions corresponding to (a)–(d). (i)–(l) Depth volumes corresponding to (e)–(h).

In the previous two experiments, we did not use the left-right consistency check or related algorithms. Next, the simplified left-right consistency check is added to our system to re-analyze the data of the second experiment. Figure 10 displays the final results. Comparing Fig. 10(b) with Fig. 9(b), we find that the simplified left-right consistency check reduces the error ratio obviously. However, the missing ratio in Fig. 10(b) after the simplified left-right consistency check increases obviously. That is because there are too many unreliable areas that must be checked by Eq. (14), which means more points are liable to fail Eq. (14) and be set inactive. To reduce the error ratio without increasing the missing ratio, the unreliable area must not be too large. The small missing ratio in Fig. 10(c) and the nearly perfect missing ratio in Fig. 10(d) benefit precisely from their smaller unreliable areas. It is worth decreasing the error ratio at the cost of an increased missing ratio, because a low error rate of the depth map generates a correct, compact depth volume, which in turn decreases the unreliable areas and the missing ratio. Visualization 3 displays more details about this experiment.

Fig. 10 Measured results with the simplified left-right consistency check (see Visualization 3 for the whole results). (a)–(h) The results corresponding to Figs. 9(a)–9(h). (i)–(l) Enlarged details corresponding to (e)–(h).

3.3. Real-time experiments

In the last experiment, we focus on validating the performance of the feedback mechanism, the real-time measurement capability, and the compatibility of our method. A ping-pong ball and a microscope shell were measured in this experiment. The microscope shell was rotated by an electronically controlled turntable, and the ping-pong ball was hung at a position far behind the microscope shell. In the beginning, the ping-pong ball was occluded by the microscope shell, but with the continuous rotation of the shell, the ball was gradually revealed. Because the ping-pong ball was far from and occluded by the microscope shell, it was not included in the adaptive depth volume. However, when it was revealed, its position was accurately detected by the feedback mechanism, and the adaptive depth volume was quickly updated. We implemented this experiment using a dual-camera and a quad-camera system respectively, to verify the compatibility of our method with different multi-view systems. Reconstruction speeds of about 40 fps and 60 fps were achieved by the quad-camera and dual-camera systems, respectively. The real-time measurement processes and results of the quad-camera and dual-camera systems can be found in Visualization 4 and Visualization 5; two frames from these visualizations are shown in Fig. 11.

Fig. 11 The real-time measurement process and results using our method based on (a) the quad-camera system (see Visualization 4 for the whole process) and (b) the dual-camera system (see Visualization 5 for the whole process).

4. Conclusion

We have presented a high-speed real-time 3D shape measurement approach based on ADC, where the measurement depth volume for geometric constraints is adaptively updated according to the currently reconstructed geometry. The rationale of the proposed approach relies on the fact that the depth distribution of a moving object varies continuously in both the spatial and temporal domains, which allows a compact depth volume to be defined and helps enhance the robustness of SPU. Furthermore, the adaptively updated measurement depth guarantees that the measured object can move freely within a large measurement volume. Two complementary techniques, the simplified left-right consistency check and the feedback mechanism based on the valid area, are introduced to further remove erroneously unwrapped regions and increase the robustness and flexibility of the ADC. Together, the adaptive depth constraint, the simplified left-right consistency check, and the feedback mechanism constitute a complete computational framework for robust and efficient phase unwrapping in high-speed real-time 3D shape measurement. Besides, the proposed approach has low computational cost and good real-time performance; the processing speed can be further improved significantly by using graphics processing units (GPUs), as all the involved algorithms are performed pixel-wise and are highly parallelizable. Experiments demonstrated the ability of the method to perform real-time 3D shape measurement with high accuracy, for complex surfaces and spatially isolated objects, using only three high-frequency fringe patterns.

There are several aspects of the proposed method that need further improvement, which we leave for future work. First, there are several parameters in the proposed approach that should be properly selected for different kinds of motion. Currently, we set these parameters empirically according to the overall properties of the test scene; it would be better if these parameters could be selected automatically and adaptively. Second, we simply use the measurement volume to replace the adaptive depth volume when a new object appears within the measurement volume but outside the adaptive depth volume, which is not an optimal choice. How to handle this case more gracefully is another interesting direction for further investigation.

Funding

National Key R&D Program of China (2017YFF0106403); National Natural Science Fund of China (61722506, 61705105, 111574152); Final Assembly ‘13th Five-Year Plan’ Advanced Research Project of China (30102070102); Equipment Advanced Research Fund of China (61404150202), The Key Research and Development Program of Jiangsu Province, China (BE2017162); Outstanding Youth Foundation of Jiangsu Province of China (BK20170034); National Defense Science and Technology Foundation of China (0106173); ‘Six Talent Peaks’ project of Jiangsu Province, China (2015-DZXX-009); ‘333 Engineering’ research project of Jiangsu Province, China (BRA2016407, BRA2015294); Fundamental Research Funds for the Central Universities (30917011204, 30916011322); Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091601410414); China Postdoctoral Science Foundation (2017M621747), and Jiangsu Planned Projects for Postdoctoral Research Funds (1701038A).

References

1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010).

2. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).

3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48, 133–140 (2010).

4. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Opt. Lasers Eng. 48(2), 191–204 (2010).

5. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).

6. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010).

7. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012).

8. S. V. Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016).

9. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983).

10. Q. Zhang and X. Su, “High-speed optical measurement for the drumhead vibration,” Opt. Express 13(8), 3110–3116 (2005).

11. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984).

12. S. Lei and S. Zhang, “Flexible 3-D shape measurement using projector defocusing,” Opt. Lett. 34(20), 3080–3082 (2009).

13. S. Zhang, D. van der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18(9), 9684–9689 (2010).

14. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).

15. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011).

16. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013).

17. Y. Xu, L. Ekstrand, J. Dai, and S. Zhang, “Phase error compensation for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 50(17), 2572–2581 (2011).

18. H. Zhao, X. Diao, H. Jiang, and X. Li, “High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method,” Opt. Express 25(8), 9171–9185 (2017).

19. Y. Hu, Q. Chen, S. Feng, T. Tao, H. Li, and C. Zuo, “Real-time microscopic 3-D shape measurement based on optimized pulse-width-modulation binary fringe projection,” Meas. Sci. Technol. 28(7), 075010 (2017).

20. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010).

21. C. Zuo, Q. Chen, G. Gu, S. Feng, and F. Feng, “High-speed three-dimensional profilometry for multiple objects with complex shapes,” Opt. Express 20(17), 19493–19510 (2012).

22. T. Weise, B. Leibe, and L. Van Gool, “Fast 3D scanning with automatic motion compensation,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007).

23. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39(23), 6715–6718 (2014).

24. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

25. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3D sensing with Fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9(3), 396–408 (2015).

26. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26(10), 12632–12637 (2018).

27. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999).

28. C. E. Towers, D. P. Towers, and Z. Zhang, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection,” Opt. Express 14(14), 6444–6455 (2006).

29. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

30. R. R. Garcia and A. Zakhor, “Consistent stereo-assisted absolute phase unwrapping methods for structured light systems,” IEEE J. Sel. Top. Signal Process. 6(5), 411–424 (2012).

31. Y. Zhang, Z. Xiong, and F. Wu, “Unambiguous 3D measurement from speckle-embedded fringe,” Appl. Opt. 52(32), 7797–7805 (2013).

32. W. Lohry and S. Zhang, “High-speed absolute three-dimensional shape measurement using three binary dithered patterns,” Opt. Express 22(22), 26752–26762 (2014).

33. C. Bräuer-Burchardt, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, “Using geometric constraints to solve the point correspondence problem in fringe projection based 3D measuring systems,” in International Conference on Image Analysis and Processing, pp. 265–274 (2011).

34. Z. Li, K. Zhong, Y. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38(9), 1389–1391 (2013).

35. T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, “Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system,” Opt. Express 24(18), 20253–20269 (2016).

36. X. Liu and J. Kofman, “High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement,” Opt. Express 25(14), 16618–16628 (2017).

37. K. Song, S. Hu, X. Wen, and Y. Yan, “Fast 3D shape measurement using Fourier transform profilometry without phase unwrapping,” Opt. Lasers Eng. 84, 74–81 (2016).

38. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016).

39. X. Liu and J. Kofman, “Background and amplitude encoded fringe patterns for 3D surface-shape measurement,” Opt. Lasers Eng. 94, 63–69 (2017).

40. Y. Xing and C. Quan, “Reference-plane-based fast pixel-by-pixel absolute phase retrieval for height measurement,” Appl. Opt. 57(17), 4901–4908 (2018).

41. A. Breitbarth, E. Müller, P. Kühmstedt, G. Notni, and J. Denzler, “Phase unwrapping of fringe images for dynamic 3D measurements without additional pattern projection,” in SPIE Sensing Technology + Applications, 948903 (2015).

42. T. Tao, Q. Chen, S. Feng, Y. Hu, M. Zhang, and C. Zuo, “High-precision real-time 3D shape measurement based on a quad-camera system,” J. Opt. 20(1), 014009 (2017).

Supplementary Material (5)

Visualization 1: Measured results of David statue
Visualization 2: Measured results of a plastic ball and a geometry model
Visualization 3: Measured results with the simplified left-right consistency check
Visualization 4: The real-time measurement process and results
Visualization 5: The real-time measurement process and results
