
Motion measurement system of compliant mechanisms using computer micro-vision

Open Access

Abstract

Position sensing is essential to validate the mechanical design and verify the performance in micromanipulation. A practical system for non-contact micro-motion measurement of compliant nanopositioning stages and micromanipulators is proposed using computer micro-vision. The micro-motion measurement method integrates optical microscopy and an optical flow-based technique, with which the motions of compliant mechanisms are precisely detected and measured. Simulations are carried out to validate the robustness of the proposed method, and the micro-vision system and a laser interferometer measurement system are built for a series of experiments. The experimental results demonstrate that the proposed measurement system possesses high stability, extensibility, and precision, with 0.06 µm absolute accuracy and 0.05 µm standard deviation.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Compliant mechanisms transmit displacement and force through the deformation of their structural components. They offer the advantages of compact structure, high precision, friction-free motion, long life, and fast response, and have therefore drawn great attention in recent years as the mechanical configuration in micro-electro-mechanical systems (MEMS) [1]. Many studies have been conducted to design and analyze compliant mechanisms for micromanipulation tasks [2-6]. Kenton and Leang [2] used serial-compliant double-hinged flexures for high-bandwidth nanopositioning stages. Tang and Li [3] proposed a novel 2-DOF compliant mechanism for nanopositioning. Wang and Zhang [4] compared different compliant nanopositioning systems and optimized the structural parameters. Xu et al. [5] designed and analyzed a gripper device with a compliant mechanism for micromanipulation. Chen et al. [6] presented an optimal design framework for a microgripper with displacement amplification. With the booming development of compliant mechanisms, compliant positioning stages and manipulators will be more broadly applied in micromanipulation. However, compliant mechanisms for micromanipulation are normally compact, small, and specifically constructed, which raises a challenging but significant demand for precisely detecting their motions. One typical application case is grasping 30 - 80 $\mu m$ objects with compliant microgrippers in micromanipulation [4]. Thus, a measuring range above 200 $\mu m$ with a measurement accuracy better than 0.1 $\mu m$ is preferable.

As one of the most commonly used tools to measure displacements in precision engineering, the laser interferometer measurement system (LIMS) can achieve measurement accuracy down to the nanoscale. Nevertheless, normally only one degree-of-freedom (DOF) displacement can be detected by LIMS at a time; to simultaneously measure two or more DOFs, auxiliary devices must be installed. Environmental conditions also interfere with the LIMS results, so temperature, pressure, and air humidity need to be strictly controlled. Moreover, the reflecting device of LIMS must be properly installed on the measured object, which is inconvenient for compact compliant mechanisms and costs much preparation time. In recent years, a few other displacement measurement methods have been presented. Berkovic and Shafir [7] reviewed optical techniques for displacement and distance measurements. Hsieh and Pan [8] proposed a grating-based method for multi-DOF displacement measurements. Other researchers such as Mudassar et al. [9], Li et al. [10], and Pan et al. [11] developed vision-based methods and digital image processing techniques for precise displacement measurements. High accuracy, reliability, and expandability have been verified in these vision systems.

This paper develops a micro-motion measurement system for compliant mechanisms using a computer micro-vision-based method. All motions of a compliant mechanism within the field of view of this micro-vision system can be precisely captured and measured, thereby contributing to the design validation and application of compliant mechanisms. The remainder of this paper is organized as follows. Section 2 presents the methodology in detail. Section 3 presents the simulations and results. Section 4 demonstrates a series of experiments with comparison to LIMS. Section 5 summarizes the conclusion.

2. Methodology

2.1 System setup and measuring method

The micro-vision-based motion measurement system setup is illustrated in Fig. 1. This system consists of three main parts: the microscopic imaging unit, the image processing unit, and the motion control unit. The microscopic imaging unit is based on the optical microscope, the illuminator, and the sensor, which can generate the magnified images of the measured targets. The image processing unit, which comprises the image acquisition card and the industrial personal computer (IPC), can efficiently transmit and process the image data. The motion control unit is utilized to control the adjustable lens and platform, which enables the micro-motion measurement system to possess a flexible measurement range and accuracy.

Fig. 1. The micro-motion measurement system for compliant mechanisms. (a) Schematic diagram of the micro-motion measurement system. (b) Detailed configuration.

Concretely, we establish the micro-motion measurement system with the following hardware. The optical microscope is assembled from a controllable zoom lens with a magnification range of 0.714 to 3.330 (1-6010, Navitar, USA), two selectable objectives (M Plan Apo 10x and M Plan Apo 50x, Mitutoyo, Japan), and a focus stage (TSA150-ABZ, Zolix, China). A CMOS camera with a pixel size of 5.5 $\mu m$ $\times$ 5.5 $\mu m$ (Genie TS M2048, Teledyne Dalsa, Canada) and a $0.5\times$ adapter (1-62088, Navitar, USA) is selected as the sensor to capture the image sequence. The image acquisition card (GigEPro GigE PCIe Card, Point Grey, Canada), an IPC (Intel Core i7-4790M, CPU 3.6 GHz, RAM 16 GB), and a display constitute the image processing unit. The adjustable platform is constructed with a macro stage (DZTSA300GAB-SVO1, Zolix, China) driven by servo motors (MHMD042P1U, Panasonic, Japan) and a micro stage (YM10A-S1, KOHZU, Japan), which can precisely carry the measured targets into the field of view (FOV) of the micro-vision system. The micro-vision system is set up in an ISO Class 7 cleanroom, and a high-intensity LED fiber optic illuminator (LMI-6000, Dolan-Jenner, USA) and a vibration isolation table are applied to ensure stable measurement. After the image sequence of the targets is captured by the sensor, it is sent to and processed by the IPC through the image acquisition card.

Template matching is the most widely used approach for vision-based displacement measurements, and it has been verified as a mature technique in many applications [12-14]. Nevertheless, different templates can generate different convergent results, and human intervention is needed for the template selection, which determines the measurement performance. The matching may converge slowly, fall into local optima, or even fail to converge if the template is not chosen properly. Moreover, to reliably evaluate the design and fabrication of compliant mechanisms, the motion analysis, dynamic behavior, and characterization of the whole mechanical structure are needed, which conventional template matching cannot provide. Optical flow is a technique that estimates a displacement vector field for each point in an image sequence, producing pixel-to-pixel correspondences between two images based on the pixel gray values [15]. The traditional optical flow approach does not extract a dense vector field, and the discontinuities are lost [16,17]. At the micro/nano scale, noise may also limit the performance of optical flow. Matching scale-invariant feature transform (SIFT) descriptors [18] at each pixel, instead of raw pixels, within the optical flow framework allows discrete optimization while preserving discontinuities. The discontinuity-preserving spatial model also enables robust matching across different parts of the scene. There are a few advantages of adopting SIFT into optical flow [19]: 1) significant features of the captured images are well encoded in the SIFT descriptors; 2) the SIFT descriptors are insensitive to distortions caused by motions and noises; 3) fast discrete optimization can be applied in the SIFT-based optical flow computation; 4) long-distance and long-term motions of compliant mechanisms can still be estimated by SIFT matching; 5) the SIFT-based optical flow algorithm directly encodes features from every input image and does not require training data or prior knowledge of the field of view of the micro-vision system. A brief sketch contrasting the conventional template-based approach is given below, and the details of the procedures are then discussed.
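For contrast with the dense, per-pixel correspondence adopted in this work, the following Python sketch illustrates the conventional single-template approach using normalized cross-correlation. It is an illustrative example only, not the implementation evaluated in this paper, and the template box coordinates are hypothetical.

```python
import cv2

def template_displacement(img1, img2, template_box):
    """Estimate a single displacement by normalized cross-correlation.

    img1, img2   : 8-bit grayscale images of consecutive frames.
    template_box : (x, y, w, h) template manually selected in img1 (the
                   manual choice that determines matching performance).
    Returns the integer-pixel displacement of the best match in img2.
    """
    x, y, w, h = template_box
    template = img1[y:y + h, x:x + w]
    score = cv2.matchTemplate(img2, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)  # location of the peak score
    return bx - x, by - y
```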

2.2 SIFT-based optical flow

The SIFT feature descriptor encodes local gradient information such as edges and corners and is invariant to changes in scaling, rotation, noise, viewpoint, and illumination. As a result, the SIFT descriptor is insensitive to light, shadow, and affine transforms in the acquired microscope images. Exploiting SIFT matching in the optical flow framework avoids the noise and appearance changes caused by the deformation of compliant mechanisms in image sequences.

To extract the local features into a local orientation histogram, four steps of the SIFT computation are conducted [20]: scale-space extrema detection, key-point localization, orientation assignment, and descriptor construction. In the first two steps, pixel candidates that are robust to scale changes are extracted. The last two steps store the local orientation information into histogram vectors, in which each pixel in a neighborhood is assigned a specific orientation. Concretely, the $16\times 16$ neighborhood of every pixel in an image is converted into a $4\times 4$ cell array, as illustrated in Fig. 2, where the orientations are quantized into eight bins and then accumulated into a 128-dimensional histogram vector. Therefore, a SIFT descriptor represents each pixel in the image, forming the SIFT image.
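To make the descriptor construction concrete, a minimal NumPy sketch is given below. It builds the 4 × 4 × 8 = 128-bin orientation histogram for one pixel from its 16 × 16 neighborhood; the Gaussian weighting and trilinear interpolation of the full SIFT formulation are omitted, and this is not the authors' implementation.

```python
import numpy as np

def sift_descriptor_at(img, y, x):
    """Simplified 128-D SIFT-like descriptor for the pixel (y, x).

    The 16x16 neighborhood is split into a 4x4 grid of 4x4 cells; each
    cell contributes an 8-bin gradient-orientation histogram weighted by
    gradient magnitude. Assumes (y, x) is at least 8 pixels from borders.
    """
    patch = img[y - 8:y + 8, x - 8:x + 8].astype(np.float64)
    gy, gx = np.gradient(patch)                  # image gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2*pi)

    hist = np.zeros((4, 4, 8))
    for cy in range(4):
        for cx in range(4):
            cell = (slice(4 * cy, 4 * cy + 4), slice(4 * cx, 4 * cx + 4))
            bins = (ang[cell] / (2 * np.pi) * 8).astype(int) % 8
            for b, m in zip(bins.ravel(), mag[cell].ravel()):
                hist[cy, cx, b] += m             # accumulate into 8 bins per cell
    desc = hist.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc     # 128-D unit vector
```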

Fig. 2. SIFT feature histogram. Each pixel is assigned with an orientation, which is quantized into 8 bins in the cell array.

SIFT-based optical flow is then computed from consecutive SIFT images. For an image pair $I_1$ and $I_2$, the objective function of the SIFT-based optical flow can be defined as

$$E(\boldsymbol{p}) = F(\boldsymbol{p}) + G(\boldsymbol{p}) + R(\boldsymbol{p})$$
$$F(\boldsymbol{p}) = \sum_{\boldsymbol{p}} \min\left( \|I_1(\boldsymbol{p})-I_2(\boldsymbol{p} +(\mu(\boldsymbol{p}),\nu(\boldsymbol{p}))) \|_1, t \right)$$
$$G(\boldsymbol{p}) = \sum_{\boldsymbol{p}} \eta \left(|\mu(\boldsymbol{p})| + |\nu(\boldsymbol{p})| \right)$$
$$R(\boldsymbol{p}) = \sum_{(\boldsymbol{p},\boldsymbol{q}) \in \varepsilon} \min\left(\alpha|\mu(\boldsymbol{p})-\mu(\boldsymbol{q})|,h\right) + \min\left(\alpha|\nu(\boldsymbol{p})-\nu(\boldsymbol{q})|,h\right)$$
where $\boldsymbol {p} = (x,y)$ is the pixel coordinate in the images, and $\mu (\boldsymbol {p})$ and $\nu (\boldsymbol {p})$ are the flow vectors in the horizontal and vertical directions, respectively. $F(\boldsymbol {p})$ in Eq. (2) is the data term, which forces the SIFT descriptors to be matched along the flow vector. $G(\boldsymbol {p})$ in Eq. (3) is the small displacement term, which keeps the flow vectors as small as possible when no other information is available. The spatial regularization term $R(\boldsymbol {p})$ in Eq. (4) ensures the similarity of flow vectors within the 4-neighborhood $\varepsilon$, where $\boldsymbol {q}$ denotes the adjacent pixels in the neighborhood. The truncated $\textit {l}_1$-norms with thresholds $t$ and $h$ are employed in Eqs. (2) and (4) to deal with SIFT matching outliers and flow discontinuities. The distance transform function [21] and sequential belief propagation [22] are applied to solve Eq. (1) for stable convergence.
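For illustration, the following sketch evaluates the objective of Eqs. (1)-(4) for a given integer flow field on dense SIFT images. The paper minimizes this energy with the distance transform and sequential belief propagation, so the brute-force evaluation and the variable names here are expository assumptions rather than the actual solver.

```python
import numpy as np

def sift_flow_energy(s1, s2, u, v, t, h, eta, alpha):
    """Evaluate E = F + G + R of Eqs. (1)-(4) for an integer flow (u, v).

    s1, s2 : (H, W, 128) dense SIFT images I_1 and I_2.
    u, v   : (H, W) integer flow fields (horizontal, vertical).
    t, h   : truncation thresholds of the data and smoothness terms.
    eta    : weight of the small-displacement term.
    alpha  : weight of the truncated l1 smoothness term.
    """
    H, W = u.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yt = np.clip(ys + v, 0, H - 1)          # displaced rows
    xt = np.clip(xs + u, 0, W - 1)          # displaced columns

    # Data term F: truncated l1 distance between matched SIFT descriptors.
    F = np.minimum(np.abs(s1[ys, xs] - s2[yt, xt]).sum(axis=-1), t).sum()

    # Small-displacement term G.
    G = eta * (np.abs(u) + np.abs(v)).sum()

    # Smoothness term R over the 4-neighborhood (right and down neighbors).
    R = 0.0
    for f in (u, v):
        R += np.minimum(alpha * np.abs(np.diff(f, axis=0)), h).sum()
        R += np.minimum(alpha * np.abs(np.diff(f, axis=1)), h).sum()
    return F + G + R
```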

A coarse-to-fine scheme is also used to speed up the matching process. The SIFT pyramid is established, as illustrated in Fig. 3. Let image 1 and image 2 be two images in the image sequence, and let $I^n$ represent pyramid level $n$, which is downsampled and smoothed from the lower level $I^{n-1}$. The flow is first estimated at a coarse level of the image grid and then gradually refined at finer levels. Assuming $\boldsymbol {p}$ denotes the pixel coordinate to match and $\boldsymbol {w}$ is the searching window, after the best match $\boldsymbol {w}^n$ is obtained by sequential belief propagation at the top level $I^{n}$, the location of the flow vector is propagated to the finer level $I^{n-1}$. This procedure iterates from $I^{n}$ to $I^{1}$ until the optimized flow vector is estimated at the bottom level. With the flow pyramid, large motions beyond the searching window size can also be precisely measured.
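A minimal sketch of this coarse-to-fine propagation is shown below; cv2.pyrDown handles the smoothing and downsampling, and match_level is a hypothetical placeholder for the per-level belief-propagation matching rather than part of any real library.

```python
import cv2
import numpy as np

def coarse_to_fine_flow(img1, img2, match_level, levels=8):
    """Coarse-to-fine flow estimation on an image pyramid.

    match_level(a, b, init_flow) stands in for the per-level SIFT-descriptor
    matching (belief propagation in the paper); it refines the initial flow
    and returns an (H, W, 2) flow field for that level.
    """
    # Build the pyramids: level 0 is the original image, each higher level
    # is smoothed and downsampled by a factor of two.
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))

    flow = np.zeros((*pyr1[-1].shape[:2], 2), dtype=np.float32)
    for lvl in range(levels - 1, -1, -1):        # top (coarsest) to bottom
        flow = match_level(pyr1[lvl], pyr2[lvl], flow)
        if lvl > 0:                              # propagate to the finer level
            h, w = pyr1[lvl - 1].shape[:2]
            flow = 2.0 * cv2.resize(flow, (w, h), interpolation=cv2.INTER_LINEAR)
    return flow
```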

Fig. 3. Pyramid of the SIFT-based flow matching process.

Figure 4 presents the structure of the proposed method. Given the region of interest in the first image of the image sequence, the displacements of pixels are computed between the first and second images using the aforementioned SIFT-based flow algorithm. Concretely, the number of pyramid levels is set to 8, the top level size is set to 5, the searching window size is set to 4, the weight $\alpha$ of the truncated $\textit {l}_1$-norm regularization in Eq. (4) is set to 5, the truncation threshold $h$ is set to 80, and the number of belief propagation iterations is 20. Then, the motions of the target are obtained, and the location of the region of interest is updated in the field of view. The second image is reset as the first image, and a new image from the image sequence is input for the next round of the matching process. This process is repeated until the last image is paired and computed. The measurements can then be recorded and converted from image space to physical space using the known scale ratio of the micro-vision system.
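The per-frame loop of Fig. 4 can be summarized by the following sketch, where sift_flow is a placeholder for the SIFT-based optical flow computation described above; the flow channel convention and the averaging over the region of interest are assumptions made for illustration.

```python
import numpy as np

def track_roi(frames, roi, scale_um_per_px, sift_flow):
    """Track a region of interest through an image sequence.

    frames          : list of grayscale images.
    roi             : (x0, y0, x1, y1) region of interest in the first frame.
    scale_um_per_px : calibrated scale ratio of the micro-vision system.
    sift_flow(a, b) : placeholder returning an (H, W, 2) flow field from a to b,
                      with channel 0 horizontal and channel 1 vertical (assumed).
    """
    x0, y0, x1, y1 = roi
    trajectory = [(0.0, 0.0)]                     # accumulated motion in micrometers
    prev = frames[0]
    for curr in frames[1:]:
        flow = sift_flow(prev, curr)              # pixel-to-pixel correspondences
        du, dv = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
        # Update the region of interest and accumulate the motion.
        x0, x1 = int(round(x0 + du)), int(round(x1 + du))
        y0, y1 = int(round(y0 + dv)), int(round(y1 + dv))
        px, py = trajectory[-1]
        trajectory.append((px + du * scale_um_per_px, py + dv * scale_um_per_px))
        prev = curr                                # next round: current becomes first
    return trajectory
```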

Fig. 4. Structure of the proposed method.

3. Simulation and result

The quality of the images acquired from the CMOS sensor can be decisive for micro-vision-based measurement systems. To verify the accuracy and robustness of the proposed system, a series of simulations was conducted. These simulations focused on the performance of the proposed method under different noise interference and input displacements. For more realistic tests, a real microscope image of the compliant mechanism with a size of 640 $\times$ 480 pixels was captured for the simulation. Gaussian noise was applied since it can realistically simulate a complex noisy environment. Input displacements of 0, 20, and 40 pixels were set in both the x and y directions of the image, while zero-mean Gaussian noise with variances $\sigma ^2$ of 0.001, 0.01, and 0.1 was added. A region of interest was first selected in the image. Then, the image was shifted by the predefined displacements to simulate the second image. The process in Fig. 4 was conducted, and the location of the matched target was updated and recorded. All simulations were implemented in MathWorks MATLAB.
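As a sketch of how one such trial can be generated (the paper's simulations were written in MATLAB; this NumPy version is illustrative only, and whether noise is added to one or both images is an implementation choice):

```python
import numpy as np

def make_test_pair(image, dx=20, dy=20, sigma2=0.01, rng=None):
    """Simulated image pair for one trial.

    The second image is the first shifted by a known (dx, dy) in pixels,
    with zero-mean Gaussian noise of variance sigma2 added to the shifted
    image only. The input image is assumed to be grayscale, scaled to [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)   # ground-truth shift
    noisy = shifted + rng.normal(0.0, np.sqrt(sigma2), image.shape)
    return image, np.clip(noisy, 0.0, 1.0)
```

The matched location returned by the procedure of Fig. 4 can then be compared with the centroid of the shifted region to obtain errors such as those reported in Table 1.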

Figure 5 exhibits the simulation results, where the matched target is marked with a green box. Corresponding to the different conditions in Fig. 5, Table 1 lists the detailed measurement results based on the simulation data, where the centroid of the selected region is chosen as the location of the predefined position and the simulation result. As can be seen from Fig. 5(a)-(c), when the noise is low ($\sigma ^2=0.001$), large motions can be precisely measured. When the noise increases ($\sigma ^2=0.01$), small measurement errors can be observed in Fig. 5(d)-(f), but the measurement accuracy and robustness remain high, with errors smaller than one pixel. When the noise interference is severely high ($\sigma ^2=0.1$), the measuring results deteriorate, especially when the input displacements are large. Nevertheless, such a noisy environment is very rare under normal laboratory conditions, and the environmental noise can be well controlled with the high-intensity illuminator and vibration isolation facilities in the proposed measurement setup. Therefore, the simulations indicate that the proposed method can achieve high-precision motion measurement under acceptable noise levels.

Fig. 5. Simulation results under various conditions. The green box denotes the matched location.

Table 1. Simulation results of the matched location with different initial input displacements and noise interference.

4. Experiments and discussion

4.1 Accuracy and precision verification

The proposed micro-motion measurement system is tested with different compliant mechanisms to validate different aspects of its performance. A novel nanopositioning stage with high resonant frequency and low cross-coupling was specifically designed and fabricated in our laboratory [23], and it was used for the demonstration of measurement accuracy and precision. As shown in Fig. 6, the compliant nanopositioning stage was constructed using parallelogram hybrid beams and doubly clamped beams to deal with the natural frequency and cross-coupling effects. Driven by the piezoelectric actuator (PA) installed in the compliant mechanism, the moving stage can perform displacements in the x-direction and y-direction separately within the designed workspace of 15 $\mu m$ $\times$ 15 $\mu m$. The control module (XE-517.i3, XMT, China) supplied 0 - 150 volts to the PA (NAC2014-H14, XMT, China).

Fig. 6. Experimental setup of simultaneous measurements for accuracy and precision validation. (a) Schematic diagram of the nanopositioning stage. (b) Full view of the micro-motion measurement system, LIMS and the compliant nanopositioning stage. (c) Zoom-in view of the experimental setup on the nanopositioning stage.

Figures 6(b) and 6(c) present the experimental setup for the accuracy and precision measurements. The base of the compliant positioning stage was fixed on the adjustable platform. To achieve the best measurement accuracy, the zoom lens of the proposed micro-motion measurement system was set to $3.33 \times$ and the $50 \times$ objective was selected. Using the calibration method presented in [24], the magnification of the vision system setup was calibrated as 77.673$\times$. Since the SIFT-based optical flow technique establishes pixel-to-pixel correspondences, the algorithm resolution is one pixel. With the 5.5 $\mu m$ pixel size of the sensor, each pixel in the image corresponds to 70.8 $nm$. The exposure time of the CMOS sensor was set to 700 $\mu s$. To compare the performance with professional measuring equipment, a high-precision LIMS with a linear measurement accuracy of $\pm$ 0.5 ppm and 1 $nm$ resolution was also established. This LIMS included a laser interferometer (XL-80, Renishaw, UK), an environmental compensator (XC-80, Renishaw, UK), and an IPC (Intel Core i7-4770M, CPU 2.40 GHz, RAM 4 GB). Since the LIMS can only measure one DOF, all devices of the LIMS must be specifically installed and aligned for the measured direction.
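Explicitly, the scale ratio follows from the sensor pixel size divided by the calibrated magnification, which is a simple check of the numbers quoted above:

$$\textrm{scale ratio} = \frac{5.5\ \mu m}{77.673} \approx 70.8\ nm/\textrm{pixel}$$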

After both measurement systems were initialized, the PA was driven over the range of 0 - 150 volts in ten intervals by the PA control module. At each interval, when the nanopositioning stage stopped, the micro-motion measurement system and LIMS simultaneously measured the displacements of the compliant nanopositioning stage. Ten groups of experiments were implemented in the x-direction and y-direction, respectively. The measurement results of the proposed method and LIMS are presented in Fig. 7. It can be seen that the displacement curves of each group obtained by the proposed method and the LIMS are highly consistent. Due to the piezoelectric hysteresis effect, when each group of measurements finished and the voltage returned to 0 volts, the absolute displacement of the PA did not return to 0. Meanwhile, before every group of measurements, both the micro-vision system and the LIMS were reset. Therefore, the absolute starting position of each group was different, which resulted in different displacement curves, as can be seen from the difference between the 1st and 10th groups in Figs. 7(a) and 7(c). Figure 8 shows the deviation of each group between the two measurement methods, which is smaller than 0.06 $\mu m$. Since the scale ratio of this experimental setup was 70.8 $nm/pixel$, the accuracy of the proposed micro-motion measurement system can be verified as smaller than one pixel.
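A minimal sketch of how the per-interval deviation of Fig. 8 can be computed from the paired readings is given below; the array names are hypothetical and the computation is illustrative only.

```python
import numpy as np

def deviation_from_lims(vision_um, lims_um):
    """Per-interval deviation between the vision system and LIMS readings.

    vision_um, lims_um : arrays of displacements (micrometers) recorded at
    the same voltage intervals by the two systems. Returns the absolute
    deviation at each interval and its maximum (reported to stay below
    0.06 um in these experiments).
    """
    vision_um = np.asarray(vision_um, dtype=float)
    lims_um = np.asarray(lims_um, dtype=float)
    dev = np.abs(vision_um - lims_um)
    return dev, dev.max()
```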

Fig. 7. Measurement results by the micro-motion measurement system in (a) x-direction and (b) y-direction. Measurement results by the LIMS in (c) x-direction and (d) y-direction.

Fig. 8. The deviation between the proposed method and LIMS in (a) x-direction and (b) y-direction.

To validate the stability of the proposed micro-motion measurement system, precision measurements were then conducted. The PAs again drove the compliant nanopositioning stage in the x-direction and y-direction over ten intervals. During each interval, 30 measurements from the micro-motion measurement system were recorded. Figure 9(a) illustrates the maximum residual error, which is defined as the maximum difference between each measured displacement and the mean of the measured displacements. Figure 9(b) shows the experimental standard deviation of the proposed method. With the vibration isolation table and well-controlled illumination, the environmental noise was minimized. Both the residual error and the standard deviation in the x and y directions are quite small, which also demonstrates the high robustness, reliability, and stability of the micro-motion measurement system.
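For clarity, these two precision metrics can be computed from the 30 repeated readings at each interval as in the following sketch; the use of the sample standard deviation (ddof = 1) is an assumption.

```python
import numpy as np

def precision_metrics(readings):
    """Precision metrics for one measurement interval.

    readings : array of repeated displacement measurements (30 per interval
               in the experiment).
    Returns (maximum residual error, experimental standard deviation), where
    a residual is the difference between a reading and the mean of readings.
    """
    readings = np.asarray(readings, dtype=float)
    residuals = readings - readings.mean()
    return np.abs(residuals).max(), readings.std(ddof=1)
```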

Fig. 9. Precision measurements of the proposed method. (a) Residual error and (b) standard deviation.

4.2 Motion analysis

Full-field motion measurement is important and useful to verify the design and fabrication of compliant mechanisms. A supplementary experiment was carried out to demonstrate the capability of the micro-motion measurement system for comprehensive motion analysis. A compliant microgripper was optimally designed and fabricated in our previous work [6], aiming to achieve both displacement amplification and parallel grasping in a compact configuration, as shown in Fig. 10(a). To evaluate the actual performance of the microgripper, the proposed method was utilized. In Fig. 10(b), the microgripper was fixed onto the adjustable platform of the micro-motion measurement system. The $10 \times$ objective was selected to obtain a large field of view of 727.4 $\mu m$ $\times$ 363.7 $\mu m$, where the scale ratio was 355.2 $nm/pixel$. Through the control module, a voltage of 0 - 150 volts was applied to the PA, which provided the input displacement of the amplification structure at $P_{in}$. The outputs were then produced at the jaw region, marked as $P_{out}$ in Fig. 10, and the grasping and releasing operations were conducted.

Fig. 10. Performance evaluation of the microgripper using the proposed method. (a) The prototype of piezoelectric-driven compliant microgripper. (b) Microgripper under motion analysis.

The motions of both the input displacement and the output were measured using the proposed method, as shown in Figs. 11(a) and 11(b). Grasping and releasing were successively performed, where the voltage increment and decrement of the PA were 10 volts between 0 and 150 volts. Figures 11(c) and 11(d) visualize the motions using the flow color coding from [25]. The motions were well detected and decoupled into the x and y components of the displacement field. It can be seen that the outline of the measured motions is highly consistent with the targets in the acquired image. The color of the microgripper jaw area is quite uniform, which indicates the high stability of the proposed method. Meanwhile, the visualized colors of grasping and releasing are exactly opposite in the different jaw areas, which validates that the movement directions of the two operations were also opposite, thereby qualitatively verifying the microgripper design. The input displacement of the compliant microgripper is shown in Fig. 12. The maximum input displacement is recorded as 5.99 $\mu m$ when the voltage of the PA is 150 volts. Figure 12(b) presents the average motions of both the left and right jaws for different voltage inputs. When the voltage is 150 volts, the left jaw movement in the parallel direction is 51.048 pixels, corresponding to 18.132 $\mu m$, while the right jaw movement is -49.502 pixels, namely -17.583 $\mu m$. The corresponding parasitic motions are 0.00005328 pixels (namely 0.00001892 $\mu m$) and 0.186 pixels (namely 0.0663 $\mu m$), respectively. Thus, the ratio of parasitic to parallel displacement is smaller than 0.35%. The high precision of parallel grasping is verified, which enables stable micromanipulation using the microgripper. The experimental results for the amplification ratio were then obtained. As shown in Fig. 12(c), the amplification ratio remains larger than 2. Therefore, the displacement amplification design is validated. The voltage ascending and descending processes are not exactly the same, which results from the large-deflection nonlinearity of the designed compliant mechanism. This also proves the importance of an accurate measurement method for the precision micromanipulation of compliant mechanisms.
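The reported output quantities follow from the measured pixel displacements and the scale ratio by simple arithmetic; the sketch below reproduces this conversion, assuming the amplification ratio is defined as output displacement over input displacement.

```python
SCALE_UM_PER_PX = 0.3552          # scale ratio of the 10x configuration

def jaw_metrics(parallel_px, parasitic_px, input_um):
    """Convert measured jaw displacements (pixels) to micrometers and derive
    the parasitic-motion ratio and the displacement amplification ratio."""
    parallel_um = parallel_px * SCALE_UM_PER_PX
    parasitic_um = parasitic_px * SCALE_UM_PER_PX
    return {
        "parallel_um": parallel_um,
        "parasitic_um": parasitic_um,
        "parasitic_ratio": abs(parasitic_um / parallel_um),
        "amplification": abs(parallel_um / input_um),
    }

# Example with the values measured at 150 volts (left jaw):
print(jaw_metrics(parallel_px=51.048, parasitic_px=0.00005328, input_um=5.99))
```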

Fig. 11. Field of view of the measurement area. (a) The jaws of the compliant microgripper. (b) The input area of compliant mechanism. (c) Motion visualization of grasping. (d) Motion visualization of releasing.

Fig. 12. Experimental results of the motion analysis of the microgripper. (a) Input displacement. (b) Output performance. (c) Amplification ratio.

5. Conclusion

This paper presents a flexible micro-motion measurement system for compliant mechanisms using the computer micro-vision technique. With this system, the micro-motions of the mechanism can be precisely detected and analyzed. Both simulations and experiments were conducted to demonstrate the high accuracy and robustness of the proposed system. Compared to traditional instruments such as LIMS, the proposed method does not require the sophisticated installation of auxiliary devices, artificial markers, or specific patterns. Thus, it is a practical and competitive alternative for motion measurement of compliant mechanisms at small scales. Furthermore, with strong expandability, the proposed method can be used for micromanipulation, design validation, micro-object tracking, and deformation measurement. We believe the proposed method can contribute to the development of compliant mechanisms.

Funding

National Natural Science Foundation of China (51820105007, 51905176); China Postdoctoral Science Foundation (2018M643072); Fundamental Research Funds for the Central Universities (2019MS057).

Disclosures

The authors declare no conflicts of interest.

References

1. B. Zhu, X. Zhang, H. Zhang, J. Liang, H. Zang, H. Li, and R. Wang, "Design of compliant mechanisms using continuum topology optimization: A review," Mech. Mach. Theory 143, 103622 (2020).

2. B. J. Kenton and K. K. Leang, "Design and control of a three-axis serial-kinematic high-bandwidth nanopositioner," IEEE/ASME Trans. Mechatron. 17(2), 356–369 (2012).

3. H. Tang and Y. Li, "Design, analysis, and test of a novel 2-DOF nanopositioning system driven by dual mode," IEEE Trans. Robot. 29(3), 650–662 (2013).

4. R. Wang and X. Zhang, "Parameters optimization and experiment of a planar parallel 3-DOF nanopositioning system," IEEE Trans. Ind. Electron. 65(3), 2388–2397 (2018).

5. Y. Liu, Y. Zhang, and Q. Xu, "Design and control of a novel compliant constant-force gripper based on buckled fixed-guided beams," IEEE/ASME Trans. Mechatron. 22(1), 476–486 (2017).

6. W. Chen, X. Zhang, H. Li, J. Wei, and S. Fatikow, "Nonlinear analysis and optimal design of a novel piezoelectric-driven compliant microgripper," Mech. Mach. Theory 118, 32–52 (2017).

7. G. Berkovic and E. Shafir, "Optical methods for distance and displacement measurements," Adv. Opt. Photonics 4(4), 441–471 (2012).

8. H.-L. Hsieh and S.-W. Pan, "Development of a grating-based interferometer for six-degree-of-freedom displacement and angle measurements," Opt. Express 23(3), 2451–2465 (2015).

9. A. A. Mudassar and S. Butt, "Improved digital image correlation for in-plane displacement measurement," Appl. Opt. 53(5), 960–970 (2014).

10. H. Li, X. Zhang, B. Zhu, and S. Fatikow, "Online precise motion measurement of 3-DOF nanopositioners based on image correlation," IEEE Trans. Instrum. Meas. 68(3), 782–790 (2019).

11. B. Pan, L. Yu, and D. Wu, "High-accuracy 2D digital image correlation measurements with bilateral telecentric lenses: error analysis and experimental verification," Exp. Mech. 53(9), 1719–1733 (2013).

12. S. Baker and I. Matthews, "Lucas-Kanade 20 years on: A unifying framework," Int. J. Comput. Vis. 56(3), 221–255 (2004).

13. A. Nakhmani and A. Tannenbaum, "A new distance measure based on generalized image normalized cross-correlation for robust video tracking and image recognition," Pattern Recognit. Lett. 34(3), 315–321 (2013).

14. H. Wu, X. Zhang, J. Gan, H. Li, and P. Ge, "Displacement measurement system for inverters using computer micro-vision," Opt. Lasers Eng. 81, 113–118 (2016).

15. A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods," Int. J. Comput. Vis. 61(3), 1–21 (2005).

16. S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2 (IEEE, 2006), pp. 2169–2178.

17. B. K. Horn and B. G. Schunck, "Determining optical flow," in Techniques and Applications of Image Understanding, vol. 281 (International Society for Optics and Photonics, 1981), pp. 319–331.

18. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis. 60(2), 91–110 (2004).

19. C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman, "SIFT Flow: Dense correspondence across different scenes," in European Conference on Computer Vision (Springer, 2008), pp. 28–42.

20. Z. Chaoyang, "Video object tracking using SIFT and mean shift," Master's thesis, Chalmers University of Technology, Sweden (2011).

21. P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient belief propagation for early vision," Int. J. Comput. Vis. 70(1), 41–54 (2006).

22. R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother, "A comparative study of energy minimization methods for Markov random fields with smoothness-based priors," IEEE Trans. Pattern Anal. Mach. Intell. 30(6), 1068–1080 (2008).

23. S. Lin, X. Zhang, and B. Zhu, "Design, modeling and analysis of a XY nanopositioning stage for high speed scanning," in IOP Conference Series: Materials Science and Engineering, vol. 538 (IOP Publishing, 2019), p. 012043.

24. H. Li, X. Zhang, H. Wu, and J. Gan, "Line-based calibration of a micro-vision motion measurement system," Opt. Lasers Eng. 93, 40–46 (2017).

25. S. Baker, D. Scharstein, J. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," Int. J. Comput. Vis. 92(1), 1–31 (2011).
