In a previous paper [J. S. Dam et al., Opt. Express 15, 1923 (2007)] we demonstrated computerized “drag-and-drop” optical alignment of a counter-propagating multi-beam-based micromanipulation system. By incorporating image analysis, we here extend this work to a completely automated beam-alignment process. Additionally, to maintain a cost-effective and technically less demanding system architecture, we also report on a computer-guided manual alignment procedure. In the manual version, the computer analyzes the initial misalignment and calculates the required compensation for each mirror in the system. The user is then guided in adjusting the mirrors by exactly the requisite amount, so that each mirror needs to be moved only once. The image analysis utilized in both calibration schemes employs a fitting algorithm to determine the position of the beam center with sub-pixel accuracy, thereby providing “better than human” alignment.
© 2007 Optical Society of America
As put forward by Kraikivski et al. [1], alignment of various counter-propagating beam trapping systems [2–5] has been considered problematic. The challenge in having an array of such traps is that the individual beams must be aligned to high precision. A recent paper shows optical manipulation using four intersecting laser beams. Such a system will face calibration issues similar to the ones presented here, and the methods we describe may be applicable in some modified form.
In a previous paper [7], we showed computerized “drag-and-drop” calibration applicable to our counter-propagating multi-beam-based optical micromanipulation systems, producing an easy and fast calibration, although one still requiring some user involvement in aligning the beams. Here, the “drag-and-drop” calibration scheme has been further developed into a fully automated self-aligning system, as shown in the following section. In section 3 we describe a computer-guided calibration scheme that does not rely on expensive computer-controlled mirrors but works with manually operated mirror mounts, while still producing fast and highly accurate alignment.
The fully automated alignment requires an auto-focusing algorithm and image analysis to determine the positions of the beam centers. One approach is to rely on (2-D) ray-transfer-matrix analysis (as in the “drag-and-drop” calibration routine [7]) and perform a few iterations to eliminate residual errors. We will explain the single-stroke manual alignment by a simple 4-D vector analysis, and demonstrate a method that swiftly produces high-accuracy calibration.
2. Fully automated alignment including auto-focusing algorithm
Having a completely automated calibration routine opens the door to automated measurements on a plurality of samples. Combining an automatic sample-exchange mechanism, as an extension of what we have described earlier, with vision-based automated experiments [8] would allow us to examine a large number of samples without user involvement.
The first step in the calibration scheme described in Ref. [7] is to position the upper surface of the sample chamber at the focal plane of the upper objective lens. For the computer to determine when this has been achieved, a function that gives a measure of focus quality is needed. A simple method that proves sufficient for this purpose is to compute the standard deviation of the pixel intensities in the image: a defocused image is blurred over more pixels and thus yields a lower standard deviation. However, since we are using a coherent light source, we must carefully choose a light pattern that does not produce Fresnel-propagated self-images or self-image-like patterns, i.e. Talbot images [9], which might fool the algorithm. We have found that a random mesh of curved, thin lines is a good choice, since it fades away efficiently when defocused, as shown in Fig. 1.
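The focus metric itself is a one-liner; the following minimal sketch illustrates it (the function name `focus_quality` and the use of NumPy are our own choices, not part of the original system):

```python
import numpy as np

def focus_quality(image):
    """Standard deviation of pixel intensities as a focus metric.

    A defocused image spreads the light over more pixels, lowering the
    standard deviation; a sharp image of thin lines maximizes it.
    """
    return float(np.std(np.asarray(image, dtype=float)))
```

A sharp line pattern scores strictly higher than the same total intensity spread uniformly over the frame, which is what makes the metric usable as an objective for the focus sweep.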
By monitoring the standard deviation of pixel intensities as a focus-quality parameter while moving the sample stage through focus, we can determine the optimal stage z-position for a focused image. For time efficiency the process is divided into two steps: (1) a quick sweep of a large region to determine the approximate location of focus, and (2) a slower, smaller sweep to determine a more accurate focus position. In Fig. 2 we show the focus quality as a function of the axial coordinate of the sample.
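The two-step sweep can be sketched as below; `capture(z)` is a hypothetical placeholder for moving the stage to position z and returning the focus-quality value of the acquired image (the real system would wrap the stage controller and camera here):

```python
import numpy as np

def autofocus(capture, z_min, z_max, coarse_step, fine_step):
    """Two-step focus search: a quick coarse sweep over a large region,
    then a finer sweep around the best coarse position.

    capture(z) -- assumed callback: move stage to z, grab an image,
    and return its focus-quality value (e.g. intensity std. dev.).
    """
    # Step 1: coarse sweep to locate the approximate focus.
    coarse = np.arange(z_min, z_max, coarse_step)
    z0 = coarse[np.argmax([capture(z) for z in coarse])]
    # Step 2: fine sweep around the coarse optimum.
    fine = np.arange(z0 - coarse_step, z0 + coarse_step, fine_step)
    return fine[np.argmax([capture(z) for z in fine])]
```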
After focusing the upper beam, its profile is changed to a crosshair, and image analysis is used to determine the crosshair’s center position. The information gained replaces the mouse dragging described in our previous work [7]. To determine the coordinates of the crosshair’s center in our image, we sum the pixel intensities along horizontal and vertical lines, respectively, and apply a width-adaptive quadratic-fit peak-detection algorithm to find the crosshair image coordinates with sub-pixel accuracy, as illustrated in Fig. 3. The image analysis and mirror repositioning are iterated until the crosshair is centered.
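The projection-and-fit idea can be sketched as follows; this is a simplified illustration (fixed three-point quadratic fit rather than the paper's width-adaptive version, and the function names are our own):

```python
import numpy as np

def subpixel_peak(profile):
    """Sub-pixel peak position of a 1-D intensity profile via a
    quadratic fit through the brightest sample and its neighbors."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)  # peak at the edge: no neighbors to fit
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    # Vertex of the parabola through (-1, y0), (0, y1), (1, y2).
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

def crosshair_center(image):
    """Crosshair center: sum intensities along rows and columns,
    then locate each summed profile's peak with sub-pixel accuracy."""
    img = np.asarray(image, dtype=float)
    x = subpixel_peak(img.sum(axis=0))  # column sums -> x position
    y = subpixel_peak(img.sum(axis=1))  # row sums -> y position
    return x, y
```

Because each projection averages an entire line of the crosshair, the fit is robust to pixel noise, which is what pushes the localization below one pixel.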
For the defocused crosshair, we choose a pattern on the spatial light modulator that produces a suitable, easy-to-analyze Fresnel-propagated pattern. The accuracy obtained by image analysis is better than what is achieved by the human eye. Except for the image analysis replacing the “drag-and-drop” GUI tasks, the procedure works on exactly the same principle as described previously [7]. The resulting fully automated routine makes calibration much faster and allows sub-pixel accuracy through image analysis.
3. Computer-guided manual alignment
The fully automated alignment, as good as it is, comes with a steep price tag, as it requires eight high-precision actuators to adjust the mirrors. To make a less expensive but still easy-to-use calibration, we have designed a system in which the computer calculates how much each mirror must be adjusted, draws a target crosshair on the monitor, and then asks the user to adjust one mirror until the crosshairs overlap. Fundamentally, this method differs in that it is optimized to require as little user interaction as possible, and since the mirrors are no longer computerized, we cannot rely on the separation of tilt and image position (which requires moving two mirrors simultaneously) as in the previously described methods. To this end, we have devised a method that requires adjusting each mirror only once.
For the manual mirror alignment to work, we must have precise control of focus and defocus; the lower objective and sample z-stage therefore remain computer controlled, which also lets us reuse the auto-focusing algorithm described in the previous section. In the previous calibration schemes, we assumed that tilting a mirror only introduces tilt in the beam. However, a slight beam displacement also occurs, as sketched in Fig. 4. This displacement does not prevent the mouse-controlled or the fully automated methods from working. Nevertheless, when designing a method where each mirror should only be moved once, these coupled effects must be taken into account.
In this analysis the system is considered linear, which is a reasonable approximation for the small angles we are working with. In extreme cases (e.g. accidental large displacement of a knob), a slight tilt may remain after calibration (due to non-linearity), which can easily be removed by a second iteration of the calibration procedure.
Before the alignment can take place, the computer must know the effect of moving the individual mirror adjustment knobs (knob A and B on the adjustable mirror furthest from the sample, and knob C and D on the mirror closest to the sample). The computer learns this information once and for all by subtracting the image coordinates of the focused (x,y) and defocused (u,v) crosshairs before and after an arbitrary adjustment of a single knob [Eq. (1)]. This information, which can be thought of as a set of directional vectors, must be obtained for all knobs. Consequently, a GUI has been developed that automatically switches between focused and defocused crosshair positions and performs the image analysis.
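In the spirit of Eq. (1), each knob's calibration reduces to a coordinate difference; a minimal sketch (stacking the focused and defocused coordinates into one 4-vector, with names of our own choosing):

```python
import numpy as np

def knob_vector(before, after, delta):
    """Response vector of one knob: the change in the stacked
    (x, y, u, v) crosshair coordinates per unit knob turn.

    before, after -- 4-vectors (x, y, u, v): focused crosshair position
    (x, y) and defocused position (u, v), measured by image analysis
    before and after a known calibration turn of the knob.
    delta -- the size of that known turn.
    """
    return (np.asarray(after, float) - np.asarray(before, float)) / delta
```

Collecting the four resulting vectors (knobs A, B, C, D) as the columns of a 4x4 matrix gives the linear model used in the next step.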
Knowing the effect of adjusting each of the four available knobs, together with the alignment error we need to correct (obtained from image analysis), allows us to calculate how much each knob must be displaced in the manual alignment (or rather, the crosshair displacements to be obtained from each knob). Determining these quantities requires solving a 4-D linear equation [Eq. (2)], whose right-hand side is the required displacement of the focused and defocused crosshairs for the beam to be aligned.
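The role of Eq. (2) can be illustrated as a standard 4x4 linear solve (a sketch under our assumed matrix convention: columns of M are the four knob response vectors):

```python
import numpy as np

def knob_corrections(M, target):
    """Solve M @ [a, b, c, d] = target for the four knob displacements.

    M      -- 4x4 matrix whose columns are the knob response vectors
              (the change in stacked (x, y, u, v) coordinates per unit
              turn of knobs A, B, C and D, respectively).
    target -- required displacement of the focused and defocused
              crosshairs (x, y, u, v) to bring the beam into alignment.
    """
    return np.linalg.solve(M, np.asarray(target, float))
```

Since the response vectors of the two mirrors are linearly independent for small angles, M is invertible and the solution is unique, which is what makes a single adjustment of each knob sufficient.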
Knob A and knob B (on the mirror furthest from the sample) must be displaced by the amounts a and b, respectively. This will displace the defocused image by the amount calculated in Eq. (3).
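The quantity Eq. (3) computes can be sketched as follows (continuing the stacked (x, y, u, v) convention and hypothetical names from the sketches above):

```python
import numpy as np

def defocused_target(v_a, v_b, a, b, start_uv):
    """Predicted defocused-crosshair position after knobs A and B are
    turned by a and b: the (u, v) components of a*v_A + b*v_B added to
    the starting defocused position -- the role played by Eq. (3).

    v_a, v_b -- 4-vector response vectors (x, y, u, v) of knobs A and B.
    start_uv -- defocused crosshair position (u, v) before adjustment.
    """
    shift = a * np.asarray(v_a, float) + b * np.asarray(v_b, float)
    return np.asarray(start_uv, float) + shift[2:]  # keep (u, v) part
```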
To guide the user to this exact displacement (calculated from the starting position of the defocused crosshair), a target crosshair is displayed at this location on the monitor. The user must then adjust knob A and knob B until the crosshairs overlap.
Lastly, the second mirror must be adjusted. The sample is first moved into focus, and a green crosshair is displayed at the center of the screen; the user then adjusts knob C and knob D until the crosshairs overlap. This results in the desired beam alignment. The entire procedure is then repeated for the beam coming from below (the only difference being that the lower objective, rather than the sample, is moved to adjust focus). The unique solution of Eq. (2) furthermore ensures stability against successive alignments, as the final position of the knobs is independent of the initial misalignment.
The visual result of this procedure can be observed in Fig. 6 and in the online video.
The computer-guided manual alignment allows an unskilled user to calibrate the system much faster than an experienced user without computer-guidance.
We have shown how image analysis can be used to fully automate calibration of a motorized counter-propagating multi-beam based optical micromanipulation system. The same image analysis has been implemented in a setup without motorized mirrors resulting in a computer-guided manual alignment, where all mirrors only need to be adjusted once by the user. For the guided alignment, we do need absolute control over focus, so the sample z-stage and lower objective must be motorized. The guided manual alignment of the upper and lower beam sets can be completed in about 2 minutes.
We gratefully acknowledge support from the EU-FP6-NEST program (ATOM3D), the ESF-Eurocores-SONS program (SPANAS), and the Danish Technical Scientific Research Council (FTP).
References and links
1. P. Kraikivski, B. Pouligny, and R. Dimova, “Implementing both short- and long-working-distance optical trappings into a commercial microscope,” Rev. Sci. Instrum. 77, 113703 (2006). [CrossRef]
2. S. A. Tatarkova, A. E. Carruthers, and K. Dholakia, “One-Dimensional Optically Bound Arrays of Microscopic Particles,” Phys. Rev. Lett. 89, 283901 (2002). [CrossRef]
4. A. Isomura, N. Magome, M. I. Kohira, and K. Yoshikawa, “Toward the stable optical trapping of a droplet with counter laser beams under microgravity,” Chem. Phys. Lett. 429, 321–325 (2006). [CrossRef]
5. P. J. Rodrigo, V. R. Daria, and J. Glückstad, “Four-dimensional optical manipulation of colloidal particles,” Appl. Phys. Lett. 86, 074103 (2005). [CrossRef]
7. J. S. Dam, P. J. Rodrigo, I. R. Perch-Nielsen, C. A. Alonzo, and J. Glückstad, “Computerized ‘drag-and-drop’ alignment of GPC-based optical micromanipulation system,” Opt. Express 15, 1923–1931 (2007). [CrossRef] [PubMed]
8. I. R. Perch-Nielsen, P. J. Rodrigo, C. A. Alonzo, and J. Glückstad, “Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment,” Opt. Express 14, 12199–12205 (2006). [CrossRef] [PubMed]
9. E. Hecht, Optics (Addison-Wesley, 2002).