In this work, we present a method providing real-time, low-cost, three-dimensional imaging in a three-dimensional optical micromanipulation system. The imaging system is based on a small-form-factor, LED-based projector, which is used to dynamically shape the rear-illumination light in a counter-propagating-beam trapping setup. This allows us to produce stereoscopic images, from which the human brain can construct a three-dimensional image; alternatively, computer image analysis can be applied to obtain true three-dimensional coordinates of the trapped objects in real time.
©2008 Optical Society of America
Optical trapping and manipulation of many particles in three dimensions (3D) has been demonstrated using different principles, such as holographic trapping and trapping based on a counter-propagating geometry [2–4]. Imaging and tracking the particles becomes a concern, as the axial displacements from 3D manipulation typically result in a defocused image.
Previously, 3D imaging in 3D optical trapping systems has been done by holographic microscopy, and while it does give accurate 3D information, it requires complicated calculations, produces ghost images, and cannot be considered visually appealing. 3D information can also be extracted by observing the objects from the side with another camera. This is complicated, as one needs a sample chamber with an optical-quality window on the side, and one needs to align the side-view camera to focus on the intended location. Also, figuring out which objects are the same in the two views is far from trivial if more than a few objects are present in a scene.
In this work we show how the use of a color camera and illumination from a small-form-factor, LED-based projector can be used to construct real-time stereoscopic images of particles that are optically manipulated in 3D in a system based on counter-propagating beam sets. We also describe a time-multiplexing method to construct identical 3D images without the need for a color camera. Furthermore, we show how the projector can be used to make “rotating” illumination, bringing 3D videos to life.
2. Experimental setup
The laser shaping system and trapping module are essentially the same as in previous experimental setups [7, 8]. By exchanging the standard white light source for rear illumination with a small-form-factor, LED-based pocket projector (Samsung SP-P310ME), we gain the ability to create custom-shaped illumination patterns [9, 10]. These patterns can consist of different colors and can even change dynamically in time, as we will show later. The image produced by the projector is relayed by a 4-f lens system to the back focal plane of one of the trapping objectives, as illustrated in Fig. 1.
The diffusing filter in Fig. 1 has been inserted in an intermediate image plane (of the projector image being relayed to the back focal plane of the microscope objective) close to the projector. The reason is that the DLP projector emits light with a diffraction pattern: the light from the active area is not emitted homogeneously over spatial angles but contains a higher-frequency diffraction pattern. This has no implications for regular use of projectors, as the imaging is from the object plane (the DLP chip) to an image plane (the screen). For our use, where we display the image at the back focal plane of the microscope objective, the spread in angles would lead to darker and lighter regions in the illuminated area. This homogeneity problem is completely solved by the diffusing filter, as shown in Fig. 1.
The 3D optical trapping occurs between the focal planes of the upper and lower objectives, and the z-position can be varied in this region. This means that, by default, the trapped particles are significantly out of focus. To make sharp images of objects positioned in these locations, we need to shift the plane being imaged on the CCD without altering the trapping laser beam sets. To this end, we have inserted a wheel with lenses of varying focal lengths as close as possible after the dichroic mirror separating the trapping wavelength from the imaging wavelengths. In the following we refer to the lenses used not by their focal lengths but by their diopter (the inverse of the focal length measured in meters). Diopters are additive for lenses placed close to each other, allowing us to use two filter wheels placed close together to obtain more possible diopters with a small number of lenses. When aligning the trapping system, the procedure [8, 12] is carried out without any of these correctional optics in the optical path. To minimize the change of magnification when inserting corrective optics, they should be placed as close as possible to the objective. In the table in Fig. 2, we have calculated how the insertion of correction optics displaces the observed plane, together with the corresponding change in magnification. The calculation is done using standard ray transfer matrices.
In this ray-transfer-matrix calculation, f1 is the focal length of the Olympus tube lens forming the image on the CCD (180 mm), f2 is the focal length of the 50X Olympus microscope objective (3.6 mm), d1 is the optical distance from the microscope objective to FW1 (80 mm), d2 is the distance from FW1 to FW2 (46 mm), fcor is the focal length of the lens in FW1, fcor2 is the focal length of the lens in FW2, and z is the obtained shift of focus (in vacuum).
Solving the imaging condition for z gives the attained focus shift, and the lateral magnification follows from the same system matrix. The focus shift must finally be multiplied by the refractive index of the sample medium (usually water) to obtain the shift of focus in the medium.
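A minimal sketch of this calculation could look as follows. It assumes the CCD sits in the back focal plane of the tube lens, so the plane imaged on the CCD is the one whose rays leave the filter wheels collimated (the D-element of the system ABCD matrix vanishes); the focal lengths and distances are the ones listed above, while the function names are merely illustrative.

```python
import numpy as np

F1, F2 = 180.0, 3.6   # tube lens and 50X objective focal lengths (mm)
D1, D2 = 80.0, 46.0   # objective -> FW1 and FW1 -> FW2 distances (mm)

def lens(f):
    """Thin-lens ray-transfer matrix (f = inf models an empty wheel slot)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gap(d):
    """Free-space propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def system(z, fcor=np.inf, fcor2=np.inf):
    """Trace from an object plane displaced z from the objective's front
    focal plane, through the objective and both filter wheels."""
    return lens(fcor2) @ gap(D2) @ lens(fcor) @ gap(D1) @ lens(F2) @ gap(F2 + z)

def focus_shift(fcor=np.inf, fcor2=np.inf):
    """The D-element is linear in z, so D(z*) = 0 is solved directly.
    Returns the shift z* of the observed plane (mm, in vacuum) and the
    lateral magnification F1*|C|."""
    d0 = system(0.0, fcor, fcor2)[1, 1]
    d1 = system(1.0, fcor, fcor2)[1, 1]
    z = -d0 / (d1 - d0)
    mag = F1 * abs(system(z, fcor, fcor2)[1, 0])
    return z, mag

z, mag = focus_shift()              # no corrective lenses: z = 0, mag = 50X
z1, mag1 = focus_shift(fcor=1000.0) # +1 diopter lens in FW1
```

Multiplying the returned z by the refractive index of the sample medium (about 1.33 for water) then gives the focus shift inside the chamber.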
The drawing and tabulated calculations in Fig. 2 show how we can shift the observed plane away from the focal plane of the objective by inserting corrective optics. This can be used as a method to find the approximate z-location of a trapped particle: by finding the lens combination where the object is best in focus, the location can be read from the table. The possible focus planes are spaced more closely than the typical dimensions of the objects being manipulated, meaning that it is always possible to achieve a focused (or at least very nearly focused) image. Simultaneously determining the z-positions of multiple particles can be done in a different way, leading us to the topic of the next section.
3. Stereoscopic imaging
As depicted in Fig. 1, we use a projector as the source of the rear illumination. The image from the projector is relayed to the back focal plane of the lower microscope objective. By changing the image displayed on the projector, the illumination pattern changes accordingly.
In Fig. 3 we show a sketch of a situation with a particle placed away from the image plane, and how we can determine the z-position of the particle by use of custom-made illumination and a color camera.
Figure 3 shows how a particle displaced from the focus plane casts displaced blue and red shadows in the focal plane. Because the observation plane is relayed, the method works for particles placed both before and after the focal plane; in the example above, the blue dot (blue because red is blocked) would be below the red dot if the particle were placed after the observation plane. Furthermore, from the angles of the blue and red illuminations and the distance between the shadows, one can easily deduce the z-position of the imaged particle. The x-y coordinates are deduced as the midpoint of the blue and red shadows.
To demonstrate the 3D imaging capabilities of this dual-color illumination system, we show an experiment where some polystyrene beads of different sizes are manipulated and placed at different z-positions. These images are shown in Fig. 4.
From the image analysis in Fig. 4, the 3D coordinates of the particles can be extracted. The x-y coordinates are easily found as the average of the x-y coordinates of the corresponding blue and red shadows. From the illustration in Fig. 3, the calculation of the z coordinate is trivial once we know the angle θ of the red and blue illuminations (determined by the illumination pattern, the focal length of the objective lens, and the refractive index of water) and the distance between the center coordinates of the blue and red shadows, xblue and xred.
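A minimal sketch of this calculation (the function name and sign convention are illustrative; θ is the angle each colored beam makes with the optical axis inside the medium):

```python
import math

def particle_3d(x_blue, y_blue, x_red, y_red, theta):
    """Recover 3D coordinates from the centers of a particle's blue and
    red shadows. theta is the angle (radians) each colored beam makes
    with the optical axis inside the sample medium."""
    x = (x_blue + x_red) / 2.0   # lateral position: midpoint of the shadows
    y = (y_blue + y_red) / 2.0
    # Axial displacement from the shadow separation; the sign convention
    # depends on which side the red illumination arrives from.
    z = (x_red - x_blue) / (2.0 * math.tan(theta))
    return x, y, z
```

With this convention a particle in the observation plane casts coincident shadows (z = 0), and the sign of z tells whether the particle sits before or after that plane.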
As a result of the image analysis, we can calculate the z positions of all particles; the large out-of-focus particle, for example, is 11 µm below the observation plane. Since θ is constant throughout the viewing area, the relation between the separation of the colored spots and the z position is independent of position. The precision of the calculated z-coordinate depends on how accurately the shadows’ locations can be identified. Under good conditions the z-coordinate can be found with about 1 µm accuracy or better.
For more complex scenes with more densely packed particles, problems may arise when trying to match the shadows pairwise. This identification ambiguity can be prevented by adding a centered green dot to the illumination pattern, resulting in a green shadow with the correct x-y coordinates and blue and red shadows at equal distances to the left and right of this point. Such three-colored images are best presented in their negative forms after image analysis, like the lower row in Fig. 4. The stereoscopic images can also be viewed with red/blue stereoscopic goggles, where the brain will interpret the scene as a 3D structure. To help the brain decode the scene, the processed image can be modified to show displaced, shape-identical red/blue particle pairs.
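A pairing step along these lines can be sketched as follows (illustrative code with hypothetical pixel coordinates; a real implementation would first locate the shadow centers, e.g. by blob detection in each color plane):

```python
def match_triplets(red, green, blue, tol=1.0):
    """Pair each green (center) shadow with the red and blue shadows that
    sit at mirrored offsets on either side of it. Each input is a list of
    (x, y) centers in pixels; tol is the matching tolerance."""
    triplets = []
    for gx, gy in green:
        for rx, ry in red:
            for bx, by in blue:
                same_row = abs(ry - gy) < tol and abs(by - gy) < tol
                # red and blue offsets from the green shadow must cancel
                mirrored = abs((rx - gx) + (bx - gx)) < tol
                if same_row and mirrored and rx != bx:
                    triplets.append(((bx, by), (gx, gy), (rx, ry)))
    return triplets
```

The green shadow in each matched triplet directly gives the particle's x-y position, while the red/blue separation gives z as before.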
4. Advanced illumination
The projector can also be used to make illumination with a continuously varying angle of illumination. This allows us to make 3D-like videos in which the point of observation appears to move in, e.g., a circular motion, while in fact nothing in the system moves; only the angle of illumination is varied. A few frames from such a video are presented in Fig. 5. These frames can also be considered an example of time-multiplexed illumination.
The rotation-like video from which these frames are taken is well suited for presenting 3D structures and arrangements.
Since the color camera has good separation between the red and blue color planes, they can be used individually for dual illumination, of which the color-based stereoscopic imaging described in the previous section is only one of many possibilities. Another possibility is to use the blue light to create a high-resolution, low-depth-of-field imaging plane to obtain the best possible images of focused particles, while at the same time using the red light for imaging with a higher depth of field, allowing us to simultaneously observe particles well out of focus. This could even be combined with time multiplexing of different patterns, giving stereoscopic imaging, high x-y resolution imaging, and high-depth-of-field imaging, all in the same recording. For such a time-multiplexed video, image analysis, along with adjustment of the aperture shown in Fig. 1, can be used to determine which illumination pattern generated a particular video frame. In Fig. 6, we show how a blue/red stereo illumination like the one used in Fig. 4, as well as a high-NA white light illumination, are affected by the out-of-focus aperture.
As seen in Fig. 6, we can use the image of the aperture to determine what the illumination source looks like for a given frame. This opens the door to time multiplexing of many different illumination patterns in a single recording, which can be split into segments at a later time. This can, e.g., be used to provide automatic stereoscopic functionality with a grayscale camera, where the frames are colored according to the position of the aperture.
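One simple way to sketch this frame tagging is to compare the mean color channels inside the region where the out-of-focus aperture is imaged (the thresholds and labels below are illustrative, not values from our experiment):

```python
import numpy as np

def classify_frame(frame, aperture_mask):
    """Tag a video frame by the illumination that produced it, judged from
    the mean color inside the imaged aperture. frame is an (H, W, 3) RGB
    array; aperture_mask is a boolean (H, W) array marking the aperture."""
    r, g, b = frame[aperture_mask].mean(axis=0)   # mean R, G, B in aperture
    if min(r, g, b) > 0.8 * max(r, g, b):
        return "white"              # all channels comparable: white light
    if g < 0.5 * max(r, b):
        return "red/blue stereo"    # green suppressed: dual-color pattern
    return "unknown"
```

Once each frame is tagged, the recording can be split into one video stream per illumination pattern for separate analysis.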
In this work, we have shown how replacing an ordinary white light source with a projector provides a low-cost and flexible illumination source capable of advanced and even time-multiplexed imaging. This can be used to make stereoscopic images, or even 3D visualizations when combined with 3D presentation equipment such as virtual-reality glasses, where the two parts of the stereoscopic image are displayed directly to each eye. The stereoscopic imaging comes virtually calibration-free, unlike 3D observation where the same scene is observed with two cameras from two different angles. We have also demonstrated how a set of corrective lenses can be used to shift the plane of observation away from the focal plane of the objective in small steps, allowing us to shift the observation plane in a controlled manner without affecting the counter-propagating trapping beam sets.
We gratefully acknowledge support from the Danish Technical Scientific Research Council (FTP).
2. A. Ashkin, “Acceleration and trapping of particles by radiation pressure,” Phys. Rev. Lett. 24, 156 (1970). [CrossRef]
4. A. Isomura, N. Magome, M. I. Kohira, and K. Yoshikawa, “Toward the stable optical trapping of a droplet with counter laser beams under microgravity,” Chem. Phys. Lett. 429, 321–325 (2006). [CrossRef]
6. I. Perch-Nielsen, P. Rodrigo, and J. Glückstad, “Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes,” Opt. Express 13, 2852–2857 (2005). [CrossRef] [PubMed]
7. P. J. Rodrigo, I. R. Perch-Nielsen, C. A. Alonzo, and J. Glückstad, “GPC-based optical micromanipulation in 3D real-time using a single spatial light modulator,” Opt. Express 14, 13107–13112 (2006). [CrossRef] [PubMed]
8. J. S. Dam, P. J. Rodrigo, I. R. Perch-Nielsen, and J. Glückstad, “Fully automated beam-alignment and single stroke guided manual alignment of counter-propagating multi-beam based optical micromanipulation systems,” Opt. Express 15, 7968–7973 (2007). [CrossRef] [PubMed]
10. E. C. Samson and C. M. Blanca, “Dynamic contrast enhancement in widefield microscopy using projector-generated illumination patterns,” New J. Phys. 9, 363 (2007). [CrossRef]
12. J. S. Dam, P. J. Rodrigo, I. R. Perch-Nielsen, C. A. Alonzo, and J. Glückstad, “Computerized “drag-and-drop” alignment of GPC-based optical micromanipulation system,” Opt. Express 15, 1923–1931 (2007). [CrossRef] [PubMed]
13. K. Burton, “An aperture-shifting light-microscopic method for rapidly quantifying positions of cells in 3D matrices,” Cytometry Part A 54, 125–131 (2003). [CrossRef]