Three-dimensional imaging in three-dimensional optical multi-beam micromanipulation

Open Access

Abstract

In this work, we present a method providing real-time, low-cost, three-dimensional imaging in a three-dimensional optical micromanipulation system. The three-dimensional imaging system is based on a small form-factor, LED-based projector, which is used to dynamically shape the rear illumination light in a counter-propagating beam-trapping setup. This allows us to produce stereoscopic images from which the human brain can construct a three-dimensional image; alternatively, computer image analysis can be applied to obtain true three-dimensional coordinates of the trapped objects in real time.

©2008 Optical Society of America

1. Introduction

Optical trapping and manipulation of many particles in three dimensions (3D) has been demonstrated using different principles such as holographic trapping [1] and trapping based on a counter-propagating geometry [2–4]. Imaging and tracking the particles becomes a concern, as the axial displacements arising from 3D manipulation typically result in defocused images.

Previously, 3D imaging in 3D optical trapping systems has been done by holographic microscopy [5]; while this does give accurate 3D information, it requires complicated calculations, produces ghost images, and cannot be considered visually appealing. 3D information can also be extracted by observing the objects from the side with another camera [6]. This is complicated, as one needs a sample chamber with an optical-quality window on the side, and the side-view camera must be aligned to focus on the intended location. Also, figuring out which objects are the same in the two views is far from trivial if more than a few objects are present in a scene.

In this work we show how a color camera and illumination from a small form-factor, LED-based projector can be used to construct real-time, stereoscopic images of particles that are optically manipulated in 3D in a system based on counter-propagating beam sets. We also describe a time-multiplexing method to construct identical 3D images without the need for a color camera. Furthermore, we show how the projector can be used to make “rotating” illumination, bringing 3D videos to life.

2. Experimental setup

The laser shaping system and trapping module are essentially the same as in previous experimental setups [7, 8]. By exchanging the standard white light source used for rear illumination with a small form-factor, LED-based pocket projector (Samsung SP-P310ME), we gain the ability to create custom-shaped illumination patterns [9, 10]. These patterns can consist of different colors and can even change dynamically in time, as we will show later. The image produced by the projector is relayed by a 4-f lens system to the back focal plane of one of the trapping objectives, as illustrated in Fig. 1.

Fig. 1. The output from the projector is collected by a 25 mm focal length lens (L1), which forms a focused image on the diffuser. This image (displayed by the projector) is relayed by a 4-f system to the back focal plane of the lower objective. L2 has a focal length of 150 mm and L3 125 mm. An aperture is placed approximately 2-f from the diffuser. The objectives are both long working distance 50× Olympus PlanIR objectives with a numerical aperture of 0.55. L4 is an Olympus microscope lens with a focal length of 180 mm. The green line symbolizes the counter-propagating trapping laser beam set being reflected in dichroic mirrors (dashed lines). The lower image to the left shows how an empty scene looks without the diffusive filter in place. Notice the unevenness of the illumination. After insertion of a diffusive filter, the homogeneity problem is completely solved as shown in the right image. Not only does it make for nicer images, it also simplifies any subsequent image analysis greatly.

The diffusing filter in Fig. 1 has been inserted in an intermediate image plane (of the projector image being relayed to the back focal plane of the microscope objective) close to the projector. The reason for this is that the DLP projector emits light with a diffraction pattern: the light from the active area is not emitted homogeneously over spatial angles, but contains a higher-frequency diffraction pattern. This has no implications for regular use of projectors, since the imaging is from the object plane (the DLP chip) to an image plane (the screen). For our use, where we display the image at the back focal plane of the microscope objective, the spread in angles leads to darker and lighter regions in the illuminated area. This homogeneity problem is completely solved by the diffusing filter, as shown in Fig. 1.

The 3D optical trapping occurs between the focal planes of the upper and lower objectives, and the z-position can be varied within this region [11]. This means that, by default, the trapped particles are significantly out of focus. To make sharp images of objects at these locations, we need to shift the plane being imaged onto the CCD without altering the trapping laser beam sets. To this end, we have inserted a wheel with lenses of varying focal lengths as close as possible after the dichroic mirror separating the trapping wavelength from the imaging wavelengths. In the following we refer to the lenses used not by their focal lengths but by their diopter (the inverse of the focal length measured in meters). Diopters are additive for lenses placed close to each other, allowing us to use two filter wheels placed close together to obtain more possible diopters from a small number of lenses. When aligning the trapping system, the procedure [8, 12] is carried out without any of these correctional optics in the optical path. To minimize the change of magnification when inserting corrective optics, they should be placed as close as possible to the objective. In the table in Fig. 2, we have calculated how the insertion of correction optics displaces the observed plane and the corresponding change in magnification. The calculation is done using standard ray transfer matrices:

\[
\begin{bmatrix}1 & f_1\\ 0 & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & 0\\ -\frac{1}{f_1} & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & f_1+f_2-d_1-d_2\\ 0 & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & 0\\ -\frac{1}{f_{\mathrm{cor2}}} & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & d_2\\ 0 & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & 0\\ -\frac{1}{f_{\mathrm{cor}}} & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & d_1\\ 0 & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & 0\\ -\frac{1}{f_2} & 1\end{bmatrix}\cdot
\begin{bmatrix}1 & f_2+z\\ 0 & 1\end{bmatrix}\cdot
\begin{pmatrix}0\\ \theta\end{pmatrix}
=\begin{pmatrix}x\\ y\end{pmatrix}
\]

In the equation above, f1 is the focal length of the Olympus lens forming the image on the CCD (180 mm), and f2 is the focal length of the 50× Olympus microscope objective (3.6 mm). d1 is the optical distance from the microscope objective to FW1 (80 mm), and d2 is the distance from FW1 to FW2 (46 mm). fcor is the focal length of the lens situated in FW1, and fcor2 is the focal length of the lens situated in FW2. z is the obtained shift of focus (in vacuum).

Solving the equation for z while setting x=0 gives the attained focus shift, and the magnification can be calculated as θ/y. The focus shift must finally be multiplied by the index of refraction of the sample medium (usually water) to obtain the shift of focus in the medium.
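
As an illustration, the calculation can also be carried out numerically rather than symbolically. The following is a minimal sketch (not the code used in this work) that builds the ray-transfer-matrix chain above with the values quoted in the text, solves x = 0 for z, and returns the magnification θ/y; the function and parameter names, and the use of an infinite focal length for an empty filter-wheel slot, are our own choices.

```python
import numpy as np

def prop(d):
    """Ray-transfer matrix for free-space propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):
    """Ray-transfer matrix for a thin lens of focal length f
    (use f = np.inf for an empty filter-wheel slot)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def focus_shift(f_cor=np.inf, f_cor2=np.inf,
                f1=0.180, f2=0.0036, d1=0.080, d2=0.046, n_medium=1.33):
    """Solve x(z) = 0 for the matrix chain above (all lengths in metres).

    Returns the focus shift in the sample medium and the magnification
    theta / y.  The ordering of the matrices follows our reading of the
    equation above.
    """
    def output_ray(z, theta=1.0):
        M = (prop(f1) @ lens(f1) @ prop(f1 + f2 - d1 - d2) @ lens(f_cor2)
             @ prop(d2) @ lens(f_cor) @ prop(d1) @ lens(f2) @ prop(f2 + z))
        return M @ np.array([0.0, theta])

    # x depends linearly on z, so two evaluations pin down the root of x(z) = 0.
    x0, x1 = output_ray(0.0)[0], output_ray(1e-6)[0]
    z = -x0 * 1e-6 / (x1 - x0)
    magnification = 1.0 / output_ray(z)[1]          # theta / y with theta = 1
    return n_medium * z, magnification
```

With both filter-wheel slots empty this returns a focus shift of zero and a magnification of magnitude 50, as expected for the 50× objective; dividing by this uncorrected value gives the relative magnification change of the kind listed in the Fig. 2 table.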

Fig. 2. The sketch on the left shows the principle of how corrective lenses can be used to displace the observation plane of the microscope objectives. CCD indicates a color camera; FW1 and FW2 are filter wheels with a selection of corrective lenses to insert into the optical path. DM indicates the dichroic mirrors, which reflect the laser light while allowing visible light to pass. FOC indicates the focal plane of the upper objective. The red line shows the uncorrected image path, and the black line shows how the corrective lenses displace the observed plane. The green lines are the trapping laser beam sets. The table on the right shows the focus displacement and relative magnification change as a function of the lenses inserted in the optical path.

The drawing and tabulated calculations in Fig. 2 show how we can shift the observed plane away from the focal plane of the objective by inserting corrective optics. This can be used as a method to find the approximate z-location of a trapped particle: by finding the lens combination for which the object is best in focus, the location can be read from the table in Fig. 2 (a small sketch of this procedure is given below). The possible focus planes are spaced more closely than the typical dimensions of the objects being manipulated, meaning that it is always possible to achieve a focused (or at least very nearly focused) image. Simultaneously determining the z-positions of multiple particles can be done in a different way, leading us to the topic of the next section.
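
A minimal sketch of such a best-focus lookup is shown here, assuming a simple gradient-variance sharpness metric; the diopter-to-focus-shift entries are placeholders standing in for the values tabulated in Fig. 2, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical lookup from the diopter of the inserted corrective lens to the
# resulting focus shift in micrometres; the real values are those in Fig. 2.
FOCUS_SHIFT_UM = {0.0: 0.0, 1.0: -4.0, 2.0: -8.0, 3.0: -12.0}   # placeholders

def sharpness(image):
    """Simple focus metric: variance of the gradient magnitude of the image."""
    gy, gx = np.gradient(image.astype(float))
    return np.var(np.hypot(gx, gy))

def approximate_z(images_by_diopter):
    """Return the tabulated focus shift for the corrective-lens setting that
    gives the sharpest image of the particle of interest.

    images_by_diopter maps the diopter of the inserted lens to a cropped
    grayscale image of the particle recorded with that lens in place.
    """
    best = max(images_by_diopter, key=lambda d: sharpness(images_by_diopter[d]))
    return FOCUS_SHIFT_UM[best]
```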

3. Stereoscopic imaging

As depicted in Fig. 1, we use a projector as the source for the rear illumination. The image from the projector is relayed to the back focal plane of the lower microscope objective, so by changing the image displayed on the projector, the illumination pattern changes accordingly [13].

In Fig. 3 we show a sketch of a situation with a particle placed away from the image plane, and how we can determine the z-position of the particle by use of custom made illumination and a color camera.

Fig. 3. The projector is displaying a red and a blue circle (A) at the back focal plane of the lower microscope objective. The dashed circle shows the back aperture of the objective, and the blue and red dots are the image displayed there. This custom shaped illumination makes the particle (which in this example is placed before the observation plane) give two displaced shadows (C) in the observed plane.

Figure 3 shows how a particle displaced from the focus plane produces displaced blue and red shadows in the focal plane. Because the observation plane is relayed, the displacement works both for particles placed before and after the focal plane. In the example above, the blue dot (blue since red is blocked) would be below the red dot if the particle were placed after the observation plane. Furthermore, from the angles of the blue and red illuminations and the distance between the shadows, one can easily deduce the z-position of the imaged particle. The x-y coordinates are deduced as the midpoint of the blue and red shadows.

To demonstrate the 3D imaging capabilities of this dual-color illumination system, we show an experiment where some polystyrene beads of different sizes are manipulated and placed at different z-positions. These images are shown in Fig. 4.

Fig. 4. The left image shows particles of varying sizes (diameters 2 µm, 4.5 µm and 10 µm) being trapped at different z-locations. The dashed insets show the illumination pattern displayed in the back focal plane. If a set of red/blue 3D goggles is worn, the image can be interpreted by the brain as a 3D structure. The right image is the same image after image analysis (thresholding) has been applied. The analyzed image has negative colors, i.e. a blue dot where the blue light is missing (where the left image shows red).

From the image analysis in Fig. 4, the 3D coordinates of the particles can be extracted. The x-y coordinates are easily found as the average of the x-y coordinates of the corresponding blue and red shadows. From the illustration in Fig. 3, the calculation of the z coordinate is straightforward when we know the angle θ of the red and blue illuminations (known from the illumination pattern, the focal length of the objective lens, and the refractive index of water) and the distance between the center coordinates of the blue and red shadows, x_blue and x_red:

\[
z = \frac{x_{\mathrm{blue}} - x_{\mathrm{red}}}{2\tan\theta}
\]

As a result of the image analysis, we can calculate the z positions of all particles; the large out-of-focus particle, for example, is 11 µm below the observation plane. Since θ is constant throughout the field of view, the relation between the separation of the colored spots and the z position does not depend on the lateral position. The precision of the calculated z-coordinate depends on how accurately the shadow locations can be identified; under good conditions the z-coordinate can be found with an accuracy of about 1 µm or better.
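
In code, this amounts to a one-line computation per particle. The sketch below assumes the shadow centroids have already been extracted by the thresholding step of Fig. 4; the function names and units are illustrative only.

```python
import numpy as np

def particle_z(x_blue, x_red, theta):
    """z position from the lateral separation of the blue and red shadows,
    z = (x_blue - x_red) / (2 tan(theta)), with theta the illumination angle
    inside the sample medium (radians) and coordinates in micrometres."""
    return (x_blue - x_red) / (2.0 * np.tan(theta))

def particle_xy(p_blue, p_red):
    """Lateral position as the midpoint of the blue and red shadow centroids."""
    return 0.5 * (np.asarray(p_blue, float) + np.asarray(p_red, float))
```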

For more complex scenes with densely packed particles, problems may arise when trying to match the shadows pairwise. This identification ambiguity can be prevented by adding a centered green dot to the illumination pattern, resulting in a green shadow at the correct x-y coordinates and blue and red shadows at equal distances to the left and right of this point. Such three-colored images are best presented in their negative forms after image analysis, like the lower row in Fig. 4. The stereoscopic images can also be viewed with red/blue stereoscopic goggles, with which the brain will interpret the scene as a 3D structure. To help the brain decode the scene, the image-processed scene can be modified to show displaced, shape-identical red/blue particle pairs.
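
One possible (hypothetical) pairing strategy based on the green reference shadow is sketched below: for each green centroid, the nearest blue and nearest red centroids are accepted as a pair only if they lie on opposite sides of, and roughly equidistant from, the green one.

```python
import numpy as np

def pair_shadows(greens, blues, reds, tol=2.0):
    """Match blue/red shadow pairs via the centred green shadow.

    greens, blues, reds are lists of (x, y) centroid coordinates in pixels.
    For every green centroid the nearest blue and nearest red centroids are
    taken; the triple is accepted only if the blue and red shadows lie on
    opposite sides of, and roughly equidistant (within tol pixels) from,
    the green one.
    """
    pairs = []
    for g in (np.asarray(p, float) for p in greens):
        b = min((np.asarray(p, float) for p in blues),
                key=lambda p: np.hypot(*(p - g)))
        r = min((np.asarray(p, float) for p in reds),
                key=lambda p: np.hypot(*(p - g)))
        db, dr = b - g, r - g
        if np.dot(db, dr) < 0 and abs(np.hypot(*db) - np.hypot(*dr)) < tol:
            pairs.append((b, g, r))
    return pairs
```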

4. Advanced illumination

The projector can also be used to produce illumination with a continuously varying angle of illumination. This allows us to make 3D-like videos in which the point of observation appears to move in, e.g., a circular motion, while in fact nothing in the system moves; only the angle of illumination is varied. A few frames from such a video are presented in Fig. 5. These frames can also be considered an example of time-multiplexed illumination.

Fig. 5. (AVI: 1.0 MB) Four frames from a movie showing how rotating illumination can give a 3D impression of the scene. The effect is similar to watching a 3D object with one eye closed while moving one's head in circles to gain 3D information. Note that nothing in the scene is moving; only the illumination pattern changes. Frames with different illumination patterns can be combined to produce color-coded stereoscopic images. The red/blue image is a combination of the left-illuminated frame (colored red) and the right-illuminated frame (colored blue). [GIF: contrast enhanced version] [Media 1][Media 2]

The rotation-like video, from which the frames above are taken, is well suited for presenting 3D structures and arrangements.

Since the color camera has good separation between the red and blue color planes, these planes can be used individually for dual illumination, of which the color-based stereoscopic imaging described in the previous section is only one of many possibilities. Another possibility is to use the blue light to create a high-resolution, low-depth-of-field imaging plane to obtain the best possible images of focused particles, while simultaneously using the red light for higher-depth-of-field imaging, which allows us to observe particles well out of focus. This could even be combined with time multiplexing of different patterns, giving stereoscopic imaging, high x-y resolution imaging, and high-depth-of-field imaging, all in the same recording. For such a time-multiplexed video, image analysis, together with the aperture shown in Fig. 1, can be used to determine which illumination pattern was used to generate a particular video frame. In Fig. 6, we show how a blue/red stereo illumination like the one used in Fig. 4, as well as a high-NA white light illumination, are affected by the out-of-focus aperture.

Fig. 6. The images show the effect of an aperture (placed in the white light module as shown in Fig. 1) imaged out of focus under different illuminations. The left image shows how the imaged aperture appears sharp under low-NA illumination and shifts with the angle of illumination. The use of such an aperture allows us to time-multiplex different illumination patterns while using subsequent image analysis to determine how each video frame was illuminated.

As seen in Fig. 6, we can use the image of the aperture to determine what the illumination source looked like for a given frame. This opens up time-multiplexing of many different illumination patterns in a single recording, which can be split into segments at a later time. This can, for example, be used to provide automatic stereoscopic functionality with a grayscale camera, where the frames are painted in colors according to the position of the aperture.
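
As a rough illustration of this idea, the sketch below classifies a grayscale frame by locating the bright image of the out-of-focus aperture in a corner of the frame, and then combines a left- and a right-illuminated frame into a red/blue stereoscopic image. The region of interest and function names are assumptions for the sketch, not part of the actual system.

```python
import numpy as np

def classify_frame(frame, aperture_region=(slice(0, 60), slice(0, 60))):
    """Guess which illumination pattern produced a grayscale video frame by
    locating the bright spot of the out-of-focus aperture image (cf. Fig. 6).

    aperture_region is a hypothetical corner of the frame containing the
    aperture image; returns 'left' or 'right'.
    """
    roi = frame[aperture_region].astype(float)
    cols = roi.sum(axis=0)
    centroid = (cols * np.arange(cols.size)).sum() / cols.sum()
    return 'left' if centroid < cols.size / 2 else 'right'

def to_stereo(frame_left, frame_right):
    """Paint a left- and a right-illuminated grayscale frame into a single
    red/blue stereoscopic image (red = left view, blue = right view)."""
    rgb = np.zeros(frame_left.shape + (3,), dtype=frame_left.dtype)
    rgb[..., 0] = frame_left    # red channel
    rgb[..., 2] = frame_right   # blue channel
    return rgb
```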

5. Conclusion

In this work, we have shown how replacing an ordinary white light source with a projector provides a low-cost and flexible illumination source capable of advanced and even time-multiplexed imaging. This can be used to make stereoscopic images, or even 3D visualizations when combined with 3D presentation equipment such as virtual reality glasses, where the two parts of the stereoscopic image are displayed directly to each eye. The stereoscopic imaging comes virtually calibration-free, unlike 3D observation where the same scene is observed with two cameras from two different angles. We have also demonstrated how a set of corrective lenses can be used to shift the plane of observation away from the focal plane of the objective in small, controlled steps, without affecting the counter-propagating trapping beam sets.

Acknowledgments

We would like to thank the Danish Technical Scientific Research Council (FTP) for their support.

References

1. D. G. Grier, “A revolution in optical manipulation,” Nature 424, 810–816 (2003). [CrossRef]   [PubMed]  

2. A. Ashkin, “Acceleration and trapping of particles by radiation pressure,” Phys. Rev. Lett. 24, 156 (1970). [CrossRef]  

3. P. J. Rodrigo, V. R. Daria, and J. Glückstad, “Real-time three-dimensional optical micromanipulation of multiple particles and living cells,” Opt. Lett. 29, 2270–2272 (2004). [CrossRef]   [PubMed]  

4. A. Isomura, N. Magome, M. I. Kohira, and K. Yoshikawa, “Toward the stable optical trapping of a droplet with counter laser beams under microgravity,” Chem. Phys. Lett. 429, 321–325 (2006). [CrossRef]  

5. S. -H. Lee and D. G. Grier, “Holographic microscopy of holographically trapped three-dimensional structures,” Opt. Express 15, 1505–1512 (2007). [CrossRef]   [PubMed]  

6. I. Perch-Nielsen, P. Rodrigo, and J. Glückstad, “Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes,” Opt. Express 13, 2852–2857 (2005). [CrossRef]   [PubMed]  

7. P. J. Rodrigo, I. R. Perch-Nielsen, C. A. Alonzo, and J. Glückstad, “GPC-based optical micromanipulation in 3D real-time using a single spatial light modulator,” Opt. Express 14, 13107–13112 (2006). [CrossRef]   [PubMed]  

8. J. S. Dam, P. J. Rodrigo, I. R. Perch-Nielsen, and J. Glückstad, “Fully automated beam-alignment and single stroke guided manual alignment of counter-propagating multi-beam based optical micromanipulation systems,” Opt. Express 15, 7968–7973 (2007). [CrossRef]   [PubMed]  

9. S. Delica and C. M. Blanca, “Wide-field depth-sectioning fluorescence microscopy using projector-generated patterned illumination,” Appl. Opt. 46, 7237–7243 (2007). [CrossRef]   [PubMed]  

10. E. C. Samson and C. M. Blanca, “Dynamic contrast enhancement in widefield microscopy using projector-generated illumination patterns,” New J. Phys. 9, 363 (2007). [CrossRef]  

11. P. J. Rodrigo, I. R. Perch-Nielsen, and J. Glückstad, “Three-dimensional forces in GPC-based counterpropagating-beam traps,” Opt. Express 14, 5812–5822 (2006). [CrossRef]   [PubMed]  

12. J. S. Dam, P. J. Rodrigo, I. R. Perch-Nielsen, C. A. Alonzo, and J. Glückstad, “Computerized “drag-and-drop” alignment of GPC-based optical micromanipulation system,” Opt. Express 15, 1923–1931 (2007). [CrossRef]   [PubMed]  

13. K. Burton, “An aperture-shifting light-microscopic method for rapidly quantifying positions of cells in 3D matrices,” Cytometry Part A 54, 125–131 (2003). [CrossRef]  

Supplementary Material (2)

Media 1: AVI (999 KB)     
Media 2: GIF (109 KB)     
