Optica Publishing Group

Dynamic 2D implementation of 3D diffractive optics

Open Access

Abstract

Volumetric computer-generated diffractive optics offer advantages over planar 2D implementations, including the generation of space-variant functions and the multiplexing of information in the space or frequency domains. Unfortunately, despite remarkable progress, the fabrication of micro- and nano-structures with high volumetric space-bandwidth is still in its infancy. Furthermore, existing 3D diffractive optics implementations are static, while programmable volumetric spatial light modulators (SLMs) are still years or decades away. To address these shortcomings, we propose implementing the equivalent functionality of volumetric diffractive optics via cascaded planar elements. To illustrate the principle, we design 3D diffractive optics and implement a two-layer continuous phase-only design on a single SLM in a folded setup. The system provides dynamic and efficient multiplexing capability. Numerical and experimental results show that this approach improves system performance, such as diffraction efficiency, spatial/spectral selectivity, and number of multiplexed functions, relative to 2D devices, while providing dynamic large space-bandwidth relative to current static volume diffractive optics. The limitations and capabilities of dynamic 3D diffractive optics are discussed.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Diffractive optics is a topic of significant interest, fueled by applications in optical tweezers [1–4], beam shaping [5,6], holographic displays [7,8], novel microscopies [9–12], femtosecond laser micromachining [13,14], and optogenetics [15–18]. The most widely used diffractive optics are 2D diffractive optical elements (DOEs) and 2D computer-generated holograms (CGHs) [19,20]. They are superior to optically recorded holograms in terms of customized wavefront generation from arbitrary wavefront illumination, which is due to the degrees of freedom offered by individually addressable pixels and possible optimization for a target metric.

The design of 3D structures is of interest for controlling the multidimensional spatial, spectral, temporal, and coherence function of light fields. This is achieved through additional degrees of freedom and novel physical phenomena involving the interplay of diffraction, refraction, radiation, and scattering [21–23]. Previous work has shown that extending diffractive optics from 2D to 3D enables new functionality and improves system performance metrics, including enhanced diffraction efficiency [24,25], better angular or frequency selectivity [26,27], and capability to generate space-variant functions [28,29]. Cascaded 2D diffractive optics have been demonstrated with experiments showing improved diffraction efficiency, angular multiplexing of two diffraction patterns [30], and fiber mode multiplexing [31]. Full volume designs have been implemented applying 3D scattering theory and projection onto constraint sets (POCS) algorithms [21]. Experiments have demonstrated both angular and frequency multiplexing. However, 3D lithographic methods still limit the implementation to relatively low space-bandwidth devices and mostly to binary form, which restricts the design degrees of freedom and performance [22]. Further, once the devices are fabricated, no dynamic changes are allowed due to the permanently induced material modification. Volumetric spatial light modulators (SLMs) with the capability to modulate micro-voxels would provide a unique opportunity for this field. Unfortunately, to the best of our knowledge, a viable device has never been conceived or demonstrated.

Liquid-crystal-based SLMs are dynamic 2D wavefront shaping devices with high efficiency and high resolution. They allow switching rates of hundreds of hertz, enabling dynamic 2D diffractive optics. However, the phase patterns displayed on SLMs are 2D; hence, they only work optimally for a certain wavelength due to diffractive and material dispersion. A simple solution for display applications is to use spatially or time multiplexed 2D phase patterns on a single SLM or multiple SLMs, with each phase pattern corresponding to a different color [32–34]. While these methods are appropriate for display, they cannot implement the space- or frequency-variant functionality of volume diffractive optics.

Angular and frequency (wavelength) multiplexing are the most common forms of encoding information in a volume [35]. Previous approaches aimed at multi-wavelength operation of 2D diffractive optics are based on multiple-order diffractive optics, namely devices implementing phase delays beyond 2π. They are based on surface-relief fabrication [36,37] or liquid-crystal SLMs [38–40]. However, these methods offer only limited spectral selectivity, enabling independent control of two or at most three color bands, which makes them inappropriate for controlling the large number of spectral bands possible with volumetric optics. Recent investigations of diffractive optics incorporating sub-wavelength structures, also called meta-surface optics [41–46], provide interesting opportunities for multifunctional devices.

In this paper, we first introduce an approach for 2D implementation of 3D diffractive optics that enables dynamic control of high volumetric bandwidth elements. We then design 3D diffractive optics composed of multiple diffractive layers using a POCS algorithm [23,47,48], which is a more general version of the well-known Gerchberg–Saxton iterative optimization algorithm [49]. We implement the design on a liquid-crystal SLM, which enables dynamic and multi-level phase modulation. The SLM is spatially divided to accommodate different layers, and each layer is diffraction propagated using a concave mirror. We theoretically and experimentally investigate multilayer devices in terms of diffraction efficiency and spatial/spectral multiplexing properties.

2. THEORY

A. Model

3D diffractive optics consists of, or can be represented by, multiple thin, cascaded DOEs, which are spatially separated by short distances, in an optically homogeneous medium. As light propagates through the 3D optics, the amplitude and phase are modulated by each DOE and diffraction occurs in the intermediate homogeneous regions [Fig. 1(a)]. This model also applies to volume optics [21] that continuously reshape light on propagation by considering infinitely thin homogeneous layers. If we consider only one single layer, it exhibits Raman–Nath characteristics because the thickness is infinitesimal. However, the 3D diffractive optics altogether shows Bragg-like behavior as a result of the diffraction in multiple DOEs and buffer layers. This property can be used for multiplexing, in both the frequency and angular domains, and to generate space-variant systems, as demonstrated below.


Fig. 1. 3D diffractive optics implementation via 2D optics. (a) Decomposition in stratified layers. (b) Equivalent cascaded system using imaging optics. (c) 3D diffractive optics folded implementation on single spatially multiplexed DOE (e.g., SLM) with spherical mirrors.


Therefore, to emulate a 3D diffractive optics, we consider stratified layers separated by a short distance Δz. The transformation by diffraction between layers, namely free-space propagation through a distance Δz, is equivalent to imaging with unit magnification followed by free-space propagation of Δz [Fig. 1(b)]. This equivalence enables physical separation among layers while achieving the same functional form as a 3D optical element. Hence, existing planar (2D) diffractive technology can be implemented to generate 3D diffractive optics functionality.

Furthermore, this approach is also amenable to implementation in folded systems, for instance, by substituting the lens by one or several concave spherical mirrors. As a result, the 3D design can be implemented on a single 2D plane [Fig. 1(c)], enabling display on a single phase-only DOE or a liquid-crystal SLM, which is spatially multiplexed to display the different layers.

For simplicity, we consider the scalar approximation to be valid under the assumption that the feature size is large relative to the wavelength of operation. The complex transmittance function of each thin DOE can be expressed as

$$h_k(x,y)=|h_k(x,y)|\,\exp[\,j\phi_k(x,y)\,],\tag{1}$$
where k is the layer number. To achieve maximum efficiency, we consider pure phase modulation, with the amplitude term always unity. Under the thin-element approximation, the effect of a single DOE layer on the complex amplitude is
$$E(x,y,z_k^{+})=h_k(x,y)\,E(x,y,z_k^{-}),\tag{2}$$
where $z_k^{-}$ and $z_k^{+}$ indicate the planes immediately before and after the $k$th DOE, respectively. The wave-field evolution between adjacent DOEs can be described by angular-spectrum propagation in free space. Note that the wave-field picks up a quadratic phase term after a single lens or upon reflection from the spherical mirror. Therefore, the relation between the complex amplitude after the $k$th layer and the wave-field before the $(k+1)$th layer can be expressed as
$$E(x,y,z_{k+1}^{-})=\mathcal{F}^{-1}\!\left\{e^{\,j\sqrt{k_0^2-k_x^2-k_y^2}\,\Delta z}\cdot\mathcal{F}\!\left[E(x,y,z_k^{+})\cdot e^{-j\frac{2\pi}{\lambda}\frac{x^2+y^2}{2f}}\right]\right\},\tag{3}$$
where λ is the design wavelength, Δz is the layer separation, and f is the focal length of the lens or spherical mirror. If a Fourier lens is placed one focal length after the last DOE layer, the complex amplitude at the reconstruction plane satisfies
$$R(k_x,k_y)=\mathcal{F}\{E(x,y,z_N^{+})\}.\tag{4}$$
Hence, the relation between the 3D diffractive optics and the far-field reconstruction is obtained. The propagation process is also numerically reversible; namely, waves can be back-propagated from the target $R(k_x,k_y)$.
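The forward model of Eqs. (1)–(4) can be sketched numerically. This is a minimal scalar simulation, assuming numpy, square layers, and unit-amplitude phase-only layers; the quadratic mirror phase is absorbed into the unit-magnification imaging step, so only the homogeneous propagation over Δz between layers is modeled explicitly:

```python
import numpy as np

def propagate(field, dz, wavelength, pixel_size):
    """Free-space angular-spectrum propagation of a sampled field over dz."""
    n = field.shape[0]
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=pixel_size)
    KX, KY = np.meshgrid(kx, kx)
    # Longitudinal wavenumber; evanescent components are clamped to zero phase.
    kz = np.sqrt(np.maximum(k0**2 - KX**2 - KY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def forward(field, phases, dz, wavelength, pixel_size):
    """Pass an input field through a stack of thin phase-only layers,
    then Fourier transform after the last layer for the far-field output."""
    for phi in phases[:-1]:
        field = propagate(field * np.exp(1j * phi), dz, wavelength, pixel_size)
    field = field * np.exp(1j * phases[-1])
    return np.fft.fftshift(np.fft.fft2(field))
```

A plane-wave input through two blank layers simply focuses to a single far-field spot, which is a quick sanity check of the model.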

B. Design Algorithm

While different design strategies can be anticipated, here we design the multiplexing 3D diffractive optics using a POCS algorithm with a distribution-on-layers method. To calculate a 3D diffractive optics layer by layer, we first start by setting all layers to random phase and unit amplitude. Then we calculate the transmission function of layer $r$ by first computing the wave-field before the layer, $E(x,y,z_r^{-})$, $r\in\{1,\ldots,N\}$. This process starts from the input $E(x,y,z_1)$ and follows Eqs. (1)–(4). For backward propagation, we start with the desired reconstruction field $\tilde R(k_x,k_y)$ and use the inverse propagation [conjugate of Eqs. (1)–(4)] to calculate the wave-field after the $r$th layer, $E(x,y,z_r^{+})$. The transfer function for layer $r$ is then obtained as follows:

$$\tilde h_r(x,y)=\frac{E(x,y,z_r^{+})}{E(x,y,z_r^{-})}.$$
$\tilde h_r(x,y)$ is a complex-valued function, so we extract its phase by projecting it onto the set of phase-only functions,
$$h_r(x,y)=\exp\{\,j\arg[\tilde h_r(x,y)]\,\}.$$
If we perform forward propagation through the 3D diffractive optics, it is most likely that the field on the reconstruction plane will no longer match the original target. Hence, we employ a generalized projection algorithm, which iterates between each layer and the reconstruction plane, applying Eqs. (1)–(4) and their conjugate form. The algorithm keeps running until the deviation between the field on the reconstruction plane and the target is acceptable.

This process provides the transmission function for one layer of 3D diffractive optics. The remaining layers are calculated following the same process. The layers can be calculated in sequential form, in random fashion, or in parallel. As a result, the encoded information is evenly distributed among all the layers. This can significantly increase the design degrees of freedom and coding capacity of the 3D diffractive optics.
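The per-layer update described above can be sketched as follows. This is a minimal numpy sketch; the regularizing `eps` is our addition to avoid division by zero and is not part of the original formulation:

```python
import numpy as np

def update_layer(E_before, E_after_target, eps=1e-12):
    """Single-layer POCS update: divide the back-propagated target field by
    the forward-propagated input field, then project the complex ratio onto
    the set of phase-only functions."""
    h_tilde = E_after_target / (E_before + eps)  # complex transmittance estimate
    return np.exp(1j * np.angle(h_tilde))        # phase-only projection
```

If the target field is exactly the input field times a phase screen, the update recovers that phase screen, which is the fixed point of the iteration.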

C. Multiplexing Design

Volumetric optics enables methods of multiplexing that can be implemented by design in 3D diffractive optics. Compared to 2D DOEs, the 3D counterparts exhibit strong angular or wavelength selectivity; i.e., different uncorrelated outputs can be achieved with different inputs to a single 3D diffractive optics. For instance, one can change the initial condition $E_p(x,y,z_1)$ to reconstruct different predefined images $R_p(k_x,k_y)$, respectively. The input can be addressed via wavelength, angle of incidence, or phase pattern:

$$E_p(x,y,z_1)=\begin{cases}A\exp\!\left\{i\frac{2\pi}{\lambda}x\sin\theta_p\right\}, & \text{angular multiplexing}\\[4pt] A\exp\!\left\{i\frac{2\pi}{\lambda_p}x\sin\theta\right\}, & \text{frequency multiplexing}\\[4pt] A\exp\{i\phi_p(x,y)\}, & \text{phase multiplexing}\end{cases}\qquad p=1,2,\ldots,K,$$
where K is the total number of pages to be multiplexed. For each input and its corresponding reconstruction, every single-layer DOE is calculated by the same procedure described above. Finally, to take all the multiplexed information into account, we apply parallel projections [23] as follows:
$$h_r(x,y)=\exp\left\{j\arg\!\left[c_r\,\frac{1}{K}\sum_{p=1}^{K}\tilde h_{r,p}(x,y)\right]\right\},$$
where cr is a coefficient to facilitate algorithm convergence. Every layer of the 3D diffractive optics is calculated in this fashion, thus concluding one iteration. The generalized projection algorithm runs until a satisfactory result is reached. The overall flowchart of the algorithm is summarized in Fig. 2.
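The parallel projection over the $K$ multiplexed pages can be sketched as below, assuming the per-page complex updates $\tilde h_{r,p}$ have already been computed as in the previous subsection; `c_r` is the convergence coefficient from the text:

```python
import numpy as np

def parallel_projection(h_tilde_pages, c_r=1.0):
    """Average the K per-page complex updates for one layer, then project
    the average back onto the set of phase-only functions."""
    avg = c_r * np.mean(h_tilde_pages, axis=0)
    return np.exp(1j * np.angle(avg))
```

Averaging before the phase projection is what distributes all multiplexed pages into a single phase-only layer.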


Fig. 2. Flowchart of projection onto constraint sets with a distribution-on-layers algorithm. $h_1,h_2,\ldots,h_N$ are the layers to be designed and are initialized with random phase prior to the computation. $R_1(k_x,k_y),R_2(k_x,k_y),\ldots,R_K(k_x,k_y)$ are the user-defined multiplexed output fields with the corresponding multiplexed input fields $E_1(x,y,z_1),E_2(x,y,z_1),\ldots,E_K(x,y,z_1)$. The input and output fields are forward- and backward-propagated, respectively, to the planes before and after the layer to be designed. The modulation function is updated over several iterations for each multiplexing pair and for each layer in the 3D diffractive optics. The process is followed by a parallel projection to ensure that all the information is encrypted and evenly distributed among the $N$ layers. The optimization algorithm ends when the target quality or the preset iteration number is reached.


3. SIMULATION

The algorithm described above has been used to design 3D diffractive optics of more than 16 layers on a desktop computer (see Supplement 1). To illustrate the principle, we present the design of two-layer 3D diffractive optics. The pixel number in each layer is 128×128, with pixel size of 8 μm × 8 μm. The layer separation is set to Δz = 486 μm. These parameters are chosen to match the SLM used in the experiment, as shown in the next section.

For angular multiplexing, we use the letters “C” and “U” from the CU logo [Fig. 3(a)] as the target images for incident angles at 7° and 10°, respectively. The wavelength of the incident beam is 633 nm. The reconstructed image is shown in Fig. 3(c). For frequency multiplexing, we use the same two patterns with the incident angle fixed at 7°, and the wavelength of illumination 633 nm for “C” and 532 nm for “U.” The reconstructed image is shown in Fig. 3(e). The phase patterns for the above two cases are shown in Figs. 3(b) and 3(d), respectively, as calculated with the procedure described in Section 2.


Fig. 3. Simulation results for multiplexing 3D diffractive optics. (a) The letters “C” and “U” in the CU logo are the target images. (b) Phase patterns designed for angular multiplexing. (c) Reconstructed images with incident angle at 7° and 10° showing angular multiplexing. (d) Phase patterns designed for frequency multiplexing. (e) Reconstructed images with 633 nm and 532 nm illumination showing frequency multiplexing.


We use diffraction efficiency (DE) and relative error (Err) to evaluate the performance of the designs. The diffraction efficiency is defined as the ratio of the intensity in the target area to the intensity of the input beam, and can be calculated by the following equation:

$$\mathrm{DE}=\frac{\displaystyle\iint |U_R(k_x,k_y)|^2\,v_b(k_x,k_y)\,dk_x\,dk_y}{\displaystyle\iint |E(x,y,z_1)|^2\,dx\,dy},$$
where UR is the reconstructed field in wave-vector coordinates, and vb(kx,ky) is the target region in binary form, i.e., the target domain. The relative error is used to measure the quality of the reconstruction relative to the total light intensity directed on target:
$$\mathrm{Err}=\frac{\displaystyle\iint \bigl|\,|U_R(k_x,k_y)|^2-c_i\,v_b(k_x,k_y)\,\bigr|^2\,dk_x\,dk_y}{\displaystyle\iint |U_R(k_x,k_y)|^2\,v_b(k_x,k_y)\,dk_x\,dk_y},$$
where ci is a weighting factor that changes with iteration number i to ensure the algorithm converges.
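On sampled fields, the integrals above reduce to sums. A sketch assuming `U_R` is the (energy-normalized) reconstructed field, `v_b` the binary target mask, and `E_in` the input field:

```python
import numpy as np

def diffraction_efficiency(U_R, v_b, E_in):
    """Fraction of the input power that lands inside the target region v_b."""
    return np.sum(np.abs(U_R)**2 * v_b) / np.sum(np.abs(E_in)**2)

def relative_error(U_R, v_b, c_i):
    """Squared deviation of the reconstructed intensity from the scaled
    binary target, normalized by the power directed on target."""
    num = np.sum(np.abs(np.abs(U_R)**2 - c_i * v_b)**2)
    den = np.sum(np.abs(U_R)**2 * v_b)
    return num / den
```

A perfect reconstruction, i.e., unit intensity exactly on the target mask, gives DE = 1 and Err = 0.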

The diffraction efficiencies for C and U in the angular multiplexing example are 54.2% and 59.1%, respectively, while the relative errors are 0.13 and 0.10, respectively. For frequency multiplexing, the efficiencies are 62.5% and 65.5%, whereas the relative errors are 0.16 and 0.14.

Next, we investigate the relations between diffraction efficiency and parameters such as the number of pixels, the number of layers, and the layer separation, using the frequency multiplexing scheme. First, we expand the number of layers to 4, 8, and 16, and for each case we change the number of pixels to 256×256, 512×512, and 1024×1024. The diffraction efficiency for “C” under 633 nm illumination and “U” under 532 nm illumination is plotted in Fig. 4(a). Both the number of pixels and the number of layers are positively related to the degrees of freedom of the device. Therefore, with all other parameters unchanged, the diffraction efficiency can be enhanced by increasing the number of pixels or the number of layers. A longer computation time is required, which at some point can make the problem intractable. For example, the calculation of 16 layers with 2048×2048 pixels is beyond the computational power of a 2.8 GHz quad-core CPU with 12 GB memory and could be tackled with parallel computation.


Fig. 4. Characterization of 3D diffractive optics in the case of frequency multiplexing. (a) Diffraction efficiency of the letter “C” under 633 nm illumination and “U” under 532 nm illumination as a function of the number of pixels and the number of layers. (b) Wavelength selectivity for the letters “C” and “U” as a function of the number of pixels and the number of layers. (c) Diffraction efficiency of the letters “C” and “U” as a function of layer separation. (d) Wavelength selectivity of the letters “C” and “U” at layer separation of 50 μm, 486 μm, and 1000 μm. Pixel numbers (n) represent side size of a square matrix of size n×n.


Second, we study the effect of pixel and layer number on wavelength selectivity. The results are shown in Fig. 4(b). We start with two layers of 128×128 pixels, and reconstruct the 3D diffractive optics with wavelengths from 500 to 660 nm. The diffraction efficiencies of “C” and “U” are recorded respectively. Then we use four layers with 1024×1024 pixels and record the data in the same way. We observe that both the diffraction efficiency and the wavelength selectivity improve with additional degrees of freedom.

Third, we analyze the diffraction efficiency as a function of layer separation, shown in Fig. 4(c). We change the layer separation from 1 μm to 1 mm for two-layer elements of 128×128 pixels. We observe little effect of layer separation on diffraction efficiency.

Fourth, we study the effect of layer separation on wavelength selectivity. The layer separation is set to 50 μm, 486 μm (used in the design and experiment), and 1000 μm for two layers of 128×128 pixels. The wavelength of the reconstruction beam is changed from 500 to 660 nm in all three cases, as shown in Fig. 4(d). We observe a moderate increase in selectivity as the effective thickness of the element increases. This can be explained by the fact that the buffer layers are where the propagation effect of diffraction occurs, so a wavelength deviation of the input accumulates a larger effect over longer distances. Similar tendencies are observed for angular multiplexing.

Last, it is interesting to analyze the limit of angular or frequency multiplexing in layered 3D diffractive optics, namely the smallest angle or wavelength interval between multiplexed reconstructions that avoids information crosstalk. While the selectivity plots of Fig. 4(b) provide a sense of the multiplexing performance, a more specific metric consists of the reconstruction error as a function of the angular/frequency separation of the different information channels. Accordingly, we design 3D diffractive optics for angular multiplexing with changing angular intervals and plot the normalized reconstruction error as a function of the angular separation. For demonstration, we use four layers with 128×128 pixels on each layer (see Fig. S4, Supplement 1), from which we conclude that the smallest angular interval that avoids severe crosstalk is 0.2°. Similarly, for frequency multiplexing with these same parameters, the smallest wavelength interval that avoids severe crosstalk is 20 nm (see Fig. S4, Supplement 1).

4. EXPERIMENTS

A. Experimental Setup

In this section, we present experimental results for angular multiplexing and frequency multiplexing with two-layer continuous-phase 3D diffractive optics. The experimental setup is shown in Fig. 5. We use a supercontinuum fiber laser (Fianium FemtoPower 1060) to generate a tunable source covering a spectral range from below 400 nm to beyond 900 nm. The beam is sent to a computer-controlled acousto-optic tunable filter (AOTF) to provide a narrowband output with bandwidth of 2 to 4 nm at the desired wavelength. The AOTF features a fast switching mode with less than 5 μs rise time, which is fast enough for real-time applications such as color holographic projection. A linear polarizer is used to ensure that the polarization of the incident beam is parallel to the orientation of the liquid crystal on the SLM panel (horizontal in our case), even though the output from the AOTF is already linearly polarized in that direction. We include a neutral density (ND) filter after the polarizer to adjust the intensity of the laser beam. To improve the uniformity of the beam profile, a spatial filter system is employed consisting of a microscope objective (20×, 0.25 NA) and a pinhole (50 μm diameter). A doublet achromatic lens (L1) is used to collimate the beam while avoiding chromatic aberrations. An iris adjusts the beam diameter for optimal illumination on the active area of the SLM (Holoeye HEO1080P, with 1920×1080 pixels and 8 μm pixel pitch).


Fig. 5. Experimental setup for 2D implementation and characterization of dynamic 3D diffractive optics. A supercontinuum source together with an acousto-optic tunable filter (AOTF) provide narrowband laser output in the visible spectrum. The designed layers are implemented on a single high-resolution liquid-crystal SLM, which is spatially divided into two sections. The first layer is imaged at a small distance in front of the second layer, with an imaging system formed by a concave spherical mirror with focal length of 200 mm. A color CMOS sensor is placed on the reconstruction plane after a Fourier lens to record the image.


If we divide the SLM into two parts side by side, the largest beam size allowed could be up to 4.32 mm, and the pixel number of each single-layer DOE could be up to 540×540. If more layers are designed and a single SLM is still used, both the beam size and the DOE dimension will have to shrink. Here, we design two-layer diffractive optics with 128×128 pixels for angular multiplexing of two functions and 256×256 pixels for frequency multiplexing of seven functions. Accordingly, the beam size is adjusted to 1.5 mm and 3 mm, respectively. To control the incident angle, a flat mirror (M3) mounted on a rotation stage is used. It diverts the beam at 7° with respect to the normal of the SLM panel. For angular multiplexing, a flip mirror (M4) is inserted at the proper position along the beam path to obtain an incident angle of 10°. The laser beam illumination setup is indicated by the orange square in Fig. 5.

In order to match the beam profile while suppressing the background of light unaffected by the SLM, the designed layers are first padded with tilted blazed gratings (see Supplement 1). Then they are implemented on a single high-resolution SLM, which is horizontally divided into two sections. The input beam is incident on the right section (far side with respect to M3), which displays the first layer. It is then imaged by a concave spherical mirror (SM) with focal length of 200 mm at a small distance in front of the left section (near side with respect to M3), where the second layer is displayed. Based on these parameters, the distance between layers turns out to be 486 μm. Simulation results show that the misalignment between the two layers can be up to 1 pixel (8 μm) and still yield acceptable reconstructed images (see Supplement 1, Visualization 2, Visualization 3). Since the incident angle is small, we use a wedge with 10° beam deviation (Thorlabs PS814) to separate the output from the input. An achromatic doublet lens (L2) with focal length of 300 mm follows to yield a Fourier plane (equivalent to the far field of the output from the diffractive optics), where a camera captures the reconstructed image. Because the beam is diverging after its second incidence on the SLM, the Fourier plane is located farther than one focal length beyond the lens.

B. Angular Multiplexing Demonstration

For angular multiplexing, we set the output wavelength to be fixed at 633 nm, and we use a monochromatic camera (Point Grey CMLN-13S2M) to record the reconstructed image. The results are shown in Fig. 6. When the flip mirror is down, the incident angle is at 7°, and the letter “C” shows up on the reconstruction plane [Fig. 6(a)]. As we switch the flip mirror up to get an incident angle of 10°, we see the letter “U” on the camera [Fig. 6(b)]. The diffraction efficiencies are 50.5% and 52.1% for “C” and “U,” respectively. We also notice a weak twin image on the camera that does not appear in the design simulation. This is attributed to imperfections of the SLM and non-ideal experimental conditions. To verify that the design is successful, we illuminate only one layer of the 3D diffractive optics, and a random speckle pattern is obtained [Fig. 6(c)]. This indicates that the encryption is distributed among the layers of the 3D diffractive optics.


Fig. 6. Experimental results for angular multiplexing. (a) Reconstruction image with incident angle at 7°. (b) Reconstruction image with incident angle at 10°. (c) Speckle field with one layer blocked, indicating that the 3D encryption is successful.


C. Frequency Multiplexing Demonstration

To demonstrate frequency multiplexing with a high number of degrees of freedom, we multiplexed seven functions with different colors. Specifically, each letter in the word “boulder” is encoded with wavelength 460 nm, 496 nm, 532 nm, 568 nm, 600 nm, 633 nm, and 694 nm, respectively (Fig. 7 and Visualization 1).

There are three issues that had to be addressed in the experiment. The first one is coding capacity. Since there is more information to be encoded, we expand the pixel number in each layer from 128×128 to 256×256 to ensure that the algorithm converges with acceptable crosstalk on the reconstruction plane.

The second issue is target scaling due to different diffraction angles at different wavelengths. In effect, letters designed for shorter wavelengths appear proportionally smaller on the reconstruction plane than those designed for longer wavelengths. This is compensated by resizing the letters by a scaling factor before running the design algorithm. For example, relative to the reference wavelength (633 nm), the letter “b” (460 nm) is pre-scaled by 633/460 = 1.38, “o” (496 nm) by 633/496 = 1.28, and “r” (694 nm) by 633/694 = 0.91.
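The pre-scaling amounts to multiplying each letter's size by the ratio of the reference wavelength to its design wavelength; a one-line sketch:

```python
def scale_factor(wavelength, reference=633e-9):
    """Pre-scaling factor applied to a target letter so that all
    reconstructions appear the same size at the Fourier plane (shorter
    wavelengths diffract less, so their targets are enlarged up front)."""
    return reference / wavelength
```

For example, `scale_factor(460e-9)` gives the 1.38 enlargement of “b” quoted above.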

The third issue is phase shift compensation. This issue arises from the fact that the phase shift induced by each SLM pixel depends on both the applied voltage and the working wavelength, as is given by the following equation:

$$\phi(V,\lambda)=\frac{2\pi d}{\lambda}\,n(V,\lambda),$$
where $d$ is the thickness of the liquid crystal, $\lambda$ is the working wavelength, $n$ is the refractive index, and $V$ is the applied voltage, which changes the orientation of the liquid-crystal molecules, thus producing different optical path differences for the selected wavelength. The voltage is generated by the SLM’s control circuit board, which converts the 256 phase levels (0 to 2π) uploaded from the computer into 8-bit electronic signals. Normally, a lookup table (LUT), either provided by the manufacturer or experimentally measured, is built into the control circuit to establish a linear, or quasi-linear, relation between the addressed gray level and the actual phase delay. Therefore, for the same phase value of the DOE, the phase modulation on the SLM shifts by a constant coefficient as the working wavelength deviates from the designed one. For each layer of the 3D diffractive optics, we have $N$ individual phase patterns $\phi_{\lambda_i}(x,y)$ calculated from the design algorithm. The task is to combine these independent phase patterns into one phase pattern while displaying the correct phase value for each predefined wavelength. We first convert all the phase patterns to the reference wavelength of 633 nm, for which the SLM is calibrated. The conversion is done by simply multiplying each individual phase pattern by a scaling factor $\beta_{\lambda_i}=\lambda_i/633\,\mathrm{nm}$, where $\lambda_i$ is its corresponding wavelength. This linear compensation is sufficient in many cases, as the experiments below show, even though the material dispersion of the liquid crystal is nonlinear. In Supplement 1, Fig. S2b shows that, in general, nonlinear phase deviations can still yield a good reconstruction with somewhat reduced diffraction efficiency. If needed, though, the specific material dispersion can be included in the design process for optimal results. We then obtain the design in each iteration by a modified parallel projection, with the phase shift compensation taken into account:
$$\phi_k(x,y)=\frac{1}{N}\sum_{i=1}^{N}\beta_{\lambda_i}\,\phi_{\lambda_i}(x,y),$$
where N is the total number of wavelengths used for frequency multiplexing. Supplement 1 and Visualization 1 show the design results and experimental implementation. The reconstructed image is recorded with a color CMOS sensor (Canon 5D Mark II). The results are shown in Fig. 7. The better quality of these images with respect to Fig. 6 is due in part to the use of a different camera.
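The compensation step can be sketched as follows. This is a minimal numpy sketch of the linear rescaling and averaging described above; the nonlinear liquid-crystal dispersion is ignored, as in the experiment:

```python
import numpy as np

def combine_phases(phase_patterns, wavelengths, ref=633e-9):
    """Rescale each per-wavelength phase pattern to the SLM calibration
    wavelength via beta_i = lambda_i / ref, then average the rescaled
    patterns into the single pattern that is actually displayed."""
    betas = np.asarray(wavelengths) / ref
    scaled = [b * p for b, p in zip(betas, phase_patterns)]
    return np.mean(scaled, axis=0)
```

With a single pattern at the reference wavelength, the function returns the pattern unchanged, since its scaling factor is 1.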


Fig. 7. Experimental results for frequency multiplexing with two-layer diffractive optics implemented on a single SLM. The letters in the word “boulder” are reconstructed with wavelength 460 nm, 496 nm, 532 nm, 568 nm, 600 nm, 633 nm, and 694 nm, respectively. See Visualization 1.


The experimental diffraction efficiency for each reconstructed image is 38.2% (40.2%), 38.0% (38.9%), 38.5% (39.4%), 35.9% (38.2%), 41.1% (43.5%), 44.9% (47.0%), and 29.8% (30.7%), respectively, with simulated values given in parentheses for comparison. The efficiency is not as high as in the angular multiplexing example because the information of each page decays as more functions are multiplexed. Other factors affecting the diffraction efficiency include the relatively broad spectrum of the laser source and imperfections of the SLM. However, we observed negligible crosstalk among the reconstructions.

5. CONCLUSION

We proposed an approach to implement 3D diffractive optics on a 2D dynamic SLM. We analyzed the fundamental opportunities and limitations, while the experiments confirmed the predicted performance.

3D diffractive optics not only enhance the design degrees of freedom and coding capacity, but also enable properties unique to volume (thick) holograms, such as having only one diffraction order, improved efficiency with lower crosstalk, and capability for angular and frequency multiplexing, as demonstrated numerically and experimentally. It is worth pointing out that our approach differs from the traditional use of multiple planar diffractive elements to encode amplitude and phase [28]; rather, it is a carefully designed arrangement of diffraction, imaging, and propagation that provides the functionality of a volumetric structure, namely space variance, multiplexing in wavelength and space, and large information capacity, among others.

The 3D diffractive optics design implements a projection onto constraint sets (POCS) algorithm with distribution-on-layers to spread information among multiple thin DOEs. The approach further contributes to the field of inverse problems by solving the nonlinear inverse problem of finding the 3D diffractive optics that achieve a given task, without assuming weakly scattering structures. From a fundamental point of view, the design of 3D diffractive structures mitigates the dimensionality mismatch inherent in controlling multiple dimensions of light fields (spatial, spectral, temporal, and coherence function) beyond what is possible with 2D devices.
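As an illustration of how such a projection-based design loop can be organized, the sketch below alternates between a phase-only constraint at each of two layers and a target-amplitude constraint in the Fourier plane, with angular-spectrum propagation between layers. All parameters (pixel pitch, layer separation, wavelength, iteration count) are illustrative assumptions, and this is a simplified single-function variant, not the authors' full distribution-on-layers implementation:

```python
import numpy as np

def asm_propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a sampled field over distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    k0 = 2 * np.pi / wavelength
    kz2 = k0**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz2, 0.0))  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def design_two_layers(target_amp, wavelength=633e-9, dx=8e-6,
                      dz=500e-6, iterations=30, seed=0):
    """Alternating-projection design of two phase-only layers so that the
    Fourier transform of the output field approximates a target amplitude."""
    n = target_amp.shape[0]
    rng = np.random.default_rng(seed)
    phi1 = rng.uniform(0.0, 2 * np.pi, (n, n))
    phi2 = rng.uniform(0.0, 2 * np.pi, (n, n))
    e_in = np.ones((n, n), dtype=complex)
    for _ in range(iterations):
        # Forward pass: layer 1 -> free space -> layer 2 -> Fourier plane
        e2_in = asm_propagate(e_in * np.exp(1j * phi1), dz, wavelength, dx)
        r = np.fft.fft2(e2_in * np.exp(1j * phi2))
        # Project onto the target-amplitude constraint, return to layer 2
        e2_out = np.fft.ifft2(target_amp * np.exp(1j * np.angle(r)))
        phi2 = np.angle(e2_out) - np.angle(e2_in)  # phase-only update
        # Back-propagate the required field to layer 1 and update it
        e1_out = asm_propagate(e2_out * np.exp(-1j * phi2), -dz,
                               wavelength, dx)
        phi1 = np.angle(e1_out) - np.angle(e_in)
    return phi1, phi2

# Toy target: a bright square in the Fourier plane
n = 64
target = np.zeros((n, n))
target[28:36, 28:36] = 1.0
p1, p2 = design_two_layers(target)
print(p1.shape, p2.shape)  # → (64, 64) (64, 64)
```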

The design is implemented on widely available SLMs, which are capable of switching the phase patterns at relatively high frame rates, thus enabling operation with multiple wavelengths or codes both simultaneously and dynamically. While we show possible implementations for more than two layers (Fig. 1), an alternative implementation could include a single large spherical mirror, with the addition of properly designed space-variant quadratic phase factors and blazed gratings onto the SLM to steer the reflected beam to the desired locations. Furthermore, one could use multiple SLMs to simplify the geometry and increase the total space-bandwidth product.
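For the single-mirror alternative mentioned above, the phase added to each SLM sub-aperture would be the wrapped sum of a quadratic (lens) term and a linear blazed-grating term that steers the reflected beam. The following is a minimal sketch; the pixel pitch, focal length, and steering angle are hypothetical values, not parameters from this work:

```python
import numpy as np

def steering_lens_phase(n, dx, focal_length, tilt_x, wavelength):
    """Quadratic lens phase plus a blazed grating (linear phase ramp),
    wrapped into [0, 2*pi] as displayed on a phase-only SLM."""
    coords = (np.arange(n) - n / 2) * dx
    x, y = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    lens = -k * (x**2 + y**2) / (2 * focal_length)  # focusing term
    grating = k * np.sin(tilt_x) * x                # steering term
    return np.mod(lens + grating, 2 * np.pi)

phase = steering_lens_phase(n=512, dx=8e-6, focal_length=0.2,
                            tilt_x=np.deg2rad(2.0), wavelength=633e-9)
print(phase.shape)  # → (512, 512)
```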

The results show that light fields are modulated in multiple dimensions with a compact and efficient system. Independent information is successfully encrypted and read out, with high efficiency and low crosstalk. This approach will benefit from the ever-increasing computational power and advances in SLM technology.

Dynamic 3D diffractive optics could be beneficial for numerous applications that require independent multi-color operation. For example, for an imaging lens, chromatic aberrations could be corrected at different wavelengths by preshaping the wavefront with a frequency multiplexing scheme. In optical tweezers, where attractive or repulsive forces are generated by focused laser beams, 3D diffractive optics could implement multiple dynamic, independent focused beams at different wavelengths, enabling the manipulation of multiple microscopic objects. Furthermore, a 3D diffractive optical system could couple multiple modes into a multimode fiber, each matched in frequency and spatial shape, e.g., modes with angular momentum at various wavelengths [31,50]. Likewise, one could use 3D optics to analyze (demultiplex) the modes coming out of such a system. In a totally different application, 3D diffractive optics could be used in multi-color single-molecule localization microscopy with higher efficiency and capacity than what has recently been demonstrated [51,52]. Other interesting applications include beam steering, beam shaping, 3D display, and data storage.

Funding

National Science Foundation (NSF) (1548924, 1556473).

 

See Supplement 1 for supporting content.

REFERENCES

1. D. G. Grier, “A revolution in optical manipulation,” Nature 424, 810–816 (2003). [CrossRef]  

2. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun. 207, 169–175 (2002). [CrossRef]  

3. E. Schonbrun, R. Piestun, P. Jordan, J. Cooper, K. D. Wulff, J. Courtial, and M. Padgett, “3D interferometric optical tweezers using a single spatial light modulator,” Opt. Express 13, 3777–3786 (2005). [CrossRef]  

4. D. B. Conkey, R. P. Trivedi, S. R. P. Pavani, I. I. Smalyukh, and R. Piestun, “Three-dimensional parallel particle manipulation and tracking by integrating holographic optical tweezers and engineered point spread functions,” Opt. Express 19, 3835–3842 (2011). [CrossRef]  

5. J. S. Liu and M. R. Taghizadeh, “Iterative algorithm for the design of diffractive phase elements for laser beam shaping,” Opt. Lett. 27, 1463–1465 (2002). [CrossRef]  

6. A. J. Caley, M. J. Thomson, J. Liu, A. J. Waddie, and M. R. Taghizadeh, “Diffractive optical elements for high gain lasers with arbitrary output beam profiles,” Opt. Express 15, 10699–10704 (2007). [CrossRef]  

7. H. Yu, K. Lee, J. Park, and Y. Park, “Ultrahigh-definition dynamic 3D holographic display by active control of volume speckle fields,” Nat. Photonics 11, 186–192 (2017). [CrossRef]  

8. M. Makowski, M. Sypek, I. Ducin, A. Fajst, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Experimental evaluation of a full-color compact lensless holographic display,” Opt. Express 17, 20840–20846 (2009). [CrossRef]  

9. L. Sacconi, E. Froner, R. Antolini, M. R. Taghizadeh, A. Choudhury, and F. S. Pavone, “Multiphoton multifocal microscopy exploiting a diffractive optical element,” Opt. Lett. 28, 1918–1920 (2003). [CrossRef]  

10. S. Fürhapter, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Spiral phase contrast imaging in microscopy,” Opt. Express 13, 689–694 (2005). [CrossRef]  

11. S. R. P. Pavani and R. Piestun, “Three dimensional tracking of fluorescent microparticles using a photon-limited double-helix response system,” Opt. Express 16, 22048–22057 (2008). [CrossRef]  

12. B. Nie, I. Saytashev, A. Chong, H. Liu, S. N. Arkhipov, F. W. Wise, and M. Dantus, “Multimodal microscopy with sub-30 fs Yb fiber laser oscillator,” Biomed. Opt. Express 3, 1750–1756 (2012). [CrossRef]  

13. Y. Kuroiwa, N. Takeshima, Y. Narita, S. Tanaka, and K. Hirao, “Arbitrary micropatterning method in femtosecond laser microprocessing using diffractive optical elements,” Opt. Express 12, 1908–1915 (2004). [CrossRef]  

14. A. Jesacher and M. J. Booth, “Parallel direct laser writing in three dimensions with spatially dependent aberration correction,” Opt. Express 18, 21090–21099 (2010). [CrossRef]  

15. V. Nikolenko, B. O. Watson, R. Araya, A. Woodruff, D. S. Peterka, and R. Yuste, “SLM microscopy: scanless two-photon imaging and photostimulation with spatial light modulators,” Front. Neural Circuits 2, 5 (2008). [CrossRef]  

16. L. Golan, I. Reutsky, N. Farah, and S. Shoham, “Design and characteristics of holographic neural photo-stimulation systems,” J. Neural Eng. 6, 066004 (2009). [CrossRef]  

17. O. Hernandez, E. Papagiakoumou, D. Tanese, K. Fidelin, C. Wyart, and V. Emiliani, “Three-dimensional spatiotemporal focusing of holographic patterns,” Nat. Commun. 7, 11928 (2016). [CrossRef]  

18. S. Bovetti, C. Moretti, S. Zucca, M. D. Maschio, P. Bonifazi, and T. Fellin, “Simultaneous high-speed imaging and optogenetic inhibition in the intact mouse brain,” Sci. Rep. 7, 40041 (2017). [CrossRef]  

19. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739–1748 (1967). [CrossRef]  

20. R. Piestun and J. Shamir, “Control of wave-front propagation with diffractive elements,” Opt. Lett. 19, 771–773 (1994). [CrossRef]  

21. T. D. Gerke and R. Piestun, “Aperiodic volume optics,” Nat. Photonics 4, 188–193 (2010). [CrossRef]  

22. R. Piestun and D. A. B. Miller, “Electromagnetic degrees of freedom of an optical system,” J. Opt. Soc. Am. A 17, 892–902 (2000). [CrossRef]  

23. R. Piestun, B. Spektor, and J. Shamir, “Wave fields in three dimensions: analysis and synthesis,” J. Opt. Soc. Am. A 13, 1837–1848 (1996). [CrossRef]  

24. H. Bartelt, “Computer-generated holographic component with optimum light efficiency,” Appl. Opt. 23, 1499–1502 (1984). [CrossRef]  

25. W. Cai, T. J. Reber, and R. Piestun, “Computer-generated volume holograms fabricated by femtosecond laser micromachining,” Opt. Lett. 31, 1836–1838 (2006). [CrossRef]  

26. D. Brady and D. Psaltis, “Control of volume holograms,” J. Opt. Soc. Am. A 9, 1167–1182 (1992). [CrossRef]  

27. T. D. Gerke and R. Piestun, “Aperiodic computer-generated volume holograms improve the performance of amplitude volume gratings,” Opt. Express 15, 14954–14960 (2007). [CrossRef]  

28. R. V. Johnson and A. R. Tanguay, “Stratified volume holographic optical elements,” Opt. Lett. 13, 189–191 (1988). [CrossRef]  

29. G. P. Nordin, R. V. Johnson, and A. R. Tanguay, “Diffraction properties of stratified volume holographic optical elements,” J. Opt. Soc. Am. A 9, 2206–2217 (1992). [CrossRef]  

30. S. Borgsmüller, S. Noehte, C. Dietrich, T. Kresse, and R. Männer, “Computer-generated stratified diffractive optical elements,” Appl. Opt. 42, 5274–5283 (2003). [CrossRef]  

31. G. Labroille, B. Denolle, P. Jian, P. Genevaux, N. Treps, and J.-F. Morizur, “Efficient and mode selective spatial mode multiplexer based on multi-plane light conversion,” Opt. Express 22, 15599–15607 (2014). [CrossRef]  

32. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48, H48–H53 (2009). [CrossRef]  

33. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express 20, 25130–25136 (2012). [CrossRef]  

34. T. Shimobaba, A. Shiraki, N. Masuda, and T. Ito, “An electroholographic colour reconstruction by time division switching of reference lights,” J. Opt. A 9, 757–760 (2007). [CrossRef]  

35. G. Barbastathis and D. Psaltis, “Volume holographic multiplexing methods,” in Holographic Data Storage, Springer Series in Optical Sciences (Springer, 2000), pp. 21–62.

36. T. R. M. Sales and D. H. Raguin, “Multiwavelength operation with thin diffractive elements,” Appl. Opt. 38, 3012–3018 (1999). [CrossRef]  

37. U. Levy, E. Marom, and D. Mendlovic, “Simultaneous multicolor image formation with a single diffractive optical element,” Opt. Lett. 26, 1149–1151 (2001). [CrossRef]  

38. A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Colour hologram projection with an SLM by exploiting its full phase modulation range,” Opt. Express 22, 20530–20541 (2014). [CrossRef]  

39. A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Combined holographic optical trapping and optical image processing using a single diffractive pattern displayed on a spatial light modulator,” Opt. Lett. 39, 5337–5340 (2014). [CrossRef]  

40. W. Harm, C. Roider, S. Bernet, and M. Ritsch-Marte, “Tilt-effect of holograms and images displayed on a spatial light modulator,” Opt. Express 23, 30497–30511 (2015). [CrossRef]  

41. P. Lalanne and P. Chavel, “Metalenses at visible wavelengths: past, present, perspectives,” Laser Photon. Rev. 11, 1600295 (2017). [CrossRef]  

42. S. Colburn, A. Zhan, and A. Majumdar, “Metasurface optics for full-color computational imaging,” Sci. Adv. 4, eaar2114 (2018). [CrossRef]  

43. S. Wang, P. C. Wu, V.-C. Su, Y.-C. Lai, C. H. Chu, J.-W. Chen, S.-H. Lu, J. Chen, B. Xu, C.-H. Kuan, T. Li, S. Zhu, and D. P. Tsai, “Broadband achromatic optical metasurface devices,” Nat. Commun. 8, 187 (2017). [CrossRef]  

44. H. Yang, T. Yu, Q. Wang, and M. Lei, “Wave manipulation with magnetically tunable metasurfaces,” Sci. Rep. 7, 5441 (2017). [CrossRef]  

45. E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, and A. Faraon, “Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces,” Optica 4, 625–632 (2017). [CrossRef]  

46. T. Kämpfe, E.-B. Kley, A. Tünnermann, and P. Dannberg, “Design and fabrication of stacked, computer generated holograms for multicolor image generation,” Appl. Opt. 46, 5482–5488 (2007). [CrossRef]  

47. L. G. Gubin, B. T. Polyak, and E. V. Raik, “The method of projections for finding the common point of convex sets,” USSR Comput. Math. Math. Phys. 7, 1–24 (1967). [CrossRef]  

48. R. Aharoni and Y. Censor, “Block-iterative projection methods for parallel computation of solutions to convex feasibility problems,” Linear Algebra Appl. 120, 165–175 (1989). [CrossRef]  

49. R. Gerchberg and W. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

50. O. Tzang, A. M. Caravaca-Aguirre, K. Wagner, and R. Piestun, “Adaptive wavefront shaping for controlling nonlinear multimode interactions in optical fibres,” Nat. Photonics 12, 368–374 (2018). [CrossRef]  

51. A. Gahlmann, J. L. Ptacin, G. Grover, S. Quirin, A. R. S. von Diezmann, M. K. Lee, M. P. Backlund, L. Shapiro, R. Piestun, and W. E. Moerner, “Quantitative multicolor subdiffraction imaging of bacterial protein ultrastructures in three dimensions,” Nano Lett. 13, 987–993 (2013). [CrossRef]  

52. Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. E. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10, 590–594 (2016). [CrossRef]  

Supplementary Material (4)

Name — Description
Supplement 1 — Supplementary document.
Visualization 1 — 2D implementation of volume diffractive optics. Simulation results for frequency multiplexing with two-layer diffractive optics implemented on a single SLM. The letters in the word “boulder” are reconstructed with wavelengths 460 nm, 496 nm, 532 nm, 568 nm, 600 nm, 633 nm, and 694 nm, respectively.
Visualization 2 — Investigation of lateral misalignment tolerance: the reconstructed pattern under both 633 nm and 532 nm illumination as the second layer is misaligned from −20 to 20 μm.
Visualization 3 — Investigation of longitudinal misalignment tolerance: the reconstructed pattern under both 633 nm and 532 nm illumination as the second layer is misaligned from −50 to 50 μm.



Figures (7)

Fig. 1. 3D diffractive optics implementation via 2D optics. (a) Decomposition in stratified layers. (b) Equivalent cascaded system using imaging optics. (c) Folded 3D diffractive optics implementation on a single spatially multiplexed DOE (e.g., an SLM) with spherical mirrors.

Fig. 2. Flowchart of the projection-onto-constraint-sets design with the distribution-on-layers algorithm. h1, h2, …, hN are the layers to be designed and are initialized randomly. R1(kx, ky), R2(kx, ky), …, RK(kx, ky) are the user-defined multiplexed output fields, with corresponding input multiplexing fields E1(x, y, z1), E2(x, y, z1), …, EK(x, y, z1). The input and output fields are forward- and backward-propagated, respectively, to the fields immediately before and after the layer being designed. The modulation function is updated over several iterations for each multiplexing pair and for each layer of the 3D diffractive optics, followed by a parallel projection that ensures all the information is encrypted and evenly distributed among the N layers. The optimization ends when the target quality or the preset iteration count is reached.

Fig. 3. Simulation results for multiplexing 3D diffractive optics. (a) The letters “C” and “U” in the CU logo are the target images. (b) Phase patterns designed for angular multiplexing. (c) Reconstructed images with incident angles of 7° and 10°, showing angular multiplexing. (d) Phase patterns designed for frequency multiplexing. (e) Reconstructed images with 633 nm and 532 nm illumination, showing frequency multiplexing.

Fig. 4. Characterization of 3D diffractive optics in the case of frequency multiplexing. (a) Diffraction efficiency of the letter “C” under 633 nm illumination and “U” under 532 nm illumination as a function of the number of pixels and the number of layers. (b) Wavelength selectivity for the letters “C” and “U” as a function of the number of pixels and the number of layers. (c) Diffraction efficiency of the letters “C” and “U” as a function of layer separation. (d) Wavelength selectivity of the letters “C” and “U” at layer separations of 50 μm, 486 μm, and 1000 μm. The pixel number n denotes the side of an n×n square matrix.

Fig. 5. Experimental setup for 2D implementation and characterization of dynamic 3D diffractive optics. A supercontinuum source together with an acousto-optic tunable filter (AOTF) provides narrowband laser output in the visible spectrum. The designed layers are implemented on a single high-resolution liquid-crystal SLM, which is spatially divided into two sections. The first layer is imaged at a small distance in front of the second layer by an imaging system formed by a concave spherical mirror with a focal length of 200 mm. A color CMOS sensor placed on the reconstruction plane after a Fourier lens records the image.

Fig. 6. Experimental results for angular multiplexing. (a) Reconstructed image with incident angle of 7°. (b) Reconstructed image with incident angle of 10°. (c) Speckle field with one layer blocked, indicating that the 3D encryption is successful.

Fig. 7. Experimental results for frequency multiplexing with two-layer diffractive optics implemented on a single SLM. The letters in the word “boulder” are reconstructed with wavelengths 460 nm, 496 nm, 532 nm, 568 nm, 600 nm, 633 nm, and 694 nm, respectively. See Visualization 1.

Equations (12)


$$h_k(x,y)=|h_k(x,y)|\exp[j\phi_k(x,y)],$$
$$E(x,y,z_k^+)=h_k(x,y)\,E(x,y,z_k^-),$$
$$E(x,y,z_{k+1}^-)=\mathcal{F}^{-1}\left\{e^{j\sqrt{k_0^2-k_x^2-k_y^2}\,\Delta z}\cdot\mathcal{F}\left[E(x,y,z_k^+)\cdot e^{-j\frac{2\pi}{\lambda}\frac{x^2+y^2}{2f}}\right]\right\},$$
$$R(k_x,k_y)=\mathcal{F}\{E(x,y,z_N^+)\}.$$
$$\tilde{h}_r(x,y)=\frac{E(x,y,z_r^+)}{E(x,y,z_r^-)}.$$
$$h_r(x,y)=\exp\{\tilde{h}_r(x,y)\}.$$
$$E_p(x,y,z_1)=\begin{cases}A\exp\left\{i\frac{2\pi}{\lambda}x\sin\phi_p\right\}, & \text{angular multiplexing}\\ A\exp\left\{i\frac{2\pi}{\lambda_p}\right\}, & \text{frequency multiplexing}\\ A\exp\{i\phi_p(x,y)\}, & \text{phase multiplexing}\end{cases}\quad p=1,2,\ldots,K,$$
$$h_r(x,y)=\exp\left\{c_r\frac{1}{K}\sum_{p=1}^{K}\tilde{h}_{r,p}(x,y)\right\},$$
$$\mathrm{DE}=\frac{\iint_U |R(k_x,k_y)|^2\,v_b(k_x,k_y)\,dk_x\,dk_y}{\iint |E(x,y,z_1)|^2\,dx\,dy},$$
$$\mathrm{Err}=\frac{\iint_U \left||R(k_x,k_y)|^2-c_i\,v_b(k_x,k_y)\right|^2 dk_x\,dk_y}{\iint_U |R(k_x,k_y)|^2\,v_b(k_x,k_y)\,dk_x\,dk_y},$$
$$\phi(V,\lambda)=\frac{2\pi d}{\lambda}\,n(V,\lambda),$$
$$\phi_k(x,y)=\frac{1}{N}\sum_{i=1}^{N}\beta_{\lambda_i}\,\phi_{\lambda_i}(x,y),$$
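As a quick numerical sanity check of the angular-spectrum propagation used in the layer-to-layer equation: for non-evanescent components the transfer function has unit modulus, so free-space propagation conserves energy. The self-contained sketch below verifies this with illustrative sampling parameters (not values from the paper):

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation: multiply the spectrum by
    exp(j*sqrt(k0^2 - kx^2 - ky^2)*dz) and transform back."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz2 = (2 * np.pi / wavelength)**2 \
        - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

rng = np.random.default_rng(1)
field = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
out = propagate(field, dz=1e-3, wavelength=633e-9, dx=8e-6)
# Energy is conserved for non-evanescent fields (Parseval's theorem)
print(np.isclose(np.sum(np.abs(out)**2), np.sum(np.abs(field)**2)))  # → True
```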