Optica Publishing Group

Incoherent diffractive optical elements for extendable field-of-view imaging

Open Access

Abstract

We present a diffractive optics design for incoherent imaging with an extendable field-of-view. In our design method, multiple layers of diffractive optical elements (DOEs) are synthesized so that images on the input plane illuminated with spatially incoherent light are reproduced upright on the output plane. In addition, our method removes the need for an approximation of shift invariance, which has been assumed in conventional optical designs for incoherent imaging systems. Once the DOE cascade is calculated, the field-of-view can be extended by using an array of such DOEs without further calculation. We derive the optical condition to calculate the DOEs and numerically demonstrate the proposed method with the condition.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Corrections

25 October 2023: A typographical correction was made to figure captions 2, 6, 10, and 11.

1. Introduction

A wide field-of-view (FOV) is important in various imaging applications, such as whole-brain microscopy, all-sky observation, and surveillance cameras [1–3]. The space–bandwidth product, which is proportional to the FOV and the reciprocal of spatial resolution, is one important factor used to assess imaging systems [4]. To achieve high space–bandwidth products, the imaging optics need to be large [5]. Extending the FOV without compromising the spatial resolution, while maintaining compact imaging optics, is a longstanding issue in optical design. Several methods for overcoming the issue have been demonstrated, some of which incorporate computational post-processing [6–13].

One promising approach to reduce the volume of the optical system is to employ diffractive optical elements (DOEs) [14]. DOEs use the diffraction of light waves by means of pixel-wise modulation and have higher degrees of freedom compared with lenses, which use the refraction of light rays by means of surface curvature. Recently, methods for wide FOV imaging with DOEs, including metalenses, have been proposed based on the framework of deep learning [15–18].

Most cases of DOE design assume coherent light, such as computer-generated holography [19–21]. On the other hand, many imaging applications employ incoherent light. DOE design for incoherent light requires extensive computational cost because of the large number of propagation modes in incoherent light. Therefore, to reduce the computational costs, the above-mentioned previous methods for wide FOV imaging with DOEs approximate the optical process through DOEs by a global or local convolution, where shift invariance is assumed [15–18].

In this study, we design DOEs for imaging applications on the basis of a rigorous propagation model for spatially incoherent monochromatic (temporally coherent) light without assuming shift invariance. Furthermore, unlike most lens designs, we introduce an upright imaging condition to our DOE design for realizing a wide FOV by suppressing off-axis aberrations. We show that the FOV of our method can be extended by using an array of the designed DOEs, assuming an imaging sensor of unlimited size. In contrast to computational imaging techniques with incoherent light, such as coded aperture imaging and incoherent digital holography [22–27], our incoherent imaging system does not require any computational reconstruction. As a result, it is suitable for real-time applications and remains unaffected by the noise sensitivity associated with reconstruction processes.

2. Method

To realize extendable FOV imaging, we design multilayered DOEs for incoherent light, as shown in Fig. 1. The proposed method does not assume shift invariance of the optical process through the DOEs. Furthermore, our DOE cascade reproduces upright images on the output plane. For simplicity, we here suppose equal magnification for imaging. Extensions to non-equal magnifications will be discussed in the conclusion section.

Fig. 1. DOE cascade for extendable FOV imaging with spatially incoherent light.

2.1 Upright imaging condition

A comparison between the inverted imaging condition, commonly used in conventional imaging systems, and the upright imaging condition is shown in Fig. 2. In the inverted imaging condition, there is a single optical axis, and the optical processes from point sources on the input plane differ depending on their distances from that axis, as shown in Fig. 2(a). Therefore, the inverted imaging condition suffers from off-axis aberrations such as coma, astigmatism, field curvature, and distortion. These aberrations tend to become large as the image height increases [28]. This issue is especially serious when the FOV is large.

Fig. 2. Optical processes of (a) inverted imaging condition and (b) upright imaging condition.

On the other hand, the upright imaging condition realizes imaging systems with multiple optical axes or without explicit optical axes, as shown in Fig. 2(b). In this case, the optical processes of light from all point sources on the input plane are similar or identical, and the impact of off-axis aberrations is low compared with that in the inverted imaging condition. Therefore, the upright imaging condition is essential for implementing a wide FOV in compact imaging optics.

2.2 Imaging model

In this study, we design DOEs for incoherent imaging. An issue with this design is the computational cost of numerically propagating incoherent light through the DOEs. We recently addressed this issue with a numerical method called compressive propagation (CP), in which incoherent light is described as a set of random wavefronts and the design is optimized by stochastic gradient descent [29,30].

We show a CP-based forward model of the imaging process through the DOE cascade in Fig. 1, where monochromaticity of the spatially incoherent light is assumed for simplicity. The forward model with $K$ layers of phase-modulation type DOEs and $M$ different random wavefronts is written as the following steps:

  • 1. The $m$-th random wavefront $w_m$ illuminates the intensity object $f$ on the input plane. Light passing through the object propagates onto the first DOE by the Fresnel propagation kernel $\mathcal {P}_1[\bullet ]$ [4]. This process is written as
    $$v_{1,m} = \mathcal{P}_1\left[w_{m}\sqrt{f}\right],$$
    where $v_{1,m}$ denotes the $m$-th wavefront just before the first DOE.
  • 2. The wavefront $v_{1,m}$ is modulated by the first DOE with a phase distribution $\phi _1$. The light passing through the first DOE propagates onto the second DOE as
    $$v_{2,m} = \mathcal{P}_{2}\left[d_1 v_{1,m}\right],$$
    where
    $$d_1 = \exp(j\phi_1).$$

    Here $j$ is the imaginary unit.

  • 3. Following Step 2, the $m$-th wavefront $v_{k,m}$ just before the $k$-th DOE is written as
    $$v_{k,m} = \mathcal{P}_{k}\left[d_{k-1} v_{k-1,m}\right],$$
    where
    $$d_k = \exp(j\phi_k).$$
  • 4. Step 3 is iterated until the light arrives at the sensor on the output plane. The image $g$ captured by the sensor is calculated as the ensemble average of intensities of the $M$ propagating wavefronts on the output plane as
    $$g = \frac{1}{M}\sum_{m=1}^M \left|v_{K+1,m}\right|^2.$$
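The four steps above can be sketched numerically. The following is a minimal NumPy sketch under our own assumptions: the Fresnel kernels $\mathcal{P}_k$ are implemented as FFT-based transfer functions (consistent with Subsection 2.4), and all names (`fresnel_tf`, `propagate`, `forward`) are ours, not the authors':

```python
import numpy as np

def fresnel_tf(W, delta, wl, z):
    """FFT-based Fresnel transfer function on a W x W grid with pixel pitch delta."""
    fx = np.fft.fftfreq(W, d=delta)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    return np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))

def propagate(u, H):
    """Apply one Fresnel kernel P_k as a circular convolution."""
    return np.fft.ifft2(np.fft.fft2(u) * H)

def forward(f, phis, Hs, M, rng):
    """Eqs. (1)-(6): sensor image g as the average intensity of M
    random-wavefront propagations through K phase-only DOEs.
    Hs holds K+1 kernels (object-to-first-DOE, between DOEs, last-DOE-to-sensor)."""
    amp = np.sqrt(f)                                   # sqrt(f) in Eq. (1)
    g = np.zeros_like(f, dtype=float)
    for _ in range(M):
        w = np.exp(2j * np.pi * rng.random(f.shape))   # random wavefront w_m
        v = propagate(w * amp, Hs[0])                  # Eq. (1)
        for phi, H in zip(phis, Hs[1:]):               # Eqs. (2)-(5)
            v = propagate(np.exp(1j * phi) * v, H)     # d_k = exp(j phi_k)
        g += np.abs(v)**2
    return g / M                                       # Eq. (6)
```

Because the transfer functions are unimodular and the FFT pair conserves energy (Parseval), the total intensity of $g$ matches that of $f$, which is a useful sanity check on an implementation.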

2.3 Inverse problem

We define the cost function $\mathcal {L}(\phi _1,\phi _2,\dots,\phi _K)$ to design $K$ layers of DOEs with $N$ training images as follows:

$$\mathcal{L}(\phi_1,\phi_2,\dots,\phi_K)=\sum_{n=1}^N \|g_n - f_n\|_2 ^2,$$
where $f_n$ is the $n$-th intensity image on the input plane for training, and $g_n$ is the intensity image on the output plane captured through the imaging process illustrated in Fig. 1 and Subsection 2.2.

Our aim is to find DOEs that minimize the cost function in Eq. (7) as

$$\underset{\phi_1,\dots,\phi_K}{\arg \min}\mathcal{L}(\phi_1,\dots,\phi_K).$$

Therefore, the DOEs are optimized so that images on the input plane are reproduced upright on the output plane to realize the imaging condition in Fig. 2(b) for extendable FOV imaging.

We solve the optimization problem in Eq. (8) based on gradient descent. The partial derivative of the cost function $\mathcal {L}$ in Eq. (7) with respect to $\phi _k$ is calculated based on the chain rule as follows:

$$\frac{\partial \mathcal{L}}{\partial \phi_k}=\frac{\partial d_k}{\partial \phi_k}\cdot \frac{\partial \mathcal{L}}{\partial d_k}.$$

Substituting $\partial d_k/\partial \phi _k = jd_k$ from Eq. (5) into Eq. (9) and taking the real part of the gradient with respect to the real-valued phase $\phi _k$, we obtain

$$\begin{aligned} \frac{\partial \mathcal{L}}{\partial \phi_k}=\mathrm{real}\left[{-}jd_k^*\frac{\partial \mathcal{L}}{\partial d_k}\right], \end{aligned}$$
$$\begin{aligned} \frac{\partial \mathcal{L}}{\partial d_k}=\frac{4}{M} \sum _{n=1}^N \sum_{m=1}^M v_{k,m,n}^* \mathcal{P}_{k+1}^{{-}1}\biggl[d_{k+1}^* \mathcal{P}_{k+2}^{{-}1}\Bigl[\dots d_{K}^*\mathcal{P}_{K+1}^{{-}1}\bigl[v_{K+1,m,n}(g_n-f_n)\bigr]\dots\Bigr]\biggr], \end{aligned}$$
where $\mathrm {real}[\bullet ]$ denotes the real part of a complex amplitude field, the superscript $*$ denotes the complex conjugate, and $\mathcal {P}_k^{-1}$ is the inverse Fresnel propagation kernel from the $k$-th DOE to the $(k-1)$-th DOE. We feed back the partial derivative shown in Eq. (9) to each DOE at the $i$-th iteration as
$$\phi_k^{(i)}=\phi_k^{(i-1)}-\mathrm{Adam}\left[\frac{\partial \mathcal{L}}{\partial \phi_k^{(i-1)}}\right],$$
where $\mathrm {Adam}[\bullet ]$ is an operator of the Adam optimizer to automatically tune the updating step [31]. Based on stochastic gradient descent, we randomly change the $M$ wavefronts $\{w_1, w_2,\dots, w_M\}$ upstream of the input plane and the $N$ intensity objects $\{f_1, f_2, \dots, f_N\}$ on the input plane in each iteration.
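The update in Eq. (12) can be sketched as follows. This is a minimal Adam implementation with the default hyperparameters of the original work [31]; the class and variable names are ours:

```python
import numpy as np

class Adam:
    """Adam update (Eq. (12)) with the default parameters of Kingma & Ba [31]."""
    def __init__(self, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = 0.0   # first/second moment estimates (broadcast over arrays)
        self.t = 0              # iteration counter for bias correction

    def step(self, phi, grad):
        """Return the updated phase phi given the gradient dL/dphi."""
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        mhat = self.m / (1 - self.b1**self.t)   # bias-corrected first moment
        vhat = self.v / (1 - self.b2**self.t)   # bias-corrected second moment
        return phi - self.lr * mhat / (np.sqrt(vhat) + self.eps)
```

In the actual design, one `Adam` instance per DOE layer would be driven by the gradient of Eqs. (9)–(11), with the $M$ wavefronts and $N$ objects redrawn at every iteration.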

2.4 DOE cascade array

Based on the method described in Subsections 2.1–2.3, the FOV of our imaging system is the same as the lateral size of the DOE cascade and, in principle, is arbitrarily extendable by laterally enlarging the individual DOEs. However, the computational cost of the design increases with the size of the DOEs. We address this issue by arraying the elemental DOE cascade, namely the DOEs calculated by the process in Subsection 2.3, as shown in Fig. 3, where an image sensor of unlimited size is assumed. Under the upright imaging condition shown in Fig. 2(b), each DOE cascade in the array has the same optical axes as those in the elemental one. Furthermore, the forward and backward Fresnel propagation kernels $\mathcal {P}_k$ and $\mathcal {P}_k^{-1}$ are implemented with the fast Fourier transform. The DOE cascade designed with these kernels has a circulant structure, and it is therefore feasible to array the DOEs. Thus, in this case, the FOV can be expanded by increasing the number of arrays without further calculation. Note that, under the inverted imaging condition in Fig. 2(a), extension of the FOV is not possible by arraying the DOE cascade because of the crosstalk between the elemental DOE cascades in the array.
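The circulant structure induced by the FFT-based kernels can be checked directly: a transfer-function propagation is a circular convolution, so cyclically shifting the input cyclically shifts the output. A minimal check (the grid parameters are illustrative):

```python
import numpy as np

W, delta, wl, z = 64, 5e-6, 500e-9, 4e-2
fx = np.fft.fftfreq(W, d=delta)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
H = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))  # Fresnel transfer function

prop = lambda u: np.fft.ifft2(np.fft.fft2(u) * H)

rng = np.random.default_rng(1)
u = rng.standard_normal((W, W)) + 1j * rng.standard_normal((W, W))
s = 17  # arbitrary cyclic shift in pixels

# Circulant property: propagating a rolled input equals rolling the output.
lhs = prop(np.roll(u, s, axis=0))
rhs = np.roll(prop(u), s, axis=0)
print(np.allclose(lhs, rhs))  # True: circular convolution commutes with np.roll
```

This wrap-around behavior is exactly what makes tiling the optimized phase masks (e.g., `np.tile(phi, (2, 2))`) optically consistent with the elemental design.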

Fig. 3. Array of DOE cascade.

It is important to derive the smallest lateral size of the elemental DOE cascade for minimizing the computational cost. In the following derivation, we omit one of the lateral dimensions for simplicity, as shown in Fig. 4. The diffraction angle $\theta$ from a single pixel with a size of $\delta$, which is the pixel pitch on the input plane, the DOEs, and the output plane, is written as

$$\theta \approx \frac{\lambda}{2\delta},$$
where $\lambda$ is the light wavelength [4]. We assume that the optical process through the optimized DOE cascade is axially symmetric as shown in Fig. 4. Based on the paraxial approximation, the physical size $a$ of the region covered by the diffraction light on the center plane of the DOE cascade from the single pixel on the input plane is written as
$$a=2\times \frac{z}{2} \tan \theta \approx z\theta,$$
where $z$ is the distance from the input plane to the output plane. As mentioned above, the propagation kernels $\mathcal {P}_k$ and $\mathcal {P}_k^{-1}$ used in the DOE design employ the fast Fourier transform. In the optimization process, to prevent crosstalk between the direct diffraction light and the circulated diffraction light on the center plane, as shown in Fig. 4, the diffraction size $a$ on the center plane must be smaller than the lateral size of the DOE as shown in the following equation, obtained by substituting Eq. (13) in Eq. (14):
$$a\approx \frac{z\lambda}{2\delta} \leq \delta W,$$
where $W$ is the lateral one-dimensional pixel count of the input plane, the DOEs, and the output plane. Then, the condition with respect to $W$ is derived as
$$W \geq \frac{z\lambda}{2\delta^2},$$
and the minimal value $W_\mathrm {min}$ of $W$ is written as
$$W_{\mathrm{min}}=\frac{z\lambda}{2\delta^2}.$$

In summary, the elemental DOE cascade with the lateral pixel count $W_\mathrm {min}$ is designed to minimize the calculation cost, and is arrayed to extend the FOV without further calculation.
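As a concrete check, plugging the parameter values used later in the numerical experiments (Section 3: $\lambda$ = 500 nm, $\delta$ = 5 µm, $z$ = 4 cm) into Eq. (17):

```python
wl = 500e-9    # wavelength lambda [m]
delta = 5e-6   # pixel pitch delta [m]
z = 4e-2       # input-to-output distance z [m]

W_min = z * wl / (2 * delta**2)  # Eq. (17)
print(round(W_min))  # -> 400
```

This matches the elemental pixel count of 400 used in Subsection 3.3.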

Fig. 4. Circulant trajectory of diffracted light in the DOE design.

3. Numerical experiment

We numerically demonstrated our DOE design and analyzed its imaging performance from several aspects. The simulation conditions were set as follows, unless otherwise specified in the subsections below. We assumed spatially incoherent monochromatic light with a wavelength $\lambda$ of 500 nm to illuminate an object. The pixel count $W^2$ and the pixel pitch $\delta$ in the object, the DOEs, and the image sensor were $400^2$ and 5 µm, respectively. The distance between the input and output planes $z$ was set to 4 cm. The number of random wavefronts $M$ composing the spatially incoherent light based on CP was set to 20 for the optimization process and 1000 for the final reproduction process. In the optimization process, the number of images in the training dataset, which consisted of natural images obtained by the authors, was 1000, and the size of a mini-batch $N$ was 5. The number of iterations for the gradient descent process was set to 10,000. In the Adam optimizer, the learning parameter was set empirically at each condition, and the other tuning parameters were the same as those in the original work [31]. After the optimization, the imaging performance of the DOE cascade was evaluated with the average of the structural similarities (SSIMs) of 100 natural images in the test dataset, which were not included in the training dataset [32].
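For reference, SSIM compares local luminance, contrast, and structure between two images. The evaluation here uses the standard windowed SSIM of [32]; the following single-window (global-statistics) simplification, with the constants $k_1$, $k_2$ of the original work, only illustrates the form of the metric:

```python
import numpy as np

def global_ssim(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (a simplification of [32]).
    L is the dynamic range of the pixel values."""
    c1, c2 = (k1 * L)**2, (k2 * L)**2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance between x and y
    return ((2*mx*my + c1) * (2*cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images give an SSIM of exactly 1; degradation (noise, blur) lowers it, so averaging SSIM over a held-out test set is a natural figure of merit for the reproduced images $g$.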

3.1 Number of layers

We first demonstrated the imaging performance for different numbers of layers $K$ in the DOE cascade, as shown in Fig. 5. In the demonstration, we changed the number of layers from one to six. The distance between the layers was set to $z/(K+1)$ to equalize their intervals. As shown in the results, the DOE cascade composed of three layers achieved higher SSIMs than the other conditions. The three-layer cascade likely strikes a balance between the advantages and disadvantages of the degrees of freedom, which correspond to the product of the pixel count $W^2$ and the number of layers $K$ in the DOE cascade. If the product is too small, it might be insufficient to realize the optical process of Fig. 2(b). However, if the product is too large, the designed DOEs might fall into a local minimum. Therefore, the trend shown in Fig. 5 may be independent of the distance $z$ when the pixel count $W^2$ is fixed. In the following simulations, we chose three layers for the DOE design.

Fig. 5. Imaging performance for each number of layers.

3.2 Different types of training images

Next, we compared three types of images for optimizing the DOE cascade. The first type was the natural images, as mentioned above and shown in Fig. 6(b). The second type was delta patterns, as shown in Fig. 6(c). In this case, we divided the input plane into $A^2$ square patches and assumed shift invariance within each patch, as in conventional ray-tracing-based optical designs. A point source was located at the center of one of the patches in each training image. Therefore, the number of training images was $A^2$ for the delta patterns. In this simulation, we set $A$ to 10, 20, 30, 40, 50, and 100. The third type of training images consisted of random patterns, as shown in Fig. 6(d). In this case, we used uniform random distributions for the training images. The number of training random patterns was set to 1000, which was the same as that in the case of the natural images.

Fig. 6. Imaging performance with different training datasets at each iteration. (a) Plots and examples of (b) natural images, (c) delta patterns, and (d) random patterns.

In contrast to the cases of the natural images and the random patterns, the number of coherent propagations $M$ for the delta patterns is one because the light propagating from a point source is spatially coherent and the numerical propagation of spatially incoherent light based on CP is not necessary. We set the mini-batch size $N$ for the delta patterns to 100. In the case of the random patterns, we set the number of random wavefronts $M$ to 20 and the mini-batch size $N$ to 5. These values were the same as in the case of the natural images. The memory size for each iteration was proportional to $M\times N$, which was 100 for all three training datasets under these conditions.

The number of epochs for the delta patterns was set to 1000. Then, in the case of the delta patterns, the total number of iterations during the training process was $A^2\times 10~(=A^2/100\times 1000)$. The optimized DOE cascade with the delta patterns at each condition was evaluated with the average SSIM of 100 natural images in the test dataset, as shown in Fig. 6(a). The imaging performances of the DOE cascades trained with the natural images and the random patterns and evaluated with the natural images in the test dataset are also plotted at the same number of iterations as that for the delta patterns in Fig. 6(a). As shown in the results, the average SSIM with the natural images quickly approached 1.0. Therefore, the training dataset of the natural images realized DOE optimization with a lower computational cost than the others. This result also shows the advantage of the DOE design without shift invariance, which is assumed in training with the delta patterns. Therefore, we chose the natural images for the training dataset in the following simulations.

3.3 Pixel count for DOE optimization

We verified the impact of the lateral pixel count in the optimization of the DOE cascade. Reducing this pixel count is important for lowering the computational cost, as discussed in Subsection 2.4. Under the simulation conditions, the minimal one-dimensional pixel count $W_\mathrm {min}$ along one of the lateral axes in Eq. (17) was calculated to be 400. The relationship between the one-dimensional pixel count $W$ and the average SSIM for the elemental DOE cascade is shown in Fig. 7, where $W$ was varied from 100 to 800 with an interval of 50. We also plotted the average SSIMs of a two-by-two array of each DOE cascade. This result shows that the difference between the average SSIMs of the elemental DOE cascade and its two-by-two array was small when the one-dimensional pixel count $W$ was equal to or larger than 400 (=$W_\mathrm {min}$). Therefore, we verified the condition of $W_\mathrm {min}$ in Eq. (17). Furthermore, the elemental DOE cascade and its two-by-two array at $W_\mathrm {min}$ achieved higher average SSIMs than those at other one-dimensional pixel counts. Thus, in the following simulations, we chose a one-dimensional pixel count $W$ of 400 for designing the elemental DOE cascade.

Fig. 7. Imaging performance of elemental DOE cascade and two-by-two DOE cascade array for each one-dimensional pixel count.

3.4 DOE cascade array

As discussed in Subsection 2.4, the FOV of the DOE cascade can be extended by arraying the elemental DOE cascade with the minimal one-dimensional pixel count $W_\mathrm {min}$ in Eq. (17). We confirmed the average SSIMs for one-by-one to four-by-four arrays of the elemental DOE cascade with the conditions chosen in Subsections 3.1–3.3, namely $K=3$, a training dataset of natural images, and $W=400$. The average SSIMs are shown in Fig. 8. The values were close to one and did not depend on the number of arrays. This result indicates that the FOV of the DOE cascade array can be extended without limit by increasing the number of arrays. The advantage of the imaging performance under the condition chosen here over those under the other conditions analyzed in the above subsections may remain when these DOE cascades are arrayed.

Fig. 8. Imaging performance of DOE cascade array for each number of arrays.

3.5 Visualization

In Subsections 3.1–3.3, the proposed optical system achieved the best imaging performance when the number of DOE layers $K$ was 3, the training dataset was composed of natural images, and the one-dimensional pixel count $W$ was 400. Phase distributions of the elemental DOE cascade under these conditions are shown in Fig. 9. A lens-array-like structure appears on the DOEs. The array structure is circulant on each layer, and therefore, it is feasible to construct an array of these DOEs, as mentioned in Subsection 2.4. In contrast to the Gabor superlens [33–36], which consists of microlens arrays with slightly differing pitches, our DOE design does not employ the paraxial approximation. Eliminating this approximation allows our design to achieve a wide FOV and a large numerical aperture.

Fig. 9. Phase distributions of the elemental DOE cascade.

To verify image reproduction with FOVs of different sizes, one input image was chosen from the test dataset, and square regions on the image were cropped with one-dimensional pixel counts $W$ of 400, 800, and 1600, as shown in Fig. 10(a). Reproduction results of each FOV through the DOE cascades with one-by-one, two-by-two, and four-by-four arrays are shown in Fig. 10(b). The extendable FOV achieved by our DOE design was visually verified.

Fig. 10. Imaging results obtained with DOE cascade array. (a) FOVs on the input image with one-dimensional pixel counts $W$ of 400, 800, and 1600. (b) Images of the FOVs reproduced through the DOE cascades with one-by-one, two-by-two, and four-by-four arrays.

Intensity profiles of the optical field from different point sources on the input plane to the output plane through the elemental DOE cascade are shown in Figs. 11(a)–11(c), where both the lateral and axial grid pitches were set to 5 µm (=$\delta$) and the dynamic range of the light intensity was compressed with the sigmoid function for visualization purposes. The light efficiencies from the point sources in Figs. 11(a)–11(c) were 98%, 96%, and 97%, respectively. The circulant propagation processes shown in Figs. 11(b) and 11(c) become physically relevant when the elemental DOE cascade is arrayed to extend the FOV. These profiles visualize the optical process of the upright imaging condition in Fig. 2(b) and the axial symmetry assumed in Subsection 2.4. Although it is difficult to characterize the imaging performance of our system with conventional metrics because of its shift-variant property, these profiles also suggest that the imaging principle of the DOE cascade can be understood on the basis of geometrical optics. This may help achieve an intuitive optical design and a refractive-optics-based implementation, such as lenses, for extendable FOV imaging.
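The sigmoid dynamic-range compression used for these visualizations can be sketched as follows; the normalization and the `gain` parameter are our assumptions, not values reported by the authors:

```python
import numpy as np

def sigmoid_compress(I, gain=10.0):
    """Compress the dynamic range of an intensity profile for display.
    The gain controls how strongly weak diffraction features are boosted
    (this value is an illustrative assumption)."""
    In = (I - I.min()) / (I.max() - I.min() + 1e-12)  # normalize to [0, 1]
    return 1.0 / (1.0 + np.exp(-gain * (In - 0.5)))   # sigmoid remapping
```

The mapping is monotonic, so it changes only the displayed contrast, not the ordering of intensity values.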

Fig. 11. Intensity profiles of the optical field in the elemental DOE cascade from three different point sources (a)-(c) on the input plane to the output plane. The dynamic range of the light intensity was compressed with the sigmoid function for visualization purposes.

4. Conclusion

In this study, we proposed a method for designing a DOE cascade for extendable FOV imaging with spatially incoherent light. We showed the advantage of the upright imaging condition in suppressing off-axis aberrations and presented its optical forward model through DOEs without an assumption of shift invariance. The DOE cascade was synthesized by inverting the forward model with stochastic gradient descent based on CP. We also derived an optical condition to minimize the size of the DOEs, reducing the calculation cost of the DOE design. Once the DOEs are calculated, the FOV of our imaging system can be extended without limit by increasing the number of arrays of the elemental DOE cascade, without further calculation, when an image sensor that is sufficiently large or of unlimited size is assumed. We numerically demonstrated the imaging performance of our DOE design by investigating the number of layers, the types of training images, and the lateral pixel count of the DOEs. Through the demonstrations, we verified extendable FOV imaging with spatially incoherent light using the designed DOE cascade.

One future issue with our DOE design, which assumes monochromatic light, is an extension to multicolor or multispectral imaging. Some designs of DOEs or holograms for multiwavelength light will be applicable to this issue [30,37]. Metasurfaces are also beneficial for overcoming this issue because these novel optical components enable us to design phase modulations that depend on each wavelength [38–40]. Another challenge of our DOE design is realizing non-equal magnifications to increase the range of applications, such as microscopes and telescopes. In addition to a straightforward extension of the current imaging model, curved DOEs and sensors may provide non-equal magnifications in our imaging systems [41–43]. It is also interesting to design the DOEs by assuming post-processing, including deep learning, to improve imaging performance, for example, with super-resolution and extended depth-of-field [44–46]. Our study will contribute to optical designs in various fields, including biomedicine, astronomy, and security, where the FOVs of imaging systems are critical.

Funding

Japan Society for the Promotion of Science (JP20H02657, JP20H05890, JP20K05361, JP23H01874, JP23H05444); Asahi Glass Foundation.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data may be obtained from the authors upon reasonable request.

References

1. J. Fan, J. Suo, J. Wu, H. Xie, Y. Shen, F. Chen, G. Wang, L. Cao, G. Jin, Q. He, T. Li, G. Luan, L. K. Z. Zheng, and Q. Dai, “Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution,” Nat. Photonics 13(11), 809–816 (2019). [CrossRef]  

2. O. Doré, J. Bock, M. Ashby, et al., “Cosmology with the SPHEREX all-sky spectral survey,” arXiv, arXiv:1412.4872 (2014). [CrossRef]  

3. X. Yuan, M. Ji, J. Wu, et al., “A modular hierarchical array camera,” Light: Sci. Appl. 10, 37 (2021). [CrossRef]  

4. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

5. A. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28(23), 4996–4998 (1989). [CrossRef]  

6. O. S. Cossairt, D. Miau, and S. K. Nayar, “Scaling law for computational imaging using spherical optics,” J. Opt. Soc. Am. A 28(12), 2540–2553 (2011). [CrossRef]  

7. P. Milojkovic and M. P. Christensen, “Review of multiscale optical design,” Appl. Opt. 54(2), 171–183 (2015). [CrossRef]  

8. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). [CrossRef]  

9. Z. Li, Q. Hou, Z. Wang, F. Tan, J. Liu, and W. Zhang, “End-to-end learned single lens design using fast differentiable ray tracing,” Opt. Lett. 46(21), 5453–5456 (2021). [CrossRef]  

10. Q. Sun, C. Wang, Q. Fu, X. Dun, and W. Heidrich, “End-to-end complex lens design with differentiate ray tracing,” ACM Trans. Graph. 40(4), 1–13 (2021). [CrossRef]  

11. A. Halé, P. Trouvé-Peloux, and J. Volatier, “End-to-end sensor and neural network design using differential ray tracing,” Opt. Express 29(21), 34748–34761 (2021). [CrossRef]  

12. S. Cui, B. Wang, and Q. Zheng, “Neural invertible variable-degree optical aberrations correction,” Opt. Express 31(9), 13585 (2023). [CrossRef]  

13. T. Yang, H. Xu, D. Cheng, and Y. Wang, “Design of compact off-axis freeform imaging systems based on optical-digital joint optimization,” Opt. Express 31(12), 19491–19509 (2023). [CrossRef]  

14. D. C. O’Shea, T. J. Suleski, A. D. Kathman, and D. W. Prather, Diffractive Optics: Design, Fabrication, and Test (SPIE, 2003).

15. X. Dun, H. Ikoma, G. Wetzstein, Z. Wang, X. Cheng, and Y. Peng, “Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging,” Optica 7(8), 913–922 (2020). [CrossRef]  

16. E. Tseng, S. Colburn, J. Whitehead, L. Huang, S. Baek, A. Majumdar, and F. Heide, “Neural nano-optics for high-quality thin lens imaging,” Nat. Commun. 12(1), 6493 (2021). [CrossRef]  

17. M. Pan, Y. Fu, M. Zheng, H. Chen, Y. Zang, H. Duan, Q. Li, M. Qiu, and Y. Hu, “Dielectric metalens for miniaturized imaging systems: Progress and challenges,” Light: Sci. Appl. 11(1), 195 (2022). [CrossRef]  

18. H. Hu, T. Jiang, Y. Chen, Z. Xu, Q. Li, and H. Feng, “Elimination of varying chromatic aberrations based on diffractive optics,” Opt. Express 31(7), 11041–11052 (2023). [CrossRef]  

19. K. Matsushima, Introduction to Computer Holography (Springer Cham, 2020).

20. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, “All-optical machine learning using diffractive deep neural networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

21. D. Pi, J. Liu, and Y. Wang, “Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display,” Light: Sci. Appl. 11(1), 231 (2022).

22. E. M. Fenimore and T. M. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17(3), 337–347 (1978).

23. R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010).

24. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope,” Sci. Adv. 3(12), e1701548 (2017).

25. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008).

26. J. Rosen, A. Vijayakumar, M. Kumar, M. R. Rai, R. Kelner, Y. Kashter, A. Bulbul, and S. Mukherjee, “Recent advances in self-interference incoherent digital holography,” Adv. Opt. Photonics 11(1), 1–66 (2019).

27. V. Anand, T. Katkus, S. Hock Ng, and S. Juodkazis, “Review of Fresnel incoherent correlation holography with linear and non-linear correlations,” Chin. Opt. Lett. 19, 020501 (2021).

28. J. E. Greivenkamp, Field Guide to Geometrical Optics (SPIE, 2004).

29. R. Horisaki, T. Aoki, Y. Nishizaki, A. Röhm, N. Chauvet, J. Tanida, and M. Naruse, “Compressive propagation with coherence,” Opt. Lett. 47(3), 613–616 (2022).

30. R. Suda, M. Naruse, and R. Horisaki, “Incoherent computer-generated holography,” Opt. Lett. 47(15), 3844–3847 (2022).

31. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv, arXiv:1412.6980 (2014).

32. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).

33. C. Hembd-Sölner, R. F. Stevens, and M. C. Hutley, “Imaging properties of the Gabor superlens,” J. Opt. A: Pure Appl. Opt. 1(1), 94–102 (1999).

34. J. Duparré, P. Schreiber, A. Matthes, E. Pshenay-Severin, A. Bräuer, A. Tünnermann, R. Völkel, M. Eisner, and T. Scharf, “Microoptical telescope compound eye,” Opt. Express 13(3), 889–903 (2005).

35. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,” Opt. Express 17(18), 15747–15759 (2009).

36. H. R. Fallah and A. Karimzadeh, “MTF of compound eye,” Opt. Express 18(12), 12304–12310 (2010).

37. H. Wang and R. Piestun, “Dynamic 2D implementation of 3D diffractive optics,” Optica 5(10), 1220–1228 (2018).

38. S. Wang, P. C. Wu, V. Su, Y. Lai, C. H. Chu, J. Chen, S. Lu, J. Chen, B. Xu, C. Kuan, T. Li, S. Zhu, and D. P. Tsai, “Broadband achromatic optical metasurface devices,” Nat. Commun. 8(1), 1–9 (2017).

39. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13(3), 220–226 (2018).

40. S. Shrestha, A. C. Overvig, M. Lu, A. Stein, and N. Yu, “Broadband achromatic dielectric metalenses,” Light: Sci. Appl. 7(1), 85 (2018).

41. M. Roeder, S. Thiele, D. Hera, C. Pruss, T. Guenther, W. Osten, and A. Zimmermann, “Fabrication of curved diffractive optical elements by means of laser direct writing, electroplating, and injection compression molding,” J. Manufact. Process. 47, 402–409 (2019).

42. B. Guenter, N. Joshi, R. Stoakley, A. Keefe, K. Geary, R. Freeman, J. Hundley, P. Patterson, D. Hammon, G. Herrera, E. Sherman, A. Nowak, R. Schubert, P. Brewer, L. Yang, R. Mott, and G. McKnight, “Highly curved image sensors: a practical approach for improved optical performance,” Opt. Express 25(12), 13010–13023 (2017).

43. T. Nakamura, R. Horisaki, and J. Tanida, “Computational superposition compound eye imaging for extended depth-of-field and field-of-view,” Opt. Express 20(25), 27482–27495 (2012).

44. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods 17(7), 734–740 (2020).

45. B. Zhang, X. Yuan, C. Deng, Z. Zhang, J. Suo, and Q. Dai, “End-to-end snapshot compressed super-resolution imaging with deep optics,” Optica 9(4), 451–454 (2022).

46. S. Elmalem, R. Giryes, and E. Marom, “Learned phase coded aperture for the benefit of depth of field extension,” Opt. Express 26(12), 15316–15331 (2018).

Data availability

Data may be obtained from the authors upon reasonable request.



Figures (11)

Fig. 1. DOE cascade for extendable FOV imaging with spatially incoherent light.

Fig. 2. Optical processes of (a) inverted imaging condition and (b) upright imaging condition.

Fig. 3. Array of DOE cascades.

Fig. 4. Circulant trajectory of diffracted light in the DOE design.

Fig. 5. Imaging performance for each number of layers.

Fig. 6. Imaging performance with different training datasets at each iteration. (a) Plots and examples of (b) natural images, (c) delta patterns, and (d) random patterns.

Fig. 7. Imaging performance of the elemental DOE cascade and the two-by-two DOE cascade array for each one-dimensional pixel count.

Fig. 8. Imaging performance of the DOE cascade array for each number of arrays.

Fig. 9. Phase distributions of the elemental DOE cascade.

Fig. 10. Imaging results obtained with the DOE cascade array. (a) FOVs on the input image with one-dimensional pixel counts $W$ of 400, 800, and 1600. (b) Images of the FOVs reproduced through the DOE cascades with one-by-one, two-by-two, and four-by-four arrays.

Fig. 11. Intensity profiles of the optical field in the elemental DOE cascade from three different point sources (a)-(c) on the input plane to the output plane. The dynamic range of the light intensity was compressed with the sigmoid function for visualization purposes.

Equations (17)


$v_{1,m} = \mathcal{P}_1[w_m f]$, (1)

$v_{2,m} = \mathcal{P}_2[d_1 v_{1,m}]$, (2)

$d_1 = \exp(j\phi_1)$. (3)

$v_{k,m} = \mathcal{P}_k[d_{k-1} v_{k-1,m}]$, (4)

$d_k = \exp(j\phi_k)$. (5)

$g = \dfrac{1}{M}\displaystyle\sum_{m=1}^{M} |v_{K+1,m}|^2$. (6)

$L(\phi_1, \phi_2, \ldots, \phi_K) = \displaystyle\sum_{n=1}^{N} \| g_n - f_n \|_2^2$, (7)

$\mathop{\arg\min}\limits_{\phi_1, \ldots, \phi_K} L(\phi_1, \ldots, \phi_K)$. (8)

$\dfrac{\partial L}{\partial \phi_k} = \dfrac{\partial d_k}{\partial \phi_k}\,\dfrac{\partial L}{\partial d_k}$. (9)

$\dfrac{\partial d_k}{\partial \phi_k}\,\dfrac{\partial L}{\partial d_k} = \mathrm{real}\!\left[ j\, d_k\, \dfrac{\partial L}{\partial d_k} \right]$, (10)

$\dfrac{\partial L}{\partial d_k} = \dfrac{4}{M} \displaystyle\sum_{n=1}^{N} \sum_{m=1}^{M} v_{k,m,n}\, \mathcal{P}_{k+1}^{-1}\!\left[ d_{k+1}\, \mathcal{P}_{k+2}^{-1}\!\left[ \cdots d_K\, \mathcal{P}_{K+1}^{-1}\!\left[ v_{K+1,m,n} \left( g_n - f_n \right) \right] \right] \right]$, (11)

$\phi_k^{(i)} = \phi_k^{(i-1)} - \mathrm{Adam}\!\left[ \dfrac{\partial L}{\partial \phi_k^{(i-1)}} \right]$, (12)
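The forward model above (random-phase averaging of a multi-layer DOE cascade) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the paper's propagation operator $\mathcal{P}_k$ is not specified in this listing, so an angular-spectrum propagator and all grid parameters are assumptions.

```python
import numpy as np

def propagate(field, z, wavelength, pitch):
    """Angular-spectrum free-space propagation over distance z.
    A standard stand-in for the propagation operator P_k (assumption)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Clamp evanescent components; |H| = 1 so energy is conserved.
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def incoherent_output(f, phases, z, wavelength, pitch, M=8, rng=None):
    """Average output intensity g over M random input phases w_m,
    mimicking spatially incoherent illumination of the input image f."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(f, dtype=float)
    for _ in range(M):
        w = np.exp(2j * np.pi * rng.random(f.shape))   # random phase w_m
        v = propagate(w * f, z, wavelength, pitch)     # v_{1,m}
        for phi in phases:                             # masks d_k = exp(j*phi_k)
            v = propagate(np.exp(1j * phi) * v, z, wavelength, pitch)
        g += np.abs(v) ** 2 / M                        # g = (1/M) * sum |v_{K+1,m}|^2
    return g
```

In an end-to-end design loop, the phase masks `phases` would be the optimization variables updated from the gradient of the intensity-domain loss; here they are simply fixed inputs.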
$\theta \approx \dfrac{\lambda}{2\delta}$, (13)

$a = 2 \times \dfrac{z}{2} \tan\theta \approx z\theta$, (14)

$a \approx \dfrac{z\lambda}{2\delta} \leq \delta W$, (15)

$W \geq \dfrac{z\lambda}{2\delta^2}$, (16)

$W_{\mathrm{min}} = \dfrac{z\lambda}{2\delta^2}$. (17)
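The minimum one-dimensional pixel count implied by the condition above is easy to evaluate numerically. A minimal sketch; the parameter values below (z = 10 mm, λ = 500 nm, δ = 2.5 µm) are hypothetical and chosen only for illustration, not taken from the paper:

```python
def w_min(z, wavelength, delta):
    """Minimum one-dimensional pixel count: W_min = z * lambda / (2 * delta**2)."""
    return z * wavelength / (2.0 * delta ** 2)

# Hypothetical parameters: propagation distance z = 10 mm,
# wavelength 500 nm, pixel pitch delta = 2.5 um.
print(w_min(10e-3, 500e-9, 2.5e-6))  # -> 400.0
```

Note the scaling: halving the pixel pitch δ quadruples the required pixel count, while W grows only linearly with the propagation distance z.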