
Orthoscopic elemental image synthesis for 3D light field display using lens design software and real-world captured neural radiance field

Open Access

Abstract

The generation of elemental images (EIs) of complex real-world scenes can be challenging for conventional integral imaging (InIm) capture techniques, since the pseudoscopic effect, characterized by a depth inversion of the reconstructed 3D scene, occurs in this process. To address this problem, we present in this paper a new approach that uses a custom neural radiance field (NeRF) model to form real and/or virtual 3D image reconstructions from a complex real-world scene while avoiding distortion and depth inversion. One advantage of using a NeRF is that the 3D information of a complex scene (including transparency and reflection) is stored not in meshes or a voxel grid but in a neural network that can be queried to extract the desired data. The Nerfstudio API was used to generate a custom NeRF-related model while avoiding the need for a bulky acquisition system. A general workflow that includes the use of ray-tracing-based lens design software is proposed to facilitate the different processing steps involved in managing NeRF data. Through this workflow, we introduce a new mapping method for extracting the desired data from the custom-trained NeRF-related model, enabling the generation of undistorted orthoscopic EIs. An experimental 3D reconstruction was conducted using an InIm-based 3D light field display (LFD) prototype to validate the effectiveness of the proposed method. A qualitative comparison with the actual real-world scene showed that the 3D reconstructed scene is accurately rendered. The proposed work can be used to manage and render undistorted orthoscopic 3D images from custom-trained NeRF-related models for various InIm applications.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Due to its simple architecture and implementation, integral imaging (InIm) represents a very attractive technology for 3D display applications. This method involves the use of a display panel and a microlens array (MLA) to generate 3D images featuring full parallax, true color, and incoherent light that can be viewed without eyewear [1]. InIm-based 3D light field displays (LFDs) aim to reproduce the original light field of a scene through real or virtual 3D image formation. This process can be described by two key stages of InIm: the capture (or pickup), and the reconstruction [2]. During the capture stage, the 3D information of a scene, or its light field, is recorded and encoded by the MLA. This process results in the formation of an elemental image array (EIA), where each elemental image (EI) presents a distinct perspective of the same scene. Two different approaches are typically used to generate these EIs, namely physical and synthetic capture. The physical capture approach consists of acquiring different perspectives of a scene with a camera array system [3] or a single moving camera [4,5]. This approach provides high-resolution EIs, but it is cumbersome, requires calibration, has limited spatial flexibility, has a long acquisition time, and requires post-processing to form the EIA from the captured perspectives. The synthetic capture approach, known as computer-generated integral imaging (CGII) and much easier to implement, has also been proposed as an alternative to physical capture. In this approach, EIs are generated by computer graphics from synthetic data [6,7]. Although this approach eliminates the implementation of physical capture systems, CGII remains limited to synthetic data, making it unsuitable for real-world scenes.

Moreover, in their standard configuration, InIm-based systems suffer from a major issue, known as the pseudoscopic effect, characterized by a depth inversion of the reconstructed 3D image [8]. While the synthetic approach performs effectively in virtual mode, the pseudoscopic effect remains a problem for the reconstruction of real 3D images with both capture approaches. On the one hand, numerous optical correction methods have been previously suggested to achieve real undistorted orthoscopic 3D image reconstruction, involving additional steps like the two-step recording/displaying technique [8], or additional devices like a transmissive mirror device [9], a gradient-index lens-array [10], or a micro-convex-mirror array [11]. However, these methods add complexity to the capture process, making their implementation even more laborious and time-consuming. On the other hand, some interesting synthetic approaches have been proposed to obtain real undistorted orthoscopic 3D image reconstruction, namely the backward ray-tracing technique [12], nonlinear mapping method [13], or smart pseudoscopic-to-orthoscopic conversion (SPOC) method [14], but none of them can handle complex real-world data. As such, the generation of undistorted orthoscopic EIs from a complex real-world scene remains challenging.

Mildenhall $\textit {et al.}$ presented a method to encode the 3D data of a complex scene using a trained fully connected deep neural network. This method, called neural radiance field (NeRF), aims to render any viewpoint of a real-world-based or synthetic-based scene from a set of sampled views of the same scene [15]. NeRF methods offer advantages in 3D scene representation by employing a neural network to store information, as opposed to traditional representations like meshes or voxel grids. The use of neural networks enables a continuous representation, resulting in smooth and precise modeling of complex scenes. Erdenebat $\textit {et al.}$ proposed a method that uses a NeRF-related model to mitigate the bulkiness of physical capture systems [16]. This method consists of acquiring just a few perspectives of a real-world scene using a simplified capturing setup and rendering intermediate new perspectives with a custom-trained NeRF-related model to display virtual 3D images. However, their method has certain drawbacks: firstly, despite the simplified capturing setup proposed, which includes various motorized moving stages and several cameras, it remains cumbersome and requires the synchronization of its components; secondly, the NeRF-related model is only used to synthesize novel intermediate views based on the specifications of a particular InIm-based 3D LFD design, restricting its application to a single design; thirdly, post-processing (conducted with the pixel re-arrangement method [17]) is still required to obtain the overall EIs from the corresponding rendered perspectives; and finally, the major issue of depth inversion has not been addressed for real 3D image reconstruction.

To overcome these limitations, our paper introduces a flexible synthetic approach that integrates NeRF-related data into lens design software to achieve undistorted orthoscopic real and/or virtual 3D images from a real-world scene. To do so, we propose using the open-source Nerfstudio project [18] to generate custom NeRF-related models from real-world scenes. Nerfstudio was designed to provide a user-friendly application programming interface (API) that simplifies the different processes involved in generating NeRF-related models. Moreover, this API facilitates the capture of real-world data for the training process using a simple smartphone, eliminating the need to implement a bulky capture system. While conventional NeRF applications aim to synthesize novel views of a scene, our objective is to use a NeRF-related model as a tool for storing the 3D information of a complex real-world scene. Our approach consists of querying the NeRF-related model with the appropriate input to extract the spatial information necessary for achieving undistorted orthoscopic real and/or virtual 3D image reconstruction. Unlike the method proposed by Erdenebat $\textit {et al.}$ [16], the NeRF-related model generation is not restricted by a particular capture system. This allows us to adapt the NeRF querying step to any InIm-based 3D LFD geometry or architecture without retraining the NeRF model. To this end, we propose a straightforward workflow that integrates ray-tracing-based lens design software to form the corresponding EIA without needing additional optical elements or post-processing. Previous works [19,20] have already shown the effectiveness of such software in simulating InIm-based 3D displays. Since no physical setup is required, the versatility of this approach lies in its adaptability to implement various InIm-based 3D LFD architectures through a ray-tracing-based simulation process. To validate the effectiveness of the proposed method in accurately reproducing real-world scenes, a qualitative experimental comparison is conducted, contrasting the 3D image reconstructed by an InIm-based 3D LFD prototype with the actual scene.

In this article, our work is arranged into six sections. In section 2, we will present the general workflow of the proposed method. In section 3, we will present the experimental setup used to illustrate the proposed method through a direct application. In section 4, we will provide the results of the 3D reconstruction as well as a qualitative assessment to demonstrate the effectiveness of rendering 3D information compared to the actual scene. In section 5, we will present the discussion and finally, we will highlight the main achievements of the proposed work in section 6.

2. Method

The proposed method can be described in three key steps: the first consists of generating the rays involved in the formation of EIs from an InIm-based 3D LFD, the second of generating and querying a custom-trained NeRF-related model with specific ray data to extract the corresponding undistorted orthoscopic data from a real-world scene, and the last of generating and recording pseudoscopic-free EIs based on the extracted data. The first and last steps were conducted using the commercial ray-tracing-based lens design software Ansys Zemax OpticStudio. Figure 1 presents the general workflow of the proposed method, and these steps are detailed in the following subsections.

Fig. 1. General workflow of the proposed method representing the 3 key steps: 1) the ray generation for determining the position and direction of rays involved in the EIs formation, 2) the custom NeRF model generation for storing and providing real-world data, and 3) the pseudoscopic-free EIs generation based on the sampled data from the NeRF model. The red steps are implemented with lens design software, and the yellow step with Python.

2.1 Ray generation for input data

To render a NeRF-related model from a specific viewpoint, a particular position, such as the point of origin $\mathbf {o}$, and the viewing direction, usually denoted by a 3D unit vector $\mathbf {d}$, are used to define the coordinates of a camera ray $\mathbf {r}(t)$, expressed as:

$$\mathbf{r}(t) = \mathbf{o} + t \mathbf{d}\text{.}$$

This ray is then launched from $\mathbf {o}$ with direction $\mathbf {d}$ through the 3D model to sample several coordinates $(\mathbf {p}, \mathbf {d})$ along its path $t$. As shown in Fig. 2, for any launched camera ray $\mathbf {r}(t)$, the NeRF function $F$ is used to produce the directional RGB color $\mathbf {c}$ and volume density $\sigma$ (which denotes the probability that a ray will encounter objects along its path) of each sampled coordinate. Classical volume rendering techniques [21] are then used to accumulate those color and density values into a view-dependent RGB color $C(\mathbf {r})$ (equation (1) from [16]). Rendering a view from a NeRF requires estimating $C(\mathbf {r})$ for a camera ray traced through each pixel of the desired virtual camera. Initially, the NeRF function $F$ evaluates rays $\mathbf {r}(t)$ and adjusts the estimated output color $C(\mathbf {r})$ through a training phase. The goal is to ensure that the predicted colors of rays match the colors of the training data at each point in space. In the training phase, the NeRF learns to associate spatial coordinates and viewing directions with color and density values to accurately model complex 3D scenes. This permits the reproduction of various complex illumination behaviors, such as transparency, specular reflections, diffuse reflections, glossy reflections, and other non-Lambertian behaviors. While typical applications involve generating novel views of a scene from specific viewpoints, our intention is to exploit a NeRF-related model as a means of storing and managing the 3D information of a complex scene. In this use case, the NeRF's ability to provide a continuous representation becomes particularly advantageous, allowing us to query the NeRF function $F$ with particular camera rays to compute the view-dependent RGB color $C(\mathbf {r})$.
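For reference, the discrete quadrature commonly used to approximate $C(\mathbf {r})$ from the sampled colors and densities [15,21] can be sketched in a few lines of NumPy. This is only an illustrative sketch; the function and variable names below are ours and do not correspond to a specific NeRF implementation.

```python
import numpy as np

def composite_ray_color(sigmas, colors, deltas):
    """Approximate the view-dependent color C(r) of one camera ray.

    sigmas : (S,)   volume densities at the S points sampled along the ray
    colors : (S, 3) directional RGB colors at those points
    deltas : (S,)   distances between consecutive samples along the path t
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # accumulated transmittance T_i
    weights = trans * alphas                                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                  # C(r)
```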

Fig. 2. Principle of the NeRF rendering process, in which a camera ray $\mathbf {r}(t)$ is launched through the 3D model to sample several locations $\mathbf {p} = (x, y, z)$ and directions $\mathbf {d}$ along its path. These sampled coordinates (input) are fed to the NeRF function $F$ to produce the color $\mathbf {c}$ and density $\sigma$ values encountered in the volume. The final output RGB color $C(\mathbf {r})$ assigned to this ray is then rendered by integrating those values. A single ray is associated with one RGB value to form a pixel into a 2D image. These ray output characteristics are normalized in order to reproduce, once rendered onto a virtual camera, the values that are in the reference images.

Therefore, in the proposed workflow, we aim to generate the set of rays involved in the EIs formation of a specific InIm-based 3D LFD, to determine their properties (meaning positions and directions), and then to use these rays as camera rays to query the NeRF function. To this end, we suggest conducting the ray generation process with the Ansys Zemax OpticStudio lens design software since it provides ray tracing and calculation tools that can be used to determine the ray characteristics of a particular optical system. Moreover, previous works [19,20] have already shown the effectiveness of lens design software for simulating InIm-based 3D displays. In that way, a simple InIm capture stage simulation was conducted in the Sequential mode of Zemax to generate the set of rays contributing to the formation of EIs. As shown in Fig. 3, the simulation model is based on chief-ray tracing and consists of just two parallel surfaces: the object plane representing the pixelated display panel ($S1$), and the pupil plane (stop) ($S2$). Note that chief-ray tracing and a pinhole model are considered in this example. This assumption can also be made for most conventional InIm-based 3D LFDs since the aperture stop is usually located on the surface of the MLA. In this process, each pixel contained in an EI is mapped from the display panel to the center of its corresponding microlens (or pinhole). The size and shape of each EI are defined by the microlens type used. Let us consider a rectangular MLA for simplicity. In this case, an EI is composed of $N \times N$ pixels, meaning the same number of rays must be traced to their corresponding microlens. To properly simulate the ray generation step, the aperture type and the field type have to be set to "Float By Stop Size" and "Object Height", respectively. The distance between ($S1$) and ($S2$) needs to match the gap $g$ of the considered InIm-based 3D LFD configuration. In that way, the position ($x^h_{ij}, y^h_{ij}$) of a pixel ($i,j$) on the display panel defines the field height of the associated ray on the object plane. Based on this, the direction of each ray is simply a function of the field height and the gap between the display panel and the MLA.
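To make this mapping explicit, the chief-ray geometry can be written in a few lines of Python. This is a minimal sketch with illustrative names (it does not call the Zemax API); each display pixel is simply connected to the center of its microlens across the gap $g$.

```python
import numpy as np

def chief_ray(pixel_xy, lens_center_xy, g):
    """Chief ray from a display pixel through the center of its microlens (pinhole model).

    pixel_xy       : (x, y) position of pixel (i, j) on the display panel (z = 0)
    lens_center_xy : (x, y) center of the corresponding microlens, located at z = g
    g              : gap between the display panel and the MLA
    Returns the ray position at the pupil plane and its unit direction cosines (l, m, n).
    """
    o = np.array([pixel_xy[0], pixel_xy[1], 0.0])             # pixel on the object plane (S1)
    p = np.array([lens_center_xy[0], lens_center_xy[1], g])   # pinhole on the pupil plane (S2)
    d = p - o
    d /= np.linalg.norm(d)                                    # direction cosines (l, m, n)
    return p, d
```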

Fig. 3. Principle of the ray generation process from the simulated capture stage, where $x_{h}, y_{h}$ and $x_{p}, y_{p}$ denote the field and pupil coordinates associated with a single EI, respectively. For each ray, a position $(x^p_{ij}, y^p_{ij}, z)$ and a direction cosine $(l^p_{ij}, m^p_{ij}, n^p_{ij})$ are extracted at the pupil plane (stop) of the corresponding microlens. This simulation is conducted in a pure Sequential mode in Ansys Zemax OpticStudio. The same principle can be easily extended to the desired number of EIs.

Once the simulation model is set up, the position and direction of the traced rays are directly obtained using the "Ray Trace" feature provided by Zemax. This feature simulates the path of light rays through optical systems and computes their characteristics at a specified surface, eliminating the need for additional calculations. Finally, the ray data is extracted at the pupil plane using a custom macro we developed in the Zemax programming language (ZPL) to automate the entire ray generation step just presented. This macro writes and outputs a text file containing a list of positions and direction cosines $(x^p_{ij}, y^p_{ij}, z^p, l^p_{ij}, m^p_{ij}, n^p_{ij})$ for each pixel ($i,j$), which will be required for the next step, i.e., querying the NeRF model with the appropriate input data. The custom macro, as well as the Zemax simulation model used to illustrate the principle of the ray generation step, are available to readers in the GitHub repository of our research group in Code 1 (Ref. [22]). Although a large number of rays can be involved in the EIs generation, the proposed method avoids oversampling since no rays are wasted. Indeed, the proposed method assigns a single ray per pixel, so that no physical barrier between the microlenses is required to avoid overlap between EIs. Consequently, to cover an array of $I \times J$ EIs, the minimum number of rays needed is $N \times N \times I \times J$. Note that Ansys Zemax OpticStudio represents a flexible and powerful ray-tracing-based tool. More complex optical systems, such as multi-layer MLA [23,24] or curved MLA 3D LFDs [25], can easily be handled by Zemax. This capability makes the proposed method suitable for any InIm-based 3D LFD geometry or architecture.
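The exact header and column layout of the exported text file are defined by the macro in Code 1 (Ref. [22]); assuming one ray per line with six whitespace-separated values $(x, y, z, l, m, n)$, the file can be loaded for the querying step with a few lines of Python:

```python
import numpy as np

def load_ray_file(path):
    """Load the ray data exported by the ZPL macro (assumed format: one ray per line, x y z l m n)."""
    data = np.loadtxt(path)                  # shape (N*N*I*J, 6)
    origins = data[:, 0:3]                   # ray positions at the pupil plane
    directions = data[:, 3:6]                # direction cosines
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # re-normalize for safety
    return origins, directions
```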

2.2 Custom NeRF model generation

The basic steps required to generate a custom NeRF model from a complex real-world scene are described as follows:

  • 1. Capturing various photos from different viewpoints of a desired scene.
  • 2. Determining camera poses associated with each photo using external tools.
  • 3. Training and generating the NeRF model based on the captured photos and their poses.

In this study, we propose to use the open-source project Nerfstudio, which was designed to offer a user-friendly API that simplifies the different steps involved in the NeRF-related model generation process [18]. This API allows even non-technical users to navigate and use the software more easily, making the overall implementation process simpler and less demanding than conventional approaches. Nerfstudio includes and supports various NeRF-related methods, such as Nerfacto [26], Instant NGP [27], standard NeRF [15], Mip-NeRF [28], and TensoRF [29]. Among them, Nerfstudio recommends implementing Nerfacto, which is its default method. Nerfacto is an improved version that combines several NeRF-related methods, adapted to real data capture, to achieve a balance between speed and quality. As presented above, in addition to the capturing step (1), NeRF-related methods require the camera pose of each captured photo (2) to understand the spatial layout of the scene. This information is crucial for the NeRF training and rendering processes (3). Typically, camera poses are determined using external tools such as COLMAP, which is used for Structure-from-Motion (SfM) and Multi-View Stereo (MVS) tasks [30]. However, its implementation is very slow. Nerfstudio supports a multitude of other tools to derive the camera poses from a collection of images or videos, with or without appropriate data scaling, which are required for the NeRF training process. Typically, these tools require at least one image, but some may use additional sensors like an accelerometer or an integrated LiDAR. Among these tools, we suggest using the Kiri Engine smartphone application, available for free on Android and iOS. This 3D scanning mobile app utilizes photogrammetry technology to achieve high-precision 3D digitization. Using a simple smartphone, this application allows users to perform steps (1) and (2) faster and more easily by correlating each captured image with the corresponding camera's extrinsic and intrinsic parameters while avoiding the need for external or additional tools such as COLMAP. The capture of custom real-world data is then not limited by a bulky acquisition system, offering more flexibility and the ability to explore more complex scenes. As a result, implementing Nerfstudio simplifies the basic steps involved in the process of generating custom NeRF-related models from complex real-world scenes.
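In practice, once `ns-train nerfacto` has produced a checkpoint for the captured scene, the trained pipeline can be reloaded in Python for the querying steps described below. The following is a minimal sketch that assumes Nerfstudio's `eval_setup` helper and a hypothetical output path; the exact API may differ between Nerfstudio versions.

```python
from pathlib import Path
from nerfstudio.utils.eval_utils import eval_setup  # assumed helper from recent Nerfstudio releases

# Hypothetical path to the config.yml written by ns-train for our Nerfacto run.
config_path = Path("outputs/real_scene/nerfacto/config.yml")

# eval_setup restores the trained pipeline (model + data manager) from its checkpoint;
# the pipeline is the second element of the returned tuple in the versions we used.
pipeline = eval_setup(config_path)[1]
model = pipeline.model    # trained Nerfacto model, queried in the following steps
model.eval()              # inference mode only; no gradients are needed
```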

Once training is complete, the next step is to query the Nerfacto model with the ray data determined in subsection 2.1. Those rays are launched toward the Nerfacto model to extract the corresponding 3D information encountered along their propagation. However, before sending rays toward the 3D model, it is necessary to know where the model is located. Indeed, it is important to remember that NeRF-related models provide continuous viewpoints within the space defined by the volume scanned during the capturing step (1). For this purpose, we developed a Python code, available in Code 1 (Ref. [22]), that allows users to interact with and control the position and orientation of the NeRF-related model through a live rendering interface. This interface aims to position the 3D model in space relative to a reference point, in this case, an observer. In other words, the interface lets the user visualize where the input rays will be sent during the querying step. Figure 4 shows this interface, where translation, rotation, and scaling transformations can be applied to adjust the representation of the 3D model according to the specifications of the corresponding InIm-based 3D LFD. In addition, we implemented in the code a surface corresponding to the position of the MLA (highlighted in red in the rendering interface) relative to the 3D model. This plane can be translated through the 3D model using the $D_{VCP}$ cursor to define the relative depth of objects during the 3D reconstruction process. By moving the MLA plane along the depth direction ($z$-axis), the ray length between objects in the scene and the MLA is changed. Since the user is free to set the position of the MLA, objects in the scene can be either in front of or behind the MLA, which controls the reconstruction mode of the 3D image so that real and/or virtual 3D images can be obtained. This point is detailed in the following paragraphs.
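Internally, this positioning amounts to a similarity transform between the display coordinate frame (in millimeters) and the unitless NeRF frame. A minimal sketch of such a transform is given below with illustrative names; the actual implementation is provided in Code 1 (Ref. [22]).

```python
import numpy as np

def to_nerf_frame(origins, directions, R, t, s):
    """Map ray origins and directions from the display (mm) frame into the NeRF frame.

    R : (3, 3) rotation set interactively in the rendering interface
    t : (3,)   translation of the scene relative to the reference point
    s : float  scale factor (mm -> NeRF units), e.g. fixed with a reference target
    """
    o_nerf = s * (origins @ R.T) + t        # rotate and scale the origins, then shift them
    d_nerf = directions @ R.T               # directions are only rotated (scale-invariant)
    d_nerf /= np.linalg.norm(d_nerf, axis=1, keepdims=True)
    return o_nerf, d_nerf
```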

Fig. 4. Live rendering interface to define and preview the desired reference viewpoint. The red highlighted part corresponds to the relative MLA plane position throughout the scene. Each pixel of the rendered viewpoint is associated with the on-axis chief ray of the corresponding microlens, which means that its resolution corresponds to the total number of EIs considered. This software is based on the natural units of the NeRF function, which in this workflow are unitless since COLMAP [30] does not provide an absolute scale.

The method we propose to query and extract undistorted orthoscopic data from NeRF-related models is illustrated in Fig. 5. First, as shown in Fig. 5(a), the input rays are used as constraints to define virtual camera locations on a virtual camera plane (VCP), so that each set of $N \times N$ rays assigned to a microlens (or a pinhole) is mapped to $N \times N$ virtual cameras on the VCP. As explained previously, during this key step, the distance between the MLA plane and the VCP, denoted by $D_{VCP}$, can be adjusted to define the relative depth of objects. Since the direction of the rays is fixed, the locations of the virtual cameras depend only on the MLA position. Note that the VCP is arranged in a parallel configuration (meaning parallel to the MLA) during this process.
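The corresponding back-propagation is straightforward; the sketch below (illustrative names, with the scene assumed to lie toward increasing $z$ beyond the MLA) shows how each virtual camera origin is obtained from the ray position at its microlens and the chosen $D_{VCP}$.

```python
import numpy as np

def virtual_camera_origins(mla_points, directions, d_vcp):
    """Place one virtual camera per ray on a VCP parallel to the MLA.

    mla_points : (M, 3) ray positions at the microlens (pinhole) centers
    directions : (M, 3) unit ray directions, pointing from the display toward the scene
    d_vcp      : distance between the MLA plane and the virtual camera plane
    """
    z_vcp = mla_points[:, 2] - d_vcp                    # VCP is parallel to the MLA
    t = (z_vcp - mla_points[:, 2]) / directions[:, 2]   # signed path length along each ray (negative: backward)
    return mla_points + t[:, None] * directions         # origins of the camera rays on the VCP
```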

Fig. 5. The three key steps of the NeRF querying principle, where (a) shows the constraint used to determine new coordinates on a virtual camera plane (VCP), (b) the ray propagation from the VCP toward the 3D model to assign NeRF data to each camera ray, and (c) the reconstruction process to recover orthoscopic real and virtual 3D images.

Next, as shown in Fig. 5(b), due to the reversibility of the optical path, camera rays are launched from the various virtual cameras toward the 3D scene by passing through the MLA, until reaching the bounding box of the NeRF. The data carried by each camera ray corresponds to the color and density accumulated along its path through the 3D scene. The pseudoscopic effect is overcome thanks to this key step. Our approach is similar to the one proposed by Xing et al., which is also based on a reversed ray propagation technique [12]. However, in their method, the distance between the virtual cameras and the MLA is fixed and depends solely on the parameters of the considered InIm-based 3D LFD, restricting the dimensions as well as the location of the scene. In our approach, the virtual cameras are just used as "tools" to define the points of origin of the camera rays, which offers more flexibility since the virtual camera locations are not determined by the parameters of the InIm-based 3D LFD, but by the user. Moreover, by controlling the MLA position, the user can also control the reconstruction depth of the 3D images. For example, if the launched rays encounter an object before converging onto the MLA, the information carried by these rays will contribute to forming real EIs. Conversely, if the rays encounter an object after reaching the MLA, the information they carry will form virtual EIs. This approach offers a significant advantage in the control of the 3D image positioning, as objects in the scene are no longer restricted to their initial capture position. This advantage comes from the fact that the generation of the NeRF-related model is independent of the InIm-based 3D LFD architecture.
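In our implementation, this querying step relies on the ray interface exposed by Nerfstudio. The sketch below assumes the `RayBundle` container and the model's `get_outputs_for_camera_ray_bundle` method available in the Nerfstudio version we used; the exact API may differ between releases, and the helper name and its arguments are illustrative.

```python
import torch
from nerfstudio.cameras.rays import RayBundle  # assumed Nerfstudio ray container

def query_nerf(model, origins, directions, pixel_area=1e-4, device="cuda"):
    """Query a trained Nerfacto model with the camera rays launched from the VCP.

    origins, directions : (H, W, 3) arrays in the NeRF frame, one ray per EI pixel
    Returns an (H, W, 3) tensor of view-dependent RGB colors C(r).
    """
    o = torch.as_tensor(origins, dtype=torch.float32, device=device)
    d = torch.as_tensor(directions, dtype=torch.float32, device=device)
    bundle = RayBundle(
        origins=o,
        directions=d,
        pixel_area=torch.full(o.shape[:-1] + (1,), pixel_area, device=device),
        camera_indices=torch.zeros(o.shape[:-1] + (1,), dtype=torch.long, device=device),
    )
    with torch.no_grad():
        outputs = model.get_outputs_for_camera_ray_bundle(bundle)  # chunked evaluation over all rays
    return outputs["rgb"]  # accumulated color per ray; density is folded into the rendering
```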

Finally, as shown in Fig. 5(c), the data carried by each camera ray will be assigned to its corresponding pixel on the display panel, enabling the generation of orthoscopic real and/or virtual EIs from a real-world scene. Note that due to the symmetry of the proposed method, no distortion occurs during this process.

2.3 Elemental images generation

The Non-Sequential ray tracing (NSC) mode of Ansys Zemax OpticStudio allows the implementation of custom light sources. Among them, the "Spectral Data File" (SDF) is a specific file format that can describe a light source from the information carried by a set of rays, where each ray is defined by a position ($x, y, z$), a direction ($l, m, n$), a wavelength, and an intensity. Additional information about the SDF format definition can be found in the help file of the software (Setup Tab > Editors Group (Setup Tab) > Non-sequential Component Editor > Non-sequential Sources > Source File). As explained in subsection 2.2, NeRF models render an RGB color from a position and direction, meaning that Ansys Zemax OpticStudio can support and handle NeRF data through the implementation of a custom SDF. Based on this specificity, we propose a method based on a pure NSC ray tracing approach that aims to integrate NeRF data into Zemax (wave optics theory is not considered in the workflow). However, before exporting specific data from the NeRF model into Zemax, it is important to note that the SDF structure supports only a single wavelength per ray. Consequently, it is necessary to split the sampled RGB color information of each NeRF ray into a triad of rays with distinct wavelengths, providing a single R, G, or B color channel per SDF ray. In simple terms, each NeRF ray must produce three collinear rays (sharing the same position and direction), each with a single wavelength value (R, G, or B), to generate the corresponding SDF rays. Because each SDF ray has a single wavelength, its intensity is determined by weighting with the R, G, or B value from the initial NeRF ray. In addition, the scale of the rays interacting with the NeRF model must be adjusted according to the required size of the EIs. Therefore, to generate a custom SDF for a specific configuration, our Python code also provides a NeRF-to-SDF conversion feature that encompasses the format requirements mentioned above. In that way, using the "Write SDF" button accessible on the rendering interface (see Fig. 4), a custom SDF can be automatically written to facilitate its implementation in Ansys Zemax OpticStudio.
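A minimal sketch of this NeRF-to-SDF conversion is given below. The exact header and column order expected by OpticStudio are those implemented in Code 1 (Ref. [22]); the line format and the RGB wavelengths used here are assumptions for illustration only.

```python
# Assumed RGB primary wavelengths in micrometers; in practice they should be matched
# to the display panel color space (see the Discussion section).
RGB_WAVELENGTHS_UM = (0.640, 0.532, 0.465)

def write_sdf(path, origins, directions, rgb):
    """Split each NeRF ray into three collinear monochromatic rays and write them to a text file.

    origins, directions : (M, 3) ray positions and (reversed) direction cosines
    rgb                 : (M, 3) colors C(r) returned by the NeRF, in [0, 1]
    Assumed line format: x y z l m n intensity wavelength (header omitted here).
    """
    with open(path, "w") as f:
        for o, d, c in zip(origins, directions, rgb):
            for channel, wl in zip(c, RGB_WAVELENGTHS_UM):
                # The R, G, or B value weights the intensity of its monochromatic ray.
                f.write(f"{o[0]:.6f} {o[1]:.6f} {o[2]:.6f} "
                        f"{d[0]:.6f} {d[1]:.6f} {d[2]:.6f} "
                        f"{channel:.6f} {wl:.4f}\n")
```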

After generating a custom SDF, we simulated the EIs generation in pure NSC mode as presented in Fig. 6. This simulation consists of only two objects: a detector color as a display panel, and the custom SDF as a source file. The detector color is a 2D surface (object) that stores power and tristimulus (visual light color) data from the source rays that strike it. The resulting color corresponds to the accumulation of energy in each pixel from the source rays, meaning the RGB color data of the initial ray can be restored from its triad of rays. The dimensions and pixel count of the detector color must be set according to the specifications of the considered InIm-based 3D LFD. For the same InIm-based 3D LFD architecture (rectangular MLA) presented in subsection 2.1, the detector color must have $N \times I$ pixels on the $x$ axis and $N \times J$ on the $y$ axis. In this simulation, the ray propagation from the custom SDF is reversed compared to the ray generation step, so that the direction $\mathbf {d}$ of each ray becomes $-\mathbf {d}$. This means that the origin points of the rays from the light source model must coincide with the landing points of the rays generated in subsection 2.1. This requirement is essential to properly generate the EIs. To do so, as shown in Fig. 6, the source file object must be set parallel to the detector color and positioned at a distance $g$ (along the $z$ axis) from it. Next, the total number of analysis rays has to be set equal to the number of rays contained in the custom source file, i.e., $3 \times N \times N \times I \times J$. Since each ray triad is allocated to a single pixel of the detector color, the pixel interpolation must be disabled (in the properties of the object) to avoid degrading the quality of the EIs. For this step, no more settings are required. Finally, the NSC "Ray Trace" feature is used to launch and trace rays from the light source model toward the detector color. The EIA is then formed and saved as an image using the "Detector Viewer" interface. The Zemax simulation model and the SDF used to illustrate the principle of this step can also be found in the GitHub repository of our research group.

Fig. 6. Principle of the EIs generation step. This step is performed in pure NSC mode of Ansys Zemax OpticStudio to implement a custom SDF as a light source. In this process, each ray from the custom SDF is traced toward a detector color. The distance between the SDF location (along the $z$-axis) and the detector color must be equal to the gap ($g$) used in the ray generation process. Here a single ray ($i,j$) associated with its corresponding microlens is represented. However, $N \times N$ rays per microlens are traced to form a single EI. The same principle can be easily extended to the overall number of EIs.

3. Experimental setup

In this section, we present the hardware parameters used to illustrate the proposed method.

3.1 InIm-based 3D light field display prototype

The InIm-based 3D LFD prototype used in this work consists of a commercial plano-convex MLA and a smartphone as a display panel, whose specifications are presented in Table 1 and Table 2. During the ray generation process (step one of the proposed workflow), the gap distance $g$ between the MLA and the display panel was set equal to the focal length of the microlenses. According to the dimensions of the microlenses, each EI of the InIm-based 3D LFD prototype contains $25 \times 25$ pixels, and a maximum of $65 \times 153$ EIs can be addressed by the display panel, leading to a total of 6,215,625 pixels. The same number of rays must be traced in the ray generation simulation model to provide the complete input data required when querying the NeRF. For the simulation of the EIs generation, 18,646,875 monochromatic rays are exported from the Nerfacto model into an SDF using our rendering interface. The computer hardware used to run all the different steps of the workflow consists of an Nvidia RTX 3060 GPU (12.0 GB) and an Intel Core i9-12900 (2.40 GHz) CPU with 32.0 GB of RAM. With these specifications, the Zemax simulation model took 1.58 minutes to trace the total number of rays and render the overall EIA.
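These figures follow directly from the ray budget of section 2 and can be checked in a couple of lines:

```python
# Ray budget for the prototype: 25 x 25 pixels per EI, 65 x 153 EIs.
N, I, J = 25, 65, 153
pixels = N * N * I * J     # rays traced during the Sequential ray generation step
sdf_rays = 3 * pixels      # one monochromatic triad per pixel in the SDF source file
print(pixels, sdf_rays)    # 6215625 18646875
```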

Table 1. Specifications of the MLA (model 630 from Fresnel Technologies).

Table 2. Specifications of the display panel from the smartphone Xperia 1 II.

3.2 Nerfacto method and model training

Before starting the NeRF model training, we designed a complex real-world scene with various objects placed at different depths. Figure 7 presents this scene, where a yellow filter (object 6) and a coin (object 7) were used to demonstrate the effectiveness of the NeRF model in restoring complex illumination behaviors, such as transparency and reflection. As shown in Fig. 7(a), the front view of the real-world scene was arranged to have the same dimensions as the overall InIm-based 3D LFD prototype (65 $\times$ 152 mm according to the display panel), so that a true-to-scale comparison with the corresponding 3D reconstruction can be provided during the evaluation process. However, this requirement is not part of the NeRF generation process and is only implemented for the qualitative assessment. Outside this context, users are free to consider larger scenes.

Fig. 7. Real-world scene used to illustrate the proposed work, in which eight objects are included. (a) Represents the front view of the scene, and (b) the top view where the relative depth position of the objects is shown according to the virtual MLA plane (red dotted line). The overall dimension of the scene (front view) is $65 (V) \times 152 (H)$ mm, according to the display panel size (see Table 2).

Following this, the volume scanning of the real-world scene was conducted with a simple smartphone, as presented in subsection 2.2. The scanning process, facilitated by this method, eliminates the need for a specific pitch between two acquisitions, allowing the user to move freely around the scene without constraints. In this example, we chose to acquire various perspectives of the scene within a hemispherical scan volume using a Samsung A54 smartphone that provides full HD resolution (1080 × 1920 pixels) photos. To do so, we launched the "Camera Pose" feature within the developer mode of the Kiri Engine app. By moving around the scene, this feature automatically acquires multiple photos for a maximum of 2 minutes, but the user is free to stop the acquisition earlier. Note that Kiri Engine recommends providing at least 20 photos from various angles for better results. The more distinct input views a NeRF is given, the higher the quality of the model generated after the training phase. In our case, we chose to take photos for the maximum time allowed by the application to achieve a high-quality 3D model, resulting in about 250 photos. After the acquisition, the app automatically processed these photos to determine their corresponding camera poses and sent the resulting data to the user via email. Finally, this data was used to begin the NeRF training phase. The Nerfacto method offers three training variants to generate a 3D model, namely Nerfacto, Nerfacto-big, and Nerfacto-huge. The training speed and quality are related to the variant selected: Nerfacto is the fastest but has the lowest quality, and Nerfacto-huge is the slowest but produces the highest quality. In this work, the standard Nerfacto training variant was used to illustrate the proposed method, but users are free to choose a different variant. The Nerfacto training was run using the Python code provided by Nerfstudio. Figure 8 shows the Nerfstudio live interface during the Nerfacto model training. With the computer setup mentioned above, training took about 20 min.

Fig. 8. Nerfstudio viewer interface that shows the custom-trained 3D model. The camera poses of the self-captured photos are displayed around the 3D model.

3.3 Evaluation setup

Figure 9 shows the experimental setup employed to perform the qualitative assessment, which includes a motorized gimbal mount, a motorized XYZ translation stage, and a viewing camera. The InIm-based 3D LFD prototype is arranged on the motorized gimbal mount to rotate in both horizontal and vertical directions, and the viewing camera is fixed on the XYZ motorized translation stage to control its position relative to the InIm-based 3D LFD prototype. The evaluation method consists of providing an angular investigation of the reconstructed 3D scene to validate the correct restitution of the perceived 3D information. During the experimental reconstruction, we slightly defocused the InIm-based 3D LFD prototype by setting the gap greater than the focal length ($g = 4$ mm) in order to prevent color artifacts caused by the subpixel layout of the display panel [5,31]. According to Ref. [32], the maximum angular field of view (FoV) in which a user can observe the reconstructed 3D image without flipping is defined as:

$$\psi = 2 \tan^{-1} \left( \frac{P_{\text{ml}}}{2g} \right),$$
where $P_{\text {ml}}$ is the pitch of a microlens (or an EI). Based on the prototype configuration, $\psi \approx 14^{\circ}$, i.e., about $\pm 7^{\circ}$ around the central view. For this reason, we examined the perceived perspectives of the reconstructed 3D image at five viewing angles within $\psi$, i.e., $0^{\circ}$ and $\pm 7^{\circ}$ on the horizontal and vertical axes. The viewing camera is placed in front of the central view $(0^{\circ})$ and the InIm-based 3D LFD prototype is then rotated to the aforementioned angles using the gimbal mount. In that way, the viewing camera perceives a particular viewpoint of the scene as a function of the defined viewing angle. The specifications of the viewing camera are referenced in Table 3 and Table 4.
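As a quick numerical check, assuming for illustration a microlens pitch of 1 mm (see Table 1) and the defocused gap $g = 4$ mm used in the experiment:

```python
import math

p_ml, g = 1.0, 4.0                                # microlens pitch and gap, in mm (assumed values)
psi = 2 * math.degrees(math.atan(p_ml / (2 * g)))
print(f"{psi:.1f} deg")                           # ~14.3 deg full FoV, i.e. about +/- 7 deg
```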

Fig. 9. The experimental setup employed to provide angular evaluation of the reconstructed 3D scene at different viewpoints.

Table 3. Specifications of the Alvium 1800 U-500 camera from Allied Vision.

Table 4. Specifications of the 85-351 lens from Edmund Optics.

4. Results

To validate the effectiveness of the proposed method, a comparison between the real-world scene and the corresponding reconstructed 3D image was conducted through the qualitative evaluation process. First, since the Nerfacto model provides continuous viewpoints within the scanned volume described in subsection 3.2, it is necessary to define a reference pose corresponding to the location of an observer relative to the 3D model. Here, the observer is the viewing camera from our experimental evaluation setup. Moreover, to ensure an accurate comparison with the actual scene, it is necessary to scale the objects in the model. For this purpose, we oriented, positioned, and scaled the model so that it corresponds to the front view shown in Fig. 7(a). This step was achieved using the rendering interface of our Python code, since COLMAP does not provide an out-of-the-box scale for the focal length of the acquisition camera and its distance to the scene. Specifically, the spatial coordinates of the NeRF model, which are provided by the COLMAP algorithm, are scaled linearly with the target placed in the scene (object 3 in Fig. 7(b)). Moreover, as shown in Fig. 7(b), the MLA plane (in red) was placed on the target to define a real and a virtual reconstruction zone. Figure 10 shows the corresponding EIA generated with Ansys Zemax OpticStudio. Figure 11(a) shows the actual scene observed at different viewpoints, and Fig. 11(b) shows the corresponding reconstructed 3D scene produced by the InIm-based 3D LFD prototype (obtained with the EIA presented in Fig. 10). For each viewing angle, a distinct perspective of the reconstructed 3D scene can be observed.

Fig. 10. The EIs generated with the Zemax simulation model corresponding to the front view presented in Fig. 7(a). The overall EIA includes $153 \times 65$ EIs. Experienced users should be able to recognize real and virtual EIs.

5. Discussion

The displacement of the apparent position of objects in the reconstructed scene matches that of the actual one, which indicates that the 3D information of the actual scene is correctly rendered for both real and virtual image reconstruction. Moreover, as shown in Visualization 2, the occlusion (object 3), transparency (object 6), and reflection (object 7) are maintained with the reconstructed 3D scene.

The color mismatch between the reconstruction and the actual scene comes from the difference in color space between the NeRF data, the display panel, and the viewing camera. Synchronizing the three color spaces would be required to restore the correct colors but is not relevant to this study. Note that the rendered RGB color can be adjusted by assigning new RGB wavelengths in the SDF source file.

The small difference in object size of the reconstructed scene is caused by the telecentric-like rendering of the scene and the loss of resolution from the InIm-based 3D LFD imaging configuration. The telecentric effect is a consequence of the parallel configuration of the virtual cameras during the NeRF data querying process (see Fig. 5), where the chief rays of each microlens are all parallel to each other. Telecentricity becomes a problem for accurately rendering 3D information when the observer is close to the reconstructed scene. This effect is not related to the NeRF model and can be corrected by implementing a toed-in configuration of the virtual cameras instead of a parallel configuration [13]. Note that the NeRF data querying process is not restricted to any capturing configuration and is performed in the same way whatever the geometry of the virtual cameras.

The loss of resolution can be attributed to the limited depth of field of resolution-priority InIm-based 3D LFDs [33,34]. In this configuration, the reconstructed 3D images are degraded as objects in the 3D scene move away from a reference image plane (RIP). This effect is known as multifaceted braiding and causes errors in the apparent size of reconstructed objects [33,35]. According to the Gaussian lens equation, the location $L$ of the RIP can be expressed as $L = gf/(g-f)$. Due to the slight defocus introduced in the experimental setup ($g = 4$ mm), the RIP is located at $L \approx 19$ mm in front of the MLA. This explains why the virtual images, which are located farther from the RIP than the real images, show lower reconstruction quality than the real ones [34]. Since the loss of spatial resolution is inherent to the reconstruction stage of InIm, this issue is outside the scope of this work.
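For illustration, assuming a microlens focal length of about $f = 3.3$ mm together with the defocused gap $g = 4$ mm, the RIP location evaluates to
$$L = \frac{gf}{g-f} = \frac{4 \times 3.3}{4 - 3.3} \approx 18.9\ \text{mm},$$
which is consistent with the $L \approx 19$ mm value used above.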

Fig. 11. (a) The actual real-world scene and (b) the corresponding reconstructed 3D scene observed by the viewing camera at five different viewpoints, namely $(- 7^ {\circ })$ left, $(+ 7^ {\circ })$ right, $(0^ {\circ })$ center, $(+ 7^ {\circ })$ top, and $(- 7^ {\circ })$ bottom. Visualization 1 and Visualization 2 show videos of the actual scene and the reconstructed scene from different viewing angles.

6. Conclusion

We presented in this paper a flexible synthetic approach that integrates NeRF-related data into lens design software to achieve undistorted orthoscopic real and/or virtual 3D images from a real-world scene. A general workflow was provided, including the use of ray-tracing-based lens design software, to facilitate the different processing steps involved in managing custom NeRF data. Through this work, we showed that Ansys Zemax OpticStudio can support and exploit NeRF data to generate EIs. Moreover, we showed that the Nerfstudio API allows for the modeling of complex real-world 3D scenes without implementing a bulky acquisition system. A simple smartphone is enough to provide the required data to train and create a custom NeRF model. A new mapping method for extracting undistorted orthoscopic data from a custom-trained NeRF model to generate EIs was presented. This method enables the generation of real or virtual 3D images, or both at the same time, by controlling the relative depth of scene objects through the MLA plane position. This approach offers a significant advantage in the control of the 3D image positioning since objects are no longer restricted to their initial capture position. The effectiveness of the proposed method was verified through a qualitative assessment of the reconstructed 3D scene. Orthoscopic 3D reconstruction from real-world scenes, which is challenging for conventional InIm capture techniques, was achieved without additional post-processing steps. In addition, the results showed that transparency and reflection from the NeRF model were also conserved in the 3D reconstruction process. The proposed work can be used to manage and render orthoscopic 3D images from custom-trained NeRF-related models for any InIm-based 3D LFD. The next step is to implement this approach to characterize the optical performance of InIm-based 3D LFDs using complex 3D scenes from NeRF-related models. Since we demonstrated that the capture stage can be simulated in a lens design software, the 3D reconstruction from a NeRF-related model can also be simulated and analyzed using the method presented in Ref. [19].

Funding

Natural Sciences and Engineering Research Council of Canada (RGPIN-2016-05962).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Visualization 1 and Visualization 2.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]  

2. M. Martínez-Corral and B. Javidi, “Fundamentals of 3d imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

3. B. Wilburn, N. Joshi, V. Vaish, et al., “High performance imaging using large camera arrays,” in ACM SIGGRAPH 2005 Papers, (Association for Computing Machinery, New York, NY, USA, 2005), SIGGRAPH ’05, p. 765–776.

4. X. Xiao, M. DaneshPanah, M. Cho, et al., “3d integral imaging using sparse sensors with unknown positions,” J. Display Technol. 6(12), 614–619 (2010). [CrossRef]  

5. J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144–1146 (2002). [CrossRef]  

6. Y. Igarashi, H. Murata, and M. Ueda, “3-d display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]  

7. S.-W. Min, S. Jung, J.-H. Park, et al., “Three-dimensional display system based on computer-generated integral photography,” in Stereoscopic Displays and Virtual Reality Systems VIII, vol. 4297 M. T. Bolas, A. J. Woods, M. T. Bolas, J. O. Merritt, S. A. Benton, eds., International Society for Optics and Photonics (SPIE, 2001), pp. 187–195.

8. H. E. Ives, “Optical properties of a lippmann lenticulated sheet,” J. Opt. Soc. Am. 21(3), 171–176 (1931). [CrossRef]  

9. H.-L. Zhang, H. Deng, H. Ren, et al., “Method to eliminate pseudoscopic issue in an integral imaging 3d display by using a transmissive mirror device and light filter,” Opt. Lett. 45(2), 351–354 (2020). [CrossRef]  

10. J. Arai, F. Okano, H. Hoshino, et al., “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998). [CrossRef]  

11. J.-S. Jang and B. Javidi, “Three-dimensional projection integral imaging using micro-convex-mirror arrays,” Opt. Express 12(6), 1077–1083 (2004). [CrossRef]  

12. S. Xing, X. Sang, X. Yu, et al., “High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction,” Opt. Express 25(1), 330–338 (2017). [CrossRef]  

13. J. Wen, X. Yan, X. Jiang, et al., “Nonlinear mapping method for the generation of an elemental image array in a photorealistic pseudoscopic free 3d display,” Appl. Opt. 57(22), 6375–6382 (2018). [CrossRef]  

14. H. Navarro, R. Martínez-Cuenca, G. Saavedra, et al., “3d integral imaging display by smart pseudoscopic-to-orthoscopic conversion (spoc),” Opt. Express 18(25), 25573–25583 (2010). [CrossRef]  

15. B. Mildenhall, P. P. Srinivasan, M. Tancik, et al., “Nerf: Representing scenes as neural radiance fields for view synthesis,” Commun. ACM 65(1), 99–106 (2021). [CrossRef]  

16. M.-U. Erdenebat, T. Amgalan, A. Khuderchuluun, et al., “Comprehensive high-quality three-dimensional display system based on a simplified light-field image acquisition method and a full-connected deep neural network,” Sensors 23(14), 6245 (2023). [CrossRef]  

17. J. Park, E. Stoykova, H. Kang, et al., “Numerical reconstruction of full parallax holographic stereograms,” 3D Research 3(3), 6 (2012). [CrossRef]  

18. Nerfstudio documentation, https://docs.nerf.studio/index.html.

19. S. Rabia, G. Allain, H. M. Wong, et al., “Optical performance analysis of an oled-based 3d light field display using lens design software,” International Optical Design Conference 2023, vol. 12798 (SPIE, 2023), pp. 167–170.

20. Z. Qin, P.-Y. Chou, J.-Y. Wu, et al., “Image formation modeling and analysis of near-eye light field displays,” J. Soc. Inf. Disp. 27(4), 238–250 (2019). [CrossRef]  

21. J. T. Kajiya and B. P. Von Herzen, “Ray tracing volume densities,” ACM SIGGRAPH computer graphics 18(3), 165–174 (1984). [CrossRef]  

22. G. Allain, S. Rabia, and R. Tremblay, “Github project for the NeRF-Zemax workflow,” github, (2024). https://github.com/lrio-copl/NERF-ZEMAX.

23. Y. Liu, D. Cheng, T. Yang, et al., “High precision integrated projection imaging optical design based on microlens array,” Opt. Express 27(9), 12264–12281 (2019). [CrossRef]  

24. Y. Xing, X.-Y. Lin, L.-B. Zhang, et al., “Integral imaging-based tabletop light field 3d display with large viewing angle,” Opto-Electron. Adv. 6(6), 220178 (2023). [CrossRef]  

25. Y. Kim, J.-H. Park, S.-W. Min, et al., “Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array,” Appl. Opt. 44(4), 546–552 (2005). [CrossRef]  

26. M. Tancik, E. Weber, E. Ng, et al., “Nerfstudio: A modular framework for neural radiance field development,” in ACM SIGGRAPH 2023 Conference Proceedings, (2023), SIGGRAPH ’23.

27. T. Müller, A. Evans, C. Schied, et al., “Instant neural graphics primitives with a multiresolution hash encoding,” ACM Trans. Graph. 41(4), 1–15 (2022). [CrossRef]  

28. J. T. Barron, B. Mildenhall, M. Tancik, et al., “Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields,” ICCV (2021).

29. A. Chen, Z. Xu, A. Geiger, et al., “Tensorf: Tensorial radiance fields,” in European Conference on Computer Vision (ECCV), (2022).

30. J. L. Schönberger and J.-M. Frahm, “Structure-from-motion revisited,” in Conference on Computer Vision and Pattern Recognition (CVPR), (2016).

31. M. Okui, M. Kobayashi, J. Arai, et al., “Moiré fringe reduction by optical filters in integral three-dimensional imaging on a color flat-panel display,” Appl. Opt. 44(21), 4475–4483 (2005). [CrossRef]  

32. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58(1), 71–76 (1968). [CrossRef]  

33. H. Navarro, R. Martinez-Cuenca, A. Molina-Martian, et al., “Method to remedy image degradations due to facet braiding in 3d integral-imaging monitors,” J. Disp. Technol. 6(10), 404–411 (2010). [CrossRef]  

34. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28(16), 1421–1423 (2003). [CrossRef]  

35. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, et al., “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A 22(4), 597–603 (2005). [CrossRef]  

Supplementary Material (3)

Code 1: GitHub project for the NeRF-Zemax workflow
Visualization 1: Video of the actual scene observed from different viewpoints
Visualization 2: Video of the reconstructed 3D scene observed from different viewpoints

