Micro to macro scale simulation coupling for stray light analysis

Open Access

Abstract

Stray light in an optical system is unwanted parasitic light that may degrade performance. It can originate from different sources and may lead to different problems in the optical system, such as fogging, ghost images for imagers, or inaccurate measurements for time-of-flight applications. One of the root causes is the reflectivity of the sensor itself. In this paper we present a new optical simulation methodology to analyze the stray light contribution due to sensor reflectivity by coupling electromagnetic simulation (to calculate the pixels’ bidirectional reflectance distribution function, or BRDF) with ray-tracing simulation (for stray light analysis of the camera module). With this simulation flow we have been able to qualitatively reproduce red ghost images observed on different sensors in our laboratory.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

When designing an optical system, whatever the application, minimizing stray light is an important aspect of the work. It is unavoidable that some parasitic light will remain in the system and degrade performance, so it is essential to identify the origin of this unwanted light in order to reduce its impact. Stray light is often referred to as flare or veiling glare in an imaging system and is defined in ISO 9358 and ISO 18844 [1,2] as light falling on an image that does not emanate from the subject point. The term “unwanted light” best summarizes what most people define as flare [3]. Depending on the application, it can create different issues.

For visible imaging applications flare is a well-known phenomenon and is sometimes used deliberately for artistic effect in photography, movies, or video games. It can arise from different origins in the system: bright sources outside the field of view of the system that find a path to the sensor, internal reflections on the optics (lenses, filters) or mechanics of the system, scattering due to material imperfections (roughness, for example), diffraction on the elements of the system, etc. Flare reduces image quality because it can reduce the Signal to Noise Ratio (SNR) of the sensor, reduce the dynamic range of High Dynamic Range (HDR) cameras, create saturated areas on the sensor, or create overlapped features in the final image [3–9]. For these reasons, it is most often seen as an artifact that should be reduced for better image quality. The two pictures in Fig. 1 illustrate some of the different types of flare on images; what can be seen in the left picture around the bright source is called petal flare and will be explained and reproduced in the final section of this work.

Fig. 1. Examples of flare on images (copyright-free images from Refs. [10,11]).

For time of flight (TOF) applications in the near infrared range, where the distance to the object is evaluated to reconstruct a 3D image of the scene (either with global shutter pixel technology or single photon avalanche diodes), stray light can lead to errors in depth data accuracy [12–15]. The degradation can be significant, for example when a weak signal from a distant object is affected by strong scattering/reflectivity signals from close objects, as explained in Fig. 2. As an example, Mure-Dubois [12] reports an error in the range of 100 mm to 400 mm at a 4 m distance.

Fig. 2. Light scattering in a TOF camera (Refs. [12,13]).

Even though a lot can be done for these different applications to reduce stray light issues by post-processing of the data or calibration of the system [7,12–14,16–20], it is critical to identify the origin of the problem in the optical module in order to reduce the amount of stray light perturbing the system.

Analytical calculations in the past [21–26], and ray-tracing software today [19,27–29], are powerful methods to model light propagation in optical systems and estimate the degradation of performance due to stray light. These studies require knowledge of the surface properties of the elements of the system. The principle of the analytic calculations used in the past to model stray light in optical modules is to consider the diffraction and scattering of light on the optical elements of the system. For that purpose, the Bidirectional Scattering Distribution Function (BSDF) is widely used to describe the scattering due to the surface roughness of optical elements [30]. Peterson [22], for example, used the Harvey model [31] to describe and calculate the scattering of rough surfaces of optical elements in an imaging system. More recently, Caron, Mariani, and Ritt have extended these calculations, and readers may refer to their reviews and related work for more details [25,26,28,29,32]. Of course, any BSDF model can be defined on the surfaces, and most ray-tracing software tools available on the market today provide scatter models that can be applied directly to the surfaces of the optical elements and the housing of the system (ABg model, Lambertian, Mie scattering, imported BSDF data, etc.). When only the reflectance of a surface is required for the analysis (as studied here in our work), the bidirectional reflectance distribution function (BRDF), rather than the full BSDF, which includes both reflectance and transmittance, can be used.
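
To make the scatter-model discussion concrete, here is a minimal sketch of evaluating the ABg model named above; the coefficients A, B, and g are illustrative placeholders, not values for any surface in this paper.

```python
# A minimal sketch of the ABg scatter model, assuming illustrative
# coefficients (A, B, g are not taken from this work).
import numpy as np

def abg_bsdf(theta_scatter_deg, theta_specular_deg, A=1e-4, B=1e-3, g=2.0):
    """ABg model: BSDF = A / (B + |beta - beta0|^g),
    with beta = sin(theta) measured from the surface normal."""
    beta = np.sin(np.radians(theta_scatter_deg))
    beta0 = np.sin(np.radians(theta_specular_deg))
    return A / (B + np.abs(beta - beta0) ** g)

# The BSDF falls off away from the specular direction (here 10 degrees):
for th in (10.0, 20.0, 45.0, 80.0):
    print(f"theta = {th:4.1f} deg -> BSDF = {abg_bsdf(th, 10.0):.3e} sr^-1")
```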

Nevertheless, in all these studies, the contribution of the detector itself to the parasitic light of the system is either not considered or is modeled as a simple coating on the detector surface producing a given amount of specular reflectivity. But considering the wavelengths of the applications (visible for photography, near infrared for TOF and LIDAR, as some examples) and the pixel sizes of detectors on the market today (between 1 µm and 4 µm for standard CMOS imagers and between 5 µm and 10 µm for TOF pixels or SPADs), the detector will act like a diffraction grating and reflect power in different directions corresponding to the grating orders. Analytical grating models could be applied to the detector surface to obtain the correct diffracted order directions depending on the wavelength, the pixel size, and the incident angle, but they cannot predict what fraction of the incident power will be reflected into each diffracted order. To estimate the correct diffuse reflectivity of the detector for each incidence angle of the source (i.e., the BRDFs of the pixels), an electromagnetic simulation of the pixels solving Maxwell's equations is required. The finite-difference time-domain (FDTD) method is now widely used in the optical community for the simulation of imaging or TOF sensors [33–37], as well as for the simulation of surface reflectivity [38–41]. Thus, this solver allows us to obtain accurate BRDFs of our pixels, and subsequently use them as the surface property of our detector in the ray-tracing simulation for stray light analysis of the system. The coupling of two simulation worlds (electromagnetic and ray-tracing) with vastly different scales is an emerging field in the scientific community for different applications [42–50]. In this work we present a demonstration of such an FDTD to ray-tracing simulation workflow that couples the light propagation at different spatial scales (µm and mm) in order to analyze stray light in an optical module. We believe this methodology could be used to complement and improve the stray light models in the aforementioned references in this field.
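
As a simple illustration of why a pixel array diffracts at these scales, the sketch below counts the propagating reflected orders from the 1D grating equation; the pitch and wavelength echo values used later in this paper, and the grating period is assumed equal to the pitch (true for pixels without color filters).

```python
# A quick check of the 1D grating equation sin(theta_m) = sin(theta_i) + m*lambda/d:
# several reflected orders propagate when the period d exceeds the wavelength.
import numpy as np

def propagating_orders(period_um, wavelength_um, theta_i_deg=0.0):
    """Reflected orders m with |sin(theta_m)| <= 1 (i.e., not evanescent)."""
    s0 = np.sin(np.radians(theta_i_deg))
    orders = []
    for m in range(-20, 21):
        s = s0 + m * wavelength_um / period_um
        if abs(s) <= 1.0:
            orders.append((m, float(np.degrees(np.arcsin(s)))))
    return orders

# e.g. a 4.6 um period at 940 nm, normal incidence (values used in Section 3):
for m, th in propagating_orders(4.6, 0.94):
    print(f"order {m:+d}: reflected at {th:6.2f} deg")
```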

The work is organized as follows: in Section 2, we describe the principle of the simulation flow, explaining the constraints considered and the general methodology; in Section 3, we detail the BRDF calculation by electromagnetic simulation and demonstrate this methodology by comparing with the experimental BRDF characterization of some pixels; finally, in Section 4, we apply this coupling methodology to reproduce some stray light effects observed in the characterization of real modules.

2. Micro to macro scale coupling simulation methodology

The methodology we present here allows us to simulate the BRDF of our sensor pixels by electromagnetic simulation and use it as a surface property in our ray-tracing simulation for image sensor or TOF applications. It is important to note, however, that this methodology could be extended to any application where a BRDF needs to be simulated electromagnetically and used afterwards in ray-tracing simulations. In general, the electromagnetic simulation evaluates the coherent scattering properties of the surface, from which the incoherent scattering properties required for the BRDF format can be generated. In this work we use Ansys Lumerical FDTD software [51] for the electromagnetic simulation and Zemax OpticStudio [52] for the ray-tracing simulation, but this simulation flow is compatible with other software.

A schematic of an optical module for visible imaging is represented in Fig. 3. In this case it is composed of a housing, several lenses, an infrared filter, and a CMOS sensor. The light reaching each pixel is defined by two main parameters: the angular distribution, which corresponds to the f-number (f/#) of the objective (the ratio between its focal length and its entrance pupil diameter), and the chief ray angle (CRA), the angle of the ray passing through the center of the aperture stop and the center of the pixel of interest. Figure 3(a) shows what happens for pixels at the sensor center, with the insert showing a zoomed view of two Backside Imager (BSI) technology pixels (blue and green) with schematic geometrical rays and an electromagnetic simulation at a green wavelength. BSI technology, in comparison with Frontside Imager (FSI) technology, has better optical performance because metallic interconnections are not in the light path [37], and is now the standard in the CMOS image sensor industry. Figure 3(b) shows what happens for pixels at the sensor edge, with the insert again showing a zoomed view of two BSI pixels (blue and green) with schematic geometrical rays and an electromagnetic simulation at a green wavelength. The pixels form a locally periodic pattern on the image sensor surface, and the pitch refers to the minimum distance between pixels. For color sensitive detectors, the pixels are arranged into a 2 by 2 locally periodic structure with red, green, and blue (RGB) color filters, called the Bayer pattern, with a period that is twice the pitch, as shown in the top view of Fig. 3.
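
As a small numeric aside (not from the paper), the half-angle of the illumination cone reaching a pixel follows directly from the f-number in the paraxial, object-at-infinity approximation:

```python
# Cone half-angle from the f-number: theta = arctan(1/(2*f#)).
# The f-numbers below are illustrative.
import math

for f_number in (1.8, 2.8, 4.0):
    half_angle = math.degrees(math.atan(1.0 / (2.0 * f_number)))
    print(f"f/{f_number}: cone half-angle ~ {half_angle:.1f} deg")
```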

Fig. 3. Representation of an image sensor module. Figure 3(a) shows rays for an on-axis field and Fig. 3(b) shows rays for an off-axis field, with inserts representing light propagation inside two pixels (blue and green) of a 1.1 µm pitch sensor (on the left is a schematic of geometrical rays, and on the right the electric field intensity from an FDTD simulation at a green wavelength). Between the inserts is a top view of the RGB sensor with the periodic Bayer RGB pattern (2 by 2 pixels) simulated by FDTD.

As represented in Fig. 3(b), to re-center the off-axis incoming light on the photodiode, the color filters (if present) and microlenses are shifted in most commercial sensors today. Each pixel on the matrix has a different design because the radial shift of each filter/microlens is optimized for the CRA corresponding to its position on the sensor; as a result, the optical efficiency and the reflectivity will also differ. Thus, the pixel BRDFs must be calculated for several incidence angles (defined by the illumination cone and the CRA), for different pixel designs (defined by the position on the matrix), and for the wavelengths of interest. Of course, on the detector, the local variation of the pixel design and illumination is small, and simulating only a few different areas of interest is sufficient to estimate the local BRDFs over the entire sensor plane. We therefore simulate small groups of pixels at different locations on the sensor (i.e., corresponding to different fields in the scene) that receive the same uniform illumination; the spatial extent of each group is small compared to the spatial variation of the source. For more details, the reader may refer to our previous work describing the CMOS pixel simulation methodology [33]. At the pixel level, light is distributed uniformly: spatially over the pixel area, and angularly inside a cone defined by the exit pupil of the objective and the pixel. With electromagnetic simulations, it is possible to simulate these groups of pixels for different incidence angles (with a plane wave source) to obtain the corresponding BRDF, as will be explained in more detail in Section 3. Finally, in the ray-tracing software, these BRDFs are imported and used as surface properties at the corresponding locations on the detector plane. This electromagnetism to ray-tracing simulation flow is summarized in Fig. 4.
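
The simulation plan implied by this paragraph can be sketched as a simple enumeration of FDTD runs; the field positions, angular sampling, and names below are illustrative assumptions, not the settings used in this work.

```python
# A sketch of the simulation plan: one coherent FDTD run per
# (sensor location, incident angle, polarization) combination.
import itertools

sensor_fields = ["center", "mid", "edge"]      # distinct microlens/filter shifts
incident_angles_deg = range(-15, 16, 5)        # cone set by the f/# and the CRA
polarizations = ["S", "P"]

jobs = list(itertools.product(sensor_fields, incident_angles_deg, polarizations))
print(f"{len(jobs)} FDTD runs required")
# Wavelengths come "for free" within each run, since FDTD is a time-domain method.
```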

Fig. 4. Representation of the electromagnetism to ray-tracing simulation flow for stray light analysis of an optical module considering the reflectivity of the detector. As noted on the right, each incident angle (and indeed polarization) must be simulated separately because FDTD is a fully coherent electromagnetic solver.

3. BRDF with electromagnetic simulation

The BRDF is the ratio of reflected radiance to the irradiance of a surface for a given incident angle, reflected angle, and wavelength. Fully characterizing the BRDF using FDTD (or a similar electromagnetic solver) requires many individual simulations, one for each angle of incidence and polarization (S or P). Multiple plane waves at different incident angles cannot be solved in the same simulation because FDTD is fully coherent and interference between the different sources would result. However, the frequency-dependent response can be extracted from a single FDTD simulation, with some care, because it is a time-domain method, so many wavelengths can be solved in a single simulation. To calculate the BRDF we must consider the reflected light in the far field, where it behaves as a plane wave with well-defined angle and polarization. This is achieved by performing a near to far field projection of the FDTD near field results. In the far field, quantities such as the Poynting vector can be easily determined from the electric field intensity, which allows us to calculate the radiance quantities required for the determination of the BRDF. The irradiance can be determined from the plane wave source conditions used in the FDTD simulation. Unpolarized results can be easily calculated from the full polarization information available in the FDTD simulations. In the remainder of this section, we explain the full details of this BRDF calculation with Ansys Lumerical FDTD.
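
As a hedged sketch of the final assembly step, suppose the far-field analysis yields the fraction of incident power reflected into a small solid-angle bin for each polarization; a BRDF value for that bin can then be formed as follows (the numbers are illustrative):

```python
# BRDF per bin ~ eta_r / (dOmega * cos(theta_r)), where eta_r is the
# power fraction reflected into a small solid angle dOmega around theta_r.
# Unpolarized input is the incoherent average of the S and P results.
import math

def brdf_bin(eta_s, eta_p, theta_r_deg, d_omega_sr):
    eta_unpol = 0.5 * (eta_s + eta_p)   # average over polarization (incoherent)
    return eta_unpol / (d_omega_sr * math.cos(math.radians(theta_r_deg)))

# e.g. 2% (S) and 3% (P) of incident power into a 0.01 sr bin at 20 degrees:
print(f"BRDF ~ {brdf_bin(0.02, 0.03, 20.0, 0.01):.3f} sr^-1")
```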

The pixel structure is assumed to be locally periodic, which is a good assumption given that the shifts in the microlenses and color filters occur slowly from the center to the edge of the image sensor, on length scales much larger than the wavelength. The simulation can therefore be reduced to the smallest periodically replicated group of pixels, which is often four when a Bayer pattern is used for visible applications (as shown in the top view of the sensor in Fig. 3). In this locally periodic approximation, the Bayer pattern forms a two-dimensional grating with well-defined primitive lattice vectors ${\vec{a}_1}$ and ${\vec{a}_2}$. The excitation source is a plane wave with a given incident angle and polarization. For boundary conditions, we use perfectly matched layer (PML) absorbing boundaries above and below the pixels to absorb both the reflected light and the light transmitted to the silicon substrate carrier. On the lateral boundaries, either Bloch-periodic boundaries or split-field boundaries, also known as the broadband fixed angle source technique (BFAST), can be used. Bloch-periodic boundaries result in a wavelength-dependent incident angle, while the BFAST method results in a substantial increase in simulation time, especially at steeper incident angles. In this work, we chose Bloch-periodic boundaries because the wavelength dependence of the source angle can be accounted for during the analysis. During the FDTD simulation, the transient electromagnetic fields are Fourier transformed to the frequency domain and normalized to the source pulse used to excite the system [53]. By calculating these fields on a plane above the image sensor surface, we can calculate the Poynting vector in the frequency domain and therefore the total reflected power, normalized to the source power, at any desired wavelength. Typical FDTD grid sizes are on the order of λmin/10, where λmin is the minimum wavelength (in each material) over the desired bandwidth, and standard methods can be used to ensure convergence.
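
A back-of-the-envelope application of the λmin/10 rule quoted above, with illustrative refractive indices (the actual mesh is set per material and verified by convergence testing):

```python
# Grid sizing from dx ~ lambda_min / (10 * n): the wavelength inside each
# material is lambda_0 / n, so high-index materials need a finer mesh.
# Indices below are rough illustrative values at 400 nm.
wavelength_min_um = 0.4
for material, n in [("air", 1.0), ("SiO2", 1.46), ("Si", 5.6)]:
    dx_um = wavelength_min_um / (10.0 * n)
    print(f"{material}: dx ~ {1000 * dx_um:.0f} nm")
```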

In order to calculate the reflected power to each angle (or each diffracted order), we must perform a near to far field projection using well known Green’s function methods [53]. In general, calculating the far field from the near field involves evaluating integrals of the tangential electric and magnetic fields over a near field surface for each position in the far field. When the far field position approaches an infinite distance compared to the dimensions of the near field surface (the Fraunhofer diffraction limit), the far field position can be defined by a wave vector $\vec{k} = {k_0}\vec{u}$ and a distance $R = |{{{\vec{r}}_{far}} - {{\vec{r}}_{near}}} |$ where k0 is the wavenumber in the far field medium, $\vec{u}$ is the unit vector given by $\vec{u} = ({{{\vec{r}}_{far}} - {{\vec{r}}_{near}}} )/R\; $ and ${\vec{r}_{near}}$ is the center of the near field surface. The integrals over the near field surface can then be reduced to a series of Fourier transforms of the near electromagnetic fields.

Here, we know the reflected near fields on a surface corresponding to one period of a periodic system. We can therefore calculate the contribution of a single unit cell to the far field, $\vec{E}_0^{far}(\vec{k})$, from the near field of a single period, $\vec{E}_0^{near}(\vec{r})$. From there, we can calculate the reflected far field of a finite number of periods simply by summing the contributions of each period with the appropriate phase:

$$\vec{E}^{far}(\vec{k}) = \sum_{p,q=1}^{N} w(p,q)\, \vec{E}_0^{far}(\vec{k})\, \exp\!\left[ i \left( \vec{k}_\parallel - \vec{k}_\parallel^{inc} \right) \cdot \left( p\,\vec{a}_1 + q\,\vec{a}_2 \right) \right] \tag{1}$$
where $\vec{k}_\parallel$ is the in-plane wavevector of the reflected light, $\vec{k}_\parallel^{inc}$ is the in-plane wavevector of the incident light, $\vec{a}_1$ and $\vec{a}_2$ are the primitive lattice vectors of the periodic image sensor, and $w$ is a weighting factor. For an infinite device with $w = 1$, the sum is non-zero only when $(\vec{k}_\parallel - \vec{k}_\parallel^{inc}) \cdot \vec{a}_1$ and $(\vec{k}_\parallel - \vec{k}_\parallel^{inc}) \cdot \vec{a}_2$ are multiples of $2\pi$, corresponding to the grating orders of the device. In the simple case where $\vec{a}_1$ and $\vec{a}_2$ are aligned with the x and y axes respectively, the grating orders occur when
$$k_x - k_x^{inc} = 2\pi n / a_1 \tag{2}$$
$$k_y - k_y^{inc} = 2\pi m / a_2 \tag{3}$$
where n and m are integers which can be used to reference each order.
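
The following sketch applies Eqs. (2) and (3) numerically to enumerate the propagating reflected orders of the 2D pixel grating; the period, wavelength, and incidence below are illustrative.

```python
# Enumerate propagating reflected orders (n, m) of a 2D grating from
# Eqs. (2) and (3): an order propagates when |k_parallel| <= k0.
import numpy as np

def grating_orders_2d(a1_um, a2_um, wavelength_um, theta_i_deg=0.0, phi_i_deg=0.0):
    k0 = 2 * np.pi / wavelength_um
    st = np.sin(np.radians(theta_i_deg))
    kx_inc = k0 * st * np.cos(np.radians(phi_i_deg))
    ky_inc = k0 * st * np.sin(np.radians(phi_i_deg))
    orders = []
    for n in range(-5, 6):
        for m in range(-5, 6):
            kx = kx_inc + n * 2 * np.pi / a1_um
            ky = ky_inc + m * 2 * np.pi / a2_um
            kpar2 = kx**2 + ky**2
            if kpar2 <= k0**2:  # propagating, not evanescent
                theta = np.degrees(np.arcsin(np.sqrt(kpar2) / k0))
                orders.append((n, m, float(theta)))
    return orders

# 2.2 um period (no color filters, so period = pitch) at 940 nm, normal incidence:
for n, m, th in grating_orders_2d(2.2, 2.2, 0.94):
    print(f"(n,m)=({n:+d},{m:+d}) -> polar angle {th:5.2f} deg")
```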

From Eq. (1), we can calculate the normalized power (and polarization, if desired) of light reflected at any angle from the surface, as a function of the incident angle and polarization. This can be done either in the limit of an infinite number of periods or for a finite number of periods. This information on the reflective properties of the image sensor surface can be saved for subsequent analysis in ray-tracing. We note that the FDTD simulation provides the fully coherent results, including phase and polarization, which can be post-processed into incoherent results as required when passing the information to a ray tracer. There are several methods which can be used to store the reflective properties for ray-tracing. For example, the grating order efficiencies could be used directly to determine the fraction of rays reflected into each order, while the directions could be determined from Eqs. (2) and (3). While this method is preferable in principle, support for wavelength and incident angle dependent diffraction of 2D gratings is not always available in ray-tracing tools. An alternative is to use the tabular BRDF data format that most ray-tracing software can handle easily. When using a tabular BRDF data format, the diffracted order efficiencies cannot be employed directly because the reflectance of a grating as a function of output angle is composed of Dirac delta functions that cannot be easily interpolated. Instead, we want to treat the ideal plane waves as finite-sized beams and thereby create a BRDF that is smooth enough to allow for interpolation. One method to achieve this is to apply a weighting function in Eq. (1) of:

$$w(p,q) = \exp\!\left[ -\,|p\,\vec{a}_1 + q\,\vec{a}_2|^2 / (R_0/2)^2 \right] \tag{4}$$
where we have assumed that the primitive lattice vectors, $\vec{a}_1$ and $\vec{a}_2$, are aligned with the Cartesian x and y axes.

This is approximately equivalent to the angular reflection that will occur for a finite-sized incident beam of radius $R_0$. Here, we are not so concerned with the precise value of $R_0$, nor with the specific choice of function, but rather with smoothing the Dirac delta functions present in the BRDF of an ideal grating into functions that can be managed through the tabular BRDF formalism available in ray-tracing software. Choosing a larger value of $R_0$ gives narrower diffracted beams, which requires sampling the BRDF at more angles; a smaller value of $R_0$ is less accurate but allows sampling the BRDF at fewer angles. More quantitatively, the above weighting function will result in a far field composed of multiple beams of the form

$$\vec{E}^{far}(\vec{k}) = \sum_{n,m} \alpha_{n,m} \exp\!\left[ \frac{-R_0^2 \left( \vec{k}_\parallel - n\vec{K}_1 - m\vec{K}_2 \right)^2}{16} \right] \tag{5}$$
where $\vec{K}_1$ and $\vec{K}_2$ are the reciprocal lattice vectors of the grating ($\vec{K}_1 = \vec{u}_x\, 2\pi/a_1$ and $\vec{K}_2 = \vec{u}_y\, 2\pi/a_2$ in the simple case where the primitive lattice vectors are aligned with the Cartesian x and y axes), n and m are integers corresponding to the grating orders, and $\alpha_{n,m}$ is a complex coefficient. Near normal incidence, this results in beams with an angular FWHM in intensity of $2\sqrt{2\ln(2)}\,\lambda/(\pi R_0) \approx 0.75\lambda/R_0$. In addition, the angular separation of the orders near normal is $\lambda/a$, where $a$ is the period (in both x and y) of the image sensor. We chose $R_0 \sim 5a$ because it results in beams that are well separated between the orders (the FWHM is approximately 15% of the angular separation of the orders). Furthermore, $R_0 \sim 5a$ results in beams that range from about 1.5 to 3.7 degrees FWHM for the examples considered here, which allows for reasonable angular sampling in the BRDF files and is on the same scale as the angular resolution of the detector used for the experimental results. Finally, values of $R_0$ between $4a$ and $10a$ did not show appreciable differences in the final ray-tracing results, as expected. We note that there can be some confusion between the pitch of the image sensor and the period of the grating as simulated by FDTD: when a Bayer pattern is used, the period of the grating is twice the pitch because the color filters break the periodicity of each individual pixel, as can be seen in the top view of Fig. 3.
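
A quick numeric check of the trade-off just described, using the quoted FWHM and order-separation formulas (the pitch and wavelength here are illustrative):

```python
# Beam FWHM ~ 0.75*lambda/R0 vs. order separation lambda/a near normal,
# for R0 = 5a. With a Bayer pattern the grating period a is twice the pitch.
import math

a_um = 2 * 1.1           # period for an illustrative 1.1 um pitch Bayer sensor
wavelength_um = 0.53
R0 = 5 * a_um

fwhm_rad = 2 * math.sqrt(2 * math.log(2)) * wavelength_um / (math.pi * R0)
separation_rad = wavelength_um / a_um
print(f"beam FWHM        ~ {math.degrees(fwhm_rad):.2f} deg")
print(f"order separation ~ {math.degrees(separation_rad):.2f} deg")
print(f"ratio            ~ {fwhm_rad / separation_rad:.2f}")  # ~0.15, as stated
```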

To demonstrate this electromagnetic BRDF simulation methodology for pixels, we performed BRDF characterizations by measurement. The reader may refer to the publication by Guarnera et al. [54] for a complete review of BRDF representation and acquisition. The characterizations were performed with a Mini-Diff instrument from LightTec [55] at 940 nm at normal incidence on two different process technologies with different pixel pitches (a 4.6 µm pixel and a 2.2 µm pixel, both without color filters). The illumination spot size in characterization was around 1 mm in diameter. We performed the measurements on samples without a radial shift of the microlenses (i.e., all pixels were identical) in order to match the simulation conditions. Figures 5(a) and 5(b) show, respectively, the results from characterization and simulation on the 2.2 µm pixel pitch. Both results agree qualitatively in terms of energy, with direction angles corresponding to those predicted by the grating equation (Eqs. (2) and (3)) for the 940 nm wavelength at normal incidence in air. A log scale is used to emphasize the signal in the diffracted orders. We also notice that the weak 2nd orders are not detected in characterization, due to the limited detector responsivity of the equipment under these conditions [55]. Figures 5(c) and 5(d) show, respectively, the results from characterization and simulation on the 4.6 µm pixel pitch. Once again, the simulation results qualitatively match the characterized ones, with angles corresponding to the grating equation. The small differences observed can be explained by the simulation model: even though we try to model the pixels as accurately as possible, there are always some differences from the final process realized on a silicon wafer (refractive indices of the materials, layer thicknesses, microlens shape, isolation trench profiles, etc.). Furthermore, the simulated beams have an angular width determined by our choice of $R_0$, while the experimental results are limited by the angular resolution of the measurement device.

Fig. 5. BRDF characterization and simulation comparison for different pixel pitches at 940 nm at normal incidence. Figures 5(a) and 5(c) show the characterized BRDF for the 2.2 µm and 4.6 µm pixels, respectively. Figures 5(b) and 5(d) show the FDTD-simulated BRDF for the 2.2 µm and 4.6 µm pixels, respectively. The white rings correspond to polar angles in 10 degree increments up to 90 degrees.

We also report in Table 1 the absolute reflectance values from simulation and experimental characterization at 940 nm. For reflectance measurements on scattering samples, it is often useful to combine two different tools: a BRDF instrument like the Mini-Diff used here (or a more sophisticated goniometer, if available), which gives the angular information but captures the total diffuse reflectance less accurately, and an integrating sphere, which accurately measures the total reflectance but without angular information. For the latter we performed the measurement with a Lambda 1050 from Perkin Elmer [56] to compare the total reflectance with simulation. The measurement values in Table 1 correspond to the mean measured over 5 samples, with the uncertainty reported in parentheses. The results in Table 1 show good agreement between simulation and characterization (considering the uncertainty of the measurements), for both specular and total integrated reflectance. One should notice from Table 1 that the reflectance can be quite significant for some pixel technologies, especially at 940 nm on BSI technology due to the low absorption of silicon and therefore the light reflected from the metallic interconnections under the silicon substrate. As already discussed, this can create significant stray light issues in the optical module that have to be identified and reduced.

Table 1. Image sensor reflectance results comparison

As it is difficult to accurately characterize by measurement local BRDFs on sensors with a radial shift of microlenses (it would require spot sizes of a few microns in diameter with good angular detection accuracy and wavelength scanning of the source), the FDTD simulation methodology is an efficient way to get the BRDFs of pixels at different incidence angles and wavelengths in order to predict potential issues in the final optical module.

4. Application to stray light analysis

In this section, we use our micro to macro scale optical simulation coupling flow to reproduce and explain stray light issues observed on a module. The module is a standard one used for imaging applications in the visible, composed of several lenses and an infrared filter to cut off the signal above 650 nm (see the module representation in Fig. 4). We use it in combination with two different sensors: one with a 1.4 µm pixel pitch, and another with a 1.75 µm pixel pitch. Note that due to the Bayer pattern, the period of the grating simulated by FDTD is twice the pitch. In laboratory experiments we observed strong red ghost images on the sensors when illuminating the module with a bright light source. The ghost patterns were composed of several secondary images whose positions in the image depended on the sensor used (i.e., on the pixel pitch), a clear indication that the problem arose from the diffractive reflectivity of the sensor. To confirm this, we performed electromagnetic BRDF simulations of these two pixel pitches at the sensor center and imported the data into the ray-tracing software in order to find the element(s) of the module causing the ghosts. In Fig. 6 we show the BRDF for these two pixel pitches at a 630 nm wavelength for normal incidence, as well as the incoherent sum of the BRDFs over a sampling of incident angles across the full aperture of the module (f/2.8) at the sensor center.

Fig. 6. BRDF for the two pixel dimensions simulated at 630 nm. Figures 6(a) and 6(c) show the BRDF at normal incidence for the 1.75 µm and 1.4 µm pixel pitch, respectively. Figures 6(b) and 6(d) show the incoherent sum of the BRDFs corresponding to a sampling of incident angles over the full aperture (f/2.8).

The incoherent sums of the BRDFs in Figs. 6(b) and 6(d), showing the sampling of multiple incident angles over the aperture, illustrate the angular spread and power of each diffracted order due to the f/# of the module at this wavelength. The use of an incoherent sum was justified in previous work [33], and also corresponds to what will be observed when the BRDF is used in a ray tracer, where each ray is treated incoherently. We can observe that the sampling of incident angles in the FDTD simulations would need to be much finer to obtain results corresponding to a fully diffuse source: if enough incident angles were simulated, we would expect to see smooth circles with a diameter equal to the full aperture centered on each diffracted order in Figs. 6(b) and 6(d). Figures 6(b) and 6(d) therefore underscore some of the challenges of working with a tabular BRDF data format to represent diffraction from gratings: for ideal accuracy, a large number of incident angles, output angles, and wavelengths would be required to correctly represent the scattering behavior. Nonetheless, the relatively small number of incident angles used is sufficient to obtain qualitative agreement with experiment, as we will see in Fig. 7, despite the artifacts of the limited angular sampling visible in the simulated results. Improvements that avoid the tabular BRDF data format, such as plugins to handle the diffraction order efficiencies of the periodic image sensor surface directly while analytically accounting for the directions of the diffracted orders, will be the subject of future work.
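
A hedged sketch of the incoherent aperture average shown in Figs. 6(b) and 6(d): per-angle BRDF intensities, not complex fields, are summed with pupil-sampling weights. The arrays and sampling below are placeholders for real FDTD output.

```python
# Incoherent sum of per-angle BRDFs over a sampling of the pupil.
import numpy as np

def incoherent_aperture_sum(brdf_stack, weights=None):
    """brdf_stack: (n_incident_angles, n_theta, n_phi) array of BRDF intensities."""
    brdf_stack = np.asarray(brdf_stack)
    if weights is None:
        weights = np.ones(brdf_stack.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Incoherent: add powers, never complex field amplitudes.
    return np.tensordot(weights, brdf_stack, axes=1)

demo = np.random.rand(9, 64, 64)   # placeholder: 9 incident angles across f/2.8
avg = incoherent_aperture_sum(demo)
print(avg.shape)                   # (64, 64)
```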

Once the FDTD electromagnetic simulations were performed, the BRDF data were imported into the ray-tracing software using a tabular text file format and applied as a surface property at the center of the sensor plane, as explained in Section 2. The results of the final ray-tracing simulation at 630 nm for the two module configurations are shown in Fig. 7, alongside the images captured with a bright light source. The measured images show red ghost patterns that are well reproduced by the simulation flow. The size, spacing, and directions of the ghosts are comparable between measurement and simulation in the two module configurations (yellow diagonal lines and yellow dotted squares have been added to the images to make it easier to compare dimensions between the patterns).
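
As an illustration of the import step, here is a minimal, generic writer for a tabular BRDF text file; the exact header and layout are tool-specific (Zemax, for instance, defines its own tabular BSDF format), so this sketch, with a hypothetical file name and random placeholder data, only conveys the structure of the exchange.

```python
# Generic export of a BRDF table (theta, phi, BRDF) to plain text for a ray tracer.
import numpy as np

def write_brdf_table(path, theta_deg, phi_deg, brdf, wavelength_nm, aoi_deg):
    with open(path, "w") as f:
        f.write(f"# BRDF table, wavelength {wavelength_nm} nm, AOI {aoi_deg} deg\n")
        f.write("# theta_deg phi_deg brdf_sr^-1\n")
        for i, th in enumerate(theta_deg):
            for j, ph in enumerate(phi_deg):
                f.write(f"{th:.2f} {ph:.2f} {brdf[i, j]:.6e}\n")

theta = np.linspace(0, 85, 18)
phi = np.linspace(0, 350, 36)
write_brdf_table("sensor_center_630nm.txt", theta, phi,
                 np.random.rand(18, 36), 630, 0)   # placeholder data
```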

Fig. 7. Comparison of images with ghost patterns between measurement and simulation. Figure 7(a) shows the image captured with the module using the 1.4 µm pixel sensor. Figure 7(b) is the corresponding ghost ray simulation at 630 nm. Figure 7(c) shows the image captured with the module using the 1.75 µm pixel sensor. Figure 7(d) is the corresponding ghost ray simulation at 630 nm.

To better understand the origin of the ghosts, a detailed stray light analysis in the ray-tracing software is necessary. The analysis shows that the ghost pattern comes from light reflected into the various diffracted orders of the periodic sensor, which is then reflected by the IR filter and comes back as ghosts captured by the sensor. In Fig. 8 we show some simulated rays to illustrate the optical path leading to ghost images on the sensor. The ghost pattern is thus highly dependent on the wavelength, the pixel pitch, the incidence angle on the sensor, the IR filter to sensor distance, and the thickness of the IR filter. This demonstrates that electromagnetic simulation of the sensor coupled with ray-tracing simulation of the module is the only way to reproduce this kind of behavior.
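
To make the dependence on these parameters concrete, here is a simple geometric estimate of the lateral offset of a ghost produced by one diffracted order reflecting off the IR filter. All dimensions are illustrative, not the module's actual values, and the reflection is assumed to occur at the top surface of the filter.

```python
# A ray diffracted at theta off the sensor crosses an air gap d to the IR
# filter, refracts through the filter thickness t (index n_f), reflects at
# the top surface, and returns; the down-and-back lateral offset follows.
import math

def ghost_offset_mm(theta_deg, air_gap_mm, filter_t_mm, n_filter=1.5):
    th = math.radians(theta_deg)
    th_in = math.asin(math.sin(th) / n_filter)  # Snell refraction into filter
    # 2x the air-gap walk plus 2x the walk inside the filter
    return 2 * air_gap_mm * math.tan(th) + 2 * filter_t_mm * math.tan(th_in)

# First diffracted order of a 2.8 um period at 630 nm: theta ~ 13 degrees.
theta1 = math.degrees(math.asin(0.63 / 2.8))
print(f"ghost offset ~ {ghost_offset_mm(theta1, 0.3, 0.21):.3f} mm")
```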

Fig. 8. Ray-tracing simulation at 630 nm of the module with BRDF electromagnetic simulation data applied at the sensor surface. The zoomed view on the right shows the rays generating the ghosts on the sensor, due to small reflections on the sensor (which behaves as a weak reflection grating) that are reflected again by the IR filter back towards the sensor.

For this specific issue, a variety of solutions could be explored, such as optimizing the sensor design to reduce reflectivity, changing the IR filter to sensor distance to defocus the ghosts, or optimizing antireflective coatings on the IR filter.

5. Conclusion and discussion

In this paper, we present a new methodology for analyzing stray light in optical systems that originates from image sensor reflections. This methodology is based on a reflection analysis by electromagnetic simulation of the sensor’s locally periodic pixels, coupled to a ray-tracing simulation of the module, and could be used for many different applications including imaging and time of flight modules. This methodology could also be applied at any wavelength and any surface for which a BRDF can be calculated with electromagnetic simulation. We have shown that we are able to reproduce by simulation ghost patterns seen on real images. Thus, this methodology could be a useful additional tool for engineers to detect and solve stray light issues during the design of modules.

The method presented is not limited to a tabular BRDF file format, which was chosen here because it is supported by most ray-tracing software. Indeed, a tabular BRDF file format is limiting for representing reflections from periodic gratings because it requires an artificial broadening of the diffracted orders calculated by electromagnetic simulation, as well as high angular sampling. Furthermore, if more than one wavelength is included, it requires high spectral sampling due to the shift in grating order angles with wavelength. Therefore, a more direct approach in ray-tracing to handling the grating order efficiency information extracted from electromagnetic simulation is desirable and will be the subject of future work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. International Organization for Standardization, “Optics and optical instruments, Veiling glare of image, Forming systems, Definitions and methods of measurement,” ISO 9358:1994 (1994).

2. International Organization for Standardization, “Photography, Digital cameras, Image flare measurement,” ISO 18844 (2017).

3. D. Wueller, “Image flare measurement according to ISO 18844,” in IS&T International Symposium on Electronic Imaging, Digital Photography and Mobile Imaging XII (2016), pp. 1–7.

4. M. Hullin, E. Eisemann, H.-P. Seidel, and S. Lee, “Physically-based real-time lens flare rendering,” ACM Trans. Graph. 30(4) (Proc. SIGGRAPH) (2011).

5. S. Lee and E. Eisemann, “Practical real-time lens-flare rendering,” in Eurographics Symposium on Rendering 32(4) (2013).

6. J. J. McCann and A. Rizzi, “Veiling glare: the dynamic range limit of HDR images,” Proc. SPIE 6492, 649213 (2007). [CrossRef]

7. E.-V. Talvala, A. Adams, M. Horowitz, and M. Levoy, “Veiling glare in high dynamic range imaging,” ACM Trans. Graphics 26(3), 37 (2007). [CrossRef]

8. N. Koren, “Measuring the impact of flare light on dynamic range,” in IS&T International Symposium on Electronic Imaging, Image Quality and System Performance XV (2018), pp. 169-1–169-6.

9. J. Jur, “Stray Light Analysis of a Mobile Phone Camera,” Master’s project, College of Optical Sciences, University of Arizona (2016), https://www.optics.arizona.edu/sites/optics.arizona.edu/files/jordan-jur-msreport.pdf

10. S. Wylie, “Forest Light – Lens flare (real),” (2012), https://commons.wikimedia.org/wiki/File:Forest_Light_-_Lens_flare_(real)_(7882567470).jpg

11. Byronv2, “early spring sunset,” (2016), https://www.flickr.com/photos/7512717@N06/25025964623

12. J. Mure-Dubois and H. Hügli, “Real-time scattering compensation for time-of-flight camera,” in Proc. of the ICVS Workshop on Camera Calibration Methods for Computer Vision Systems (2007).

13. J. Mure-Dubois and H. Hügli, “Optimized scattering compensation for time-of-flight camera,” Proc. SPIE 6762, 67620H (2007). [CrossRef]  

14. J. Holmund, “Characterization and compensation of Stray light effects in Time of flight based range sensors,” Master’s thesis Engineering Physics, Department of Physics, Umeå University (2013), http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A633199&dswid=-2452

15. R. Whyte, Chronoptics (2021), https://medium.com/chronoptics-time-of-flight/multipath-interference-in-indirect-time-of-flight-depth-sensors-7f59c5fcd122

16. J. Achatzi, G. Fischer, V. Zimmer, D. Paulus, and G. Bonnet, “Measurement and analysis of the point spread function with regard to straylight correction,” Proc. SPIE 9404, 940406 (2015). [CrossRef]

17. B. Bitlis, P. Jansson, and J. Allebach, “Parametric point spread function modeling and reduction of stray light effects in digital still cameras,” Proc. SPIE 6498, 64980V (2007). [CrossRef]

18. J. Wei, B. Bitlis, A. Bernstein, A. de Silva, P. A. Jansson, and J. P. Allebach, “Stray light and shading reduction in digital photography: a new model and algorithm,” Proc. SPIE 6817, 68170H (2008). [CrossRef]

19. E. Oh, J. Hong, S.W. Kim, Y.J. Park, and S.I. Cho, “Novel ray tracing method for stray light suppression from ocean remote sensing measurement,” Opt. Express 24(10), 10232–10245 (2016). [CrossRef]  

20. P. Gamet, S. Fourest, T. Sprecher, and E. Hillairet, “Measuring, modeling and removing optical straylight from Venus Super Spectral Camera images,” Proc. SPIE 11180, 111804F (2018). [CrossRef]  

21. A. Greynolds, “Formulas For Estimating Stray-Radiation Levels in Well-Baffled Optical Systems,” Proc. SPIE 0257, 39 (1981). [CrossRef]  

22. G. L. Peterson, “Analytic expressions for in-field scattered light distributions,” Proc. SPIE 5178, 184–193 (2004). [CrossRef]  

23. J.E. Harvey, N. Choi, A. Krywonos, G. Peterson, and M. Bruner, “Image degradation due to scattering effects in two-mirror telescopes,” Opt. Eng. 49(6), 063202 (2010). [CrossRef]  

24. K. Sun, H.M. Jiang, and X.A. Cheng, “In-field stray light due to surface scattering effects in infrared imaging systems,” Proc. SPIE 8193, 81933F (2011). [CrossRef]  

25. G. Ritt, “Laser Safety Calculations for Imaging Sensors,” Sensors 19(17), 3765 (2019). [CrossRef]  

26. G. Ritt, “Estimation of Lens Stray Light with Regard to the Incapacitation of Imaging Sensors,” Sensors 20(21), 6308 (2020). [CrossRef]  

27. F. Landini, M. Romoli, S. Fineschi, and E. Antonucci, “Stray-light analysis for the SCORE coronagraphs of HERSCHEL,” Appl. Opt. 45(26), 6657–6667 (2006). [CrossRef]

28. J. Caron, M. Taccola, and J.-L. Bezy, “Towards a standardized method to assess stray-light in earth observing optical instruments,” Proc. SPIE 10562, 105622B (2017). [CrossRef]

29. F. Mariani, J. Caron, M. Taccola, and S. Grabarnik, “StrayLux: an efficient tool for stray-light modelling in optical Instruments,” Proc. SPIE 11180, 1118089 (2019). [CrossRef]  

30. F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, “Geometric considerations and nomenclature for reflectance,” National Bureau of Standards, NBS Monograph 160 (1977).

31. J. E. Harvey, “Light-Scattering Characteristics of Optical Surfaces,” Ph.D. Dissertation, University of Arizona (1976).

32. R. N. Pfisterer, “Approximated Scatter Models for Stray Light Analysis,” Opt. Photonics News 22(6), 16–17 (2011). [CrossRef]  

33. J. Vaillant, A. Crocherie, F. Hirigoyen, A. Cadien, and J. Pond, “Uniform illumination and rigorous electromagnetic simulations applied to CMOS image sensors,” Opt. Express 15(9), 5494–5503 (2007). [CrossRef]  

34. A. Crocherie, P. Boulenc, J. Vaillant, F. Hirigoyen, D. Hérault, and C. Tavernier, “From photons to electrons: a complete 3D simulation flow for CMOS image sensor,” IEEE International Image Sensor Workshop (2009).

35. Y. Huo, C. C. Fesenmaier, and P. B. Catrysse, “Microlens performance limits in sub-2μm pixel CMOS image sensors,” Opt. Express 18(6), 5861–5872 (2010). [CrossRef]  

36. T. Jung, Y. Kwon, and S. Seo, “A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors,” in Imaging Sensors and Systems Conference (2020).

37. F. Hirigoyen, J. Vaillant, E. Huss, F. Barbier, J. Prima, F. Roy, and D. Hérault, “1.1 µm Backside Imager vs. Frontside Imager: an optics-dedicated FDTD approach,” in IEEE Int. Image Sensor Workshop (2009), pp. 1–4.

38. A.H. Wang, P.F. Hsu, and J.J. Cai, “Modeling bidirectional reflection distribution function of microscale random rough surfaces,” J. Cent. South Univ. Technol. 17(2), 228–234 (2010). [CrossRef]  

39. Y. Xuan, Y. Han, and Y. Zhou, “Spectral Radiative Properties of Two- Dimensional Rough Surfaces,” Int. J. Thermophys. 33(12), 2291–2310 (2012). [CrossRef]  

40. K.F. Warnick and W.C. Chew, “Numerical simulation methods for rough surface scattering,” Waves Random Media 11(1), R1–R30 (2001). [CrossRef]  

41. M. A. Marciniak, S. R. Sellers, R. B. Lamott, and B. T. Cunningham, “Bidirectional scatter measurements of a guided-mode resonant filter photonic crystal structure,” Opt. Express 20(25), 27242–27252 (2012). [CrossRef]  

42. Y. Wang, S. Safavi-Naeini, and S. K. Chaudhuri, “A hybrid technique based on combining ray tracing and FDTD methods for site-specific modeling of indoor radio wave propagation,” IEEE Trans. Antennas Propagat. 48(5), 743–754 (2000). [CrossRef]  

43. Y. Kawaguchi, K. Nishizono, J. S. Lee, and H. Katsuda, “Light extraction simulation of surface-textured light-emitting diodes by finite-difference time-domain method and ray-tracing method,” Jpn. J. Appl. Phys. 46(1), 31–34 (2007). [CrossRef]  

44. L. W. Cheng, Y. Sheng, C. S. Xia, W. Lu, M. Lestrade, and Z. M. Li, “Simulation for light power distribution of 3D InGaN/GaN MQW LED with textured surface,” Opt. Quantum Electron. 42(11-13), 739–745 (2011). [CrossRef]  

45. X. Ding, J. Li, Q. Chen, Y. Tang, Z. Li, and Binhai Yu, “Improving LED CCT uniformity using micropatterned films optimized by combining ray tracing and FDTD methods,” Opt. Express 23(3), A180–A191 (2015). [CrossRef]  

46. C. Leiner, W. Nemitz, F. P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer, “A simulation procedure for light-matter interaction at different length scales,” Proc. SPIE 8429, 84290L (2012). [CrossRef]  

47. C. Leiner, S. Schweitzer, V. Schmidt, M. Belegratis, F.- P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer, “Multiscale simulation of an optical device using a novel approach for combining ray-tracing and FDTD,” Proc. SPIE 8781, 87810Z (2013). [CrossRef]  

48. C. Leiner, S. Schweitzer, F-P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer, “A Simulation Procedure Interfacing Ray-Tracing and Finite-Difference Time-Domain Methods for a Combined Simulation of Diffractive and Refractive Optical Elements,” J. Lightwave Technol. 32(6), 1054–1062 (2014). [CrossRef]  

49. C. Sommer, C. Leiner, S. Schweitzer, F.-P. Wenzl, U. Hohenester, and P. Hartmann, “A combined simulation approach using ray-tracing and finite-difference time-domain for optical systems containing refractive and diffractive optical elements,” Proc. SPIE 9193, 91930F (2014). [CrossRef]  

50. G. Chataignier, B. Vandame, and J. Vaillant, “Joint electromagnetic and ray-tracing simulations for quad-pixel sensor and computational imaging,” Opt. Express 27(21), 30486–30501 (2019). [CrossRef]  

51. https://www.lumerical.com/

52. https://www.zemax.com/

53. A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method (Artech House, Norwood, 2005).

54. D. Guarnera, C. Guarnera, A. Ghosh, C. Denk, and M. Glencross, “BRDF Representation and Acquisition,” Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: State of the Art Reports, pp. 625–650 (2016).

55. http://www.lighttec.fr/scattering-measurements/

56. https://www.perkinelmer.com/category/molecular-spectroscopy-instruments
