Optica Publishing Group

Radiometric and design model for the tunable light-guide image processing snapshot spectrometer (TuLIPSS)

Open Access

Abstract

The tunable light-guide image processing snapshot spectrometer (TuLIPSS) is a novel remote sensing instrument that can capture a spectral image cube in a single snapshot. The optical modelling application for the absolute signal intensity on a single pixel of the sensor in TuLIPSS has been developed through a numerical simulation of the integral performance of each optical element in the TuLIPSS system. The absolute spectral intensity of TuLIPSS can be determined either from the absolute irradiance of the observed surface or from the tabulated spectral reflectance of various land covers and by the application of a global irradiance approach. The model is validated through direct comparison of the simulated results with observations. Based on tabulated spectral reflectance, the deviation between the simulated results and the measured observations is less than 5% of the spectral light flux across most of the detection bandwidth for a Lambertian-like surface such as concrete. Additionally, the deviation between the simulated results and the measured observations using global irradiance information is less than 10% of the spectral light flux across most of the detection bandwidth for all surfaces tested. This optical modelling application of TuLIPSS can be used to assist the optimal design of the instrument and explore potential applications. The influence of the optical components on the light throughput is discussed with the optimal design being a compromise among the light throughput, spectral resolution, and cube size required by the specific application under consideration. The TuLIPSS modelling predicts that, for the current optimal low-cost configuration, the signal to noise ratio can exceed 10 at 10 ms exposure time, even for land covers with weak reflectance such as asphalt and water. 
Overall, this paper describes the process by which the optimal design is achieved for particular applications and directly connects the parameters of the optical components to the TuLIPSS performance.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The Tunable Light-guide Image Processing Snapshot Spectrometer (TuLIPSS) is a fiber-based snapshot imaging spectrometer, which can capture a spectral image cube with two-dimensional spatial information and one-dimensional spectral information of a scene in a single snapshot [1–3]. The key component in snapshot imaging is the light guide module, also referred to as a light-guide image processor, within which the object is imaged onto the input of a custom fiber bundle. The bundle is densely organized at the input while the output is sparse, thereby creating void spaces for the dispersed spectrum on the detector and enabling the spatial-spectral information of the entire image scene to be captured in a single integration event. Note that the method obtains a similar effect to image slicing mirrors/image mappers [4–6] or lenslet array spectrometers [7,8]. However, TuLIPSS is a more flexible instrument as it allows for the tuning of the relative dimensions of the data cube (x, y, λ). Tunability can be achieved by modifying the void spaces, i.e. changing the spatial vs spectral sampling via the replacement of the filter-disperser pair, making TuLIPSS adaptable to different observational applications with versatile choices of spectral resolution to target distinct spectral signatures, such as the delineation of vegetation species [8–12], identification of minerals [13], and the classification of land cover [14]. Note that neither mirror-based image mapping spectrometers nor lenslet systems can easily adapt their spectral-spatial discrimination. For the former, the rigid design and array of dispersers makes switching impractical. The latter requires either tuning of the entire array or the addition of zoom optics that results in the scaling of the point spread function which, in turn, requires a change in the datacube sampling to achieve detector-limited sampling.

The primary goal for TuLIPSS is an efficient utilization of resources to enable a range of applications and, therefore, it is crucial to evaluate the system for disparate operational conditions. Thus, the goal for this paper is to:

  • (1) develop a complete system model to allow an assessment of the instrument for different applications and operational conditions,
  • (2) provide the optimal design for each application (throughput, choice of components, etc.), and finally
  • (3) validate the model against calibrated radiometric measurements for the current TuLIPSS implementation.
The analysis provided here, in concert with the hardware implementations reported elsewhere [1], will allow the design and construction of TuLIPSS-like snapshot imagers optimized for their targeted applications. In section 2, we outline the general principles of the model, while section 3 describes the application of the model to simulate the flux on a single TuLIPSS pixel. The model is validated using directional spectral reflectance in section 4 and absolute radiance in section 5. Our results are discussed in section 6 and we conclude in section 7.

2. TuLIPSS model: general principles

An optical schematic of the TuLIPSS system is illustrated in Fig. 1. It is important to incorporate each component identified in Fig. 1 into the model. Briefly, the far field scene with area, A, and radiance, L, is imaged by the objective to the input end of the fiber bundle. A is determined by the angular field of view of the objective and the distance from the scene to the objective. The solid angle, Ωs, is determined by the distance from the scene to the objective and the size of the objective aperture. At the entrance to the input end of the fiber bundle, the image is divided up by a number of fiber cores and guided to the output end with resultant void spaces. The numerical aperture of the fiber core determines the solid angle accepted by the fiber, Ωfb. To maximize the light throughput, Ωfb cannot be less than the solid angle subtended by the objective to the image at the input end, Ωo. In practical implementations (i.e. when the size of the bundle is limited), it is important to implement small fibers with diameters on the order of 5-10 microns. Due to mechanical stability issues, bundles with dense input and sparse output are not currently commercially available and, as such, custom bundles are required. In our prior work [1], we demonstrated such custom bundles built with fiber block ribbons. Each fiber block contained 6 × 6 fiber cores. The fiber core is fused silica and 10 µm in diameter. This is not, however, optimal for spectral separation and an additional input mask was required for core selection. The model described here implements a mask component to comply with this design. Note that, for other fiber bundle designs, e.g. comprised of individual fibers, the mask can be ignored. The output of the fiber bundle is collimated by a collimating lens and dispersed by a refractive disperser, e.g. a prism. The spectral range is limited by the bandwidth filter and the application of a prism was chosen to facilitate the tuning capability. 
For the current configuration, we anticipate most applications requiring a spectral bandwidth of 100-400 nm with 2-10 nm sampling. For future implementations, where we expect to have increased range and spectral sampling, we will consider incorporating compound prisms and diffraction gratings (see Selection of Dispersive Element in Supplement 1). The prism component refracts the parallel beams from each fiber core at different angles according to the wavelength. After passing through the focusing lens, the dispersed parallel beams from a given fiber core are focused onto the surface of the sensor to form segments of the 2D dispersed image. These segments include the spatial and spectral information of the scene and can be transferred to a spectral data cube through a simple reconstruction that utilizes the relevant calibration lookup table. More details on the system’s principle and the processing of imaging results are described in our previous work [1,2].

 figure: Fig. 1.

Fig. 1. Optical schematic of the TuLIPSS system. The parameters determine the light flux at detection: A, area detectable by a single pixel of the camera; L, radiance from the scene; ΩS, solid angle subtended from the objective aperture to the source; Ωo, solid angle subtended from the objective to the image at the fiber input end; Ωfb, fiber accepted solid angle; Ωc, accepted solid angle of the collimating lens; To, transmittance of the objective lens; Tfb, transmittance of the fiber bundle; Tc, transmittance of the collimating lens; Tp, transmittance of the prism; Tf, transmittance of the focusing lens; η, quantum efficiency of the camera detector. θ1, θ2 and θ3 are the corresponding cone half angles of the solid angles Ωs, Ωo, Ωc.


The optimal design of TuLIPSS needs to maximize both the spectral 3D data cube and the light throughput, I(x, y, λ). The area of the output end of the bundle defines the total number of spectral and spatial samples attainable and this is related to the core size of the fibers corresponding to an individual spatial value on the detector to be dispersed into the void spaces. Additionally, the numerical aperture (NA) of the fiber imposes constraints on the imaging optics if all output light is to be collected. Therefore, a re-imaging system is required to re-image the entire fiber output at the fiber’s NA. Under ideal conditions, the spectral cube size – number of spatial samples × number of spectral samples in a snapshot – is equal to the total number of pixels of the camera [1], which is a very challenging requirement. For example, an 18 mm diameter field (e.g. 250×250×50 cube with 10μm fibers) needs to be imaged at a NA of 0.3–0.6, depending on the selected fiber component. In other words, the maximum size of the spectral 3D data cube is determined by the effective field of view of the re-imaging optics and the diameter of the individual fibers. The light throughput is critically impacted by the numerical aperture of the re-imaging system and is also affected by the various components that make up the entire system: fore optics (objective lens $f/\# \; $ vs fiber NA), input mask, filters, and coatings. While the combination of components is relatively easy to optimize, in practice the 3D cube size is generally limited by the field of view of the collimating lens and the minimum diameter of available fibers. To select a suitable collimating lens, one often needs to compromise between the attainable field of view and the numerical aperture.

The radiometric model discussed here captures the influence of each of these components and the flow chart of the model design relations is shown in Fig. 2. Spatial resolution, spectral resolution and spectral bandwidth are specified by the observation requirements. To maximize the spectral 3D data cube, we need to first consider the component that defines the maximum bundle dimensions, namely the field of view of the re-imaging optics. To maximize the light throughput, we then must consider the component that limits the effective numerical aperture of the TuLIPSS instrument in the context of the bundle size. As we consider the flow chart, the sequence of the components is chosen to satisfy the specific requirement determined by the starting point (the component which determines the 3D datacube or limits the NA of TuLIPSS) and follows the direction indicated by the arrows in Fig. 2. However, to fully assess the range of observational applications enabled by TuLIPSS, the model is designed to analyze the full range of parameters in the system, including spatial resolution, spectral resolution, bandwidth, imaging speed and signal-to-noise ratio (SNR).

 figure: Fig. 2.

Fig. 2. Flow chart for design considerations in TuLIPSS. Spatial resolution, spectral resolution and spectral bandwidth are determined by the application. FOV is the field of view of the corresponding component, and NA is the numerical aperture.


Light throughput is a critical factor in evaluating the performance of the instrument and sets the upper limit on light flux collected by the optical system for a known source radiance. On the other hand, the performance of TuLIPSS also determines the ability to provide meaningful information in advancing any particular scientific or operational application. Accurate radiometric modeling allows us to connect the performance of TuLIPSS to each specific application. To evaluate the response of TuLIPSS, we model the light flux at an individual pixel which allows us to determine the expected signal to noise ratio of the measurement.

For a uniform and isotropic radiance surface, the light flux is a product of the radiance of the object/scene, the system’s throughput, and the overall transmittance. The relationship between light throughput and the light flux is expressed as:

$$\emptyset = A{\Omega _s}L\; {T_{total}}$$
where ${T_{total}}$ is the overall transmittance of the system and the other parameters are as defined above. The combination, $A{\Omega _s}$, defines the light throughput of the system and this is conserved as the light travels through the various optical systems where it undergoes only reflections or refractions [15]. In TuLIPSS, it is worth noting that the dispersive prism and fiber bundle break throughput conservation. Due to the dispersion of the prism, diffusion in the fiber core, and aberrations in the lenses, it is necessary to build an optical modelling application that takes account of the light losses to simulate the collected radiant flux at a single pixel at a selected wavelength.

The full modeling diagram is shown in Fig. 3. First, we model the monochromatic radiant flux and expand it to a spectral response resulting from the integral of all the waves in the spectral band that can arrive at the chosen pixel. The radiant flux is evaluated for the configuration determined in the sequence shown in Fig. 1. The analysis is incorporated into a custom modelling framework (application) programmed in Matlab. The validity of this optical modelling is tested through a comparison of the simulated results with the results from actual TuLIPSS measurements using specific components. The optical modelling application can be further used to evaluate different configurations to maximize the performance of TuLIPSS as well as exploring its potential applications.

 figure: Fig. 3.

Fig. 3. Modelling diagram illustrating the influence of optical components and the operations in the modelling process.


3. Optical modelling of the light flux

3.1 Spectral radiance

The light source for remote sensing by TuLIPSS is the solar radiance reflected or diffused from various land covers. As shown in Eq. (1), radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. The radiance from the object/scene is determined by total solar irradiance and the surface bidirectional reflectance.

The total solar irradiance, Eg, includes both a direct component and a diffuse component. On a horizontal surface, this is called the Global Horizontal Irradiance, which can be expressed as [16]:

$${E_g} = {E_d} + {E_b}\cos (\mathrm{\theta } )$$
Ed is the diffuse sky irradiance, Eb is the direct solar irradiance, and $\theta $ is the solar zenith angle, i.e. the angle between the light ray and the normal to the Earth's surface. This zenith angle is related to the local latitude, the current declination and the hour angle via [17]:
$$\cos (\theta ) = \sin (\Phi )\sin (\delta ) + \cos (\Phi )\cos (\delta )\cos (h )$$
where Ф is the local latitude, δ is the current declination of the Sun, and h is the hour angle, in local solar time. The declination can be accurately calculated by [18]:
$$\delta = \arcsin \left[ {\sin ({ - {{23.44}^\circ }} )\cos \left( {\frac{{{{360}^\circ }}}{{365.24}}({N + 10} ) + \frac{{{{360}^\circ }}}{\pi }0.0167\sin \left( {\frac{{{{360}^\circ }}}{{365.24}}({N + 2} )} \right)} \right)} \right]$$
where N is the number of the days since 00:00 UT time on January 1 of the current year.
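For concreteness, the solar geometry of Eqs. (3) and (4) can be evaluated with a short numerical sketch (Python is used here purely for illustration; the authors' own modelling framework is implemented in Matlab, and the function names are ours):

```python
import math

def solar_declination(N):
    """Solar declination (degrees) from the day number N
    (days since 00:00 UT on January 1), following Eq. (4)."""
    g = 360.0 / 365.24
    x = math.sin(math.radians(-23.44)) * math.cos(
        math.radians(g * (N + 10)
                     + (360.0 / math.pi) * 0.0167
                     * math.sin(math.radians(g * (N + 2)))))
    return math.degrees(math.asin(x))

def cos_solar_zenith(lat_deg, decl_deg, hour_angle_deg):
    """cos(theta) from latitude, declination and hour angle, Eq. (3)."""
    phi = math.radians(lat_deg)
    d = math.radians(decl_deg)
    h = math.radians(hour_angle_deg)
    return (math.sin(phi) * math.sin(d)
            + math.cos(phi) * math.cos(d) * math.cos(h))
```

Around the June solstice (N ≈ 172) the declination returned is close to +23.44°, as expected.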

To calculate the direct and diffuse irradiance components of the solar spectrum, several computational models have been developed [16,19–24]. Typically, these models require knowledge of the wavelength dependence of various scattering and absorption mechanisms in the atmosphere, such as Rayleigh scattering, aerosol scattering and absorption, and ozone absorption. In our current simulation, the Bird simple spectral model (SPCTRAL2 [16]) is chosen. In this model, the ozone optical thickness is the product of the absorption coefficient and the total ozone thickness, and its variations with geographical location and time of year are incorporated as a lookup table containing the historically recorded data. The inputs to the SPCTRAL2 model are limited to the solar zenith angle, the collector tilt angle, atmospheric turbidity, the amount of precipitable water vapor, and the surface atmospheric pressure. Figure 4 shows an example of the simulated solar irradiance spectrum based on SPCTRAL2 at sea level, under a clear sky in Houston at 12:36 pm on June 16th, 2020.

 figure: Fig. 4.

Fig. 4. Simulated solar irradiance spectrum under a cloudless sky, based on SPCTRAL2, for Houston, Texas at 12:36 pm on June 16th, 2020.


The bidirectional reflectance of various earth surfaces is determined by the properties of the complex land cover components [25,26]. Some surfaces, such as concrete and fresh snow, can generally be modeled as Lambertian surfaces at some particular view angles [27,28], even though bidirectional reflectance may still be a factor. With the Lambertian assumption, the reflectance can be treated as a constant, independent of the incident and the reflectance angles. Under these conditions, the spectral radiance can be simply evaluated as:

$$L = {E_g}\frac{{{R_{hemi}}}}{{2\pi }}$$
where, Rhemi is the directional hemispherical reflectance. We base our initial modeling effort of a multi-parameter / multi-configuration system on the Lambertian assumption.
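Under the Lambertian assumption, Eqs. (2) and (5) combine into a two-line computation. A minimal Python sketch (function names are ours; the 2π normalization follows Eq. (5)):

```python
import math

def global_horizontal_irradiance(E_diffuse, E_beam, cos_zenith):
    """Global horizontal irradiance E_g = E_d + E_b*cos(theta), Eq. (2)."""
    return E_diffuse + E_beam * cos_zenith

def lambertian_radiance(E_g, R_hemi):
    """Spectral radiance L = E_g * R_hemi / (2*pi), Eq. (5), for a
    Lambertian surface with directional hemispherical reflectance R_hemi."""
    return E_g * R_hemi / (2.0 * math.pi)
```

With, say, E_d = 100 and E_b = 900 (arbitrary spectral irradiance units) and the Sun at zenith, a surface with R_hemi = 0.3 returns L = 1000 × 0.3 / 2π ≈ 47.7 in the same units per steradian.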

3.2 Monochromatic radiant flux on a single pixel

Since the target scene is usually not uniform, and the photomask selects only a few individual fiber cores, it is more meaningful to consider the radiant flux at a single pixel of the detector. We follow the layout of the optical components in TuLIPSS to consider the influence of each part on the overall radiant flux reaching the detector. The objective lens is the first optical component considered. Following Eq. (1), the radiant flux from the observed scene to the intermediate image plane, where the input end of the fiber bundle is located, can be expressed as:

$${\emptyset _1} = A{\Omega _s}L\; {T_o}$$
The solid angle ${\Omega _s}$ subtended by the pupil of the objective from the scene is related to the half cone angle ${\theta _1}$ through the following:
$${\mathrm{\Omega }_s} = 2\pi (1 - \cos ({{\theta_1}} )) = \textrm{4}\pi {\sin ^2}\left( {\frac{{{\theta_1}}}{2}} \right)$$
For a far-field scene, $sin\left( {\frac{{{\theta_1}}}{2}} \right) \approx \frac{1}{2}sin({{\theta_1}} )$ and the radiation flux at the image plane (the photomask location) can be expressed as:
$${\emptyset _1} = \pi A{\sin ^2}({{\theta_1}} )L\; {T_o}$$
Considering the Lagrange invariant, $A{\sin ^2}({{\theta_1}} )= A^{\prime}{\sin ^2}({{\theta_2}} )$, the radiant flux will take the form:
$${\emptyset _1} = \pi A^{\prime}{\sin ^2}({{\theta_2}} )L\; {T_o}$$
where $A^{\prime}$ is the image of A. This result can also be obtained directly from radiance conservation where, for an ideal optical system in air, the radiance at the output is the same as at the input. In TuLIPSS, the objective is a photographic lens where the f-number is a more commonly used parameter than the numerical aperture. If we replace $sin({{\theta_2}} )$ with the f-number, $f/\# \; $, the radiant flux at the bundle’s input plane can be expressed as:
$${\emptyset _1} \approx \frac{\pi }{{4\; {{(f/\# )}^2}}}A^{\prime}L\; {T_o}$$
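The f-number form of the flux is simple to evaluate numerically; a minimal sketch (Python, assuming the Lambertian far-field conditions above; names are ours):

```python
import math

def flux_at_bundle_input(area_image, radiance, f_number, T_objective):
    """Radiant flux at the fiber bundle input plane, Eq. (10):
    phi_1 ~ (pi / (4 (f/#)^2)) * A' * L * T_o."""
    return (math.pi / (4.0 * f_number**2)
            * area_image * radiance * T_objective)
```

At the limiting value f/# = 0.5 (sin θ2 = 1) the geometric prefactor reduces to π, the full-hemisphere Lambertian factor, which is a useful sanity check.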
The photomask is the second component in the optical path of TuLIPSS (see Fig. 1). It plays a significant role in constraining the radiant flux passing through the selected fiber cores and, in particular, in blocking the crosstalk between nearby fiber cores to enhance the spectral sampling capability of TuLIPSS. Here a photomask, which is coincident with the bundle's input image plane, is seamlessly attached at the input end of the customized fiber bundle. The diameter of the pinholes on the photomask determines the area in the first image plane that passes through a fiber core by a factor ${\left[ {F\left( {\frac{{{D_{PM}}}}{{{D_{fc}}}}} \right)} \right]^2}$, where ${D_{PM}}$ is the diameter of the pinhole in the photomask and ${D_{fc}}$ the diameter of the fiber core. $F\left( {\frac{{{D_{PM}}}}{{{D_{fc}}}}} \right)$ is a piecewise function, whose value depends on the ratio $\frac{{{D_{PM}}}}{{{D_{fc}}}}$, as $F(\frac{{{D_{PM}}}}{{{D_{fc}}}} \ge 1) = 1$; $F(\frac{{{D_{PM}}}}{{{D_{fc}}}} < 1) = \frac{{{D_{PM}}}}{{{D_{fc}}}}$. The physical importance of the pinhole mask is its role in the selection of cores prior to dispersion, where it determines what portion of the core, and how many fiber cores, are illuminated through a single pinhole.

The mask is immediately followed by the fiber bundle through which the collected radiant flux passes. The fiber core has a specific numerical aperture, which determines its acceptance cone angle. If the incident angle is smaller than the acceptance angle, total internal reflection occurs at the boundary between cladding and core, and the light passes through the fiber core without loss. Otherwise, the light is dissipated during its propagation through the fiber core. When the incident cone angle exceeds the acceptance angle of the fiber, the radiant flux will be truncated by a factor $F(R )$, where $F(R \ge 1\textrm{) = 1}$ and $F(R\; < 1\textrm{) = }R$ with

$$R = \frac{{1 - \sqrt {1 - N{A_{fb}}^2} }}{{1 - \sqrt {1 - {{(\frac{1}{{2f/\# }})}^2}} }},$$
and $N{A_{fb}}$ is the fiber numerical aperture and $f/\# $ is the f-number of the objective.
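The two truncation factors above, the photomask diameter ratio and the NA mismatch factor F(R), can be sketched as follows (illustrative Python; function names are ours):

```python
import math

def mask_factor(D_pinhole, D_core):
    """F(D_PM/D_fc): fraction of the core diameter admitted by the
    photomask pinhole; the radiant flux is scaled by its square."""
    ratio = D_pinhole / D_core
    return min(ratio, 1.0)

def na_truncation_factor(NA_fiber, f_number):
    """F(R) from Eq. (11): truncation when the objective's cone exceeds
    the fiber acceptance cone; R is the ratio of the two solid angles."""
    sin_half = 1.0 / (2.0 * f_number)  # sin(theta_2) from the f-number
    R = ((1.0 - math.sqrt(1.0 - NA_fiber**2))
         / (1.0 - math.sqrt(1.0 - sin_half**2)))
    return min(R, 1.0)
```

For the current fiber NA of 0.28, an f/2 objective underfills the fiber acceptance cone, so F(R) = 1; a faster f/1 objective overfills it and is truncated.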

The light throughput is not conserved in the fiber bundle since the output beam profile from a multimode fiber can be affected by the beam entry angle [29]. Thus, the beam profile varies with the proportion of light rays propagating as meridional rays vs. skew rays. Meridional rays pass through the central axis of the fiber after each reflection, while the skew rays never pass through the central axis of the fiber. The skew rays propagate in a helical path along the fiber that is tangent to the inner caustic of the path. The beam intensity profile of the output after the propagation along the fiber cores can be mathematically described as the super Gaussian [30]:

$$I = {I_0}exp\left( { - \frac{{2{{({x - {x_0}} )}^\beta }}}{{{w^\beta }}}} \right)$$
Here ${I_0}$ denotes the maximum of the intensity, which occurs at $x = {x_0}$, w is the beam radius at the position of the aperture and $\beta $ is the super-Gaussian order. When $\beta = 2$, Eq. (11) reduces to the normal Gaussian distribution. In the TuLIPSS system, a step-index fiber with a core diameter of about 10 μm is used as the light guide; the light is focused normal to the input end of the fiber bundle, producing a near-Gaussian output beam. Figure 5 shows the angular distribution from the fiber core of the current TuLIPSS system. The experimental data were fit with a normal Gaussian distribution, which yields the power through the aperture as:
$$p(r )= {P_0}\left( {1 - {e^{ - \frac{{2{r^2}}}{{{w^2}}}}}} \right)$$
with r the radius of the collecting aperture, $w$ the radius of the beam, and ${P_0}$ the total power. The collection efficiency is simply given by $1 - {{\boldsymbol e}^{ - 2{{\boldsymbol r}^2}/{{\boldsymbol w}^2}}}$.
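The collection efficiency of Eq. (13) is a one-liner; for instance (Python sketch, names ours):

```python
import math

def collection_efficiency(r_aperture, w_beam):
    """Fraction of a Gaussian fiber-output beam collected by a circular
    aperture of radius r, Eq. (13): 1 - exp(-2 r^2 / w^2)."""
    return 1.0 - math.exp(-2.0 * r_aperture**2 / w_beam**2)
```

An aperture radius equal to the beam radius collects 1 − e⁻² ≈ 86.5% of the power; an aperture much larger than the beam collects essentially all of it.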

 figure: Fig. 5.

Fig. 5. Angular distribution of the output beam profile from a fiber core. The Step-Index Fiber has a core diameter of 10 μm and numerical aperture 0.28. The solid line is the fitted Gaussian function.


As mentioned in section 2, the re-imaging optics are of critical importance in the context of the light throughput. This system images the bundle output plane onto the camera image sensor. The collimating lens and focusing lens are used as a pair, and the focusing lens is chosen to satisfy the requirement that all of the radiant flux passing through the collimating lens is focused at the final image plane (camera sensor). The collecting solid angle of the collimating lens determines the overall solid angle of the re-imaging optics. For monochromatic light, the prism provides the deviation of the collimated beam from the original incident angle. The influence factor of the re-imaging optics on the radiant flux can be expressed as $({1 - {e^{ - 2{r^2}/{w^2}}}} ){T_c}{T_p}{T_f}$, where ${T_c}$, ${T_p}$ and ${T_f}$ are the transmittances of the collimating lens, prism and refocusing lens, respectively, and $r$ is the radius of the aperture stop of the re-imaging system.

The radiance on a single pixel of the camera scales quadratically with the magnification of the re-imaging system. For a re-imaging optical system with pixel pitch d and magnification M, the effective area of the radiation flux at a single pixel is also a piecewise function $G\left( {\frac{d}{{M{D_{fc}}}}} \right)$, with

$$G\left( {\frac{d}{{M{D_{fc}}}} \ge 1} \right) = \; \frac{{\pi {M^2}{D_{fc}}^2}}{4};$$
$$G\left( {1 > \frac{d}{{M{D_{fc}}}} \ge \frac{{\sqrt 2 }}{2}} \right) = d\sqrt {{{({M{D_{fc}}} )}^2} - {d^2}} + \left( {\frac{\pi }{4} - {{\cos }^{ - 1}}\left( {\frac{d}{{M{D_{fc}}}}} \right)} \right){({M{D_{fc}}} )^2};$$
$$G\left( {\frac{{\sqrt 2 }}{2} > \frac{d}{{M{D_{fc}}}}} \right) = \,{d^2}.$$
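The piecewise effective area G of Eqs. (14)-(16) can be implemented directly (a Python sketch under the stated geometry; names are ours). Note that the three branches meet continuously at the ratios 1 and √2/2:

```python
import math

def effective_area(d_pixel, M, D_core):
    """Effective collection area G for a single pixel of pitch d against
    a magnified fiber-core image of diameter M*D_fc, Eqs. (14)-(16)."""
    MD = M * D_core
    ratio = d_pixel / MD
    if ratio >= 1.0:
        # Whole core image falls inside the pixel: area of the core image.
        return math.pi * MD**2 / 4.0
    if ratio >= math.sqrt(2.0) / 2.0:
        # Partial overlap between the square pixel and the circular image.
        return (d_pixel * math.sqrt(MD**2 - d_pixel**2)
                + (math.pi / 4.0 - math.acos(ratio)) * MD**2)
    # Pixel lies fully inside the core image: full pixel area.
    return d_pixel**2
```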
Considering all the points above, the radiant flux collected by a single pixel (${\emptyset _{{\boldsymbol sp}}}$) from far-field monochromatic radiance can then be calculated as:
$${\emptyset _{sp}} = \frac{1}{{{M^2}}}G\left( {\frac{d}{{M{D_{fc}}}}} \right)\frac{\pi }{{4{{({f/\# } )}^2}}}L\,{T_o}{\left[ {F\left( {\frac{{{D_{PM}}}}{{{D_{fc}}}}} \right)} \right]^2}F(R ){T_{fb}}({1 - {e^{ - 2{r^2}/{w^2}}}} ){T_c}{T_p}{T_f}$$
With η the quantum efficiency of the camera detector, the photoelectron flux generated by the radiant flux is:
$${\emptyset _e} = \frac{1}{{{M^2}}}G\left( {\frac{d}{{M{D_{fc}}}}} \right)\frac{\pi }{{4{{({f/\# } )}^2}}}L\,{T_o}{\left[ {F\left( {\frac{{{D_{PM}}}}{{{D_{fc}}}}} \right)} \right]^2}F(R ){T_{fb}}({1 - {e^{ - 2{r^2}/{w^2}}}} ){T_c}{T_p}{T_f}\eta $$
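Chaining the factors above, the photoelectron flux becomes a single product (illustrative Python; parameter names are ours and any values supplied are placeholders, not calibrated TuLIPSS quantities):

```python
import math

def photoelectron_flux(M, G_area, f_number, L, T_o, F_mask, F_R, T_fb,
                       eta_collect, T_c, T_p, T_f, eta_q):
    """Photoelectron flux at a single pixel, Eq. (18): geometric factor
    times all transmittances and truncation factors times the detector
    quantum efficiency eta_q. eta_collect is 1 - exp(-2 r^2 / w^2)."""
    return (G_area / M**2
            * math.pi / (4.0 * f_number**2)
            * L * T_o
            * F_mask**2 * F_R * T_fb
            * eta_collect * T_c * T_p * T_f * eta_q)
```

Setting every factor to unity at f/# = 0.5 recovers the bare Lambertian prefactor π, and doubling the magnification M quarters the flux, as the 1/M² term dictates.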
It is worth noting that the radiant flux entering the fiber core is assumed to be an isotropic distribution across the outward solid angle, since the view field is far from the photographic lens and the solid angle subtended by the entrance pupil from the object is small. It is therefore acceptable to take the angular distribution as constant. This remains consistent with the Lambertian surface assumption.

3.3 Correction of the collection efficiency induced by the point spread function

For monochromatic light, the spatial distribution of intensity on the camera surface is determined by the convolution of the spatial size of the fiber core with the point spread function (PSF) of the re-imaging optics. Generally, the PSF can be calculated by taking the Fourier transform of the complex pupil function which is mathematically expressed as:

$$A({{x_p},{y_p}} )= T({{x_p},{y_p}} ){e^{i2\pi W({{x_p},{y_p}} )}}$$
yielding a Fourier transform of the complex pupil function:
$$E({x^{\prime},y^{\prime}} )= \mathop {\int\!\!\!\int }\nolimits_{ - {R_{AP}}}^{{R_{AP}}} T({{x_p},{y_p}} ){e^{i2\pi W({{x_p},{y_p}} )}}{e^{\frac{{i2\pi }}{{{R_{AP}}\lambda }}({x^{\prime}{x_p} + y^{\prime}{y_p}} )}}d{x_p}d{y_p}$$
The incoherent point spread function is then given by:
$$PSF = E({x^{\prime},y^{\prime}} )\cdot {E^\ast }({x^{\prime},y^{\prime}} )$$
Here, $W({{x_p},{y_p}} )$ is the optical phase difference, which includes all the aberrations of the system and can be expressed as a linear combination of Zernike polynomials [31]. For a system with no aberrations, $W({{x_p},{y_p}} )= 0$.
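Numerically, the incoherent PSF is obtained from a 2D FFT of the sampled pupil. A minimal sketch for an unaberrated circular pupil (W = 0) using NumPy; aberrations would enter as the exp(i2πW) phase factor on the pupil:

```python
import numpy as np

def incoherent_psf(n=256, pupil_radius=0.25):
    """Incoherent PSF = |FFT(pupil)|^2 for an unaberrated circular pupil
    (W = 0) sampled on an n x n grid; pupil_radius is in units of the
    grid half-width. Returned PSF is normalized to unit total energy."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= pupil_radius**2).astype(float)  # T(x_p, y_p)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field)**2
    return psf / psf.sum()
```

The peak of the resulting Airy-like pattern sits at the grid center (the DC bin after fftshift), and shrinking the pupil radius broadens the PSF, mirroring the diffraction scaling discussed below.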

Figure 6(a) shows the simulated point spread function of the re-imaging system assuming an RMS spherical wavefront error equal to 0.0745λ, which corresponds to the conventional “diffraction-limited” aberration level, set at a Strehl ratio of 0.80 regardless of the type of aberration [32,33]. The corresponding cross section of the simulated result, Fig. 6(b), indicates that the full width at half maximum is about 2.2 μm. The point spread function convolved with the fiber core, Fig. 6(c), is determined by the pupil diameter, the aberrations, the focal length and the wavelength. As the wavelength increases, the point spread function broadens, resulting in a decrease of the collection efficiency of the signal by a single pixel. We see from Fig. 6(d) that the correction factor for the collection efficiency decreases monotonically with wavelength. When aberrations are accounted for, the collection efficiency is about 87% at short wavelengths around 350 nm, while the correction factor for the collection efficiency drops to 70% around 800 nm. The presence of aberrations thus reduces the collection efficiency by about 13% at 800 nm compared to a system with no aberrations.

 figure: Fig. 6.

Fig. 6. (a). Point spread function of the focusing lens; (b) the normalized cross section of the point spread function; (c) the convolution of the point spread function with the fiber core; (d) the correction factor for collection efficiency of a single pixel accounting for the point spread function. All of the results are simulated using the following parameters: Pupil Diameter = 36mm, focal length = 144mm, sampling pitch = 0.125 μm, sampling number = 1024, fiber core diameter = 10 μm, wavelength = 532 nm, RMS spherical aberration = 0.0745 wave. The red line in (d) is the correction factor for collection efficiency without aberration.


3.4 Effect of spectral band on a single pixel

A customized glass wedge prism serves as the dispersive element, located between the collimating lens and the focusing lens in the TuLIPSS system, as illustrated in Fig. 1. The proper selection of the dispersive element is important for optimizing the performance and the mechanical design of the TuLIPSS housing components. A single prism is the best candidate, since its high light throughput, low stray light and low cost are the dominant factors in the selection (see Selection of Dispersive Element in Supplement 1). For future implementations, compound prisms will allow us to shrink the system size.

The light from a given point of a fiber core at the output end of the fiber bundle is collimated into a parallel beam that strikes the prism. In practical applications, the first surface of this prism is aligned perpendicular to the optical axis (as in Fig. 7(a)).

 figure: Fig. 7.

Fig. 7. (a) Deviation angle of the prism with apex angle α; (b) focusing location of different light waves (only the chief rays are shown), f is the focal length of the focusing lens; h is the distance from the optical axis at the imaging plane; d is the pitch of the camera


Dispersion occurs at the second surface. For a prism with refractive index n and apex angle $\alpha $, the deviation angle δ (as illustrated in Fig. 7(a)) of the output ray relative to the incident ray can be obtained from Snell’s law as:

$$n\sin \alpha = \sin ({\alpha + \delta } )$$
Since the refractive index is a function of wavelength, the beam will be dispersed to different deviation angles depending on refractive index:
$$\Delta \delta = \frac{{\sin (\alpha )}}{{\cos ({\alpha + \delta } )}}\Delta n$$
After the focusing lens, rays with different deviation angles will be focused on to different locations along the dispersion direction, where the void space is generated through the spacing of the fiber ribbons.

The spectral resolution of the system is the minimum wavelength difference between two lines in a spectrum that can be distinguished. In this analysis, we use the central wavelength difference between adjacent pixels along the dispersion direction to denote the spectral sampling. For a focusing lens with focal length $f$ and camera pixel pitch $d$ (Fig. 7(b)), the collection angle of a single pixel is:

$$\Delta \delta = {\tan ^{ - 1}}\left( {\frac{{h + d}}{f}} \right) - {\tan ^{ - 1}}\frac{h}{f}$$
where h is the distance of the pixel from the optical axis at the imaging plane. Under the small angle approximation ($f \gg h$), Eq. (20) can be simplified to:
$$\; \Delta \delta = {\tan ^{ - 1}}\left( {\frac{d}{f}} \right)$$
The Sellmeier equation is an empirical relationship between refractive index and wavelength used to determine the dispersion of light in the medium (Refractive index and dispersion. Schott technical information document TIE-29 (Version February 2016)). For characterization of glasses, an equation consisting of three terms is commonly used, namely:
$${n^2} - 1 = \frac{{{a_1}{\lambda ^2}}}{{{\lambda ^2} - {b_1}}} + \frac{{{a_2}{\lambda ^2}}}{{{\lambda ^2} - {b_2}}} + \frac{{{a_3}{\lambda ^2}}}{{{\lambda ^2} - {b_3}}}$$
where n is the wavelength-dependent refractive index; λ is the wavelength in units of μm; ${a_i}$ and ${b_i}$ are coefficients describing the glass. Equation (22) can be rewritten as:
$$n(\lambda )= {\left( {\frac{{{a_1}{\lambda^2}}}{{{\lambda^2} - {b_1}}} + \frac{{{a_2}{\lambda^2}}}{{{\lambda^2} - {b_2}}} + \frac{{{a_3}{\lambda^2}}}{{{\lambda^2} - {b_3}}} + 1} \right)^{1/2}}$$
Under a Taylor series expansion, the variation of the refractive index can be expressed as:
$$\Delta n = \frac{{n^{\prime}(\lambda )}}{{1!}}\Delta \lambda + \frac{{n^{\prime\prime}(\lambda )}}{{2!}}{({\Delta \lambda } )^2} + \frac{{n^{\prime\prime\prime}(\lambda )}}{{3!}}{({\Delta \lambda } )^3} \cdots $$
Truncating after the second-order term allows us to express the resolution Δλ as:
$$\Delta \lambda = \frac{{ - n^{\prime}(\lambda )- \sqrt {{{({n^{\prime}(\lambda )} )}^2} + 2n^{\prime\prime}(\lambda )\Delta n} }}{{n^{\prime\prime}(\lambda )}}$$
Combining Eq. (19) and (21), the variation of refractive index Δn is then:
$$\Delta n = \frac{{\cos ({{\sin }^{ - 1}}({n\sin \alpha } ))}}{{\sin \alpha }}\,{\tan ^{ - 1}}\left( {\frac{d}{f}} \right)$$
Finally, the spectral resolution under the second order approximation becomes:
$$\Delta \lambda = \frac{{ - n^{\prime}(\lambda )- \sqrt {{{({n^{\prime}(\lambda )} )}^2} + 2n^{\prime\prime}(\lambda )\frac{{\cos ({{\sin }^{ - 1}}({n\sin \alpha } ))}}{{\sin \alpha }}\,{{\tan }^{ - 1}}\left( {\frac{d}{f}} \right)} }}{{n^{\prime\prime}(\lambda )}}$$
Equation (27) gives the spectral band Δλ, centered at wavelength λ, that a point source projects onto a single pixel. In TuLIPSS, the spectrum is sampled by each camera pixel along the dispersion direction within the void space. This equation defines the spectral band over which we must integrate to simulate the radiant flux from a point source at that pixel.
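As a numerical sketch of this derivation, the per-pixel sampling interval can be evaluated for N-BK7 using the Sellmeier coefficients from the Schott datasheet. The 10° apex angle is an assumed example value, and the derivatives n′ and n″ are taken numerically:

```python
import numpy as np

# Sellmeier coefficients for Schott N-BK7 (Schott datasheet values)
A = (1.03961212, 0.231792344, 1.01046945)
B = (0.00600069867, 0.0200179144, 103.560653)

def n_bk7(lam_um):
    """Refractive index of N-BK7 via the three-term Sellmeier equation, Eq. (23)."""
    lam2 = lam_um ** 2
    return np.sqrt(1.0 + sum(a * lam2 / (lam2 - b) for a, b in zip(A, B)))

def spectral_sampling_nm(lam_um, apex_deg=10.0, focal_mm=144.0, pixel_um=6.5):
    """Per-pixel spectral sampling |dlambda| following Eqs. (21), (26), (27).
    The 10-degree apex angle is an assumed example, not a TuLIPSS value."""
    a = np.radians(apex_deg)
    n = n_bk7(lam_um)
    h = 1e-4                                             # step for derivatives (um)
    n1 = (n_bk7(lam_um + h) - n_bk7(lam_um - h)) / (2 * h)        # n'(lambda)
    n2 = (n_bk7(lam_um + h) - 2 * n + n_bk7(lam_um - h)) / h ** 2  # n''(lambda)
    # Eq. (26): refractive-index span across one pixel pitch
    dn = np.cos(np.arcsin(n * np.sin(a))) / np.sin(a) \
        * np.arctan(pixel_um * 1e-3 / focal_mm)
    # Eq. (27); with n'(lambda) < 0 for normal dispersion the root is negative,
    # so the magnitude is reported as the sampling interval
    dlam = (-n1 - np.sqrt(n1 ** 2 + 2 * n2 * dn)) / n2
    return abs(dlam) * 1e3                               # um -> nm
```

Because |n′| of BK7 decreases toward the red, the sampling interval returned by this sketch widens with wavelength, consistent with the non-linear prism dispersion discussed in Section 6.2.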

3.5 Light flux to a single pixel

Since the dispersion occurs in a single direction inside the void space between the rows of the fiber bundle, the fiber core can be divided into equidistant segments, Δa, along the direction of dispersion, as shown in Fig. 8(a). The gray circle represents the fiber core, and the dashed lines are perpendicular to the dispersion direction. For Δa small enough, each segment can be treated as a line source orthogonal to the dispersion direction. Each line source is dispersed into a spectrum located in the image plane and shifted by MΔa, where M is the magnification of the re-imaging subsystem. The signal along the dispersion direction is the sum of the contributions from all parts of the fiber core, as illustrated in Fig. 8(b). Through this procedure, we can simulate both the dispersion spectrum and the spectral distribution on each pixel.
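The segment-summation step above can be sketched in one dimension as follows. The local dispersion is folded into a single assumed nm-per-μm constant on the sensor, so this illustrates the accumulation procedure rather than reproducing the paper's full model:

```python
import numpy as np

def summed_spectrum(point_spectrum, lam_axis_nm, disp_nm_per_um,
                    core_um=10.0, mag=0.8, n_seg=41):
    """Sum the dispersed spectra of n_seg line segments of the fiber core
    (Fig. 8). Each segment is weighted by its chord length and its spectrum
    is shifted by M*delta_a, expressed as an equivalent wavelength shift via
    an assumed local dispersion (nm per um on the sensor)."""
    r = core_um / 2.0
    a = np.linspace(-r, r, n_seg)                       # segment positions
    chord = 2.0 * np.sqrt(np.maximum(r ** 2 - a ** 2, 0.0))  # line-source weight
    chord /= chord.sum()
    total = np.zeros_like(lam_axis_nm, dtype=float)
    for ai, w in zip(a, chord):
        shift_nm = mag * ai * disp_nm_per_um            # M * delta_a -> nm
        # shifted copy of the point-source spectrum, resampled on the axis
        total += w * np.interp(lam_axis_nm, lam_axis_nm + shift_nm,
                               point_spectrum, left=0.0, right=0.0)
    return total
```

The summed spectrum is broader and flatter than the point-source spectrum, which is the convolution with the finite core size described by Eq. (28).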


Fig. 8. (a) Schematic of infinitesimal segments of the fiber core. The gray circle represents the fiber core; the dashed lines are orthogonal to the dispersion direction; (b) schematic of spectral accumulation and spectral shift in the image plane. The dispersion spectrum from each segment has a spatial shift along the dispersion direction. The spectral response is the sum of contributions from all the segments.


The light power collected by a single pixel will be the integration of the radiant flux reaching that pixel:

$$\phi = \frac{{\pi {d^2}}}{{4{{({f/\#\,M} )}^2}}}\,{T_o}{\left[ {f\left( {\frac{{{D_{PM}}}}{{{D_{fb}}}}} \right)} \right]^2}f\left( {\frac{{1 - \sqrt {1 - NA_{fb}^2} }}{{\frac{1}{2}{{\left( {\frac{1}{{2f/\# }}} \right)}^2}}}} \right){T_{fb}}{T_c}f(R ){T_p}{T_f}\mathop \sum \nolimits_{\Delta a} \left( {\mathop \smallint \nolimits_{\lambda - \frac{{\Delta \lambda }}{2}}^{\lambda + \frac{{\Delta \lambda }}{2}} L(\lambda )\eta (\lambda )d\lambda } \right)$$
In this equation, $\Delta \lambda $ is the spectral band corresponding to each fiber segment $\Delta a$; Σ is the sum of the contributions from all the fiber segments; the other parameters are the same as in Eq. (14).

Equation (28) defines the light flux to a single pixel of the detector. It is the convolution of the light intensity distribution inside the fiber core with the point-source spectral response of the re-imaging subsystem. A spatial shift Δa inside the fiber core induces a spatial shift of MΔa in the spectrum at the sensor plane. When computing the light flux of a single pixel, the spectral shift of each segment Δa of the fiber core must therefore be included in the overall sum of contributions.

The optical modelling of the absolute light flux on a single pixel has been developed based on the integrated performance of the optical elements in the TuLIPSS system and the mathematical formulation described above. Considering the parameters that determine the surface radiance of the land cover and the parameters that determine the light throughput of TuLIPSS, we developed a modelling application using the Matlab graphical user interface (see Supplement 1 for a detailed description). This model can simulate the TuLIPSS absolute spectral flux either from the solar global irradiance combined with the hemispherical spectral reflectance data of any land cover, or from absolute irradiance measurements of surfaces in local regions of interest. It is used to optimize the configuration of the TuLIPSS instrument for specific applications (see below).

In addition to the absolute light flux response of TuLIPSS, the modelling tool can also provide the spectral resolution for a selected single-component prism and the spectral distribution on a single pixel for a given center wavelength λ. The spectral sampling treats the fiber core as a point source during dispersion, while the spectral distribution at a single pixel takes the convolution of the point-source dispersion with the physical size of the fiber core.

4. Validating the system model using directional spectral reflectance

The ECOSTRESS spectral library from NASA provides thousands of directional-hemispherical reflectance spectra of natural and manmade earth objects, such as vegetation, minerals, soils, rocks, and water [34,35]. The directional-hemispherical reflectance is the integral of the bidirectional reflectance distribution function over all viewing directions. Under the assumption of a Lambertian surface, the bidirectional reflectance of the surface can be obtained from the angular average of the hemispherical reflectance, and the directional illumination can be replaced by the global irradiance. The global irradiance under a clear sky can be simulated via SPCTRAL2 modelling [16]. The Lambertian approximation has been shown to be valid for some land covers, such as concrete [27], clouds [36], and sand [37]. In this section, we use the directional-hemispherical reflectance spectra and the global irradiance as inputs, under the Lambertian assumption, to simulate the signal intensity on a single pixel of TuLIPSS. From the tabulated spectral reflectance of land covers and the global irradiance, the TuLIPSS modelling is able to predict signal levels under real application conditions.
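Under this assumption, the radiance fed to the model follows from the global irradiance E(λ) and the hemispherical reflectance ρ(λ) as L(λ) = E(λ)ρ(λ)/π. A minimal sketch with illustrative numbers (not SPCTRAL2 output or ECOSTRESS data):

```python
import numpy as np

def lambertian_radiance(global_irradiance, hemispherical_reflectance):
    """Surface spectral radiance under the Lambertian assumption:
    L(lambda) = E_global(lambda) * rho(lambda) / pi
    (W m^-2 nm^-1 in -> W m^-2 sr^-1 nm^-1 out)."""
    E = np.asarray(global_irradiance, dtype=float)
    rho = np.asarray(hemispherical_reflectance, dtype=float)
    return E * rho / np.pi

# illustrative placeholder numbers at three wavelengths
E = np.array([1.2, 1.4, 1.3])        # global irradiance, W m^-2 nm^-1
rho = np.array([0.25, 0.30, 0.35])   # hemispherical reflectance (concrete-like)
L = lambertian_radiance(E, rho)
```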

The raw data of a TuLIPSS recording is the snapshot image of the fiber-bundle-guided segments of the scene in the field of view and their corresponding dispersions spanning the void spaces between the segments. A lookup table is needed to reconstruct the spectral imaging datacube from the raw data. The number of rows in the lookup table is the total number of fiber cores selected by the photomask. Each row contains 35 columns, and each column contains the (x,y) coordinates of a different wavelength dispersed from the corresponding single fiber core. This lookup table is created through the spatial and spectral calibration of TuLIPSS, as described in detail in [1]. Supplement 1 describes the main steps of the calibration process.

The Rice University campus is chosen as the target scene to test the validity of the optical modelling. To generate a quantitative comparison between the measured and simulated results, we had to be cognizant of the weather conditions at the time of observation, since the SPCTRAL2 global irradiance provides a reliable approximation only under clear-sky conditions. Figure 9(a) presents the reconstructed composite image of the scene from a TuLIPSS measurement. Four red crosses mark sky, tile, concrete, and trees at their respective locations. Figure 9(d) shows the corresponding RGB image of the same field of view taken by a Digital Single Lens Reflex (DSLR) camera (Canon EOS 5D Mark IV DSLR camera body, Mitakon Zhongyi Speedmaster 85mm f/1.2 lens).


Fig. 9. Testing the modelling results based on directional hemispherical reflectance under clear sky conditions. In all cases, the simulated results are represented by the black solid line and the TuLIPSS measurements are shown as red circles. (a) image of reconstructed view; (b) spectrum of sky; (c) spectrum of tile on the building surface; (d) view field recorded by camera; (e) spectrum of concrete roof; (f) spectrum of oak trees.


Figures 9(b), (c), (e), and (f) show the simulated results as black lines and the TuLIPSS measurements from single fiber cores as red circles for sky, tile, concrete roof, and trees, respectively. Within the 450–620 nm bandwidth, the simulated results for the sky, the tile, and the concrete roof show a strong similarity to the measured results (Figs. 9(b), 9(c), and 9(e)). This is consistent with the expected bidirectional reflectance of scattering from the sky and from a concrete surface. For trees, the measured signal is lower than the simulated results, especially at the red end. These deviations most likely result from applying the Lambertian approximation to the surface reflectance of all objects considered. For example, the bidirectional reflectance factor of a forest scene tends to vary with view angle, which the Lambertian assumption does not capture.

The consistency of the simulated spectra with those measured by TuLIPSS for Lambertian-like surfaces demonstrates the validity of the optical modelling tool developed. For land-cover surfaces that are not well represented by a Lambertian surface, the simulated results represent only a particular measurement geometry; to obtain more accurate performance, it is necessary to adopt the bidirectional reflectance for the actual illumination and observation geometry of the land cover. It is also worth noting that the simulated results are based on the average performance of the fiber bundle. Variations in the transmission of individual fiber cores due to the fabrication process also need to be calibrated for more accurate simulations.

5. Testing the optical modelling based on absolute radiance

The validity of the optical modeling was demonstrated in Section 4 for surfaces that are well represented by the Lambertian approximation. It was also shown that non-Lambertian surfaces induce relatively large deviations between the simulated and measured results.

The absolute radiance of the target scene surfaces can be measured with TuLIPSS after radiometric calibration. The digital numbers (DN) in the measurement can be converted to absolute radiance using conversion factor curves generated from a measurement of a calibrated light source and a uniformly scattering object. Since the landscape scene is distant from the TuLIPSS instrument, the measured absolute radiance is specific to the view angle of the measurement. Optical modelling based on the measured absolute radiance therefore no longer requires knowledge of the bidirectional reflectance of the surface or of the weather conditions.

Radiometric calibration was carried out by recording with TuLIPSS the spectral image of a surface with known absolute radiance as a reference. A white paper surface was used as the reference since it is uniform and has Lambertian-like properties. Under uniform illumination from a stable broadband light source (Husky halogen work light, SKU#634-545), the absolute irradiance from the white paper surface was measured with a reference spectrometer (USB 4000-VIS-IR, Ocean Optics). This Lambertian-like surface makes it simple to obtain the radiance from the absolute irradiance. The reconstructed image from this TuLIPSS calibration measurement is shown in Fig. 10(a) (bright area in the acquired frame). Any fiber core inside the observation region can be selected for radiometric calibration; here we randomly selected three fiber cores, marked as blue, green, and red crosses in Fig. 10(a). The corresponding spectral intensity in digital units from the selected single fiber cores is shown as the solid lines of Fig. 10(b), with the left-hand y-axis displaying digital count values. Each colored curve in Fig. 10(b) represents the spectral intensity from the same-colored cross in Fig. 10(a). The black curve, scaled by the right-hand y-axis in Fig. 10(b), shows the absolute irradiance from the white paper surface as measured with the reference spectrometer. Although the absolute irradiance on these selected fiber cores is identical, some variation exists among their spectral intensity measurements after propagation through the system. These variations may result from small variations in target uniformity, in the surface quality of each fiber core, and in the quantum efficiency of each pixel, from mismatch of the pinholes with the fiber cores, and from possible field-dependent differences in the imaging process. Figure 10(c) shows the conversion coefficients for the selected fiber cores.


Fig. 10. Testing the modelling results after radiometric calibration. (a) Reconstructed image of white A4 paper at 642 nm; three positions (blue, green, red crosses) were randomly selected for radiometric calibration; (b) digital counts of the camera (blue, green, red curves, left axis) from the selected positions and the irradiance of the sample surface (black curve, right axis); (c) curves of the conversion coefficients for the three randomly selected positions, colored respectively; (d) the reconstructed image of the campus view; the three selected positions shown in (a) are overlain; (e) response of the digital camera from the selected positions (blue, green, red, left axis) and the corresponding irradiance after radiometric calibration (dashed blue, green, and red lines, right axis); (f) comparison of the measured results (solid lines) and the simulated results (dashed lines) from the irradiance.


Using the conversion factor curves shown in Fig. 10(c), the absolute radiance of any object in the landscape scene can be obtained from the digital intensity of the TuLIPSS measurement. Figure 10(d) shows the reconstructed image of the Rice University campus landscape with a 10 ms exposure time. The three positions marked with blue, green, and red crosses in Fig. 10(d) are the same fiber core locations chosen in Fig. 10(a). The corresponding digital response of the camera at these points is shown as the solid curves in Fig. 10(e). The calculated absolute radiances of these positions, obtained as the product of the digital response and the conversion coefficients, are shown as dashed lines. Since the campus field is distant from the photographic lens of TuLIPSS, the radiance can be considered constant inside the collection angle. With these radiances as input to the optical modelling tool, the corresponding camera response can be simulated (dashed lines in Fig. 10(f)). For comparison, the raw digital intensity from the camera is shown as solid lines in Fig. 10(f). The simulated and measured results are clearly very similar.
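The conversion step can be sketched as follows. The function and variable names are hypothetical, and the explicit exposure-time normalization is our assumption (in the paper the calibration and scene measurements may share the same exposure, in which case the time factors cancel):

```python
import numpy as np

def conversion_coefficients(ref_radiance, ref_dn, ref_exposure_ms):
    """Per-wavelength conversion factors from a calibration measurement of a
    surface with known radiance (Fig. 10(a)-(c)): radiance per unit count rate."""
    return np.asarray(ref_radiance, float) * ref_exposure_ms \
        / np.asarray(ref_dn, float)

def dn_to_radiance(scene_dn, coeffs, exposure_ms):
    """Apply the calibration to a scene measurement (Fig. 10(d)-(e)):
    radiance = coefficient * count rate."""
    return coeffs * np.asarray(scene_dn, float) / exposure_ms
```

A round trip through the reference measurement recovers the reference radiance, and a scene recorded at a different exposure is normalized by its own integration time.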

The similarity between the results simulated with the optical modelling tool and those measured with TuLIPSS further confirms the validity of the modelling tool. Note that at wavelengths shorter than 500 nm, the measured results are slightly more intense than the simulated curves. This is a result of cross talk between the fiber cores: the red side of the dispersion from contiguous fiber cores contributes to the blue region, as in Fig. 10(c)-(f). At wavelengths longer than 700 nm, the simulated results fall to zero while the measurements still present some signal. This is a result of assuming an aberration-free system in the modelling of light flux.

It is clear that modelling the TuLIPSS performance based on the absolute radiances from the non-Lambertian scene is more accurate than optical modelling based on directional-hemispherical reflectance and global irradiance. Modelling the TuLIPSS performance based on absolute radiances avoids the influence of the intrinsic bidirectional reflectance of surfaces, the angular dependence of reflectance, and the weather conditions.

6. Discussion

The discussion section is divided into four sub-sections to better describe the analysis of important TuLIPSS design aspects: (1) light throughput, (2) spectral sampling/resolution, (3) principles for selecting the optimal TuLIPSS configuration, and (4) prediction of system response for selected applications.

6.1 Light throughput

This section describes detailed throughput conditions for different component/parameter combinations. We also define an effective light throughput for the model when radiance conservation is not fully maintained, which happens when mode coupling occurs during light propagation inside the fibers, so that the radiance at the entrance surface differs from that at the output surface. The conventional definition of light throughput (G) as the product of area and solid angle (AΩ) is then no longer suitable. After transformation of Eq. (1), G can be expressed as:

$$G = A{\Omega _s} = \phi /({L\,{T_{total}}} )$$
It is convenient to define an effective light throughput as the ratio of the light flux to the light radiance at the entrance. Considering the light flux of the TuLIPSS system via Eq. (13), the effective light throughput at a single pixel in TuLIPSS takes the form:
$${G_{sp}} = \frac{1}{{{M^2}}}H\left( {\frac{d}{{M{D_{fc}}}}} \right)\frac{\pi }{{4\,{{({f/\# } )}^2}}}{\left[ {F\left( {\frac{{{D_{PM}}}}{{{D_{fc}}}}} \right)} \right]^2}F(R )({1 - {e^{ - 2{r^2}/{w^2}}}} )$$
Equation (30) shows the dependence of the light throughput on the parameters of the TuLIPSS system. The light throughput is clearly not a simple inverse square of the $f/\# $, since the piecewise function $F(R )$ itself contains the $f/\# $, as shown in Fig. 11(a). Only in the specific case where the fiber acceptance NA matches the $f/\# $ of the photographic lens will the light throughput of the TuLIPSS system follow the inverse square of the $f/\# $. The inflection point occurs where the $f/\# $ and the fiber NA match.
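The piecewise behavior of Fig. 11(a) can be sketched with a simplified paraxial model (not the full Eq. (30)): the coupled solid angle is limited either by the lens cone or by the fiber acceptance NA, whichever is smaller. The fiber NA of 0.25 is taken from the Fig. 11 parameters:

```python
import numpy as np

def relative_throughput(f_number, fiber_na=0.25):
    """Relative light throughput vs objective f-number (cf. Fig. 11(a)).
    Simplified sketch: the effective NA is the smaller of the lens cone
    (paraxial NA ~ 1/(2 f/#)) and the fiber acceptance NA; the small-angle
    solid angle then scales as NA^2."""
    lens_na = 1.0 / (2.0 * f_number)
    na_eff = np.minimum(lens_na, fiber_na)
    return na_eff ** 2
```

With fiber NA = 0.25, the curve is flat for f/# below 2 (fiber-limited) and falls as the inverse square of the f/# above it, reproducing the inflection point where the two match.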


Fig. 11. Dependence of the light throughput on the parameters of TuLIPSS. (a) Dependence of light throughput on the f-number of the objective lens; (b) dependence of light throughput on the magnification of the re-imaging system; (c) dependence of light throughput on pinhole size; (d) dependence of light throughput on the effective numerical aperture of the re-imaging system. The parameters used in the figures are: f-number of the photographic lens = 1.5, camera pixel width = 6.5 μm, magnification = 0.8, pinhole diameter = 10 μm, numerical aperture of the collimating lens = 0.25.


Another parameter impacting the radiant flux at the selected pixel is the magnification, M, between the bundle output and the camera image sensor. The piecewise function $H\left( {\frac{d}{{M{D_{fc}}}}} \right)$ describes the dependence of the overall radiant flux at the pixel on the magnification of the re-imaging system (see Fig. 11(b)). Its value is determined by the ratio $\frac{d}{{M{D_{fc}}}}$: when $\frac{d}{{M{D_{fc}}}} \ge 1$ the light throughput of the TuLIPSS system is independent of $M$, while for $\frac{d}{{M{D_{fc}}}} < 1$ the light throughput scales as the inverse square of $M$.

The function of the photomask is to minimize potential cross-talk caused by the spectral dispersion of nearby fiber cores. Figure 11(c) displays the dependence of the light throughput on the photomask pinhole diameter and shows that, as the diameter increases, the light throughput increases quadratically until the diameter exceeds that of the fiber core. In practice, increasing the pinhole size beyond 10 μm will cause illumination of more than one core and will result in spectral-spatial crosstalk between cores. In the current design, the photomask precedes the fiber bundle, and a spatial shift between a pinhole and its fiber core can occur, in which case multiple cores may be assigned to one pinhole. Since this may not be completely avoidable, placing the photomask at the output end of the fiber bundle would be the better choice.

Figure 11(d) shows the light throughput as a function of the effective collecting NA of the re-imaging system. The figure shows that increasing the numerical aperture of the collimating lens above 0.28 does not provide meaningful improvement. Since the area of the bundle output is large (in practice, >20 mm diameter), it is difficult to maintain such a high NA over the entire field. Analyzing the impact of the collimating lens NA on throughput can help determine a compromise between lens complexity and cost (e.g. the propagating light has a Gaussian profile, so working at a slightly lower NA may not significantly reduce the throughput).

In general, light throughput optimization is also dependent on the optical system design (to maximize the amount of radiant power transferred from a source to a detector) and might be limited by available components / practicality of the optical design and fabrication constraints (e.g. cost, size, etc.). These figures are useful for informing the optimal design when it is necessary to balance the parameters through the available components to keep the light throughput as high as possible.

6.2 Spectral sampling

Spectral sampling is an important measure of the performance of the TuLIPSS system, since it determines the capability to identify spectral features in land covers or fields of view. Equation (27) gives the spectral sampling of the re-imaging system for a point source: the spectral band across one pixel pitch, determined by the apex angle and dispersion of the selected prism, the focal length of the focusing lens, and the pixel size of the camera. In practical applications of the TuLIPSS system, the finite size of the fiber core and the point spread function of the re-imaging system also influence the realized spectral resolution through spatial convolution, which determines the intensity distribution of the spectral band around the center wavelength on a single pixel. The spectral resolution is the FWHM of this distribution on a single pixel.

Figure 12(a) shows the behavior of the spectral sampling for different apex angles of the BK7 prism. For these prisms, the dispersion is non-linear in wavelength, so the sampling interval changes with wavelength. Though these prisms exhibit non-linear dispersion, they are used in TuLIPSS because they enable high light throughput and low scattering, as well as suppressing higher diffraction orders across the field (this consideration applies to field-distribution techniques including TuLIPSS, IMS, and lenslet arrays [4,5,8]). The dispersion curve obtained can be used to choose the band-pass filter that best matches the dimension of the void space introduced by the fiber bundle. For example, the bandwidth of the bandpass filter must be smaller than the sum of the wavelength intervals covered by the pixels, $\mathop \sum \nolimits_{i = 1}^{{{MD}/d}} {\Delta }{\lambda _i}$, where again M is the magnification, D is the width of the void space, and d is the pixel size. Figure 12(b) shows the spectral distribution on a single pixel with the center wavelength at 500 nm. As the apex angle increases, the distribution narrows around the center wavelength, which decreases the uncertainty of the center wavelength. The full width at half maximum of the bandwidth of a single channel is less than twice the spectral sampling interval.


Fig. 12. Spectral resolution of a point source and spectral intensity distribution on a single pixel. (a) Simulated spectral resolution for a point source with different prism apex angles. (b) Simulated spectral distribution for a single pixel with center wavelength at 500 nm. Fixed parameters: BK7 prism, focal length of the focusing lens 144 mm, camera pixel size 6.5 μm, fiber core 10 μm. An aberration-free system is assumed.


6.3 Configuration selection

Due to the relatively large physical size of the output end of the fiber bundle (about 20-25 mm across the diagonal) and the relatively high fiber NA (0.25-0.65 depending on the fiber used), it is a challenge to design the re-imaging sub-system (between the output end of the bundle and the detector/camera), because the collimating lens needs to maintain a high NA over a large field of view (FOV). While custom designs are possible, they require relatively large components and are of high cost, and there is a limited selection of suitable off-the-shelf components for maximizing the collection efficiency. In our prior work [1], TuLIPSS was assembled with 75 mm diameter doublets, allowing a limited effective NA of 0.05 and approximately 3% throughput. Recently, we have evaluated and identified commercially available high-performance stereo microscope objectives as good candidates for the lens choice, since they cover an approximately 25 mm FOV and work at relatively high NAs. Four configurations for the re-imaging sub-system are shown in Fig. 13(a), including their layouts and a summary table. These configurations work over sufficiently large fields of view (defined by the size of the fiber bundle output) to match a range of NAs and magnifications. Note that any objective applied here must work with a dedicated tube lens correcting for field curvature. As the objective and tube lens pair are used without the zoom adapter of the actual microscope, their diameters are not exactly matched and the effective NA is slightly smaller than the specification. The effective NAs, limited by the diameter of the tube lens, are listed in the left column.


Fig. 13. Configuration selection. (a) Schematic configuration of the re-imaging system. (b) Spectral light throughput for different configurations. (c) Spectral resolution of different configurations. Fixed parameters: BK7 prism, camera pixel size 6.5 μm, fiber core 10 μm. An aberration-free and transmission-loss-free system is assumed.


Figure 13(b) shows the spectral light throughput on a single pixel for these four configurations. Here we simulated the light throughput under the assumptions that the transmission efficiencies of all components are 100% over the spectral range and that the system is aberration free; these assumptions differ from the earlier analysis of light throughput in Section 6.1. Here, the spectral light throughput calculation takes into account the wavelength dependence of the point spread function of the re-imaging sub-system. Configuration D shows the largest light throughput, followed by configurations B, A, and C in descending order. Though configuration D shows the highest light throughput, its actual spectral sampling is low due to the magnification of 0.5, which decreases the size of the image of the void space on the camera; this can be mitigated by using a different camera pixel size. Considering the influence of the magnification on the rate of spatial and spectral sampling, the cube sizes of these four configurations are listed in Fig. 14, with configuration A having the largest cube size and D the smallest. In the TuLIPSS optimal design, maximizing both the light throughput and the data cube size is crucial. In real applications, control of the imaging speed requires us either to maximize the light throughput at the expense of the data cube size, or to maximize the cube size at lower light throughput for high-light-level applications. The spectral sampling of each configuration is shown in Fig. 13(c). Configuration B provides better spectral sampling than configuration D, although at lower overall throughput. Configurations A and C exhibit the same sampling since their focal lengths are the same. Configuration D demonstrates spectral sampling slightly lower than that of configuration C.


Fig. 14. Visualization of the performance of different configurations. FOV: field of view. Here the dimensions of the sensor area of the camera are the same as those of the output end of the fiber bundle.


To visualize the performance of the different configurations tested, we show their field of view and relative signal in Fig. 14. The area of the effective field of view is inversely proportional to the square of the magnification of the re-imaging subsystem and is limited either by the field of view of the re-imaging system or by the camera. The number of spectral samples is inversely proportional to the magnification, and the signal level is proportional to the spectral light throughput at the single pixel. Configurations A and C show better spectral sampling but a smaller field of view than configurations B and D. Configuration B (with magnification 0.8) is optimal for maximizing spatial-spectral sampling with a light throughput allowing routine imaging at 10 ms integration [3] using off-the-shelf components. Two additional columns at the right side of Fig. 14 show the system presented in [1], assembled with off-the-shelf doublets at an effective NA of 0.05, and a 1:1 custom relay optics currently being developed in our lab. This customized lens configuration yields better TuLIPSS performance, though at a higher cost than commercially available lenses, and will be used in future TuLIPSS configurations.
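The scaling rules above (FOV area ∝ 1/M², spectral samples ∝ 1/M, signal ∝ spectral light throughput) can be collected in a small helper. The throughput values passed in below are hypothetical placeholders, not the simulated values of Fig. 13(b):

```python
def config_metrics(mag, throughput_rel):
    """Relative figures of merit vs re-imaging magnification M (cf. Fig. 14):
    field-of-view area scales as 1/M^2, number of spectral samples as 1/M;
    throughput_rel would come from a simulation such as Fig. 13(b)."""
    return {"fov_area": 1.0 / mag ** 2,
            "spectral_samples": 1.0 / mag,
            "signal": throughput_rel}

# e.g. configurations B (M = 0.8) and D (M = 0.5); throughput values hypothetical
b = config_metrics(0.8, 1.0)
d = config_metrics(0.5, 1.6)
```

Comparing the two dictionaries reproduces the trade-off in the text: D gains field of view and signal at the cost of fewer spectral samples per void space.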

In the re-imaging subsystem, rays from different field points (different locations on the output end of the fiber bundle) to the camera surface travel different optical path lengths. This difference induces geometrical aberrations at the periphery of the camera chip, which manifest as a variation of the point spread function across this plane. This variation in the point spread function can in turn degrade the spatial and spectral resolution of TuLIPSS. The current re-imaging subsystem is composed of a tube lens and a stereo microscope objective. Even without the zoom body, aberrations are well confined in the visible spectral range. In our modelling, we assume aberrations at the diffraction limit and a linear shift-invariant system. For a re-imaging system with larger aberrations, the spectral resolution will be worse since the spectral band distribution on a single pixel will be broader. Once the point spread function becomes larger than 16 μm, the spatial resolution will also be affected. Larger aberrations also decrease the collection efficiency, as described by the correction factor shown in Fig. 6(d).
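
The diffraction-limited point spread function used in this modelling can be computed from the pupil function as the squared magnitude of its Fourier transform. The following is a minimal numerical sketch under stated assumptions: the grid size and unit-radius aperture are illustrative, and the wavefront error W is set to zero to model the aberration-free case (a Zernike expansion of W would introduce aberrations).

```python
import numpy as np

N = 256                                  # samples across the pupil plane (illustrative)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
T = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture transmission
W = np.zeros_like(T)                     # wavefront error in waves; 0 = diffraction limited

pupil = T * np.exp(1j * 2 * np.pi * W)   # complex pupil function A = T * exp(i*2*pi*W)
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2                 # PSF = |far-field amplitude|^2
psf /= psf.sum()                         # normalize to unit energy

# For an aberration-free circular pupil the peak sits at the center
# of the grid (the core of the Airy pattern).
peak_idx = np.unravel_index(np.argmax(psf), psf.shape)
```

Convolving this PSF with the fiber-core image, as in Fig. 6(c), then gives the effective spot whose size relative to the 16 μm threshold governs the resolution discussion above.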

6.4 Prediction of signal levels

Here we determine the signal levels expected from TuLIPSS for various land surfaces, allowing us to calculate the requisite integration times and account for the imaging speed of airborne flight measurements. For example, the speed of the flight platform determines the influence of motion blur on the image and spectrum. With the known parameters that determine the light throughput, the transmittances of the optical components, and the radiance of the land cover, the spectral intensity of the TuLIPSS response can be simulated for a range of exposure times, as done in section 5. Due to the diversity and complexity of land cover, it is a challenge to experimentally acquire bidirectional reflectance distribution functions for all measurement configurations. While the U.S. Geological Survey (USGS) Spectral Library Version 7 has thousands of spectra measured in the laboratory, in situ, and from airborne observations, different optical geometries were used in these measurements, limiting their use. The USGS spectra do provide some direct utility to TuLIPSS in that they can be applied under the Lambertian surface assumption, together with the bidirectional-hemispherical reflectance, to simulate the TuLIPSS response and, consequently, the expected signal to noise ratio.
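
The motion-blur consideration mentioned above reduces to a simple bound: the scene smears across the detector by (platform speed × exposure time), and keeping that smear below one ground sample caps the usable integration time. The numbers in the sketch below are illustrative assumptions, not TuLIPSS flight parameters.

```python
# Back-of-envelope motion-blur bound for an airborne platform.
# Speed and ground sampling distance (GSD) are hypothetical values.

def motion_blur(speed_mps, exposure_s):
    """Ground smear in meters accumulated during one exposure."""
    return speed_mps * exposure_s

def max_exposure(speed_mps, gsd_m):
    """Longest exposure keeping the smear within one ground sample."""
    return gsd_m / speed_mps

# Example: a 50 m/s platform with an assumed 1 m ground sample smears
# 0.5 m during a 10 ms exposure, so exposures up to 20 ms stay within
# one ground sample.
print(motion_blur(50.0, 0.010), max_exposure(50.0, 1.0))
```

This bound is what links the flight speed to the exposure-time range over which the simulated spectral intensities are evaluated.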

Figure 15(a) shows the simulated spectral response from the TuLIPSS system for typical land covers: grass, tree, water, and paving asphalt. For grass, tree and paving asphalt, the reflections are better represented by a bidirectional distribution function than by the Lambertian approximation; it is nevertheless still meaningful to use the mean of the directional hemispherical reflectance as a proxy for the bidirectional reflectance. For the water measurements, there are larger variations in response to changes in the zenith angle. The signal will be dramatically enhanced for viewpoints close to the specular reflection direction since still water behaves like a glassy surface; accordingly, the simulated results may only be suitable for zenith angles far from the reflection direction. The primary sources of noise in the measurement are stray light, camera dark current, and shot noise. In the current TuLIPSS configuration, the dark current is negligible at short exposure times (configuration B allows routine imaging for 100 µs to 20 ms integration times). Stray light is also low due to the well-sealed optical system. Therefore, we can assume imaging conditions limited only by shot noise. Figure 15(b) presents the corresponding signal to noise ratio at 10 ms exposure time. It can be seen that, even in the weak-signal wavelength region, the signal to noise ratio is larger than five, enabling efficient imaging.
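
Under the shot-noise-limited assumption argued above, the SNR estimate is straightforward: for a detected signal of S photoelectrons the shot noise is √S, so SNR = √S, and S grows linearly with exposure time. The electron rate below is a hypothetical value for illustration, not a TuLIPSS measurement.

```python
import math

# Shot-noise-limited SNR sketch: dark current and stray light are
# neglected, as argued in the text. The rate is a hypothetical value.

def shot_noise_snr(electron_rate_per_s, exposure_s):
    """SNR when shot noise dominates: sqrt of the detected electrons."""
    signal = electron_rate_per_s * exposure_s   # detected photoelectrons
    return math.sqrt(signal)

# Example: a rate of 10,000 e-/s gives SNR = 10 at a 10 ms exposure;
# doubling the exposure improves the SNR only by a factor of sqrt(2).
print(shot_noise_snr(10000.0, 0.010))
```

The square-root scaling explains why modest increases in integration time yield diminishing SNR returns for the weakly reflecting covers such as asphalt and water.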

 figure: Fig. 15.

Fig. 15. Simulated results for the spectral response from TuLIPSS. (a) Spectral response from typical land covers (tree, grass, asphalt, water) at 10 ms exposure with the current design parameters of configuration B as shown in Fig. 14: BK7 prism, focal length of focusing lens 143 mm, camera pixel size 6.5 μm, fiber core 10 μm. (b) The corresponding signal to noise ratio under shot-noise-limited imaging conditions.


For more accurate simulations, we need to implement the bidirectional reflectance for the specific optical geometry used. Recently, a hyperspectral bidirectional reflectance (HSBR) model for land surfaces has been developed [23,38]. This model includes a diverse land-surface bidirectional reflectance distribution function (BRDF) database comprising ∼40,000 spectra. The BRDF database is stored as Ross-Li parameters, from which hyperspectral reflectance spectra can be generated for different sensor and solar observation geometries. The simulated reflectance spectra compare very well with measurements, with standard deviations typically smaller than 0.01 in units of reflectivity. With the application of results from this HSBR model, the current light throughput model will produce more reliable simulated results for comparison with field measurements.
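
A Ross-Li style reflectance is assembled as a linear combination of kernels, R = f_iso + f_vol·K_vol + f_geo·K_geo, evaluated at the sun and sensor geometry. The sketch below shows only the standard Ross-Thick volumetric kernel; the Li-Sparse geometric kernel is omitted for brevity (K_geo = 0 here), and the kernel weights are hypothetical, not values from the HSBR database.

```python
import math

def ross_thick(theta_s, theta_v, phi):
    """Ross-Thick volumetric scattering kernel.
    theta_s, theta_v: solar and view zenith angles (radians);
    phi: relative azimuth (radians)."""
    cos_xi = (math.cos(theta_s) * math.cos(theta_v)
              + math.sin(theta_s) * math.sin(theta_v) * math.cos(phi))
    xi = math.acos(max(-1.0, min(1.0, cos_xi)))  # scattering phase angle
    return (((math.pi / 2 - xi) * math.cos(xi) + math.sin(xi))
            / (math.cos(theta_s) + math.cos(theta_v)) - math.pi / 4)

def ross_li_reflectance(f_iso, f_vol, f_geo, theta_s, theta_v, phi):
    """Kernel-driven reflectance; Li-Sparse kernel omitted in this sketch."""
    k_geo = 0.0
    return f_iso + f_vol * ross_thick(theta_s, theta_v, phi) + f_geo * k_geo

# With both sun and sensor at nadir, the Ross-Thick kernel vanishes and
# the reflectance reduces to the isotropic weight f_iso.
print(ross_li_reflectance(0.1, 0.05, 0.02, 0.0, 0.0, 0.0))
```

Swapping the tabulated hemispherical reflectance for such a kernel evaluation at the actual observation geometry is the extension of the throughput model anticipated here.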

7. Conclusions

The TuLIPSS system is a novel adaptive modality designed to provide flexible and versatile spectral imaging. Given the large number of possible applications and reconfiguration needs, it is crucial to define an optimal system assembly and its expected output. Therefore, this paper provides a comprehensive analysis of the TuLIPSS system and develops a model guiding the optimization process in the context of performance and applications. We considered the primary parameters that determine the surface radiance and those that determine the light throughput. A modelling tool was developed in the Matlab graphical user interface and used to simulate TuLIPSS performance, adopting both hemispherical reflectance data and absolute irradiance measured in the field. This application was also used to define the spectral sampling and the spectral width at a single pixel of the TuLIPSS system.

The simulated results obtained compared favorably with measurement data, confirming the validity of the model. Furthermore, this tool can be used to determine optimal system assemblies to achieve the required signal performance at desired datacube specifications.

Currently, the simulation based on hemispherical spectral reflectance data is still limited to a Lambertian surface approximation, owing to the limited availability of bidirectional reflectance data; we will work to expand the model toward bidirectional reflectance in future applications. Simulations based on absolute irradiance measured in the local field have no such limitation, and it is straightforward to extend this type of modelling to complex surfaces using the available models for bidirectional reflectance distribution functions.

Funding

National Aeronautics and Space Administration (NNX17AD30G).

Acknowledgments

We would like to acknowledge all the members in Tkaczyk lab for the helpful discussions and assistance.

Disclosures

Dr. Tomasz Tkaczyk has financial interests in Attoris LLC focusing on applications and commercialization of hyperspectral imaging technologies.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Wang, M. E. Pawlowski, S. Cheng, J. G. Dwight, R. I. Stoian, J. Lu, D. Alexander, and T. S. Tkaczyk, “Light-guide snapshot imaging spectrometer for remote sensing applications,” Opt. Express 27(11), 15701–15725 (2019). [CrossRef]  

2. Y. Wang, M. E. Pawlowski, and T.S. Tkaczyk, “High spatial sampling light-guide snapshot spectrometer,” Opt. Eng. 56(8), 081803 (2017). [CrossRef]  

3. T. Tkaczyk, "Tunable Light-guide Image Processing Snapshot Spectrometer (TuLIPSS)," ESTF 2021, May 6, 2021.

4. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010). [CrossRef]  

5. L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact image slicing spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17(15), 12293–12308 (2009). [CrossRef]  

6. R. T. Kester, L. Gao, and T. S. Tkaczyk, “Development of image mappers for hyperspectral biomedical imaging applications,” Appl. Opt. 49(10), 1886–1899 (2010). [CrossRef]  

7. L. Gao and R. T. Smith, “Optical hyperspectral imaging in microscopy and spectroscopy - a review of data acquisition,” J. Biophotonics 8(6), 441–456 (2015). [CrossRef]  

8. J. G. Dwight and T. S. Tkaczyk, “Lenslet array tunable snapshot imaging spectrometer (LATIS) for hyperspectral fluorescence microscopy,” Biomed. Opt. Express 8(3), 1950–1964 (2017). [CrossRef]  

9. P. S. Thenkabail, J. G. Lyon, and A. Huete, Hyperspectral Remote Sensing of Vegetation, (CRC Press, 2012).

10. L. Shen, H. Xu, and X. Guo, “Satellite remote sensing of harmful algal blooms (HABs) and a potential synthesized framework,” Sensors 12(6), 7778–7803 (2012). [CrossRef]  

11. M. Govender, K. Chetty, and H. Bulcock, “A review of hyperspectral remote sensing and its application in vegetation and water resource studies,” Water SA 33(2), 145–151 (2007). [CrossRef]  

12. E. Adam, O. Mutanga, and D. Rugege, “Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: a review,” Wetlands Ecol Manage 18(3), 281–296 (2010). [CrossRef]  

13. V. Sivakumar, R. Neelakantan, and M. Santosh, “Lunar surface mineralogy using hyperspectral data: implications for primordial crust in the Earth-Moon system,” Geoscience Frontiers 8(3), 457–465 (2017). [CrossRef]  

14. A. Vali, S. Comai, and M. Matteucci, “Deep Learning for Land Use and Land Cover Classification Based on Hyperspectral and Multispectral Earth Observation Data: A Review,” Remote Sens. 12(15), 2495 (2020). [CrossRef]  

15. J. Chaves, Introduction to Nonimaging Optics, 2nd ed. (CRC Press, 2015), pp. 103–107.

16. R. Bird and C. Riordan, “Simple Solar Spectral Model for Direct and Diffuse Irradiance on Horizontal and Tilted Planes at the Earth's Surface for Cloudless Atmospheres,” J. Appl. Meteorol. 25(1), 87–97 (1986). [CrossRef]  

17. T. Shivalingaswamy and B. A. Kagali, “Determination of the Declination of the Sun on a Given Day,” Eur. J. Phys. Educ. 3(1), 17–22 (2017). [CrossRef]  

18. Wikipedia contributors, "Position of the Sun," Wikipedia, accessed 19 May 2021.

19. C. Emde, R. Buras-Schnell, A. Kylling, B. Mayer, J. Gasteiger, U. Hamann, J. Kylling, B. Richter, C. Pause, T. Dowling, and L. Bugliaro, “The libradtran software package for radiative transfer calculations (version 2.0.1),” Geosci. Model Dev. 9(5), 1647–1672 (2016). [CrossRef]  

20. B. Mayer and A. Kylling, “Technical note: The libRadtran software package for radiative transfer calculations - description and examples of use,” Atmos. Chem. Phys. 5(7), 1855–1877 (2005). [CrossRef]  

21. S. G. Leblanc and J. M. Chen, "A Windows Graphic User Interface (GUI) for the Five-Scale model for fast BRDF simulations," Remote Sens. Rev. 19(1-4), 293–305 (2000). [CrossRef]

22. J. M. Chen and S. G. Leblanc, “Multiple-scattering scheme useful for geometric optical modelling,” IEEE Trans. Geosci. Remote Sensing 39, 1061–1071 (2001). [CrossRef]  

23. Q. Yang, X. Liu, and W. Wu, “A Hyperspectral Bidirectional Reflectance Model for Land Surface,” Sensors 20(16), 4456 (2020). [CrossRef]  

24. N. I. E. Bachari, S. Lamine, and K. Meharrar, "Geometric-Optical Modeling of Bidirectional Reflectance Distribution Function for Trees and Forest Stands," in Advances in Remote Sensing for Natural Resource Monitoring (Wiley-Blackwell, 2021), pp. 28–41.

25. M. Stavridi, B. Van Ginneken, and J. J. Koenderink, “Surface bidirectional reflection distribution function and the texture of bricks and tiles,” Appl. Opt. 36(16), 3717–3725 (1997). [CrossRef]  

26. A. Kuusk, J. Kuusk, and M. Lang, “Measured spectral bidirectional reflection properties of three mature hemiboreal forests,” Agric. For. Meteorol. 185, 14–19 (2014). [CrossRef]  

27. J. J. Koenderink and W. A. Richards, “Why is snow so bright?” J. Opt. Soc. Am. A 9(5), 643–648 (1992). [CrossRef]  

28. J. A. Smith, L. L. Tzeu, and K. J. Ranson, “The Lambertian assumption and Landsat data,” Photogramm Eng Remote Sensing. 46(9), 1183–1189 (1980).

29. W. A. Gambling, D. N. Payne, and H. Matsumura, “Mode conversion coefficients in optical fibers,” Appl. Opt. 14(7), 1538–1542 (1975). [CrossRef]  

30. D. L. Shealy and J. A. Hoffnagle, “Laser beam shaping profiles and propagation,” Appl. Opt. 45(21), 5118–5131 (2006). [CrossRef]  

31. J. Y. Wang and D. E. Silva, “Wave-front interpretation with Zernike polynomials,” Appl. Opt. 19(9), 1510–1518 (1980). [CrossRef]  

32. H. Ottevaere and H. Thienpont, "Optical Microlenses," in Encyclopedia of Modern Optics, R. D. Guenther, ed. (Elsevier, 2005), pp. 21–43.

33. V. Sacek, "Notes on Amateur Telescope Optics," http://www.telescope-optics.net

34. S. K. Meerdink, S. J. Hook, D. A. Roberts, and E. A. Abbott, “The ECOSTRESS spectral library version 1.0,” Remote Sens. Environ. 230(111196), 1–8 (2019). [CrossRef]  

35. A. M. Baldridge, S.J. Hook, C.I. Grove, and G. Rivera, “The ASTER Spectral Library Version 2.0,” Remote Sens. Environ. 113(4), 711–715 (2009). [CrossRef]  

36. T. Zhuravleva and I. Nasrtdinov, “Simulation of bidirectional reflectance in broken clouds: from individual realization to averaging over an ensemble of cloud fields,” Remote Sens. 10(9), 1342 (2018). [CrossRef]  

37. C. A. Coburn and G. S. Logie, “Temporal dynamics of sand dune bidirectional reflectance characteristics for absolute radiometric calibration of optical remote sensing data,” J. Appl. Remote Sens 12(01), 1 (2018). [CrossRef]  

38. C. Bacour, F. M. Bréon, L. Gonzalez, I. Price, J. P. Muller, P. Prunet, and A. G. Straume, “Simulating Multi-Directional Narrowband Reflectance of the Earth’s Surface Using ADAM (A Surface Reflectance Database for ESA’s Earth Observation Missions),” Remote Sens. 12(10), 1679 (2020). [CrossRef]  

Supplementary Material (1)

Supplement 1: Description of disperser selection, model App, and calibration process




Figures (15)

Fig. 1. Optical schematic of the TuLIPSS system. The parameters determine the light flux at detection: A, area detectable by a single pixel of the camera; L, radiance from the scene; Ωs, solid angle subtended from the objective aperture to the source; Ωo, solid angle subtended from the objective to the image at the fiber input end; Ωfb, fiber accepted solid angle; Ωc, accepted solid angle of the collimating lens; To, transmittance of the objective lens; Tfb, transmittance of the fiber bundle; Tc, transmittance of the collimating lens; Tp, transmittance of the prism; Tf, transmittance of the focusing lens; η, quantum efficiency of the camera detector. θ1, θ2 and θ3 are the corresponding cone half angles of the solid angles Ωs, Ωo, Ωc.

Fig. 2. Flow chart for design considerations in TuLIPSS. Spatial resolution, spectral resolution and spectral bandwidth are determined by the application. FOV is the field of view of the corresponding component, and NA is the numerical aperture.

Fig. 3. Modelling diagram illustrating the influence of the optical components and the operations in the modelling process.

Fig. 4. Simulated solar irradiation spectrum under a cloudless sky based on SPCTRAL2 at Houston, Texas, at 12:36 pm on June 16, 2020.

Fig. 5. Angular distribution of the output beam profile from a fiber core. The step-index fiber has a core diameter of 10 μm and a numerical aperture of 0.28. The solid line is the fitted Gaussian function.

Fig. 6. (a) Point spread function of the focusing lens; (b) normalized cross section of the point spread function; (c) convolution of the point spread function with the fiber core; (d) correction factor for the collection efficiency of a single pixel accounting for the point spread function. All results are simulated using the following parameters: pupil diameter = 36 mm, focal length = 144 mm, sampling pitch = 0.125 μm, sampling number = 1024, fiber core diameter = 10 μm, wavelength = 532 nm, RMS spherical aberration = 0.0745 wave. The red line in (d) is the correction factor for collection efficiency without aberration.

Fig. 7. (a) Deviation angle of the prism with apex angle α; (b) focusing location of different light waves (only the chief rays are shown); f is the focal length of the focusing lens; h is the distance from the optical axis at the imaging plane; d is the pitch of the camera.

Fig. 8. (a) Schematic of infinitesimal segments of the fiber core. The gray circle represents the fiber core; the dashed lines are orthogonal to the dispersion direction. (b) Schematic of spectral accumulation and spectral shift in the image plane. The dispersion spectrum from each segment has a spatial shift along the dispersion direction. The spectral response is the sum of contributions from all the segments.

Fig. 9. Testing the modelling results based on directional hemispherical reflectance under clear sky conditions. In all cases, the simulated results are represented by the black solid line and the TuLIPSS measurements are shown as red circles. (a) Image of reconstructed view; (b) spectrum of sky; (c) spectrum of tile on the building surface; (d) view field recorded by camera; (e) spectrum of concrete roof; (f) spectrum of oak trees.

Fig. 10. Testing the modelling results after radiometric calibration. (a) Reconstructed image of white A4 paper at 642 nm; three positions (blue, green, red crosses) were randomly selected for radiometric calibration; (b) digital counts of the camera (blue, green, red curves, left axis) from the selected positions and the irradiance of the sample surface (black curve, right axis); (c) curves of the conversion coefficients for the three randomly selected positions, colored correspondingly; (d) reconstructed image of the campus view; the three selected positions shown in (a) are overlain; (e) response of the digital camera from the selected positions (blue, green, red, left axis) and the corresponding irradiance after radiometric calibration (dashed blue, green, and red lines, right axis); (f) comparison of the measured results (solid lines) and the simulated results (dashed lines) from the irradiance.

Fig. 11. Dependence of the light throughput on the parameters of TuLIPSS: (a) f-number of the objective lens; (b) magnification of the re-imaging system; (c) pinhole size; (d) effective numerical aperture of the re-imaging system. The parameters used in the figures are: f-number of the photographic lens = 1.5, camera pixel width = 6.5 μm, magnification = 0.8, pinhole diameter = 10 μm, numerical aperture of the collimating lens = 0.25.

Fig. 12. Spectral resolution of a point source and spectral intensity distribution on a single pixel. (a) Simulated spectral resolution for a point source with different prism apex angles. (b) Simulated spectral distribution for a single pixel with center wavelength at 500 nm. Parameters fixed as: BK7 prism, focal length of focusing lens 144 mm, camera pixel size 6.5 μm, fiber core 10 μm. Aberration-free operation is assumed.

Fig. 13. Configuration selection. (a) Schematic configuration of the re-imaging system. (b) Spectral light throughput for different configurations. (c) Spectral resolution of different configurations. Parameters fixed as: BK7 prism, camera pixel size 6.5 μm, fiber core 10 μm. Aberration-free and transmission-loss-free operation is assumed.

Fig. 14. Visualization of performance of different configurations. FOV: field of view. Here the dimensions of the camera sensor area are the same as those of the output end of the fiber bundle.

Fig. 15. Simulated results for the spectral response from TuLIPSS. (a) Spectral response from typical land covers (tree, grass, asphalt, water) at 10 ms exposure with the current design parameters of configuration B as shown in Fig. 14: BK7 prism, focal length of focusing lens 143 mm, camera pixel size 6.5 μm, fiber core 10 μm. (b) The corresponding signal to noise ratio under shot-noise-limited imaging conditions.

Equations (34)

$$\Phi = A\,\Omega_s\,L\,T_{total}$$
$$E_g = E_d + E_b\cos(\theta)$$
$$\cos(\theta) = \sin(\Phi)\sin(\delta) + \cos(\Phi)\cos(\delta)\cos(h)$$
$$\delta = -\arcsin\!\left[\sin(23.44^\circ)\,\cos\!\left(\frac{360}{365.24}(N+10) + \frac{360}{\pi}\,0.0167\,\sin\!\left(\frac{360}{365.24}(N-2)\right)\right)\right]$$
$$L = \frac{E_g\,R_{hemi}}{2\pi}$$
$$\Phi_1 = A\,\Omega_s\,L\,T_o$$
$$\Omega_s = 2\pi\left(1-\cos(\theta_1)\right) = 4\pi\sin^2\!\left(\frac{\theta_1}{2}\right)$$
$$\Phi_1 = \pi A \sin^2(\theta_1)\,L\,T_o$$
$$\Phi_1 = \pi A \sin^2(\theta_2)\,L\,T_o$$
$$\Phi_1 \approx \frac{\pi}{4(f/\#)^2}\,A\,L\,T_o$$
$$R = \frac{1-\sqrt{1-NA_{fb}^2}}{1-\sqrt{1-\left(\frac{1}{2\,f/\#}\right)^2}}$$
$$I = I_0\exp\!\left(-\frac{2(x-x_0)^\beta}{w^\beta}\right)$$
$$p(r) = P_0\left(1-e^{-2r^2/w^2}\right)$$
$$G\!\left(\frac{d}{M D_{fc}} \ge 1\right) = \frac{\pi M^2 D_{fc}^2}{4};$$
$$G\!\left(1 > \frac{d}{M D_{fc}} \ge \frac{\sqrt{2}}{2}\right) = d\sqrt{(M D_{fc})^2 - d^2} + \left(\frac{\pi}{4} - \cos^{-1}\!\left(\frac{d}{M D_{fc}}\right)\right)(M D_{fc})^2;$$
$$G\!\left(\frac{\sqrt{2}}{2} > \frac{d}{M D_{fc}}\right) = d^2.$$
$$\Phi_{sp} = \frac{1}{M^2}\,G\!\left(\frac{d}{M D_{fc}}\right)\frac{\pi}{4(f/\#)^2}\,L\,T_o\left[F\!\left(\frac{D_P}{M D_{fc}}\right)\right]^2 F(R)\,T_{fb}\left(1-e^{-2r^2/w^2}\right)T_c\,T_p\,T_f$$
$$\Phi_e = \frac{1}{M^2}\,G\!\left(\frac{d}{M D_{fc}}\right)\frac{\pi}{4(f/\#)^2}\,L\,T_o\left[F\!\left(\frac{D_P}{M D_{fc}}\right)\right]^2 F(R)\,T_{fb}\left(1-e^{-2r^2/w^2}\right)T_c\,T_p\,T_f\,\eta$$
$$A(x_p,y_p) = T(x_p,y_p)\,e^{i2\pi W(x_p,y_p)}$$
$$E(x,y) = \int_{-R_{AP}}^{R_{AP}}\!\int_{-R_{AP}}^{R_{AP}} T(x_p,y_p)\,e^{i2\pi W(x_p,y_p)}\,e^{-i\frac{2\pi}{R_{AP}\lambda}(x x_p + y y_p)}\,dx_p\,dy_p$$
$$PSF = E(x,y)\,E^*(x,y)$$
$$n\sin\alpha = \sin(\alpha+\delta)$$
$$\Delta\delta = \frac{\sin(\alpha)}{\cos(\alpha+\delta)}\,\Delta n$$
$$\Delta\delta = \tan^{-1}\!\left(\frac{h+d}{f}\right) - \tan^{-1}\!\left(\frac{h}{f}\right)$$
$$\Delta\delta = \tan^{-1}\!\left(\frac{d}{f}\right)$$
$$n^2 - 1 = \frac{a_1\lambda^2}{\lambda^2-b_1} + \frac{a_2\lambda^2}{\lambda^2-b_2} + \frac{a_3\lambda^2}{\lambda^2-b_3}$$
$$n(\lambda) = \left(\frac{a_1\lambda^2}{\lambda^2-b_1} + \frac{a_2\lambda^2}{\lambda^2-b_2} + \frac{a_3\lambda^2}{\lambda^2-b_3} + 1\right)^{1/2}$$
$$\Delta n = \frac{n'(\lambda)}{1!}\,\Delta\lambda + \frac{n''(\lambda)}{2!}(\Delta\lambda)^2 + \frac{n'''(\lambda)}{3!}(\Delta\lambda)^3$$
$$\Delta\lambda = \frac{-n'(\lambda) + \sqrt{(n'(\lambda))^2 + 2 n''(\lambda)\,\Delta n}}{n''(\lambda)}$$
$$\Delta n = \frac{\cos\!\left(\sin^{-1}(n\sin(\alpha))\right)}{\sin(\alpha)}\,\tan^{-1}\!\left(\frac{d}{f}\right)$$
$$\Delta\lambda = \frac{-n'(\lambda) + \sqrt{(n'(\lambda))^2 + 2 n''(\lambda)\,\frac{\cos\left(\sin^{-1}(n\sin(\alpha))\right)}{\sin(\alpha)}\tan^{-1}\left(\frac{d}{f}\right)}}{n''(\lambda)}$$
$$\Phi = \frac{\pi d^2}{4\,(f/\#\,M)^2}\,T_o\left[F\!\left(\frac{D_P}{M D_{fb}}\right)\right]^2 F\!\left(\frac{1-\sqrt{1-NA_{fb}^2}}{1-\sqrt{1-\left(\frac{1}{2\,f/\#}\right)^2}}\right) T_{fb}\,T_c\,F(R)\,T_p\,T_f\,\Delta a \int_{\lambda-\frac{\Delta\lambda}{2}}^{\lambda+\frac{\Delta\lambda}{2}} L(\lambda)\,\eta(\lambda)\,d\lambda$$
$$G = A\,\Omega_s = \Phi/(L\,T_{total})$$
$$G_{sp} = \frac{1}{M^2}\,H\!\left(\frac{d}{M D_{fc}}\right)\frac{\pi}{4(f/\#)^2}\left[F\!\left(\frac{D_P}{M D_{fc}}\right)\right]^2 F(R)\left(1-e^{-2r^2/w^2}\right)$$