
Intensity-corrected 4D light-in-flight imaging

Open Access

Abstract

Light-in-flight (LIF) imaging is the measurement and reconstruction of light’s path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. However, less well known is that Rayleigh scattering and the effects of imaging optics can lead to observed intensities changing by several orders of magnitude along light’s path. We develop a model that enables us to correct for all of these effects, allowing us to accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

As light travels and interacts with objects, photons are scattered in all directions. Light-in-flight (LIF) imaging is the process of capturing scattered photons using detectors with high temporal resolution such that light’s path can be reconstructed. Three-dimensional LIF imaging was first captured using a holographic plate to record the spherical wavefronts of pulses reflected by mirrors; the technique involved no mechanical processes and achieved a temporal resolution of 800 ps [1]. This demonstrated real-time imaging of light undergoing dynamic processes, whereas previous imaging was static and time averaged [2,3]. Further work proposed a mechanism for correcting distortion effects when imaging light [4]. Recent 3D LIF holography techniques use a scattering medium and achieve higher temporal resolutions [5,6].

The field of LIF imaging was recently revolutionised by Velten et al. [7], who imaged femtosecond laser pulses propagating through a scattering medium using a streak camera. This new LIF method allowed scattering light dynamics to be observed at unprecedented temporal and spatial resolutions. However, it requires a scanning mechanism to build a 2D image, resulting in an acquisition time of one hour. Other methods for capturing 3D LIF involve transient imaging using a photonic mixer device (PMD), achieving nanosecond temporal resolution with a one minute acquisition time [8]. In addition, other 3D LIF imaging methods include time-encoded amplified imaging and computed tomography, which achieve nanosecond and picosecond temporal resolutions respectively [9,10].

The type of scattering that is observed depends on the medium through which the light propagates. For example, light has been captured propagating through optical fibres [11] and heated rubidium vapour [12]. When light travels through air, Rayleigh scattering is the dominant effect, and this was captured by Gariepy et al., who demonstrated three-dimensional LIF imaging using a single-photon avalanche detector (SPAD) array camera [13]. In that work, the light propagated in a single plane perpendicular to the observation axis of the camera (i.e., parallel to the detector). Following this, it was recognised that relativistic effects, where the apparent velocity of light deviates from $c$, could be observed with LIF [14,15], and these principles have allowed four-dimensional LIF reconstruction to be demonstrated for multiple paths of light [16]. This was generalised, using a megapixel camera and machine learning techniques, to capture 4D LIF imaging of multiple pulses following arbitrary straight-line paths in space [17].

These technologies have already been used in fluorescence lifetime imaging [18], light detection and ranging (LIDAR) imaging through scattering media [19] and imaging around corners [20–22]. Ultimately, the ability to accurately capture the full scattering dynamics of light could lead to new approaches when imaging deep inside the human body; see Ref. [23] for an overview of LIF research. Recent research tackling the problem of imaging in highly scattering media has shown that computational imaging approaches can provide images in two [24] and three dimensions [25].

Our work builds on the recent LIF research by developing a model to compensate for distortions in the recorded intensity, as well as the relativistic effects previously observed, and reconstruct the 4D path of laser pulses. The scope of this work is to provide a mechanism for the most accurate reconstruction of LIF measurements, relevant for understanding scattering in a range of scenarios. Understanding the intensity effects which occur in LIF imaging could have future applications in medical imaging, where the intensity of scattered photons gives information on the scattering source and its interaction with objects. To do this, it is necessary to understand the underlying physics of light scattering in air and its relationship to the imaging optics. This is illustrated in Fig. 1, where light scattered at a time $t_1$ from an object at a location ($x_1, y_1, z_1$) propagates a distance $R_1$ to a camera. The remaining light continues to propagate to position ($x_2, y_2, z_2$) where another scattering event occurs at time $t_2$, and the scattered light travels a distance $R_2$ to the camera. The total time taken for the pulse to travel between the two scattering events is $t_2 - t_1 = \Delta t$. By contrast, the two scattering events are recorded by the camera at times $t_3$ and $t_4$ respectively, and the difference in arrival time recorded by the camera is $t_4 - t_3 = \Delta t + (R_2-R_1)/c$. This means the arrival time data recorded by the camera differ from the true propagation times of light and are ultimately dependent on the propagation angle. In the case of a camera, an event occurring in three spatial dimensions and time is mapped onto two spatial dimensions and time. The third spatial ($z$) dimension is collapsed and contained within the temporal data of the camera.
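To make this timing relation concrete, the short Python sketch below compares the real-space and camera-space time differences for two scattering events; the camera position and event coordinates are illustrative values chosen only for this example, not taken from the experiment.

```python
# Sketch of the camera-space timing distortion illustrated in Fig. 1 (illustrative values only).
import numpy as np

c = 2.998e8  # speed of light in air (m/s)

camera = np.array([0.0, 0.0, 0.0])   # assumed camera position
p1 = np.array([-0.5, 0.0, 1.5])      # first scattering event (m)
p2 = np.array([0.5, 0.0, 0.8])       # second scattering event (m), closer to the camera

R1 = np.linalg.norm(p1 - camera)     # path length from event 1 to the camera
R2 = np.linalg.norm(p2 - camera)     # path length from event 2 to the camera
dt_real = np.linalg.norm(p2 - p1) / c        # true propagation time, t2 - t1
dt_camera = dt_real + (R2 - R1) / c          # recorded time difference, t4 - t3

print(f"real-space dt   = {dt_real * 1e9:.3f} ns")
print(f"camera-space dt = {dt_camera * 1e9:.3f} ns")
print(f"apparent velocity ~ {np.linalg.norm(p2 - p1) / dt_camera / c:.1f} c")
```

Because the second event is closer to the camera ($R_2 < R_1$), the recorded time difference is shorter than the true one and the apparent velocity exceeds $c$.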


Fig. 1. (a) Light in “real space” is scattered at ($x_1, y_1, z_1, t_1$) in all directions. A proportion of this scattered light travels to the camera, and is recorded in “camera space” as a signal at ($x_3, y_3, t_3$), where $x_3$ and $y_3$ are pixel positions. (b) The remaining light travels across the field-of-view, and is scattered at position and time ($x_2, y_2, z_2, t_2$). This event is recorded at ($x_4, y_4, t_4$). (c) Bird’s-eye view of the two scattering events, where $R_1$ and $R_2$ are the distances between the camera and the first and second scattering events respectively, and $\alpha _1$ and $\alpha _2$ are the scattering angles. The time difference for the two events in “camera space” is $t_4 - t_3 = \Delta t + (R_2-R_1)/c$, whereas the time difference in “real space” is $t_2 - t_1 = \Delta t$. Rayleigh scattering effects observed by the camera are dependent on $\alpha _1, \alpha _2, R_1$ and $R_2$, which differ significantly for the two scattering events shown. Focusing effects from the imaging optics also contribute to the intensity signal. An image rendered from the perspective of the camera shows that the right side of the beam, which is closer to the camera, appears larger than the left side of the beam. This corresponds to a lower energy density on the right-hand side, and therefore a brighter image on the left.


Rayleigh scattering effects are also observed by the camera and are dependent on the scattering angles ($\alpha _1$ and $\alpha _2$) and propagation distances ($R_1$ and $R_2$). These variables vary along light’s path and so the intensity contribution is dependent on the position along the path. Furthermore, focusing effects contribute to the recorded intensity profile and are dependent on the perpendicular distances between the scattering event and camera. This is shown in the camera view image in Fig. 1(c) which depicts an integrated image of the laser pulse’s path across the field-of-view of the camera used to render the image. The depth of field is increased such that the whole path is in focus. The pulse, travelling towards the camera from left to right, is further away from the camera on the left-hand side and is therefore focused to a smaller size than the right-hand side of the pulse. This results in the intensity of the pulse increasing as the distance between the camera and pulse increases.

In this work, we are able to measure and subsequently correct all of the effects mentioned above. That is to say, we can correct both the temporal distortion, arising from relativistic effects, and the intensity distortions, resulting from Rayleigh scattering and the imaging optics. We demonstrate intensity-corrected LIF imaging using a SPAD array, recording data for a laser pulse propagating at large and small angles with respect to the observation axis of the camera. The relativistic effects result in apparent speed of light velocities that span several orders of magnitude, and the intensity effects lead to observed intensities changing by at least a factor of two along the pulse’s path.

2. Theory

Consider a pulse of light that travels in three dimensions and is imaged using a camera with high temporal resolution. To develop the theoretical framework, we introduce the concept of the “camera space” to indicate where the data are recorded and the “real space” to indicate the three-dimensional space in which the light pulse travels. The goal of this work is to convert the camera space data to the real space as accurately as possible. The inversion of the camera space data to the real space path enables the true intensity-corrected light path to be reconstructed.

Light-in-flight data is subject to intensity and relativistic effects observed in the camera space. The intensity of scattered photons along the beam in camera space is derived by considering the intensity contribution from one segment of the beam on one pixel. The intensity of a segment of the beam is calculated using the schematic in Fig. 2 where laser pulses travelling across the field-of-view, at propagation angle $\theta$ with respect to the observation axis, are imaged using a SPAD array. A proportion of the photons within each pulse are scattered by air molecules and travel through the imaging lens aperture. Different segments of the beam are imaged by different pixels within the SPAD array and the intensity contribution from each segment is dependent on focusing effects from the imaging optics ($I_{f}$), Rayleigh scattering ($I_{r}$), and integrated path length ($I_{s}$).


Fig. 2. Theory Schematic: The intensity of a segment of the pulse is dependent on several factors: Rayleigh scattering, focusing effects and the integrated pulse length. These intensity contributions are derived using the variables shown above where $\theta$ is the propagation angle of the pulse relative to the optical axis, d is the distance between the centre of the pulse and imaging lens, f is the focal length of the imaging lens, r is the distance between the imaging lens and the nearest edge of the segment, $\theta _{1}$ is given by Eq. (2), $\theta _{2}$ is given by Eq. (3), x is the distance along the SPAD array to a given pixel and $\Delta$ is the active pixel width. Combining these effects, the intensity of scattered photons along the beam in camera space $(I(x; ~\theta , ~A, ~f, ~\Delta ))$ is derived and given in Eq. (7). The relativistic effects are explained using the same variables by Eq. (9).


The intensity of the beam in camera space $(I(\theta _{1}, ~\theta _{2}; ~\theta ))$ is given by

$$I(\theta_{1}, ~\theta_{2}; ~\theta) = B I_{f}(\theta_{1};~r,~f)I_{r}(\theta_{1};~\theta,~r)I_{s}(\theta_{1},~\theta_{2};~r),$$
where $B$ is a normalisation constant dependent on integration time and laser power, $x$ is the distance along the SPAD array to a given pixel, $A$ is the sensor width, $f$ is the focal length of the lens, $\Delta$ is the active pixel width, $\theta$ is the propagation angle relative to the observation axis and $r$ is the distance between the imaging lens and the nearest edge of the segment. The angle $\theta _{1}$ satisfies
$$\theta_{1}(x;~A,~f) = \tan^{{-}1} \Big( \frac{2x - A}{2f} \Big),$$
and $\theta _{2}$ satisfies
$$\theta_{2}(x;~A,~f,~\Delta) = \tan^{{-}1} \Big( \frac{2x - A +2\Delta}{2f} \Big).$$
The first contribution to the intensity of a segment of the beam is from focusing effects in the imaging optics of the system and is given by
$$I_{f}(\theta_{1};~r,~f) = \frac{r\cos\theta_{1}}{f}.$$
This contribution is a result of parts of the beam which are further away from the lens focusing to a smaller point on the SPAD array with higher energy density.

The second contribution is from photons undergoing Rayleigh scattering with air molecules and is given by

$$I_{r}(\theta_{1};~\theta,~r) = \frac{I_{0} \pi^{4}(n^{2}-1)^2 d_{r}^{6}}{8 \lambda^{4} (n^{2}+2)^2 } \frac{1+\cos^2(\theta - \theta_{1})}{r^2},$$
where $I_{0}$ is the incident intensity, $n$ is the refractive index, $d_{r}$ is the scattering particle diameter and $\lambda$ is the wavelength of the scattered light. Rayleigh scattering is dependent on the scattering angle and on the distance between the SPAD array and the pulse, both of which change along the beam.

The final contribution to the intensity of one pixel is from the integrated path length, which is the segment length imaged by each pixel, given by

$$I_{s}(\theta_{1},~\theta_{2};~r) = \frac{r\sin(\theta_{2}-\theta_{1})}{\sin(\theta - \theta_{1})}.$$
This results in pixels at the edge of the SPAD array seeing a larger length of pulse than pixels in the middle of the SPAD array.

By combining these effects and substituting Eqs. (2)-(6) into Eq. (1) the intensity of scattered photons along the beam recorded by the SPAD array ($I(x; ~\theta , ~A, ~f, ~\Delta )$) is found to be

$$I(x; ~\theta, ~A, ~f, ~\Delta) = C\frac{1 + \cos^2 (\theta- \tan^{{-}1} (\frac{2x-A}{2f}))}{f \sin\theta - \frac{2x-A}{2}\cos\theta} \Big( \tan^{{-}1}\Big(\frac{2x-A+ 2\Delta}{2f}\Big) - \tan^{{-}1}\Big(\frac{2x-A}{2f}\Big) \Big),$$
where $C$ is a normalisation constant which includes the Rayleigh scattering constants, the integration time of the SPAD array and the optical power of the laser. Equation (7) assumes $\sin (\theta _{2}-\theta _{1}) \approx \theta _{2}-\theta _{1}$.
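Equation (7) can be evaluated directly. The following Python sketch implements it for one SPAD row, using sensor parameters matching those given in Section 3 (32 pixels at 50 µm pitch, $f = 8$ mm, $\Delta = 6.95~\mu$m) and the fitted angle reported in Section 4; the function name and the normalisation $C = 1$ are illustrative choices rather than part of the published analysis.

```python
# Sketch of the camera-space intensity model of Eq. (7).
import numpy as np

def intensity_profile(x, theta, A, f, delta, C=1.0):
    """Eq. (7): relative photon counts vs. distance x along the sensor (lengths in metres)."""
    u = (2 * x - A) / (2 * f)                               # tan(theta_1), Eq. (2)
    rayleigh = 1 + np.cos(theta - np.arctan(u)) ** 2        # angle-dependent Rayleigh term
    geometry = f * np.sin(theta) - 0.5 * (2 * x - A) * np.cos(theta)
    segment = np.arctan((2 * x - A + 2 * delta) / (2 * f)) - np.arctan(u)  # integrated path length
    return C * rayleigh / geometry * segment

# Example: 32 pixels at 50 um pitch (A = 1.6 mm), f = 8 mm, active width 6.95 um.
A, f, delta = 32 * 50e-6, 8e-3, 6.95e-6
x = (np.arange(32) + 0.5) * 50e-6
counts = intensity_profile(x, np.deg2rad(165.5), A, f, delta)
print(np.round(counts / counts.max(), 3))   # intensity falls with increasing x for this angle
```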

Finally, the Rayleigh effect is shown by measuring the central pixel intensity $(I_{c}(\theta ;~f, ~\Delta ))$ for different values of $\theta$. This intensity is independent of focusing effects as d is constant for all $\theta$ and is given by

$$I_{c}(\theta;~f, ~\Delta) = I(x=\frac{A}{2}; ~\theta, ~A, ~f, ~\Delta) =\frac{C(1 + \cos^2 \theta)}{f\sin \theta}\tan^{{-}1}\left(\frac{\Delta}{f} \right) \propto \frac{1 + \cos^2 \theta}{\sin \theta},$$
which is derived by substituting $x = A/2$ into Eq. (7). This equation is a modified version of the Rayleigh scattering effect and introduces a normalisation factor that takes into account the length of pulse imaged by the central pixel.
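A quick numerical check, using the intensity_profile() sketch above with the same illustrative sensor parameters, confirms that Eq. (8) is simply Eq. (7) evaluated at the central pixel $x = A/2$ (with $C = 1$); the angles chosen below are arbitrary.

```python
# Consistency check: Eq. (8) equals Eq. (7) at x = A/2 (up to the constant C).
import numpy as np

A, f, delta = 32 * 50e-6, 8e-3, 6.95e-6
for theta_deg in (30.0, 90.0, 150.0):
    theta = np.deg2rad(theta_deg)
    eq7 = intensity_profile(np.array([A / 2]), theta, A, f, delta)[0]
    eq8 = (1 + np.cos(theta) ** 2) / (f * np.sin(theta)) * np.arctan(delta / f)
    print(theta_deg, np.isclose(eq7, eq8))   # True for each angle
```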

Relativistic effects seen in the camera space result in the pulse appearing to travel at apparent velocities different to the speed of light. The arrival time in camera space is dependent on $\theta$ and d as shown in Fig. 2(a). The arrival time difference between the central pixel and an arbitrary pixel ($\Delta t(x; ~\theta , ~A, ~f)$) is given by

$$\Delta t(x; ~\theta, ~A, ~f) = \frac{d\Big( \sqrt{{\bigg(}(\frac{2x-A}{2f})^{2}+1{\bigg)}}\sin\theta - \frac{2x-A}{2f} \Big)}{c(\sin\theta + \frac{2x-A}{2f}\cos\theta) } -\frac{d}{c},$$
where $c$ is the speed of light in air. From the above equations, the relativistic and intensity effects observed in the camera space can be modelled and compared to experimental data.
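The relativistic term can be sketched in the same way. The snippet below implements Eq. (9) and evaluates the total camera-space crossing time of one row for parameters matching those reported later ($d = 25.8$ cm, $f = 8$ mm, 32 pixels of 50 µm pitch) and the fitted angle of Section 4; the printed value is illustrative and not a replacement for the measured one.

```python
# Sketch of Eq. (9): per-pixel arrival time relative to the central pixel.
import numpy as np

C_LIGHT = 2.998e8  # speed of light in air (m/s)

def delta_t(x, theta, A, f, d):
    """Eq. (9): camera-space arrival time offset of the pixel at position x."""
    u = (2 * x - A) / (2 * f)                     # tan(theta_1), Eq. (2)
    return d * (np.sqrt(u ** 2 + 1) * np.sin(theta) - u) / (
        C_LIGHT * (np.sin(theta) + u * np.cos(theta))) - d / C_LIGHT

A, f, d = 32 * 50e-6, 8e-3, 0.258
x = (np.arange(32) + 0.5) * 50e-6
dt = delta_t(x, np.deg2rad(165.5), A, f, d)
print(f"camera-space crossing time ~ {(dt.max() - dt.min()) * 1e12:.0f} ps")
```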

3. Experimental setup

The relativistic and intensity effects of LIF imaging are investigated using the experimental set-up shown in Fig. 3. The system includes a SPAD array camera, a 532 nm short pulsed laser (Teem Photonics STG-03E-1x0), and an optical constant fraction discriminator used as a trigger. The impact of the intensity and relativistic effects is more pronounced when the light travels at large or small angles with respect to the optical axis of the camera, which corresponds to light travelling towards and away from the camera, see Figs. 3(b) and 3(c) respectively.

Laser pulses, which have a pulse width of $\approx$ 500 ps, are expanded to a beam waist of $\approx 5~$mm and collimated via two lenses of focal length 100 mm and 400 mm respectively, resulting in a Rayleigh range of 150 m. This ensures there are no intensity effects due to the beam diverging as it travels across the field-of-view of the sensor. The laser pulses are directed to a constant fraction discriminator acting as a trigger with 200 ps jitter, which sends 4 kHz transistor–transistor logic (TTL) pulses to the SPAD array. The TTL pulse starts the timer for each of the 32 × 32 pixels operated in Time-Correlated Single Photon Counting (TCSPC) mode. Histograms of photon counts are recorded for every pixel over 1024 time bins, each with a width of 55 ps. The pixel area and active area are $50~\mu$m × $50~\mu$m and $6.95~\mu$m × $6.95~\mu$m respectively, giving a fill factor of 1.9%.
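As a quick consistency check on the quoted Rayleigh range, the sketch below assumes the $\approx 5$ mm beam waist refers to the $1/e^2$ radius of a Gaussian beam at 532 nm (an assumption, since the waist definition is not stated above).

```python
# Rayleigh range of the collimated beam (a sketch; the waist definition is an assumption).
import numpy as np

wavelength = 532e-9          # m
w0 = 5e-3                    # assumed 1/e^2 waist radius (m)
z_R = np.pi * w0 ** 2 / wavelength
print(f"Rayleigh range ~ {z_R:.0f} m")   # ~150 m, so the beam stays collimated over the field-of-view
```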


Fig. 3. (a) 532 nm laser pulses are collimated by a series of lenses, increasing the beam diameter by four times to $\approx 5~$mm, and travel towards the SPAD array which records temporal and intensity data of scattered photons. From this information, $\theta$ and the intensity distribution of the beam $(I(x; ~\theta , ~A, ~f, ~\Delta ))$ in camera space are calculated. (b) Bird’s-eye view of pulses travelling towards the SPAD array where $d = 25.8$ cm, $f_{1}=100$ mm and $f_{2}=400$ mm. (c) Bird’s-eye view of pulses travelling away from the SPAD array.


An 8 mm focal length C-mount lens is used to image the beam onto the sensor. The aperture of the lens can be stopped down to extend the depth of field; this is essential to reduce blurring and ensure the entire path of the beam is in focus on the camera. The mirrors used to direct the beam towards the SPAD array are placed outside the field-of-view so that only photons scattered by air molecules are collected by the imaging lens, thus avoiding saturation effects and allowing Rayleigh scattering to be observed. Finally, to observe the Rayleigh scattering effects, the laser and trigger were placed on a rotation stage system which allows $\theta$ to be easily varied.

When measuring $I(x; ~\theta , ~A, ~f, ~\Delta )$, it is important for the whole of the beam to be in focus. This is demonstrated in Figs. 4(a) and 4(b), which show electron multiplying charge-coupled device (EMCCD) intensity images of the beam travelling from right to left away from the SPAD array with an open and closed aperture respectively. When the lens aperture is open, out-of-focus light contributes to the intensity image, resulting in part of the beam being out of focus and less intense than predicted by Eq. (7). When the lens aperture is closed, only in-focus light is incident on the detector and the predicted intensity effects are observed. This condition requires longer acquisition times to collect sufficient photon counts to build an intensity image.


Fig. 4. The effect of stopping down the aperture on the camera lens as measured with an EMCCD camera. The light is travelling away from the sensor from right to left in the images. (a) EMCCD intensity image of the beam with the aperture fully open. (b) EMCCD intensity image of the beam with the aperture closed. In (b) the entire beam is in focus and the intensity effects described by Eq. (7) are observed. Our theoretical model assumes that we stop down the aperture, as seen in (b).


This experimental configuration allows for the imaging of temporally correlated laser pulses and can be generalised for multiple pulses travelling across the field-of-view. In order to image temporally uncorrelated laser pulses, a triggering signal from each source must be provided to the SPAD array.

4. Results

In order to achieve intensity-corrected 4D LIF imaging, it is important to remove the noise present in the SPAD array data. This has been achieved by fitting Gaussian functions to the temporal histogram of each pixel and setting the pixel intensity to zero if the standard deviation of the Gaussian is outside an acceptable range. Furthermore, for noisy pixels within the beam path, interpolation is performed.
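A minimal sketch of this denoising step is given below, assuming the data are stored as a 32 × 32 × 1024 array of photon counts; the acceptance range for the fitted standard deviation and the initial guesses are illustrative placeholders rather than the values used for the published results.

```python
# Sketch of the per-pixel denoising: fit a Gaussian to each pixel's TCSPC histogram
# and zero pixels whose fitted width lies outside an acceptable range.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, t0, sigma, bg):
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + bg

def denoise(histograms, bin_width_ps=55.0, sigma_range=(100.0, 2000.0)):
    """histograms: (32, 32, 1024) array of counts. Returns a cleaned copy."""
    t = np.arange(histograms.shape[-1]) * bin_width_ps        # time axis (ps)
    cleaned = histograms.astype(float).copy()
    for i in range(histograms.shape[0]):
        for j in range(histograms.shape[1]):
            h = histograms[i, j]
            p0 = [h.max(), t[h.argmax()], 500.0, np.median(h)]  # rough initial guess
            try:
                popt, _ = curve_fit(gaussian, t, h, p0=p0, maxfev=2000)
                sigma = abs(popt[2])
            except RuntimeError:
                sigma = np.inf                                  # failed fit: treat as noise
            if not (sigma_range[0] < sigma < sigma_range[1]):
                cleaned[i, j] = 0.0                             # reject noisy pixel
    return cleaned
```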

The theoretical model was then created using Eqs. (7) and (9). The only input parameters that the model requires are the distance from the camera to the centre of the pulse along the optical axis $d$, the focal length of the lens $f$, the physical sensor size $A$ and the active pixel width $\Delta$. The propagation angle, pulse standard deviation, peak amplitude, and the average background noise are free parameters that the model fits using chi-squared minimisation. If necessary, $d$ can also be set as a free parameter, although we chose to measure it to increase the accuracy of the $\theta$ measurement.
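One way to realise such a fit is sketched below: a two-dimensional (pixel, time) model is built from the Eq. (7) and Eq. (9) sketches given in Section 2 and compared to one row of the histogram data through a chi-squared figure of merit. The global arrival-time offset, the initial guesses and the optimiser choice are assumptions for illustration, not the published implementation.

```python
# Sketch of the chi-squared fit of the camera-space model to one SPAD row.
import numpy as np
from scipy.optimize import minimize

A, f, delta, d = 32 * 50e-6, 8e-3, 6.95e-6, 0.258   # fixed input parameters (Section 3)
x = (np.arange(32) + 0.5) * 50e-6                   # pixel positions along the row (m)
t_bins = np.arange(1024) * 55e-12                   # TCSPC time axis (s)

def model_counts(params, x, t_bins):
    theta, sigma_t, amp, bg, t0 = params
    mu_t = t0 + delta_t(x, theta, A, f, d)              # Eq. (9): arrival time per pixel
    profile = intensity_profile(x, theta, A, f, delta)  # Eq. (7): intensity per pixel
    # Gaussian pulse in time at each pixel, scaled by the camera-space intensity profile
    return bg + amp * profile[:, None] * np.exp(
        -0.5 * ((t_bins[None, :] - mu_t[:, None]) / sigma_t) ** 2)

def chi_squared(params, data, x, t_bins):
    model = model_counts(params, x, t_bins)
    return np.sum((data - model) ** 2 / np.maximum(model, 1.0))   # Poisson-weighted residuals

# Usage (data_row: a (32, 1024) array of measured counts):
# p0 = [np.deg2rad(160.0), 5e-10, 100.0, 1.0, 2.5e-9]
# fit = minimize(chi_squared, p0, args=(data_row, x, t_bins), method="Nelder-Mead")
# theta_fit = np.rad2deg(fit.x[0])
```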

The input parameters used to model the relativistic effects are similar to those used by Laurenzis et al. [14], however the method for measuring the apparent velocity of the pulse differs. Laurenzis et al. measured the relative velocity of the pulse using the true propagation distance and the measured photon arrival time, whereas we use the propagation distance measured by the camera.

Camera space results for laser pulses travelling from left to right towards the SPAD array at $\theta =167.0^{\circ }\pm 0.5^{\circ }$ are shown in Fig. 5. Three frames, each with time duration 55 ps, of laser pulses travelling across row 17 of the SPAD array are shown at 2.1 ns, 2.5 ns and 2.8 ns in Figs. 5(a) to 5(c). The time taken for the pulse to travel across the SPAD array is less than the time bin width, resulting in the pulse being present for all pixels of row 17 in a single frame.


Fig. 5. Results Camera Space: $\theta =167.0^{\circ }\pm 0.5^{\circ }$: laser pulses travelling towards the SPAD array using the set-up shown in Fig. 3(a). (a)-(c) Three frames of laser pulses travelling from left to right across row 17 are shown at 2.1 ns, 2.5 ns and 2.8 ns, where the colour bar represents the number of photon counts and the distance axis is the horizontal field-of-view in real space. (d) and (e) Data and fitted model for row 17 of the SPAD camera. This shows photon counts as a function of position and time. The theoretical fit to the data calculates $\theta$ as $165.5^{\circ }\pm 0.1^{\circ }$. (f) The apparent velocity of the pulse varies from 7.0 c to 6.8 c and the timescale is the camera space time, $\textrm {t}'$. This is significantly shorter than the real space time, leading to the apparent superluminal velocities (see Visualization 1).


The data and fitted model for row 17 as a function of position and time are given in Figs. 5(d) and 5(e). The photon intensity decreases as pixel number increases in both the data and the fitted model. This is because the left-hand side of the beam is further away from the SPAD array and so focuses to a smaller area on the detector with a higher energy density.

The total time taken for the pulse to travel across the SPAD array was measured to be 21.2 ps, and using Eqs. (7) and (9) to fit a 2D Gaussian function to the data, $\theta$ was calculated to be $165.5^{\circ }\pm 0.1^{\circ }$. The error in $\theta$ was calculated by numerically creating 10 statistically identical data sets and fitting to these. These additional data sets were created by sampling from a Poisson distribution with a mean and variance determined by the initial experimental data. Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 5(f) and varies from 7.0 c to 6.8 c. This superluminal apparent velocity is entirely due to the pulse travelling toward the camera.
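This error estimate can be sketched as a simple Poisson bootstrap, where fit_theta stands for the chi-squared fit of the previous section and is assumed rather than defined here:

```python
# Sketch of the uncertainty estimate: refit Poisson-resampled copies of the data.
import numpy as np

def theta_uncertainty(data, fit_theta, n_sets=10, seed=0):
    """data: measured counts (any shape); fit_theta: callable returning the fitted angle."""
    rng = np.random.default_rng(seed)
    thetas = []
    for _ in range(n_sets):
        resampled = rng.poisson(lam=np.maximum(data, 0))  # mean and variance set by the data
        thetas.append(fit_theta(resampled))
    return np.mean(thetas), np.std(thetas)                # spread gives the quoted error
```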

Using the data obtained by the SPAD array, the camera space data is converted to real space data. Figures 6(a)–6(c) show three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied. The pulse appears to travel at superluminal speeds with a total propagation time of 21.2 ps, and the intensity of the pulse appears to decrease as the pulse propagates due to the intensity effects described in Section 2. This corresponds to the camera space data without correction. The arrows indicate that the propagation direction is left to right across the camera. Next, the intensity and relativistic effects are corrected by fitting the model to the experimental data and optimising the input parameters. Relativistic effects are corrected by calculating the pulse’s path from $\theta$ and $d$, and the intensity correction is applied by normalising the raw intensity data by the fitted intensity data. Three frames from the real space movie of the pulse travelling towards the SPAD array are shown in Figs. 6(d)–6(f) at 0.0 ns, 0.4 ns and 0.7 ns. The beam diameter used for the real space reconstruction was $5~$mm and the pulse width was 15 cm. These values were taken from measurements and known values of the pulse.
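A sketch of these two corrections for a single row is given below: the intensity correction divides the measured counts by the fitted camera-space profile of Eq. (7), and the relativistic correction places each imaged segment in real space from $\theta$ and $d$ so that the real-space time follows from propagation at $c$. The helper functions are the Section 2 sketches, the geometry is our own reading of the schematic in Fig. 2, and all names are illustrative.

```python
# Sketch of the intensity and relativistic corrections for one SPAD row.
import numpy as np

C_LIGHT = 2.998e8  # speed of light in air (m/s)

def real_space_position(x, theta, A, f, d):
    """Intersection of each pixel's line of sight with the beam line, in the plane of propagation."""
    theta1 = np.arctan((2 * x - A) / (2 * f))        # pixel viewing angle, Eq. (2)
    s = d * np.sin(theta) / np.sin(theta - theta1)   # range from the lens to the imaged segment
    return s * np.sin(theta1), s * np.cos(theta1)    # (horizontal, depth) coordinates (m)

def correct_row(counts, x, theta, A, f, delta, d):
    """Returns intensity-corrected counts and the real-space time of each imaged segment."""
    fitted = intensity_profile(x, theta, A, f, delta)        # Eq. (7) sketch from Section 2
    corrected = counts / fitted * fitted.mean()              # flatten the camera-space intensity
    xr, zr = real_space_position(x, theta, A, f, d)
    path = np.hypot(xr - xr[0], zr - zr[0])                  # distance travelled along the beam
    return corrected, path / C_LIGHT                         # in real space the pulse moves at c
```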


Fig. 6. Results Real Space $\theta =167.0^{\circ }\pm 0.5^{\circ }$: (a)-(c): Three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied, where the arrows indicate the direction of propagation and $\textrm {t}'$ indicates camera space time. The pulse appears to travel at superluminal speeds and to decrease in intensity as it propagates due to the relativistic and focusing effects described in Section 2 (see Visualization 2). (d)-(f) Three frames from the real space movie of the pulse travelling at $165.5^{\circ }\pm 0.1^{\circ }$, where $t$ indicates real space time. Both intensity and relativistic corrections have now been applied, and the pulse intensity is approximately constant in all frames. Note that the timescale is now real space time as the pulse of light travels at c (see Visualization 3).


Camera space data was also recorded for laser pulses travelling from left to right away from the SPAD array at $\theta =13.0^{\circ }\pm 0.5^{\circ }$ using the set-up shown in Fig. 3(c). Three frames of laser pulses travelling across row 17 are shown at 1.7 ns, 2.5 ns and 3.3 ns in Figs. 7(a)–7(c). The pulse length present in each frame is shorter for light travelling away from the SPAD array, indicating lower apparent velocities.


Fig. 7. Results Camera Space: $\theta =13.0^{\circ }\pm 0.5^{\circ }$: laser pulses travelling away from the SPAD array using the set-up shown in Fig. 3(c). The pulse direction has been reversed for clarity. (a)-(c) Three frames of laser pulses travelling across row 17 of the SPAD array are shown at 1.7 ns, 2.5 ns and 3.3 ns. (d)-(e) Data and fitted model for row 17 of the SPAD camera. This shows photon counts as a function of position and time. The theoretical fit to the data calculates $\theta$ as $13.9^{\circ }\pm 0.1^{\circ }$. (f) The apparent velocity of the pulse varies from 0.19 c to 0.05 c, indicating a decelerating pulse travelling away from the SPAD array. Note that the timescale is the camera space time, $\textrm {t}'$. This is longer than the real space time, leading to the apparent subluminal velocities (see Visualization 4).


The data and model used to calculate $\theta$ are shown in Figs. 7(d) and 7(e) respectively. The total time taken for light to travel in camera space is 1.6 ns, and the curvature of the fitted function indicates the pulses appear to decelerate as they travel away from the SPAD array. Using Eqs. (7) and (9) to fit to the data, $\theta$ was estimated to be $13.9^{\circ }\pm 0.1^{\circ }$.

Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 7(f) and varies from 0.19 c to 0.05 c. This results in a ratio of the fastest to slowest apparent velocities equal to 156. This is the largest ratio of superluminal to subluminal apparent velocities in 4D LIF imaging; the previous highest ratio was 17, reported in Ref. [17].

Using the camera space data in Fig. 7 and the same method as described above, we can then recreate the real space data of the pulse travelling away from the SPAD array. Figures 8(a) to 8(c) show three frames from a movie of the pulse before intensity and relativistic corrections have been applied. The pulse appears to travel at subluminal speeds with a total propagation time of 1.6 ns and to decelerate as it travels across the field-of-view. Following the intensity and relativistic corrections, three frames of the real space movie are shown in Figs. 8(d) to 8(f) at times of 0.0 ns, 0.4 ns and 0.7 ns.


Fig. 8. Results Real Space $\theta =13.0^{\circ }\pm 0.5^{\circ }$: (a)-(c): Three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied, where $\textrm {t}'$ indicates camera space time. The pulse appears to travel at subluminal speeds, decelerate and increase in intensity as it propagates due to the relativistic and focusing effects described in Section 2 (see Visualization 5). (d)-(f) Three frames from the real space movie of the pulse travelling at $13.9^{\circ }\pm 0.1^{\circ }$, where $t$ indicates real space time. Both intensity and relativistic corrections have now been applied, and the pulse intensity is approximately constant in all frames. Note that the timescale is now real space time as the pulse of light travels at c (see Visualization 6).


Our final experiment demonstrates the angle dependence of scattering in air for LIF imaging, i.e., Rayleigh scattering. This is achieved by placing the pulsed laser on a rotation stage, see Fig. 9, allowing $\theta$ to be easily altered, and recording the intensity of the central pixel of the camera. The central pixel intensity is only dependent on $\theta$ as the distance between the centre of the rotation stage and SPAD array is constant for all values of $\theta$. This removes the effects of focusing and the inverse square dependence, which are both present in the first experiment. Figure 9(b) shows the observed experimental data in good agreement with the predictions of Rayleigh scattering, see Eq. (8). It should be noted that the effects of Rayleigh scattering were present in the previous experimental results but were harder to isolate.
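The comparison in Fig. 9(b) amounts to fitting the measured central-pixel counts with the angular factor of Eq. (8). A sketch is given below, where theta_deg and counts stand for the rotation-stage angles and the corresponding measured photon counts, which are not reproduced here.

```python
# Sketch of comparing central-pixel counts to the Rayleigh prediction of Eq. (8).
import numpy as np
from scipy.optimize import curve_fit

def central_pixel_model(theta, amplitude):
    """Eq. (8) up to a constant: (1 + cos^2(theta)) / sin(theta)."""
    return amplitude * (1 + np.cos(theta) ** 2) / np.sin(theta)

theta_deg = np.arange(25, 151, 5)     # stage angles used in the experiment (degrees)
# counts = ...                        # measured central-pixel photon counts at each angle
# popt, pcov = curve_fit(central_pixel_model, np.deg2rad(theta_deg), counts,
#                        sigma=np.sqrt(counts), p0=[counts.max()])
# normalised = counts / popt[0]       # normalised data plotted against Eq. (8), as in Fig. 9(b)
```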


Fig. 9. Experimental setup and results for Rayleigh scattering. (a) Laser pulses travel across the SPAD array field-of-view at an angle $\theta$ set by the rotation stage. Temporal and intensity data is recorded in $5^{\circ }$ intervals between $25^{\circ }$ and $150^{\circ }$. (b) The normalised central pixel intensity ($I_{c}(\theta ;~f, ~\Delta )$) versus $\theta$, where the errors are given by the square root of the total number of photons recorded $\sqrt n$.


5. Conclusion

Relativistic effects, focusing, and Rayleigh scattering all play a significant role in the observed signal for LIF imaging. By modelling these effects we have been able to invert SPAD array data and reconstruct the true 4D path of laser pulses, showing strong agreement between experiment and theory. We demonstrate the validity of our model by fitting to data obtained for light travelling towards and away from a SPAD array and comparing the temporal and intensity distributions to the model. The ratio of the apparent velocities of the pulses travelling towards and away from the camera is over two orders of magnitude and is the highest ratio observed for LIF imaging.

Funding

Science and Technology Facilities Council (ST/S505407/1); Engineering and Physical Sciences Research Council (EP/S001638/1, EP/T00097X/1).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. N. Abramson, “Light-in-flight recording by holography,” Opt. Lett. 3(4), 121–123 (1978). [CrossRef]  

2. N. Abramson, “Light-in-flight recording: high-speed holographic motion pictures of ultrafast phenomena,” Appl. Opt. 22(2), 215–232 (1983). [CrossRef]  

3. N. H. Abramson and K. G. Spears, “Single pulse light-in-flight recording by holography,” Appl. Opt. 28(10), 1834–1841 (1989). [CrossRef]  

4. N. Abramson, “Light-in-flight recording. 3: Compensation for optical relativistic effects,” Appl. Opt. 23(22), 4007–4014 (1984). [CrossRef]  

5. G. Häusler, J. Herrmann, R. Kummer, and M. Lindner, “Observation of light propagation in volume scatterers with $10^{11}$-fold slow motion,” Opt. Lett. 21(14), 1087–1089 (1996). [CrossRef]  

6. T. Kubota, K. Komai, M. Yamagiwa, and Y. Awatsuji, “Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation,” Opt. Express 15(22), 14348–14354 (2007). [CrossRef]  

7. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graph. 32(4), 1–8 (2013). [CrossRef]  

8. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32, 1–10 (2013).

9. K. Goda, K. Tsia, and B. Jalali, “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature 458(7242), 1145–1149 (2009). [CrossRef]  

10. Z. Li, R. Zgadzaj, X. Wang, Y.-Y. Chang, and M. C. Downer, “Single-shot tomographic movies of evolving light-velocity objects,” Nat. Commun. 5(1), 3085 (2014). [CrossRef]  

11. R. Warburton, C. Aniculaesei, M. Clerici, Y. Altmann, G. Gariepy, R. McCracken, D. Reid, S. McLaughlin, M. Petrovich, J. Hayes, R. Henderson, D. Faccio, and J. Leach, “Observation of laser pulse propagation in optical fibers with a spad camera,” Sci. Rep. 7(1), 43302 (2017). [CrossRef]  

12. K. Wilson, B. Little, G. Gariepy, R. Henderson, J. Howell, and D. Faccio, “Slow light in flight imaging,” Phys. Rev. A 95(2), 023830 (2017). [CrossRef]  

13. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6(1), 6021 (2015). [CrossRef]  

14. M. Laurenzis, J. Klein, and E. Bacher, “Relativistic effects in imaging of light in flight with arbitrary paths,” Opt. Lett. 41(9), 2001–2004 (2016). [CrossRef]  

15. M. Clerici, G. C. Spalding, R. Warburton, A. Lyons, C. Aniculaesei, J. M. Richards, J. Leach, R. Henderson, and D. Faccio, “Observation of image pair creation and annihilation from superluminal scattering sources,” Sci. Adv. 2(4), e1501691 (2016). [CrossRef]  

16. Y. Zheng, M.-J. Sun, Z.-G. Wang, and D. Faccio, “Computational 4d imaging of light-in-flight with relativistic effects,” Photonics Res. 8(7), 1072–1078 (2020). [CrossRef]  

17. K. Morimoto, M.-L. Wu, A. Ardelean, and E. Charbon, “Superluminal motion-assisted four-dimensional light-in-flight imaging,” Phys. Rev. X 11(1), 011005 (2021). [CrossRef]  

18. D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, “Real-time fluorescence lifetime imaging system with a 32 × 32 0.13 µm CMOS low dark-count single-photon avalanche diode array,” Opt. Express 18(10), 10257–10269 (2010). [CrossRef]  

19. D. M. Kocak, F. R. Dalgleish, F. M. Caimi, and Y. Y. Schechner, “A focus on recent developments and trends in underwater imaging,” Mar. Technol. Soc. J. 42(1), 52–67 (2008). [CrossRef]  

20. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745–748 (2012). [CrossRef]  

21. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]  

22. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10117 (2017). [CrossRef]  

23. D. Faccio and A. Velten, “A trillion frames per second: the techniques and applications of light-in-flight photography,” Rep. Prog. Phys. 81(10), 105901 (2018). [CrossRef]  

24. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]  

25. D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat. Commun. 11(1), 4517 (2020). [CrossRef]  

Supplementary Material (6)

NameDescription
Visualization 1       Camera space movie of laser pulses travelling towards a SPAD array. Three frames from this movie are shown in figure 5.
Visualization 2       Camera space movie of laser pulses travelling towards a SPAD array shown in the lab environment. Three frames from this movie are shown in figure 6 (a) to (c).
Visualization 3       Real space movie of laser pulses travelling towards a SPAD array. Three frames from this movie are shown in figure 6.
Visualization 4       Camera space movie of laser pulses travelling away from a SPAD array. Three frames from this movie are shown in figure 7.
Visualization 5       Camera space movie of laser pulses travelling away from a SPAD array shown in the lab environment. Three frames from this movie are shown in figure 8 (a) to (c).
Visualization 6       Real space movie of laser pulses travelling away from a SPAD array. Three frames from this movie are shown in figure 8.

