Abstract
Light-in-flight (LIF) imaging is the measurement and reconstruction of light’s path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. However, less well known is that Rayleigh scattering and the effects of imaging optics can lead to observed intensities changing by several orders of magnitude along light’s path. We develop a model that enables us to correct for all of these effects, allowing us to accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
As light travels and interacts with objects, photons are scattered in all directions. Light-in-flight (LIF) imaging is the process of capturing scattered photons using detectors with high temporal resolution such that light’s path can be reconstructed. Three-dimensional LIF imaging was first captured using a holographic plate to record the spherical wavefronts of pulses reflected by mirrors; the technique involved no mechanical processes and achieved a temporal resolution of 800 ps [1]. This demonstrated real-time imaging of light undergoing dynamic processes; previous imaging was static and time-averaged [2,3]. Further work proposed a mechanism for correcting distortion effects when imaging light [4]. Recent 3D LIF holography techniques use a scattering medium and achieve higher temporal resolutions [5,6].
The field of LIF imaging was recently revolutionised by Velten et al. [7], who imaged femtosecond laser pulses propagating through a scattering medium using a streak camera. This new LIF method allowed light-scattering dynamics to be observed at unprecedented temporal and spatial resolutions. However, this method requires a scanning mechanism to build a 2D image, resulting in an acquisition time of one hour. Other methods for capturing 3D LIF involve transient imaging using a photonic mixer device (PMD), achieving nanosecond temporal resolution with a one-minute acquisition time [8]. In addition, other 3D LIF imaging methods include time-encoded amplified imaging and computed tomography, which achieve nanosecond and picosecond temporal resolutions respectively [9,10].
The type of scattering that is observed is dependent on the medium that the light propagates through. For example, light has been captured propagating through fibre optics [11] and heated rubidium vapor [12]. When light travels through air, Rayleigh scattering is the dominant effect, and this was captured by Gariepy et al. who demonstrated three-dimensional LIF imaging using a single-photon avalanche detector (SPAD) array camera [13]. In this work, the light propagated in one plane that was perpendicular to the axis normal to the detector. Following this, it was recognised that relativistic effects, where the apparent velocity of light would deviate from $c$, could be observed with LIF [14,15] and these principles have allowed four-dimensional LIF reconstruction to be demonstrated for multiple paths of light [16]. This was generalised, using a megapixel camera and machine learning techniques, to capture 4D LIF imaging of multiple pulses following arbitrary straight-line paths in space [17].
These technologies have already been used in fluorescence lifetime imaging [18], light detection and ranging (LIDAR) imaging through scattering media [19] and imaging around corners [20–22]. Ultimately, the ability to accurately capture the full scattering dynamics of light could lead to new approaches when imaging deep inside the human body, see Ref. [23] for an overview of LIF research. Recent research tackling the problem of imaging in highly scattering media has shown computational imaging approaches can provide images in two [24] and three dimensions [25].
Our work builds on the recent LIF research by developing a model to compensate for distortions in the recorded intensity, as well as relativistic effects previously observed, and reconstruct the 4D path of laser pulses. The scope of this work is to provide a mechanism for the most accurate reconstruction of LIF measurements, relevant for understanding scattering in a range of scenarios. Understanding the intensity effects that occur in LIF imaging could have future applications in medical imaging, where the intensity of scattered photons gives information on the scattering source and its interaction with objects. To do this, it is necessary to understand the underlying physics of light scattering in air and the relationship to imaging optics. This is illustrated in Fig. 1, where light scattered at a time $t_1$ from an object at a location ($x_1, y_1, z_1$) propagates a distance $R_1$ to a camera. The remaining light continues to propagate to position ($x_2, y_2, z_2$) where another scattering event occurs at time $t_2$, and the scattered light travels a distance $R_2$ to the camera. The total time taken for the pulse to travel between the two scattering events is $t_2 - t_1 = \Delta t$. In contrast, the two scattering events are recorded by the camera at times $t_3$ and $t_4$ respectively, and the difference in arrival time recorded by the camera is $t_4 - t_3 = \Delta t + (R_2-R_1)/c$. This means the arrival time data recorded by the camera differs from the true propagation times of light and is ultimately dependent on the propagation angle. For the case of a camera, there is a mapping of an event occurring in three spatial dimensions and time to a camera with two spatial dimensions and time. The third spatial ($z$) dimension is collapsed and contained within the temporal data of the camera.
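The timing relation above can be made concrete with a short numerical sketch (illustrative only, not the authors' code; the geometry values are hypothetical): for a pulse moving towards the camera, $R_2 < R_1$, so the recorded interval is shorter than the true one and the pulse appears superluminal.

```python
import numpy as np

c = 2.998e8  # speed of light in m/s

def observed_dt(p1, p2, cam):
    """Recorded arrival-time difference t4 - t3 for scattering at p1 then p2,
    compared with the true propagation time between the two events."""
    R1 = np.linalg.norm(p1 - cam)
    R2 = np.linalg.norm(p2 - cam)
    dt_true = np.linalg.norm(p2 - p1) / c      # true time: distance / c
    return dt_true + (R2 - R1) / c, dt_true    # camera-recorded interval

cam = np.array([0.0, 0.0, 0.0])
p1 = np.array([-0.5, 0.0, 2.0])                # pulse moving towards the camera
p2 = np.array([0.5, 0.0, 1.0])
dt_obs, dt_true = observed_dt(p1, p2, cam)
# For motion towards the camera, R2 < R1, so dt_obs < dt_true and the pulse
# appears to cross the field of view faster than light.
```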
Rayleigh scattering effects are also observed by the camera and are dependent on the scattering angles ($\alpha _1$ and $\alpha _2$) and propagation distances ($R_1$ and $R_2$). These variables vary along light’s path and so the intensity contribution is dependent on the position along the path. Furthermore, focusing effects contribute to the recorded intensity profile and are dependent on the perpendicular distances between the scattering event and camera. This is shown in the camera view image in Fig. 1(c) which depicts an integrated image of the laser pulse’s path across the field-of-view of the camera used to render the image. The depth of field is increased such that the whole path is in focus. The pulse, travelling towards the camera from left to right, is further away from the camera on the left-hand side and is therefore focused to a smaller size than the right-hand side of the pulse. This results in the intensity of the pulse increasing as the distance between the camera and pulse increases.
In this work, we are able to measure and subsequently correct all of the effects mentioned above. That is to say, we can correct both the temporal distortion, arising from relativistic effects, and the intensity distortions, resulting from Rayleigh scattering and the imaging optics. We demonstrate intensity-corrected LIF imaging using a SPAD array, recording data for a laser pulse propagating at large and small angles with respect to the observation axis of the camera. The relativistic effects result in apparent speed of light velocities that span several orders of magnitude, and the intensity effects lead to observed intensities changing by at least a factor of two along the pulse’s path.
2. Theory
Consider a pulse of light that travels in three dimensions and is imaged using a camera with high temporal resolution. To develop the theoretical framework, we introduce the concept of the “camera space” to indicate where the data is recorded and the “real space” to indicate the three dimensional space in which the light pulse travels. The goal of this work is to convert the camera space data to the real space as accurately as possible. The inversion of the camera space data to the real space path enables the true intensity-corrected light path to be reconstructed.
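As a minimal sketch of this mapping, assuming an ideal pinhole camera at the origin with its optical axis along $z$ (an idealisation of the imaging system, not the full model developed below), a real-space event $(x, y, z, t)$ maps to camera space as:

```python
import numpy as np

c = 2.998e8  # speed of light in m/s

def to_camera_space(event, f=8e-3):
    """Map a real-space scattering event (x, y, z, t) to camera space
    (u, v, t_obs) under a pinhole model with focal length f (metres)."""
    x, y, z, t = event
    u, v = f * x / z, f * y / z           # pinhole projection onto the sensor
    R = np.sqrt(x**2 + y**2 + z**2)       # distance from event to the camera
    return u, v, t + R / c                # depth is folded into the arrival time
```

The third spatial dimension survives only in the extra flight time $R/c$, which is why inverting camera-space data requires the propagation angle and distance discussed below.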
Light-in-flight data is subject to intensity and relativistic effects observed in the camera space. The intensity of scattered photons along the beam in camera space is derived by considering the intensity contribution from one segment of the beam on one pixel. The intensity of a segment of the beam is calculated using the schematic in Fig. 2 where laser pulses travelling across the field-of-view, at propagation angle $\theta$ with respect to the observation axis, are imaged using a SPAD array. A proportion of the photons within each pulse are scattered by air molecules and travel through the imaging lens aperture. Different segments of the beam are imaged by different pixels within the SPAD array and the intensity contribution from each segment is dependent on focusing effects from the imaging optics ($I_{f}$), Rayleigh scattering ($I_{r}$), and integrated path length ($I_{s}$).
The intensity of the beam in camera space $(I(\theta _{1}, ~\theta _{2}; ~\theta ))$ is given by
The second contribution is from photons undergoing Rayleigh scattering with air molecules and is given by
The final contribution to the intensity of one pixel is from the integrated path length, which is the segment length imaged by each pixel, given by
By combining these effects and substituting Eqs. (2)-(6) into Eq. (1) the intensity of scattered photons along the beam recorded by the SPAD array ($I(x; ~\theta , ~A, ~f, ~\Delta )$) is found to be
Finally, the Rayleigh effect is shown by measuring the central pixel intensity $(I_{c}(\theta ;~f, ~\Delta ))$ for different values of $\theta$. This intensity is independent of focusing effects, as $d$ is constant for all $\theta$, and is given by
Relativistic effects seen in the camera space result in the pulse appearing to travel at apparent velocities different to the speed of light. The arrival time in camera space is dependent on $\theta$ and $d$ as shown in Fig. 2(a). The arrival time difference between the central pixel and an arbitrary pixel ($\Delta t(x; ~\theta , ~A, ~f)$) is given by
3. Experimental setup
The relativistic and intensity effects of LIF imaging are investigated using the experimental set-up shown in Fig. 3. The system includes a SPAD array camera, a 532 nm short pulsed laser (Teem Photonics STG-03E-1x0), and an optical constant fraction discriminator used as a trigger. The impact of intensity and relativistic effects are more pronounced when the light travels at large or small angles with respect to the optical axis of the camera, which corresponds to light travelling towards and away from the camera, see Figs. 3(b) and 3(c) respectively.
Laser pulses, which have a pulse width of $\approx$ 500 ps, are expanded to a beam waist of $\approx 5~$mm and collimated via two lenses of focal length 100 mm and 400 mm respectively, resulting in a Rayleigh range of 150 m. This ensures there are no intensity effects due to the beam diverging as it travels across the field-of-view of the sensor. The laser pulses are directed to a constant fraction discriminator acting as a trigger with 200 ps jitter, which sends 4 kHz transistor–transistor logic (TTL) pulses to the SPAD array. The TTL pulse starts the timer for each of the 32 × 32 pixels operated in Time-Correlated Single Photon Counting (TCSPC) mode. Histograms of photon counts are recorded for every pixel over 1024 time bins, each with a width of 55 ps. The pixel area and active area are $50~\mu$m × $50~\mu$m and $6.95~\mu$m × $6.95~\mu$m respectively, giving a fill factor of 1.9%.
An 8-mm-focal length c-mount lens is used to image the beam onto the sensor. The aperture of the lens can be stopped down to extend the depth of field, and this is essential to reduce blurring and ensure the entire path of the beam is in focus on the camera. The mirrors used to direct the beam towards the SPAD array are placed outside the field-of-view so only photons scattered by air molecules are collected by the imaging lens, thus avoiding saturation effects and allowing Rayleigh scattering to be observed. Finally, to observe the Rayleigh scattering effects, the laser and trigger were placed on a rotation stage system which allows $\theta$ to be easily varied.
When measuring $I(x; ~\theta , ~A, ~f, ~\Delta )$, it is important for the whole of the beam to be in focus. This is demonstrated in Figs. 4(a) and 4(b), which show electron multiplying charge-coupled device (EMCCD) intensity images of the beam travelling from right to left away from the SPAD array with an open and closed aperture respectively. When the lens aperture is open, out-of-focus light contributes to the intensity image, resulting in part of the beam being out of focus and less intense than predicted by Eq. (7). When the lens aperture is closed, only in-focus light is incident on the detector and the predicted intensity effects are observed. This condition requires longer acquisition times to collect sufficient photon counts to build an intensity image.
This experimental configuration allows for the imaging of temporally correlated laser pulses and can be generalised for multiple pulses travelling across the field-of-view. In order to image temporally uncorrelated laser pulses, a triggering signal from each source must be provided to the SPAD array.
4. Results
In order to achieve intensity-corrected 4D LIF imaging, it is important to remove the noise present in the SPAD array data. This has been achieved by fitting Gaussian functions to each pixel and setting the pixel intensity to zero if the standard deviation of the Gaussian is outside an acceptable range. Furthermore, for noisy pixels within the beam path, interpolation is performed.
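The per-pixel rejection step can be sketched as follows. This is a minimal illustration, assuming a simple moment-based Gaussian estimate in place of the full least-squares fit; the 55 ps bin width matches the paper, but the acceptance window for the standard deviation is hypothetical.

```python
import numpy as np

BIN_PS = 55.0                      # time-bin width in picoseconds
SIGMA_RANGE_PS = (100.0, 2000.0)   # hypothetical acceptance window

def clean_pixel(hist):
    """Zero a pixel's histogram if the estimated pulse width is implausible."""
    hist = np.asarray(hist, dtype=float)
    bg = np.median(hist)                       # crude background estimate
    signal = np.clip(hist - bg, 0, None)
    if signal.sum() == 0:
        return np.zeros_like(hist)             # no signal above background
    t = np.arange(hist.size) * BIN_PS
    mu = np.sum(t * signal) / signal.sum()     # first moment: arrival time
    sigma = np.sqrt(np.sum((t - mu) ** 2 * signal) / signal.sum())
    if not SIGMA_RANGE_PS[0] <= sigma <= SIGMA_RANGE_PS[1]:
        return np.zeros_like(hist)             # reject noise-dominated pixel
    return hist
```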
The theoretical model was then created using Eqs. (7) and (9). The only input parameters that the model requires are the distance from the camera to the centre of the pulse along the optical axis $d$, the focal length of the lens $f$, the physical sensor size $A$ and the active pixel width $\Delta$. The propagation angle, pulse standard deviation, peak amplitude, and the average background noise are free parameters that the model fits to using chi-squared minimisation. If necessary, $d$ can also be set as a free parameter, although we chose to measure it to increase the accuracy of the $\theta$ measurement.
The input parameters used to model the relativistic effects are similar to those used by Laurenzis et al. [14]; however, the method for measuring the apparent velocity of the pulse differs. Laurenzis et al. measured the relative velocity of the pulse using the true propagation distance and the measured photon arrival time, whereas we use the propagation distance measured by the camera.
Camera space results for laser pulses travelling from left to right towards the SPAD array at $\theta =167.0^{\circ }\pm 0.5^{\circ }$ are shown in Fig. 5. Three frames, each with time duration 55 ps, of laser pulses travelling across row 17 of the SPAD array are shown at 2.1 ns, 2.5 ns and 2.8 ns in Figs. 5(a) to 5(c). The time taken for the pulse to travel across the SPAD array is less than the time bin width, resulting in the pulse being present for all pixels of row 17 in a single frame.
The data and fitted model for row 17 as a function of position and time are shown in Figs. 5(d) and 5(e). The photon intensity decreases as pixel number increases in both the data and the fitted model. This is because the left hand side of the beam is further away from the SPAD array and so focuses to a smaller area on the detector with a higher energy density.
The total time taken for the pulse to travel across the SPAD array was measured to be 21.2 ps, and using Eqs. (7) and (9) to fit a 2D Gaussian function to the data, $\theta$ was calculated to be $165.5^{\circ }\pm 0.1^{\circ }$. The error in $\theta$ was calculated by numerically creating 10 statistically identical data sets and fitting to these. These additional data sets were created by sampling from a Poisson distribution with a mean and variance determined by the initial experimental data. Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 5(f) and varies from 7.0 c to 6.8 c. This superluminal apparent velocity is entirely due to the pulse travelling toward the camera.
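The error-estimation procedure above can be sketched as follows: each bin of the measured data is resampled from a Poisson distribution whose mean (and hence variance) equals the measured count, each synthetic data set is refitted, and the spread of the fitted parameter gives the quoted uncertainty. Here `fit_theta` stands in for the chi-squared fit of Eqs. (7) and (9) and is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_error(data, fit_theta, n_sets=10):
    """Std. dev. of a fitted parameter over Poisson-resampled data sets."""
    thetas = []
    for _ in range(n_sets):
        resampled = rng.poisson(data)   # Poisson: mean = variance = counts
        thetas.append(fit_theta(resampled))
    return np.std(thetas)
```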
Using the data obtained by the SPAD array, the camera space data is converted to real space data. Figures 6(a)–6(c) show three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied. The pulse appears to travel at superluminal speeds with a total propagation time of 21.2 ps and the intensity of the pulse appears to decrease as the pulse propagates due to the intensity effects described in Section 2. This corresponds to the camera space data without correction. The arrows indicate the propagation direction is left to right across the camera. Next, the intensity and relativistic effects are corrected by fitting the model to the experimental data and optimising the input parameters. Relativistic effects are corrected by calculating the pulse’s path from $\theta$ and $d$, and the intensity correction is applied by normalising the raw intensity data by the fitted intensity data. Three frames from the real space movie of the pulse traveling towards the SPAD array are shown in Figs. 6(d)–6(f) at 0.0 ns, 0.4 ns and 0.7 ns. The beam diameter used for the real space reconstruction was $5~$mm and the pulse width was 15 cm. These values were taken from measurements and known values of the pulse.
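The normalisation step described above amounts to dividing the raw counts by the fitted model intensity, so the reconstructed pulse has uniform brightness along its path. A minimal sketch (the floor value is an illustrative guard, not from the paper):

```python
import numpy as np

def intensity_correct(raw, fitted, floor=1e-6):
    """Normalise raw counts by the fitted model intensity; a small floor
    avoids division by near-zero model values at the edges of the path."""
    return raw / np.clip(fitted, floor, None)
```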
Camera space data was also recorded for laser pulses travelling from left to right away from the SPAD array at $\theta =13.0^{\circ }\pm 0.5^{\circ }$ using the set-up shown in Fig. 3(b). Three frames of laser pulses travelling across row 17 are shown at 1.7 ns, 2.5 ns and 3.3 ns in Figs. 7(a)–7(c). The pulse length present in each frame is shorter for light travelling away from the SPAD array, indicating lower apparent velocities.
The data and model used to calculate $\theta$ are shown in Figs. 7(d) and 7(e) respectively. The total time taken for light to travel in camera space is 1.6 ns, and the curvature of the fitted function indicates the pulses appear to decelerate as they travel away from the SPAD array. Using Eqs. (7) and (9) to fit to the data, $\theta$ was estimated to be $13.9^{\circ }\pm 0.1^{\circ }$.
Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 7(f) and varies from 0.19 c to 0.05 c. This results in a ratio of the fastest to slowest apparent velocities equal to 156. This is the largest ratio of superluminal to subluminal apparent velocities in 4D LIF imaging; the previous highest ratio was 17, reported in Ref. [17].
Using the camera space data in Fig. 7 and the same method as described above, we can then recreate the real space data of the pulse travelling away from the SPAD array. Figures 8(a) to 8(c) show three frames from a movie of the pulse before intensity and relativistic corrections have been applied. The pulse appears to travel at subluminal speeds with a total propagation time of 1.6 ns and decelerate as it travels across the field-of-view. Following the intensity and relativistic corrections, three frames of the real space movie are shown in Figs. 8(d) to 8(f) at times of 0.0 ns, 0.4 ns and 0.7 ns.
Our final experiment demonstrates the angle dependence of scattering in air for LIF imaging, i.e., Rayleigh scattering. This is achieved by placing the pulsed laser on a rotation stage, see Fig. 9, allowing $\theta$ to be easily altered, and recording the intensity of the central pixel of the camera. The central pixel intensity is only dependent on $\theta$ as the distance between the centre of the rotation stage and SPAD array is constant for all values of $\theta$. This removes the effects of focusing and the inverse square dependence, which are both present in the first experiment. Figure 9(b) shows the observed experimental data in good agreement with the predictions of Rayleigh scattering, see Eq. (8). It should be noted that the effects of Rayleigh scattering were present in the previous experimental results but were harder to isolate.
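A sketch of comparing central-pixel counts against the Rayleigh angular dependence follows. We assume the unpolarized form, $I(\theta) \propto 1 + \cos^2\theta$; the paper's Eq. (8) may differ (for example, for polarized laser light), and the fit here is reduced to a single scale factor for illustration.

```python
import numpy as np

def rayleigh_unpolarized(theta_deg, scale):
    """Rayleigh scattered intensity vs. angle, unpolarized-light form."""
    theta = np.radians(theta_deg)
    return scale * (1.0 + np.cos(theta) ** 2)

def fit_scale(theta_deg, counts):
    """Least-squares estimate of the scale factor against measured counts."""
    shape = 1.0 + np.cos(np.radians(theta_deg)) ** 2
    return np.sum(shape * counts) / np.sum(shape ** 2)
```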
5. Conclusion
Relativistic effects, focusing, and Rayleigh scattering all play a significant role in the observed signal for LIF imaging. By modelling these effects we have been able to invert SPAD array data and reconstruct the true 4D path of laser pulses, showing a strong agreement between experiment and theory. We demonstrate the validity of our model by fitting to data obtained for light travelling towards and away from a SPAD array and comparing the temporal and intensity distributions to the model. The ratio of the apparent velocities of the pulses travelling towards and away from the camera is over two orders of magnitude and is the highest ratio observed for LIF imaging.
Funding
Science and Technology Facilities Council (ST/S505407/1); Engineering and Physical Sciences Research Council (EP/S001638/1, EP/T00097X/1).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. N. Abramson, “Light-in-flight recording by holography,” Opt. Lett. 3(4), 121–123 (1978). [CrossRef]
2. N. Abramson, “Light-in-flight recording: high-speed holographic motion pictures of ultrafast phenomena,” Appl. Opt. 22(2), 215–232 (1983). [CrossRef]
3. N. H. Abramson and K. G. Spears, “Single pulse light-in-flight recording by holography,” Appl. Opt. 28(10), 1834–1841 (1989). [CrossRef]
4. N. Abramson, “Light-in-flight recording. 3: Compensation for optical relativistic effects,” Appl. Opt. 23(22), 4007–4014 (1984). [CrossRef]
5. G. Häusler, J. Herrmann, R. Kummer, and M. Lindner, “Observation of light propagation in volume scatterers with $10^{11}$-fold slow motion,” Opt. Lett. 21(14), 1087–1089 (1996). [CrossRef]
6. T. Kubota, K. Komai, M. Yamagiwa, and Y. Awatsuji, “Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation,” Opt. Express 15(22), 14348–14354 (2007). [CrossRef]
7. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM Trans. Graph. 32(4), 1–8 (2013). [CrossRef]
8. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32, 1–10 (2013).
9. K. Goda, K. Tsia, and B. Jalali, “Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena,” Nature 458(7242), 1145–1149 (2009). [CrossRef]
10. Z. Li, R. Zgadzaj, X. Wang, Y.-Y. Chang, and M. C. Downer, “Single-shot tomographic movies of evolving light-velocity objects,” Nat. Commun. 5(1), 3085 (2014). [CrossRef]
11. R. Warburton, C. Aniculaesei, M. Clerici, Y. Altmann, G. Gariepy, R. McCracken, D. Reid, S. McLaughlin, M. Petrovich, J. Hayes, R. Henderson, D. Faccio, and J. Leach, “Observation of laser pulse propagation in optical fibers with a spad camera,” Sci. Rep. 7(1), 43302 (2017). [CrossRef]
12. K. Wilson, B. Little, G. Gariepy, R. Henderson, J. Howell, and D. Faccio, “Slow light in flight imaging,” Phys. Rev. A 95(2), 023830 (2017). [CrossRef]
13. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6(1), 6021 (2015). [CrossRef]
14. M. Laurenzis, J. Klein, and E. Bacher, “Relativistic effects in imaging of light in flight with arbitrary paths,” Opt. Lett. 41(9), 2001–2004 (2016). [CrossRef]
15. M. Clerici, G. C. Spalding, R. Warburton, A. Lyons, C. Aniculaesei, J. M. Richards, J. Leach, R. Henderson, and D. Faccio, “Observation of image pair creation and annihilation from superluminal scattering sources,” Sci. Adv. 2(4), e1501691 (2016). [CrossRef]
16. Y. Zheng, M.-J. Sun, Z.-G. Wang, and D. Faccio, “Computational 4d imaging of light-in-flight with relativistic effects,” Photonics Res. 8(7), 1072–1078 (2020). [CrossRef]
17. K. Morimoto, M.-L. Wu, A. Ardelean, and E. Charbon, “Superluminal motion-assisted four-dimensional light-in-flight imaging,” Phys. Rev. X 11(1), 011005 (2021). [CrossRef]
18. D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, “Real-time fluorescence lifetime imaging system with a 32× 32 0.13 µm cmos low dark-count single-photon avalanche diode array,” Opt. Express 18(10), 10257–10269 (2010). [CrossRef]
19. D. M. Kocak, F. R. Dalgleish, F. M. Caimi, and Y. Y. Schechner, “A focus on recent developments and trends in underwater imaging,” Mar. Technol. Soc. J. 42(1), 52–67 (2008). [CrossRef]
20. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745–748 (2012). [CrossRef]
21. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]
22. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10117 (2017). [CrossRef]
23. D. Faccio and A. Velten, “A trillion frames per second: the techniques and applications of light-in-flight photography,” Rep. Prog. Phys. 81(10), 105901 (2018). [CrossRef]
24. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, “Computational time-of-flight diffuse optical tomography,” Nat. Photonics 13(8), 575–579 (2019). [CrossRef]
25. D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat. Commun. 11(1), 4517 (2020). [CrossRef]