By using time-of-flight information encoded in multiply scattered light, it is possible to reconstruct images of objects hidden from the camera’s direct line of sight. Here, we present a non-line-of-sight imaging system that uses a single-pixel, single-photon avalanche diode (SPAD) to collect time-of-flight information. Compared to earlier systems, this modification provides significant improvements in terms of power requirements, form factor, cost, and reconstruction time, while maintaining a comparable time resolution. The potential for further size and cost reduction makes this technology a good base for developing a practical system that can be used in real-world applications.
© 2015 Optical Society of America
1. Introduction
One of the motivations for developing non-line-of-sight imaging is to remotely view areas that are difficult or dangerous to access. Potential applications of this technology include monitoring hazardous industrial environments, improving spatial awareness in robotic surgery, and searching disaster zones for survivors. There is also interest in using it for security applications, vehicle navigation, and remote exploration via airborne and spaceborne imaging systems. The practicality of employing these systems outside of the laboratory is currently limited by their cost, lack of portability, time resolution, and signal-to-noise ratio.
Non-line-of-sight imaging has been demonstrated at both radio and visible wavelengths. At radio wavelengths, systems have been developed to create low-resolution images through walls, around corners using specular reflections, and to detect motion around a corner. These systems typically require large apertures, especially when the imaging system is far from the scene to be imaged. Methods of performing non-line-of-sight imaging at visible wavelengths include using a coded controllable light source, such as a projector, to illuminate hidden objects, or using specular reflections in a window pane. Photon time-of-flight, which is typically used for ranging in imaging LIDAR or gated viewing systems [6, 7], can also be applied to multiply reflected light to image beyond the direct line of sight.
One of the first techniques used to create time-of-flight videos or transient images was holographic light-in-flight imaging. This method only captures direct, first-bounce light and cannot be used for the light transport analysis of light undergoing multiple reflections. Transient imaging of multiply reflected incoherent light has been demonstrated using a streak camera, inexpensive photonic mixer devices [10–13], and Single-Photon Avalanche Diode (SPAD) arrays.
These devices are also able to capture the light transport information encoded in multiply scattered light, which can be used to reconstruct images of scenes beyond the direct line of sight. A streak camera based system was one of the first to demonstrate this technique [16, 17]. This system provided time resolutions down to 2 picoseconds and a lateral spatial resolution of approximately 1 cm in the reconstruction. The size, price (approximately $150,000), and fragility of the streak camera limit the applications of such a system.
In attempts to make these systems more practical, interest has turned to time-modulated source and detection devices such as photonic mixer devices (PMDs). These devices are compact and inexpensive (less than $500), but are limited to a time resolution of several nanoseconds, or equivalently a spatial resolution of approximately 3 meters. It is possible to reconstruct smaller scenes with the help of regularization, but this requires the incorporation of additional assumptions about the scene, such as the absence of volumetric scattering.
Another method of collecting time-of-flight information is to use a microchip laser in combination with a gated intensified Charge-Coupled Device (iCCD) camera. This system is portable and has a time resolution of several hundred picoseconds. While less expensive than a streak camera based system, an iCCD camera, at $80,000, is still above the price range of mass market applications. It also has a low photon count rate, making it less suitable for this type of measurement.
Our system uses a single SPAD detector. SPADs are solid-state photodetectors that are able to collect extremely fast and weak light signals, down to the single photon level. A silicon SPAD is essentially a p-n junction, reverse biased above its breakdown voltage. A single photo-generated charge carrier absorbed in this junction can trigger a self-sustaining avalanche that can be detected by an external readout circuit. SPADs can be gated by modulating the bias voltage a few volts above or below the breakdown value to filter incoming photons. Recently, non-line-of-sight tracking of the position of a single object in an empty space using a 32 by 32 non-gated SPAD array was demonstrated.
We use a single gated SPAD detector along with a scanned laser to produce full reconstructions of complex scenes. We demonstrate reconstruction with approximately 10 cm resolution using normal surface materials at an average illumination power of 50 mW.
2. Experimental setup
The major components of our system include a laser light source, a SPAD detector, a time-correlated single photon counting (TCSPC) module, and the hidden scene to be imaged. Fig. 1(a) shows the light path through our experimental setup.
The light source is an Amplitude Systems Mikan Laser, generating 250 fs long pulses with a repetition rate of 55 MHz and wavelength of 1030 nm. This wavelength is doubled to create a pulse train at 515 nm with an average power of 50 mW. This is an order of magnitude lower than the laser power used by Velten et al. The pulse train is directed towards one of the side walls of the laboratory using a pair of galvanometer-actuated mirrors following the pattern shown in Fig. 1(b).
Returning photons are collected using a time-gated SPAD with a 20 μm diameter active area. This detector is made using a standard 0.35 μm Complementary Metal Oxide Semiconductor (CMOS) technology. With a 7 V excess-bias voltage, it exhibits a photon detection efficiency of up to 35% at 515 nm with less than 10 dark counts per second at 273 K. The afterpulsing probability is lower than 1% with a 50 ns hold-off time. The timing jitter of this detector is a key parameter in the present work. In our case, it is better than 30 ps full width at half maximum (FWHM), which corresponds to a traveled path length of about 1 cm at the speed of light.
The time-gating feature of the SPAD allows us to disable the detector during the arrival of the first bounce light (indicated by the dashed arrow in Fig. 1(a)), which would otherwise blind the detector from subsequent bounces. Our SPAD module achieves ON and OFF transition times down to 110 ps, at repetition rates up to 80 MHz, and has an adjustable ON-time between 2 ns and 500 ns. In our experiments, the detector ON-time window has a duration of 9.5 ns. This time was chosen as it provides the best compromise between first bounce rejection and extension of the reconstruction volume.
This detector is focused on a single spot covering a 1 cm² area of the wall, using a 1” diameter lens with a 1” focal length. The detector field of view is not changed during the experiment. The detector is protected by an interference filter with a peak transmission at 515 nm and a FWHM bandwidth of 10 nm.
A Time-Correlated Single Photon Counting (TCSPC) unit (PicoQuant HydraHarp) is used to build a histogram of photon counts as a function of arrival time bin after the illumination pulse. This system uses the trigger output of the laser as the time base. An example of the histogram produced by the TCSPC unit is shown in Fig. 2.
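The histogramming performed by the TCSPC unit can be sketched in a few lines (a minimal sketch with synthetic, Gaussian-distributed arrival times; the 1 ps bin width matches the sampling used in our data, while the peak position, spread, and photon number are made up for illustration):

```python
import numpy as np

# Synthetic photon arrival times relative to the laser trigger, in picoseconds.
# In the real system these timestamps come from the HydraHarp TCSPC unit.
rng = np.random.default_rng(0)
arrival_times_ps = rng.normal(loc=4000.0, scale=150.0, size=10_000)

bin_width_ps = 1.0                                      # 1 ps time bins
edges = np.arange(0.0, 9500.0 + bin_width_ps, bin_width_ps)
counts, _ = np.histogram(arrival_times_ps, bins=edges)  # photon counts per bin

peak_time_ps = edges[np.argmax(counts)]                 # bin with the most counts
```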
The pattern we use results in 185 datasets, or time series. For an exposure time of 1 or 10 seconds, the total capture time is about 5 or 32 minutes, respectively.
The objects placed in the scene include two white patches of different sizes, and a 38 by 41 cm letter T made of white paper. These objects are placed so they span the entire available reconstruction volume, as defined by the repetition rate of the laser. In our case, this is approximately a quarter of a sphere with a radius of 1.5 meters since we only reconstruct above the plane of the optical table in front of the wall. We are not able to detect objects outside this area because their reflected light is blocked by the gate closing to prevent the detection of the next first bounce pulse. A photo of the scene is shown in Fig. 3(a) and the three objects are shown in Fig. 3(b).
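The size of this reconstruction volume follows directly from the repetition rate: light scattered by one pulse must return before the first bounce of the next pulse arrives, so the unambiguous optical path is at most one repetition period long. A quick check with the numbers from the text:

```python
c = 299_792_458.0   # speed of light, m/s
rep_rate = 55e6     # laser repetition rate, Hz

# Total optical path light can travel before the next pulse is emitted.
max_path_m = c / rep_rate   # about 5.45 m
```

After subtracting the fixed laser-to-wall and wall-to-detector legs, the remaining round-trip budget for the hidden scene is consistent with the roughly 1.5 m radius quoted above.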
A web camera is used to take images of the laser spots on the wall during data capture. We use these pictures to determine the location of the spots in 3D space and prevent inaccuracies due to pointing error in our scanning mirror system.
The web camera is calibrated using the regular point grid shown in Fig. 4. We extract the centroids (in pixels) for each of the colored dots in the image and pair them with their known three-dimensional coordinates (in centimeters). The positions of the laser spots are determined in pixels from the pictures taken by the web camera and converted to 3D coordinates using linear interpolation between the pixel/three-dimensional coordinate pairs.
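This mapping can be sketched as a bilinear interpolation over the calibration grid (a minimal sketch assuming a regular grid of dots; the pixel centroids and wall coordinates below are made up for illustration, and the wall is taken to lie in the y = 0 plane):

```python
import numpy as np

# Calibration pairs: dot centroids in the image (pixels) and their known 3D
# positions (cm). world[i, j] is the position of the dot at (px_v[i], px_u[j]).
px_u = np.array([100.0, 300.0, 500.0])   # horizontal pixel coordinates of dots
px_v = np.array([100.0, 250.0, 400.0])   # vertical pixel coordinates of dots
world = np.zeros((3, 3, 3))
world[..., 0] = np.array([0.0, 40.0, 80.0])[None, :]   # x on the wall, cm
world[..., 2] = np.array([0.0, 30.0, 60.0])[:, None]   # z on the wall, cm

def pixel_to_world(u, v):
    """Linearly interpolate between the pixel/3D calibration pairs."""
    j = int(np.clip(np.searchsorted(px_u, u) - 1, 0, len(px_u) - 2))
    i = int(np.clip(np.searchsorted(px_v, v) - 1, 0, len(px_v) - 2))
    tu = (u - px_u[j]) / (px_u[j + 1] - px_u[j])
    tv = (v - px_v[i]) / (px_v[i + 1] - px_v[i])
    p00, p01 = world[i, j], world[i, j + 1]
    p10, p11 = world[i + 1, j], world[i + 1, j + 1]
    return (1 - tv) * ((1 - tu) * p00 + tu * p01) + tv * ((1 - tu) * p10 + tu * p11)

laser_spot_xyz = pixel_to_world(200.0, 175.0)   # a detected laser-spot centroid
```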
The accuracy of this calibration method is verified by using the first bounce time-of-flight. We deactivate the gating mechanism to detect the first bounce light and remove the lens from the SPAD detector to eliminate distortions. This time-of-flight data is compared with the time-of-flight values calculated from the manually measured 3D coordinates of the galvos and SPAD detector relative to the wall and the positions of the laser spots as determined from the web camera images. The results are shown in Fig. 5(a). The relative errors between the two methods are less than 10 ps or about 3 mm, as shown in Fig. 5(b).
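The geometric half of this comparison is a direct path-length computation; a sketch with made-up coordinates (these are not the actual laboratory measurements):

```python
import numpy as np

C_CM_PER_PS = 0.0299792458   # speed of light in cm per picosecond

# Illustrative positions in cm: scanning mirrors, SPAD, and a laser spot on the
# wall (the latter obtained from the calibrated web camera in practice).
galvo      = np.array([0.0, 250.0, 50.0])
spad       = np.array([20.0, 240.0, 40.0])
laser_spot = np.array([80.0, 0.0, 60.0])

# First bounce: light travels galvo -> wall spot, scatters, and reaches the SPAD.
path_cm = np.linalg.norm(laser_spot - galvo) + np.linalg.norm(spad - laser_spot)
tof_ps = path_cm / C_CM_PER_PS   # predicted first-bounce time of flight
```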
After calibration, the web camera can be used with the first bounce information from the SPAD to determine the laser and camera positions on the wall or any other irregular relay surface without prior knowledge of its position. For a new scene, manual measurements with respect to the relay surface would no longer be necessary.
3. Reconstruction method
For image reconstruction we use a modified version of the backprojection algorithm presented by Velten et al. While other methods have shown superior resolution and reconstruction quality, such as the convex optimization algorithm used by Heide et al., the size of the projection matrix that would be required for our reconstruction makes this method unfeasible. Since the size of this matrix is determined by the product of the number of laser positions, camera positions, time points, and voxels in the reconstruction volume, the size of our projection matrix would be on the order of 2 terabytes. A filtered backprojection also does not require assumptions about the hidden scene geometry and can be used without regularization. More complex reconstruction methods can also make it difficult to separate hardware and software based artifacts.
Our algorithm uses the number of photons counted per time bin, the photon time of arrival t, the coordinates of the laser spot on the wall xi and yi, and the coordinates of the spot on the wall observed by the detector xo and yo to determine the location and geometry of the hidden object.
Each photon count, N(t, xi, yi, xo, yo), is projected from its original five-dimensional space onto an ellipsoid in the three-dimensional Cartesian space V(x, y, z) spanning the volume to be reconstructed. We choose a voxel size of 2 cm by 2 cm. To correct for the lower intensity of the light further from the wall, we include a distance term in the backprojection algorithm. We also account for Lambertian shading on the wall. Since the angle of the object relative to the wall is unknown, we cannot account for Lambertian shading of the light hitting the object. The Lambertian shading and distance terms only subtly affect the calculation of object position since they do not alter the position of the ellipsoids being drawn in the backprojection.
This backprojection results in a confidence map that describes the likelihood of the light being reflected by the different voxels in the reconstruction volume. An example of this is shown in Fig. 6.
We apply two different filters to the results of the backprojection. The first filter is a Laplacian, as used in previous work. This filter enhances surface edges in the reconstruction volume. To remove noise and obtain surfaces suitable for rendering and display, we apply a thresholding algorithm that favors continuous regions over individual, disconnected voxels. This helps to remove some of the noise that would otherwise be present in the 3D reconstruction. Removing the distance and Lambertian shading terms does not negatively impact the results of the thresholding algorithm, indicating that the decreased intensity due to those factors is not significant in our case. Including Lambertian shading actually tends to amplify noise at the edges of the reconstruction volume and often leads to subjectively inferior reconstruction results.

The backprojected ellipsoids have a thickness that is determined by the time resolution and by the sizes of the laser spot and of the area on the wall where the SPAD is focused, illustrated by s and d respectively in Fig. 1(a). Broadening of the ellipsoid due to time resolution is implicitly included in the backprojection, since the sampling interval in the data (1 ps) is finer than the time resolution of the detector (30 ps) and a single impulse on the detector automatically creates a broadened ellipsoid. The thickness due to the size of spots s and d is below 10 ps for the entire reconstruction volume and was ignored in the reconstruction. Furthermore, this broadening of an ellipsoid is smaller than the diameter of a single voxel in our reconstruction. The complete reconstruction algorithm is outlined below:
- Create a grid of voxels V(x,y,z) referring to points in the reconstruction volume.
- For each collected photon count N(t, xi, yi, xo, yo) compute the set of voxels V where the scatterer reflecting those photons could have been located. Increment the confidence value for those voxels by N*(2πr3/Ad)*(2πr1/Av)*cos(θ), where 2πr3/Ad is the distance correction term for the SPAD area of focus, 2πr1/Av is the distance correction term for the voxel of interest and cos(θ) is the Lambertian term. θ is the angle between r3 and r4.
- Filter: Compute the Laplacian, ∇²V, and normalize.
- Threshold: Each voxel is considered to be above the threshold if its confidence and the confidence of at least 4 neighboring voxels are above the threshold. By normalizing the results of the backprojection before thresholding, we are able to maintain a threshold level that is consistently between 0.5 and 0.6. The threshold is currently adjusted manually to remove visible uniform noise from the 3D reconstruction. This process could be automated in future applications if required, but would take at least several minutes of computation time in the current MATLAB implementation.
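The steps above can be sketched as follows. This is a simplified, voxel-driven sketch under several stated assumptions: the fixed laser-to-wall and wall-to-SPAD path legs are taken to be already subtracted from the time axis, the weighting of the second step is reduced to a generic distance-falloff correction (the area and Lambertian terms are omitted), and the geometry and histogram are synthetic:

```python
import numpy as np

C = 0.0299792458  # speed of light, cm/ps

def backproject(counts, t_ps, laser_spots, det_spot, grid):
    """Accumulate a confidence value per voxel from the measured histograms.

    counts      : (n_laser, n_bins) photon counts per laser position and time bin
    t_ps        : (n_bins,) time-bin centers in ps (fixed path legs subtracted)
    laser_spots : (n_laser, 3) laser spot positions on the wall, cm
    det_spot    : (3,) spot on the wall observed by the SPAD, cm
    grid        : (n_vox, 3) voxel centers, cm
    """
    conf = np.zeros(len(grid))
    dt = t_ps[1] - t_ps[0]
    r3 = np.linalg.norm(grid - det_spot, axis=1)   # voxel -> observed wall spot
    for k, spot in enumerate(laser_spots):
        r1 = np.linalg.norm(grid - spot, axis=1)   # laser spot -> voxel
        bins = np.round(((r1 + r3) / C - t_ps[0]) / dt).astype(int)
        ok = (bins >= 0) & (bins < counts.shape[1])
        # Simplified distance-falloff correction (area/Lambertian terms omitted).
        conf[ok] += counts[k, bins[ok]] * r1[ok] ** 2 * r3[ok] ** 2
    return conf

# Tiny synthetic check: a single point scatterer and a noiseless histogram.
dt = 2.0                                   # ps per bin
t_ps = np.arange(0.0, 12000.0, dt)
laser_spots = np.array([[x, 0.0, z] for x in (0.0, 50.0, 100.0)
                                    for z in (0.0, 50.0, 100.0)])
det_spot = np.array([120.0, 0.0, 50.0])    # wall is the y = 0 plane
scatterer = np.array([40.0, 80.0, 60.0])

grid = np.array([[x, y, z] for x in (20.0, 40.0, 60.0)
                           for y in (60.0, 80.0, 100.0)
                           for z in (40.0, 60.0, 80.0)])

counts = np.zeros((len(laser_spots), len(t_ps)))
r3_true = np.linalg.norm(scatterer - det_spot)
for k, spot in enumerate(laser_spots):
    tof = (np.linalg.norm(scatterer - spot) + r3_true) / C
    counts[k, int(np.round(tof / dt))] = 1.0

conf = backproject(counts, t_ps, laser_spots, det_spot, grid)
recovered = grid[np.argmax(conf)]          # voxel with the highest confidence
```

In the full pipeline, the resulting confidence map would then be passed through the Laplacian filter and the connectivity-based threshold described above.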
The results of this algorithm are converted into a three-dimensional object using a graphical visualization tool (UCSF Chimera).
4. Results
The first scene we reconstruct consists of the 38 × 41 cm letter T and the two white patches, placed so they span the entire reconstruction volume as shown in Fig. 3(a). The time-of-flight data was collected with the lights off and an exposure time of 10 s. The resulting reconstruction is shown in Fig. 7.
All three objects in this scene are reconstructed at their correct positions (Fig. 7). An artifact is created by the specular reflection of the 1 inch filter on the camera. It appears at approximately the correct position and depth and does not interfere with the reconstruction of the remaining objects in the scene.
Using the letter T from the scene described above, data was collected again with the room lights off using 1 s and 10 s exposure times to see how the signal-to-noise ratio affects reconstruction quality. As seen in Fig. 8, the 1 s exposure time (lower signal-to-noise ratio) resulted in a reconstruction with only slightly less defined edges compared to the 10 s exposure. In both cases the general shape of the object is recovered.
Time-of-flight data was also collected for the letter T with the room lights turned on with a 10 s exposure time. The background noise due to the room lights is around 188,000 counts per second, compared to the 30 background counts per second measured with the room lights and laser off. The three-dimensional reconstruction of this case is shown in Fig. 9. Although the signal-to-noise ratio is much lower, the quality of the reconstruction is not much worse than in the case with the lights off.
We also create a similar scene using objects of different materials. The first target is a cross made of cardboard and the second one is a cross made of a diffuse black material that absorbs most of the incoming light. Both objects are placed approximately one meter from the center of the projection area on the wall. Fig. 10 shows the 3D reconstructions of these targets. Although both objects are detectable, the black cross is barely visible and its correct shape is not reconstructed.
5. Discussion

5.1. Resolution
Our reconstruction method establishes object position by triangulation of distances calculated using time-of-flight information from different points on the laboratory wall. The accuracy of the distance calculation, and of the resulting position, depends on the system’s time resolution and the separation between the considered points on the wall (Fig. 11). One would expect that the angular resolution Θ behaves roughly as

Θ ≈ 1.22·c·τ/a,

where τ is the time resolution of the system, a is the largest distance between any two sample points on the wall in the entire pattern contributing to the reconstruction, and c is the speed of light. This is analogous to the Rayleigh criterion for phase-based optical and radar imaging. Our system has an angular resolution of (1.22·c·30 ps)/1 m ≈ 0.011 rad. Therefore, at a distance of 1 m from the wall we expect a resolution on the order of 1.1 cm. The actual resolution achieved, as determined by the reconstructed feature size, is on the order of several centimeters due to uncertainties in the system geometry and artifacts in the reconstruction algorithm.
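Plugging the parameters quoted in the text into this expression (simple arithmetic, no new assumptions):

```python
c = 299_792_458.0   # speed of light, m/s
tau = 30e-12        # system time resolution, s (30 ps FWHM)
a = 1.0             # largest separation between sampled wall points, m

theta = 1.22 * c * tau / a         # angular resolution, rad (about 0.011)
feature_size_at_1m = theta * 1.0   # about 1.1 cm at 1 m from the wall
```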
5.2. Point spread function and number of laser points
The number and distribution of points illuminated on the wall determines the shape of the backprojection for an individual point in the hidden space. This shape varies throughout the reconstruction space and can be seen as the point spread function of the reconstructed image. The more points on the wall involved in the backprojection, the more symmetric and uniform the point spread function is. This is important for a good reconstruction. The decrease in reconstruction quality after reducing the number of laser spots on the wall is shown in Fig. 12. In this reconstruction, we only used half of the collected data by ignoring every second laser spot on the wall.
Finding the optimal number of acquisition points involves balancing the amount of data required against the appearance of the backprojection. In our case, we found that a 15 × 14 point grid over a 1 × 0.8 m area provided the best balance. We did not use the laser positions surrounding the focus point of the detector on the wall, d, as shown in Fig. 1(a). When the wall is illuminated close to d, stray light can hit the detector despite the gate and imaging lens.
The backprojection of a single voxel is shown in Fig. 13. It can be viewed as a local point spread function for the reconstruction. If the number of laser positions is small, this point spread function is less well defined at the center and contains high-frequency artifacts on the sides. This decreases the reconstruction quality and reduces the effectiveness of the Laplacian filter due to its sensitivity to high-frequency noise. As the number of laser positions increases, the artifacts are smoothed out, but only up to a certain point. When the width of an individual ellipsoid, as determined by the time resolution, is large compared to the spacing of ellipsoids from adjacent laser positions, further decreasing the spacing of points has little effect. This threshold depends mainly on the time resolution of the detector. An example is shown in Figs. 13(b) and 13(c).
5.3. Noise levels
Reducing the acquisition time from 10 s to 1 s lowers the signal-to-noise ratio. As the distance from the wall increases, the noise from previous pulses and higher-order multiple bounces also increases. A reconstruction of the letter T for different acquisition times is shown in Fig. 8. Although the edges are not as well defined in the 1 s exposure case, the shape of the object is still discernible and it is reconstructed in the correct location.
In principle, the issue with the low signal-to-noise ratio for short exposure times could also be addressed by increasing the power of the laser.
The data in Table 1 suggest that the most significant source of noise in our data is multiply scattered light from previous pulses. Reducing this noise source would require an illumination laser with a lower repetition rate.
5.4. Ambient light capture
Our current filter for ambient light rejection has a full width at half maximum of about 10 nm. By narrowing the filter width, one could improve the ambient light rejection. The best theoretically achievable value is determined by the time resolution of the laser and detector, which sets a minimum for the spectral bandwidth of the light used. A transform-limited Gaussian pulse of 30 ps length would have a spectral bandwidth of about 49 picometers. This suggests a theoretical optimum of about 1/203 of the ambient light measured with the currently used filter, which would yield a count rate of approximately 927 photons per second. This is comparable to the noise background of 800 counts per second measured in a dark room for our actual data collection. Achieving it would require a tunable laser that can be matched to a commercially available narrowband filter.
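The rejection estimate above is simple arithmetic on the numbers quoted in the text (the small difference from the quoted 927 counts per second comes from rounding of the 49 pm bandwidth):

```python
filter_fwhm_nm = 10.0              # current interference filter bandwidth
min_bandwidth_nm = 0.049           # ~49 pm minimum bandwidth for a 30 ps pulse
ambient_counts_per_s = 188_000.0   # ambient rate with the current filter

narrowing_factor = filter_fwhm_nm / min_bandwidth_nm          # about 204x
ideal_ambient_rate = ambient_counts_per_s / narrowing_factor  # ~920 counts/s
```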
5.5. Capture speed
With a total capture time of several minutes, our current system is unsuitable for imaging scenes with moving objects. To reduce the acquisition time required for a good reconstruction, it is possible to increase the laser intensity, introduce multi-point detection, or increase the collection area of the detector and the aperture of the lenses. However, there is an incentive to keep the laser intensity as low as possible for eye-safety, power-dissipation, and cost reasons. Finally, it would also be possible to replace the single SPAD with a 2D array of multiple detectors observing multiple patches on the wall simultaneously. By detecting photons at multiple positions on the wall, multiple intersecting ellipsoids could be created in the backprojection with only one laser position, reducing the number of required laser points and the total capture time. This kind of detector has already been demonstrated, and recent developments in fully-integrated fast-gating circuits will pave the way for producing a suitable monolithic array of gated SPADs. With these methods, capture times can likely be reduced to fractions of a second.
Unlike previous methods, our system is able to sample a grid of disconnected points on the wall. Previous systems were only able to image a connected area of pixels. Because of this sparse sampling of positions on the wall, the amount of data collected in this setup is relatively small.
5.6. Portability, cost reduction, and eye safety
In its current implementation, our system makes use of stand-alone components like the gated-mode SPAD module and the TCSPC module, used in combination with a high-performance pulsed laser source and some ancillary instrumentation (scanning system, calibration camera, etc.). The overall cost has been reduced compared to previous high-time-resolution, single-photon-sensitive implementations, going from hundreds of thousands of dollars for a streak camera to tens of thousands of dollars for the current setup. Beyond this, the real advantage of our implementation is the potential integration of most of the components into a single, compact device such as a single integrated circuit. This has already happened for non-gated SPAD arrays.
Both the gating and TCSPC electronics could be integrated into silicon and already exist as separate chips. The detector is designed in a standard 0.35 μm CMOS technology, making it suitable for integration with time-gating electronic circuits (in both single- and multi-pixel implementations). High-performance 0.35 μm CMOS Time-to-Digital Converters (TDCs) have already proven to be an effective solution for TCSPC measurements, obtaining resolutions of tens of picoseconds with extremely low power consumption. This type of implementation would take advantage of the cost reduction enabled by microelectronics miniaturization. Used in combination with a pulsed laser diode, the overall system cost could be cut down to a few thousand dollars, or even less. Furthermore, portability would benefit from this integration, giving rise to a compact, handheld, low-power device.
For field use, eye safety is a factor that needs to be taken into consideration. Our current implementation is not eye-safe; however, several modifications could address this. Factors affecting the eye safety of a laser include the wavelength of the light, the scanning speed, and the power level of the beam. To avoid retinal damage, a laser should have a wavelength between 1.4 and 2 μm. Between these wavelengths, radiation does not penetrate more deeply than 100 μm into the cornea, which has a higher damage threshold than the retina. To modify our system to operate within the “eye-safe” range, we could use a mid-IR laser with an InGaAs/InP SPAD rather than a silicon-based one. This would, however, lead to higher dark count rates of thousands of counts per second, and may not be advisable if dark counts rather than ambient light are the limiting factor. On the other hand, there are several frequency bands in this range where sunlight does not effectively penetrate the atmosphere but attenuation is low enough not to significantly weaken a beam over hundreds of meters. Increased scanning speed can also reduce laser dwell times and thus the average illumination intensity.
6. Conclusions
We demonstrate a novel photon counting non-line-of-sight imaging system based on a time-gated SPAD detector and demonstrate the ability to reconstruct images of part of our laboratory via the laboratory wall. We also study the sensitivity of the system towards noise and ambient light.
Because of the high loss in the propagation path, it is essential to maximize the sensitivity, dynamic range, and signal-to-noise ratio of non-line-of-sight imaging systems. For this reason, it is advantageous to use a sensitive, high-throughput photon counting detection system such as a gated SPAD. Photon counting is possible with streak cameras and iCCD cameras, but since only a few photons are counted in each frame, it takes a long time to build up sufficient photon numbers for a reconstruction. The effective dynamic range of a gated SPAD is further increased by blocking the first bounce light from the detector, since this light is many orders of magnitude brighter than the third bounce data required for reconstruction.
SPAD based non-line-of-sight imaging systems provide a promising option for compact, high time resolution, and high sensitivity non-line-of-sight imaging systems.
We would like to acknowledge funding through the NASA NIAC Program (NNH14ZOA001N-14NIAC-A1), the Laboratory for Optical and Computational Instrumentation (LOCI), and the Morgridge Institute for Research. We would also like to thank Nick Bertone and PicoQuant for generously providing advice and equipment to support this research.
References and links
1. T. Ralston, G. Charvat, and J. Peabody, “Real-time through-wall imaging using an ultrawideband multiple-input multiple-output (MIMO) phased array radar system,” in Proceedings of IEEE International Symposium on Phased Array Systems and Technology (IEEE, 2010), pp. 551–558.
2. B. Chakraborty, Y. Li, J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing (IEEE, 2010), pp. 3894–3897.
3. A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar Detection of Moving Targets Behind Corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011). [CrossRef]
4. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual Photography,” ACM Trans. Graph. 24, 745–755 (2005). [CrossRef]
5. E. Repasi, P. Lutzmann, O. Steinvall, M. Elmqvist, B. Göhler, and G. Anstett, “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. 48, 5956–5969 (2009). [CrossRef] [PubMed]
7. S. Gokturk, H. Yalcin, and C. Bamji, “A Time-Of-Flight Depth Sensor - System Description, Issues and Solutions,” in Proceedings of Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2004), pp. 35.
9. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: Capturing and Visualizing the Propagation of Light,” ACM Trans. Graph. 32, 44 (2013). [CrossRef]
10. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded Time of Flight Cameras: Sparse Deconvolution to Address Multipath Interference and Recover Time Profiles,” ACM Trans. Graph. 32, 167 (2013). [CrossRef]
11. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget Transient Imaging Using Photonic Mixer Devices,” ACM Trans. Graph. 32, 45 (2013). [CrossRef]
12. M. O’Toole, F. Heide, L. Xiao, M. B. Hullin, W. Heidrich, and K. N. Kutulakos, “Temporal Frequency Probing for 5d Transient Analysis of Global Light Transport,” ACM Trans. Graph. 33, 87 (2014).
13. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse Mirrors: 3d Reconstruction from Diffuse Indirect Illumination Using Inexpensive Time-of-Flight Sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.
14. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-flight imaging,” Nat. Commun. 6, 6021 (2015). [CrossRef]
15. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking Around the Corner using Ultrafast Transient Imaging,” Int. J. Comput. Vision 95, 13–28 (2011). [CrossRef]
16. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012). [CrossRef] [PubMed]
18. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53, 023102 (2014). [CrossRef]
19. F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sensors Actuat. A: Phys. 140, 103–112 (2007). [CrossRef]
20. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Tracking hidden objects with a single-photon camera,” http://arxiv.org/abs/1503.01699.
21. F. Villa, D. Bronzi, Y. Zou, C. Scarcella, G. Boso, S. Tisa, A. Tosi, F. Zappa, D. Durini, S. Weyers, U. Paschen, and W. Brockherde, “CMOS SPADs with up to 500 μm diameter and 55% detection efficiency at 420 nm,” J. Mod. Optic. 61, 102–115 (2014). [CrossRef]
22. M. Buttafava, G. Boso, A. Ruggeri, A. D. Mora, and A. Tosi, “Time-gated single-photon detection module with 110 ps transition time and up to 80 MHz repetition rate,” Rev. Sci. Instrum. 85, 083114 (2014). [CrossRef] [PubMed]
23. E. F. Pettersen, T. D. Goddard, C. C. Huang, G. S. Couch, D. M. Greenblatt, E. C. Meng, and T. E. Ferrin, “UCSF Chimera: a visualization system for exploratory research and analysis,” J. Comput. Chem. 25, 1605–1612 (2004). [CrossRef] [PubMed]
24. A. Rochas, M. Gosch, A. Serov, P. Besse, R. Popovic, T. Lasser, and R. Rigler, “First fully integrated 2-D array of single-photon detectors in standard CMOS technology,” IEEE Photon. Technol. Lett. 15, 963–965 (2003). [CrossRef]
25. A. Ruggeri, P. Ciccarella, F. Villa, F. Zappa, and A. Tosi, “Integrated Circuit for Subnanosecond Gating of InGaAs/InP SPAD,” IEEE J. Quantum Electron. 51, 4500107 (2015). [CrossRef]
26. B. Markovic, S. Tisa, F. Villa, A. Tosi, and F. Zappa, “A High-Linearity, 17 ps Precision Time-to-Digital Converter Based on a Single-Stage Vernier Delay Loop Fine Interpolation,” IEEE Trans. Circuits and Syst. 60, 557–569 (2013). [CrossRef]
28. J. Zhang, R. Thew, C. Barreiro, and H. Zbinden, “Practical fast gate rate InGaAs/InP single-photon avalanche photodiodes,” Appl. Phys. Lett. 95, 091103 (2009). [CrossRef]