We demonstrate the ability to calibrate a variable optical attenuator directly at the few-photon level using a superconducting Transition Edge Sensor (TES). Because of the inherent linearity of photon-number resolving detection, no external calibrations are required, even for the energy of the laser pulses, which ranged from means of 0.15 to 18 photons per pulse at the detector. To verify this method, calibrations were compared to an independent conventional calibration made at much higher photon fluxes using analog detectors. In all cases, the attenuations estimated by the two methods agree within their uncertainties. Our few-photon measurement determined attenuations using the Poisson-Influenced K-Means Algorithm (PIKA) to extract mean numbers of photons per pulse along with the uncertainties of these means. The robustness of the method is highlighted by the agreement of the two calibrations even in the presence of significant drifts in the optical power over the course of the experiment.
As increasingly weak optical sources are developed and employed, signal-to-noise considerations suggest that calibration will become more difficult. However, the quantized nature of light provides an opportunity to break this paradigm: it allows very weak sources and low-light devices to be characterized accurately without requiring ties to existing standards. Absolute calibration techniques for photon-counting and photon-number-resolving detectors have been introduced previously, based on charge integration photodetection [1], correlated-photon radiometry [2, 3], twin beams [4], synchrotron radiation [5], and heralded photons [6]. Here, we demonstrate a technique for the calibration of optical attenuation at the few-photon level, a regime relevant to applications where higher illumination levels are incompatible with the sample of interest, such as a live cell, or where a sample may be nonlinear and thus requires its attenuation to be measured at the lowest light levels. Our technique uses a photon-number-resolving detector, a superconducting Transition Edge Sensor (TES), and a pulsed laser to measure attenuation over a 20 dB range at levels of fewer than 20 photons per pulse incident on the detector, achieving uncertainties of ±0.02 dB or better with 95% credible intervals (see Appendix A). The technique relies on the photon-number-resolving capability of TES devices paired with an algorithm that accurately converts the output response of the TES into a detected photon number. It is the combination of this detector and algorithm that allows the linearity inherent in photon counting to be used to determine attenuation accurately at the few-photon level. Note that measurements of attenuation steps do not require absolute calibration of the detection efficiency of the TES. We present here an extension of our previously published algorithm [8] to the problem of measuring attenuation at the few-photon level and demonstrate the accuracies that are possible.
Our experiment (Fig. 1) starts with a laser source with a wavelength λ = 850 nm generating picosecond pulses. We use either a TES for observing few-photon pulses or an analog detector for observing at higher power. We describe the few-photon measurements first.
In this case, the laser is operated at a repetition rate of 50 kHz and a relatively high power level for stable operation, with a nominal 70 dB (OD 7) fixed attenuator reducing the light to a level of about 20 detected photons per pulse. Only this low light level is incident on the attenuator under test, and only relative changes in this low light level need to be measured; a measurement of the high light level or of the large fixed attenuator is not required. Our attenuator under test is a variable attenuator under computer control, operated in the range of 3 dB to 30 dB. We note for comparison that a recent study [9] also used a TES and a tomography algorithm for extracting photon number, but that study required a measurement at levels compatible with analog detectors and also used an externally calibrated detector.
In our TES measurement sequence, the attenuation was stepped from 30 dB to 3 dB and then back to 30 dB in steps of nominally 1 dB. A measurement was also made with the attenuator set to 60 dB before and after the 30 dB to 3 dB to 30 dB series to act as an effective zero. At each attenuation setting, 12,800 TES response waveforms of 512 discrete 10 ns samples each were acquired, with voltages digitized to 16 bits over a full range of 0.4 V.
For measurements made using the analog detector, the laser was operated at a repetition rate of 80 MHz, so that the pulse period was one to two orders of magnitude shorter than the analog detector’s response time. The increase of the repetition rate from 50 kHz to 80 MHz provided a nominal 1600-fold increase in the optical power, and the removal of the nominal 70 dB fixed attenuator led to a further increase, for a total nominal power ratio of 1.6 × 10¹⁰. The analog detector is a trap design [10] followed by a high-gain, high-linearity transimpedance amplifier [11] and a high-accuracy digital multimeter (Hewlett-Packard HP 3458A [12]) under computer control and recording.
3. Data reduction and results
The TES response waveforms were processed using the Poisson-Influenced K-Means Algorithm (PIKA) [8]. The main idea behind PIKA is that the detector has an ideal response V̄n(t) for each exact number of photons n absorbed by the detector, where V is a voltage and t is time. Moreover, these ideal responses are ordered such that V̄n+1(t) ≥ V̄n(t) for all n ≥ 0 and all t, with V̄0(t) = 0. Given that the detector is a microcalorimeter, it is reasonable that more photons of the same wavelength deposit more heat and hence give a bigger response. In the experiment, we collect many sets of 12,800 TES waveforms, each set taken at a constant mean photon number, and PIKA analyzes each set individually. The waveforms were passed through a Hanning filter [13] over the lower half of the available bandwidth, with zero response thereafter, to allow a doubling of the time step after back transformation. No attempt was made to remove blackbody photons, as this had no discernible effect on the cluster means. An initial ordering is obtained by taking the dot product of each individual waveform with the mean TES waveform, obtained by averaging over all the waveforms acquired within a set. The waveforms are assigned to clusters based on this ordering and the Poisson distribution. The assignments of the waveforms to clusters are then adjusted to maximize a likelihood that includes factors from both the root-mean-squared (RMS) distance of a waveform to the nearest cluster mean and the proportion of counts in each cluster compared to the prediction of a Poisson distribution.
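The clustering step can be illustrated with a one-dimensional toy model in which each waveform is reduced to a scalar score (its projection onto the mean waveform). The sketch below is illustrative only, not the published implementation: it alternates between assigning scores to photon-number clusters, using a likelihood with a Gaussian distance term and a Poisson term, and updating the cluster means.

```python
import numpy as np
from math import lgamma, log

def poisson_logpmf(k, mu):
    # log of the Poisson probability mass, computed directly
    return k * log(mu) - mu - lgamma(k + 1)

def pika_1d(scores, mean_n, sigma, n_max=30, n_iter=25):
    """Toy one-dimensional PIKA: cluster scalar waveform scores into photon
    numbers 0..n_max by maximizing a likelihood with a Gaussian distance
    term and a Poisson term on cluster membership (an illustrative sketch)."""
    centers = np.arange(n_max + 1, dtype=float)   # ideal response grows with n
    log_prior = np.array([poisson_logpmf(k, mean_n) for k in range(n_max + 1)])
    labels = np.zeros(len(scores), dtype=int)
    for _ in range(n_iter):
        # log-likelihood of each score under each cluster: distance + Poisson
        ll = (-0.5 * ((scores[:, None] - centers[None, :]) / sigma) ** 2
              + log_prior[None, :])
        labels = np.argmax(ll, axis=1)
        for n in range(n_max + 1):                # update occupied cluster means
            if np.any(labels == n):
                centers[n] = scores[labels == n].mean()
    return labels, centers

# simulate Poisson photon numbers with Gaussian noise on the waveform score
rng = np.random.default_rng(0)
true_n = rng.poisson(3.0, size=5000)
scores = true_n + rng.normal(0.0, 0.15, size=5000)
labels, centers = pika_1d(scores, mean_n=3.0, sigma=0.15)
print(f"fraction correctly assigned: {np.mean(labels == true_n):.3f}")
```

When the cluster separation is large compared to the noise, as here, nearly all scores are assigned to the correct photon number; the Poisson term matters most where clusters begin to overlap.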
As initially formulated [8], PIKA required the mean photon number as an input parameter, and different input values could in principle affect the final cluster determinations. In the present work, we instead treat the mean photon number 〈n〉 as another parameter over which the PIKA likelihood function is maximized. When the clustering is sufficiently strong (i.e., 〈n〉 up to about 20), the likelihood as a function of 〈n〉 has a fairly sharp maximum. By treating the mean photon number as a parameter to be estimated, it is no longer necessary to specify 〈n〉 exactly, as was required in the initial work [8].
However, it is still necessary to provide a starting estimate of the likelihood-maximizing mean photon number. The likelihood function is concave over a width of 1 in photon number; secondary peaks exist at integer intervals beyond this width, but they are considerably smaller than the main peak and are therefore neglected. The nominal attenuations and an estimate of the laser power as a function of time provided enough information to generate initial estimates of the mean photon numbers with the required accuracy. It was then possible to evaluate the log-likelihood function on a grid of 11 points to find its maximum. For 〈n〉 ≥ 0.8, a uniform grid of 11 values covering a full range of 0.5 in photon number was chosen; for 〈n〉 < 0.8, uniform steps in the logarithm of the photon number were chosen instead, with the 11 values ranging from a fixed fraction of the preliminary estimate of the mean photon number up to about that estimate. Symmetric and reasonably centered log-likelihood functions were found in all cases.
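The two grid constructions can be written in a few lines. In this sketch the lower factor of 1/4 for the logarithmic grid is a hypothetical placeholder, since the value of that fraction is not specified here.

```python
import numpy as np

def search_grid(n_est):
    """Build the 11-point search grid for the likelihood-maximizing mean
    photon number, following the scheme described above.  The factor of 1/4
    setting the lower end of the logarithmic grid is a placeholder."""
    if n_est >= 0.8:
        # uniform grid covering a full range of 0.5 in photon number
        return np.linspace(n_est - 0.25, n_est + 0.25, 11)
    # below 0.8: uniform steps in log(n), up to about the preliminary estimate
    return np.geomspace(n_est / 4.0, n_est, 11)

print(search_grid(2.3))   # linear grid from 2.05 to 2.55
print(search_grid(0.3))   # logarithmic grid ending at 0.3
```

The log-likelihood would then be evaluated at each grid value and the maximizing 〈n〉 retained.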
We expect the ideal response for a particular exact photon number to be identical when derived from data sets taken at different mean photon numbers. This is illustrated in Fig. 2 for the responses derived from sets with mean photon numbers of 12.08 and 15.17; these results are similar to ones presented earlier on different data [8]. The differences in the times of the falling edges between particular photon-number responses determined from the two different mean photon numbers are small, on the order of 0.02 μs, as compared to a difference of 0.2 μs per photon number. The range of individual photon numbers accessible from a given mean photon number is illustrated in Fig. 3.
The results of maximizing the likelihood function for the data sets are shown in Fig. 4. A slow drift with time of the mean values is seen in Fig. 4. We attribute the drift to changes in the power of the laser over time and use a procedure discussed in Appendix A to account for it. The resultant time trend is given in Fig. 5. For the analog measurements, a somewhat more complicated drift was found and accounted for as discussed in Appendix A.
Fig. 6 shows that the 95% credible intervals for the few-photon TES-PIKA measurements agree in all 21 cases with the 95% prediction intervals for the analog measurements. Assuming the analog calibration method is correct, such agreement is possible only if the TES-PIKA results for the mean photon number are proportional to the number of incident photons; the constant of proportionality is the detection efficiency. Additionally, the uncertainties are smaller than the manufacturer’s specification [14] for repeatability, namely ±0.03 dB up to 10 dB and ±0.10 dB up to 30 dB.
We now show that our attenuation measurement scheme yields an uncertainty within a factor of two of the theoretical minimum set by the number of input photons; this fundamental limit is independent of the details of the detector. A lower bound on the uncertainties in the TES-PIKA measurement is given by the formula for the standard deviation of the mean, σ/√N, where σ is the standard deviation of the population and N is the number of samples. In the case of a Poisson distribution with parameter μ, σ = √μ, so the relative standard deviation of the mean is 1/√(μN). In our problem, μ = 〈n〉 physically, and N = 256,000 is the number of samples: 20 sets of 12,800 during the course of the experiment. As seen in Fig. 4, a mid-range value of μ is 2.3 at 14 dB, giving an estimated relative standard deviation of 0.0013. Translating to the decibel scale and assuming that a coverage factor k = 2 generates a 95% credible interval yields 0.011 dB, which accounts for a majority of the credible-interval half-widths of (a) 0.015 dB and (b) 0.017 dB for the 14 dB cases shown in Fig. 6.
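The quoted numbers follow directly from the formulas above; the sketch below simply restates the arithmetic.

```python
import math

mu = 2.3          # mid-range mean photon number at 14 dB (Fig. 4)
N = 256_000       # 20 sets of 12,800 waveforms

# relative standard deviation of the mean for a Poisson distribution
rel_sd = 1.0 / math.sqrt(mu * N)
# convert a k = 2 interval to a half-width on the decibel scale
half_width_db = 10.0 * math.log10(1.0 + 2.0 * rel_sd)

print(f"{rel_sd:.4f}  {half_width_db:.3f} dB")   # 0.0013  0.011 dB
```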
4. Remark on detection efficiency
Although we are able to determine the response of the detector to a given number of detected photons, in the present experiment we are not able to determine the detection efficiency of the detector, defined as the ratio of the number of detected photons to the number of photons at some point in the detector’s input fiber. The mean number of detected photons is known to better than 1%; nevertheless, we can learn nothing about the detection efficiency because the input obeys a Poisson distribution: a Poisson distribution with parameter μ attenuated by a binomial process with parameter 0 < p ≤ 1 is also a Poisson distribution, but with parameter μp [15]. The thermal distribution shares this property, so such a source would also be unsuitable for determining the detection efficiency.
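The invariance of the Poisson distribution under binomial loss is easy to check numerically. The sketch below (illustrative parameter values) thins Poisson draws with a binomial process and compares the result to a direct Poisson(μp) sample; the two are statistically indistinguishable, which is why the efficiency cannot be recovered from the detected distribution alone.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, p, n = 10.0, 0.35, 200_000   # illustrative source strength and loss

# Poisson source followed by a binomial loss process with survival probability p
thinned = rng.binomial(rng.poisson(mu, n), p)
# direct Poisson sample with the thinned parameter mu * p
direct = rng.poisson(mu * p, n)

print(thinned.mean(), thinned.var())   # both near mu * p = 3.5 (var = mean)
print(direct.mean(), direct.var())
```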
In principle, if the input were known to follow some more general distribution, one that is neither Poisson nor thermal, a study of the detected distribution might allow the detection efficiency to be teased out. The simplest case is when the photons are known to arrive at the detector exclusively in pairs. Then the number of two-photon detections N2 would be proportional to η², where η is the detection efficiency, and the number of single-photon detections N1 would be proportional to 2η(1 − η), so the detection efficiency would be given by η = 2N2/(N1 + 2N2), which is independent of the source strength. Photons generated by parametric down-conversion have been used for absolute calibration with two detectors [17] and with a single detector [18].
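The pair-source estimator can be checked with a short simulation (the efficiency and pair count below are assumed values for illustration): each photon of a pair is detected independently with probability η, and the estimator 2N2/(N1 + 2N2) recovers η regardless of how many pairs are sent.

```python
import numpy as np

rng = np.random.default_rng(2)
eta, pairs = 0.6, 500_000        # assumed efficiency and number of pairs

# for each pair, count how many of the two photons are detected
detected = rng.binomial(2, eta, size=pairs)
N1 = np.sum(detected == 1)       # single-photon detections, ~ 2*eta*(1-eta)
N2 = np.sum(detected == 2)       # two-photon detections, ~ eta**2
eta_est = 2 * N2 / (N1 + 2 * N2)
print(eta_est)                   # close to 0.6, independent of `pairs`
```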
5. Concluding remarks
We have demonstrated the ability of a TES, combined with PIKA for data analysis, to yield values of attenuation with low uncertainties using a source with pulses of just a few photons. The measurements were validated against a conventional analog power meter; the two methods always agree to within statistical uncertainties and are well within the manufacturer’s specifications for repeatability. Moreover, a theoretical lower bound for the uncertainty in the experiment was found, and the estimated uncertainties exceed the bound by less than a factor of two. The uncertainties are smaller at larger mean numbers of photons per pulse. Our interpretation is that the benefit of having more counts outweighs whatever uncertainties arise from inaccuracies in interpreting the number of photons detected from a given response waveform at higher photon numbers. (Naively, some of us had expected the lowest uncertainty to arise at 1 or 2 photons per pulse, because these responses are so well separated, but this is not the case.) The TES, with output waveforms analyzed using PIKA, gives reliable mean numbers of photons per pulse over at least two decades of mean pulse photon number.
Measuring sample attenuation is one of the most basic optical characterizations and is key to many applications. Working at the few-photon level may be the method of choice when samples are fragile or have large nonlinearities. The ability to perform absolute calibration in situ, without resetting optical components, may be important both in common applications and in exotic ones such as loophole-free tests of Bell’s inequality. In addition, other requirements sometimes necessitate operating in this regime, as in quantum key distribution. As another example of the advantage of this type of direct few-photon measurement, we compare our results to those of a recent study aimed at characterizing TES response using tomography, which needed to quantify attenuation values [9]. That approach, which relies on analog photodetectors to calibrate the attenuators, yielded an uncertainty of approximately 8%, an order of magnitude larger than the uncertainties reported here.
There is another application of the method presented here. While the present work achieves low uncertainties specifically in the few-photon regime, the method can also be used to extend the range beyond the limit where the visibility of the photon-number peaks of the TES is lost [19]. By implementing a system with a number of variable attenuators in series, the attenuation of each could be determined in situ with low uncertainties. For example, with just two attenuators in series, setting one to 20 dB attenuation would allow our method to calibrate the other in the 0 dB to 20 dB range; by swapping settings, the first could then be calibrated, allowing a few-photon output to calibrate from about 0.1 to 1000 photons per pulse. The uncertainty of the combined attenuation could be only modestly larger than that reported here. We contrast this stepping up of the calibration range from the few-photon level with the stepping down from the high light levels typical of analog detectors. This new paradigm offers the potential for lower uncertainties for sources in the weak-light, but beyond number-resolving, regime. It would be useful for characterizing the detector response where there is no photon-number visibility (i.e., above about 20 photons), which would otherwise require an external calibration of the incident mean photon number. A five-stage cascade could bridge the gap between few-photon pulses and nanojoule pulses, which could be reliably measured by picowatt detectors if generated at 50 kHz as in the present experiment.
A. Estimation of uncertainty
Credible intervals and prediction intervals
The terminology used in Bayesian inference differs from the more common “confidence interval” of the frequentist paradigm. Here, we use both “credible intervals” and “prediction intervals.” A 95% credible interval refers to a region that has a 95% probability of including the true value of a parameter; from our TES-PIKA data, based on 20 observations for each set of conditions, we obtain credible intervals. A 95% prediction interval is intended to capture, with 95% probability, hypothetical new observations obtained under the same conditions. Our analog data comprise some 450 observations for each condition. We partition these into 22 sets, each of which has about the same amount of data as the TES-PIKA measurements, and obtain 95% prediction intervals for the analog measurements so that their uncertainty can be compared directly to that of the TES-PIKA data.
Analysis of TES-PIKA data
We denote the PIKA observations by yr(t), where t is the time since the beginning of the experiment up to tmax = 168 min and r represents the combination of the nominal attenuation and the direction of change of the attenuator (up or down in the optical power). The TES-PIKA observations may be interpreted as an average number of detected photons in a short window of fixed duration around t. Each condition combination is repeated 20 times.
The TES-PIKA observations are modeled on the decibel scale, zr(t) = −10log10[yr(t)], as zr(t) = αr + Σk ck fk(t) + εr(t), where the sum runs over k = 1 to 4, the fk are polynomial basis functions of degree k [13], αr and ck are unknown constants to be estimated from the data, and εr(t) has a normal distribution with mean 0 and standard deviation σr.
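A model of this form (per-condition offsets αr plus a shared smooth time trend) can be fit by ordinary least squares as a non-Bayesian sanity check on synthetic data; the monomial basis t^k and all parameter values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic data: 3 conditions r, a shared smooth drift, and Gaussian noise
t = np.tile(np.linspace(0.0, 1.0, 20), 3)        # t / t_max, 20 repeats each
r = np.repeat(np.arange(3), 20)                  # condition index
alpha_true = np.array([5.0, 10.0, 15.0])         # per-condition levels (dB)
drift = 0.05 * t - 0.03 * t**2                   # assumed drift, zero at t = 0
z = alpha_true[r] + drift + rng.normal(0.0, 0.005, t.size)

# design matrix: indicator columns for alpha_r, then t^1..t^4 drift columns
X = np.hstack([(r[:, None] == np.arange(3)).astype(float),
               np.column_stack([t**k for k in range(1, 5)])])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
print(beta[:3])   # recovered alpha_r, close to (5, 10, 15)
```

The Bayesian fit described below plays the same role but yields full posterior distributions rather than point estimates.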
The Bayesian paradigm is used for statistical inference , and specifically the posterior distributions of γr = A + αr are sought, where A is a constant chosen to optimize the agreement between the TES-PIKA results and the analog results. Specifically, we choose A to minimize the least squared residuals of the points shown in Fig. 6 for which both analog and TES-PIKA results are given.
To convey a lack of prior knowledge, improper flat priors are used for all parameters. The R [20] package rstan [21, 22, 23] was used to draw samples from the posterior distributions of γr. These samples provide an empirical construction of the posterior distributions; sample properties such as the mean and standard deviation approximate the corresponding properties of the distribution. The credible intervals presented in Fig. 6 are taken from the sample percentiles at 2.5% and 97.5%. To ensure that k = 4 was sufficient, the model was reestimated using k = 6; the estimates for c5 and c6 were not clearly distinguishable from zero, in that their 95% credible intervals enclosed zero. Values for ck are given in Table 1.
Analysis of analog data
The analog data were taken in a fashion similar to the PIKA data, although acquired over a longer period of time. Each condition combination r is repeated in 450 cycles over a time of 5 days, with the data denoted xr(t).
The analog data are first normalized as yr(t) = [xr(t) − x0]/I(t) where x0 is the average response with the attenuator set to 60 dB (the effective zero) and I(t) is the linear interpolation of the 3 dB observations (the reference level). They are then converted to the dB scale as zr(t) = −10log10[yr(t)]. For a fixed condition combination, we have a time series of length 450. The time series is partitioned into 22 contiguous pieces with 12 having 20 observations and 10 having 21 observations. The target of 20 observations per partition corresponds to the number of observations per condition combination in the TES-PIKA data set. Within each partition an average is taken and used as the measured attenuation for that period of time. There are then 22 measurements of attenuation for each condition combination. Means and prediction intervals for the 22 measurements are used to describe the center and variability, respectively, of the distribution of measured attenuations. The means and prediction intervals are shifted together so that the mean measurement for the nominal 5 dB up-in-attenuation condition is set to 5.
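The normalization and partitioning of the analog series can be sketched as follows; the numerical series here is synthetic and for illustration only.

```python
import numpy as np

def to_db(x, t, x0, t_ref, x_ref):
    """Subtract the 60 dB effective zero x0, divide by a linear interpolation
    I(t) of the 3 dB reference readings, and convert to the dB scale."""
    ref = np.interp(t, t_ref, x_ref)          # I(t), the 3 dB reference level
    return -10.0 * np.log10((x - x0) / ref)

def partition_means(z, sizes=(20,) * 12 + (21,) * 10):
    """Average a 450-point series within 22 contiguous pieces
    (12 of 20 points and 10 of 21 points), as described above."""
    assert sum(sizes) == len(z)
    edges = np.cumsum((0,) + sizes)
    return np.array([z[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

# synthetic series: a signal equal to the 3 dB reference normalizes to 0 dB
t = np.linspace(0.0, 5.0, 450)                          # days
z = to_db(2.0 + 0.001 * t, t, 0.5, t, 1.5 + 0.001 * t)  # hypothetical readings
m = partition_means(z)
print(len(m), m.mean())   # 22 per-partition attenuation estimates
```

The 22 partition means then play the role of the repeated measurements from which prediction intervals are formed.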
A small condition-dependent long-term drift remains in the analog data after normalizing the drift of the 3 dB observations. The effect of the long-term drift is to increase the width of the prediction intervals above what they would be if the normalization process had removed all drift. In this way, our uncertainty assessment accounts for the observed condition-dependent drift.
We thank our NIST colleagues Sae Woo Nam and Adriana Lita for providing the TES and Thomas Gerrits and Sergey Polyakov for help with the data acquisition system. Work of the United States Government. Not subject to copyright.
References and links
2. A. Migdall, “Correlated-photon metrology without absolute standards,” Phys. Today 52, 41–46 (1999). [CrossRef]
3. A. R. Beaumont, J. Y. Cheung, C. J. Chunnilall, J. Ireland, and M. G. White, “Providing reference standards and metrology for the few photon counting community,” Nucl. Inst. and Meth. A 610, 183–187 (2009). [CrossRef]
4. A. P. Worsley, H. B. Coldenstrodt-Ronge, J. S. Lundeen, P. J. Mosley, B. J. Smith, G. Puentes, N. Thomas-Peter, and I. A. Walmsley, “Absolute efficiency estimation of photon-number-resolving detectors using twin beams,” Opt. Express 17, 4397–4411 (2009). [CrossRef] [PubMed]
5. R. Klein, R. Thornagel, and G. Ulm, “From single photons to milliwatt radiant power in electron storage rings as radiation sources with a high dynamic range,” Metrologia 47, R33–R40 (2010). [CrossRef]
6. A. Avella, G. Brida, I. P. Degiovanni, M. Genovese, M. Gramegna, L. Lolli, E. Monticone, C. Portesi, M. Rajteri, M. L. Rastello, E. Taralli, P. Traina, and M. White, “Self consistent, absolute calibration technique for photon number resolving detectors,” Opt. Express 19, 23249–23257 (2011). [CrossRef] [PubMed]
7. A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin, Bayesian Data Analysis (Chapman & Hall/CRC, Boca Raton, FL, 2014).
8. Z. H. Levine, T. Gerrits, A. L. Migdall, D. V. Samarov, B. Calkins, A. E. Lita, and S. W. Nam, “An algorithm for finding clusters with a known distribution and its application to photon-number resolution using a superconducting transition-edge sensor,” J. Opt. Soc. Am. B 29, 2066–2073 (2012). [CrossRef]
9. P. C. Humphreys, B. J. Metcalf, T. Gerrits, T. Hiemstra, A. E. Lita, S. W. Nam, A. Datta, W. S. Kolthammer, and I. A. Walmsley, “Tomography of photon-number resolving continuous-output detectors,” arXiv:1502.07649 (2015).
10. J. H. Lehman and C. L. Cromer, “Optical trap detector for calibration of optical fiber power meters: coupling efficiency,” Appl. Opt. 41, 6531–6536 (2002). [CrossRef]
11. G. Eppeldauer, “Noise-optimized silicon radiometers,” J. Res. Natl. Inst. Stand. Technol. 105, 209–219 (2000). [CrossRef]
12. The mention of commercial products does not imply endorsement by the authors’ institutions nor does it imply that they are the best available for the purpose.
13. D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (Society for Industrial and Applied Mathematics, Philadelphia, PA, 1977), p. 159.
14. OZ Optics Limited, “Digital variable attenuator,” http://www.ozoptics.com/ALLNEW_PDF/DTS0007.pdf, 2013, p. 3.
15. H. A. Haus, Electromagnetic Noise and Quantum Optical Measurements (Springer, Berlin, 2000). Sect. 9.2. [CrossRef]
17. A. Czitrovszky, A. Sergienko, P. Jani, and A. Nagy, “Measurement of quantum efficiency using entangled photons,” Laser Phys. 10, 86–89 (2000).
19. Z. H. Levine, B. L. Glebov, A. L. Migdall, T. Gerrits, B. Calkins, A. E. Lita, and S. W. Nam, “Photon-number uncertainty in a superconducting transition-edge sensor beyond resolved-photon-number determination,” J. Opt. Soc. Am. B 31, B20–B24 (2014). [CrossRef]
20. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2014).
21. Stan Development Team, Stan: A C++ Library for Probability and Sampling, Version 2.5.0 (2014).
22. Stan Development Team, Stan Modeling Language User’s Guide and Reference Manual, Version 2.5.0 (2014).
23. M. D. Hoffman and A. Gelman, “The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo,” J. Mach. Learn. Res. 15, 1593–1623 (2014).