Abstract

In this paper, we use digital holography (DH) in the off-axis image plane recording geometry with a 532 nm continuous-wave laser to measure the system efficiencies (multiplicative losses) associated with a closed-form expression for the signal-to-noise ratio (SNR). Measurements of the mixing efficiency (36.8%) and the reference noise efficiency (74.5%) provide an expected total system efficiency of 22.7% ± 6.5% and a measured total system efficiency of 21.1% ± 6.3%. These total system efficiencies do not include our measurements of the signal noise efficiency (3%–100%), which are highly dependent on the signal strength and become significant for SNRs > 100. Thus, the results confirm that the mixing efficiency is generally the dominant multiplicative loss with respect to the DH system under test; however, excess reference and signal noise are significant multiplicative losses as well. Previous results also agree with these experimental findings.

1. INTRODUCTION

Digital holography (DH) has various remote sensing applications, including long-range imaging [1], 3D imaging [2], and wavefront sensing [3]. In particular, DH offers distinct benefits over traditional wavefront sensing methods, such as a Shack–Hartmann wavefront sensor, given deep-turbulence conditions [4–6]. These conditions often arise from long, horizontal-propagation paths through the atmosphere and yield low signal-to-noise ratios (SNRs).

With low SNRs in mind, DH uses a strong reference to boost the weak signal above the noise and provide access to the complex optical field. Since the SNR limits the effective range of a fielded DH system, it is convenient to treat each source of loss as a multiplicative factor in the derived SNR expression to estimate performance. Moving forward, we need to quantify these multiplicative losses in order to characterize the performance of a fielded DH system.

In terms of performance, the dominant multiplicative loss, or system efficiency, is typically the mixing efficiency (i.e., the detected visibility of the signal and reference interference). For example, depolarization from rough-surface scattering reduces the SNR to 50% [7], and the pixel modulation transfer function (MTF) reduces the SNR to approximately 66% [8], yielding a mixing efficiency of 33%. Even with highly efficient focal plane arrays (FPAs) and highly transmissive optics, the ideal, total system efficiency is generally below 30%. This last statement also assumes ideal laser coherence and noise, providing an upper bound on the total system efficiency one can expect from a DH system.

While SNR measurements of coherent lidar systems are available [9–11], there are distinct differences between these and the SNR measurements associated with DH systems. For example, DH uses spatial modulation, while coherent lidar uses temporal modulation. Since the demodulation techniques are different, some of the system efficiencies are different. To our knowledge, there has not been an examination of the system efficiencies associated with a DH system. Therefore, this paper presents a comparison between the expected and measured SNRs, a quantification of the major system efficiencies, and an examination of excess noise sources, all with respect to our DH system under test. In what follows, Section 2 provides the closed-form expressions needed for a comparison between expected and measured SNRs, Section 3 provides an overview of the experimental methods and data processing, Section 4 presents the results with discussion, and Section 5 provides a conclusion.

2. CLOSED-FORM EXPRESSIONS FOR SNR

In this section, we present the details of the closed-form expressions for the SNR associated with DH in the off-axis image plane recording geometry (IPRG). Figure 1 shows an illustration of a DH system in the off-axis IPRG. Note that the detailed development of these closed-form expressions can be found in [12,13]. Also note that we experimentally measure the SNR in the next section to analyze the validity of our model and our assumptions.


Fig. 1. DH system in the off-axis IPRG. Here, we split the light from the master oscillator (MO) laser into two paths: the illuminator and the local oscillator (LO). Given the illuminator, we illuminate an object with an optically rough surface, and we collect the scattered speckle in a pupil given the appropriate receiver optics. The receiver optics then focus the received speckle, denoted as the signal complex optical field, U_S, onto the FPA. In the other path, the LO provides the reference complex optical field, U_R, which we inject off axis in the pupil plane at (x_R, y_R) to illuminate the FPA.


A. Estimated Signal and Noise Variance

Recall that we illustrate and explain DH in the off-axis IPRG in Fig. 1. At the FPA, the signal and reference interfere to produce the hologram irradiance, iH, such that

$$i_H(x,y) = \left|U_S(x,y) + U_R(x,y)\right|^2,\tag{1}$$
where US and UR are the complex optical fields of the signal and reference, respectively. In units of watts per square meter, iH is spatially continuous and real valued. Throughout the remaining analysis, we assume that the reference uniformly illuminates the FPA, such that |UR(x,y)|2=|AR|2, where AR is the complex amplitude of the reference.

As shown in Fig. 1, the FPA records the hologram on an M×N array of pixels as a per-pixel mean number of hologram photoelectrons, m̄_H(x,y), in units of photoelectrons (pe). In turn, we model the noise as being additive with variance σ_n². We then digitize the hologram with a corresponding pixel gain, g_{A/D}, to produce the digital hologram with noise, d_H^+(x,y), in units of digital numbers (DN). Then, we demodulate the hologram, as illustrated and explained in Fig. 2.


Fig. 2. Illustration of the demodulation process for a digital hologram. The reference spatially modulates the signal (see the far-left fringes on top of the square image), and the FPA records the digital hologram with noise (i.e., d_H^+). Next, we perform an inverse discrete Fourier transform, DFT⁻¹, on d_H^+, which produces d̃_H^+ in the Fourier plane (magnitude shown). Four terms arise in the Fourier plane: a strong DC term from the reference irradiance (i.e., |A_R|²); the autocorrelation of the pupil, Ũ_S ⋆ Ũ_S (centered at DC), which produces a 2D chat profile [14]; the signal complex optical field, Ũ_S (shifted off axis), since the image and pupil planes are Fourier-transform pairs; and the conjugate of the signal complex optical field, Ũ_S* (shifted off axis in the opposite direction), since the Fourier transform has Hermitian symmetry. We then shift and window the Fourier plane to obtain the estimated signal in the pupil plane. Lastly, we perform a discrete Fourier transform, DFT, to obtain the estimated signal, Û_S, in the image plane (magnitude shown).


After the demodulation process, we ideally obtain our estimated signal, U^S. In particular,

$$\hat{U}_S(x,y) = g_{A/D}\,\frac{\tau p^2}{h\nu}\,U_R^{*}U_S(x,y) + g_{A/D}\sqrt{\frac{\pi}{4q_I^2}}\,\frac{\sigma_n(x,y)}{\sqrt{2}}\,N_k(x,y),\tag{2}$$
where τ is the integration time, p is the square-pixel width, h is Planck's constant, ν is the master oscillator (MO) laser frequency, q_I is the image plane sampling quotient, σ_n² is again the total noise variance, and N_k is the kth realization of circular-complex Gaussian random numbers with zero mean and unit variance in both the real and imaginary components (hence the √2 factor). In Eq. (2), q_I represents the number of circular pupil diameters across the Fourier plane. Equivalently, q_I represents the number of pixels across the half-width of the Airy disk, such that
$$q_I = \frac{\lambda z_I}{p\,d_p},\tag{3}$$
where λ is the MO laser wavelength, z_I is the image distance, and d_p is the pupil diameter. The factor of π/(4q_I²) in Eq. (2) then accounts for the portion of the noise windowed from the Fourier plane. To account for σ_n², we include the shot noise from the reference and signal, in addition to the read noise with variance σ_r², viz.,
$$\sigma_n^2(x,y) = \bar{m}_R + \bar{m}_S(x,y) + \sigma_r^2,\tag{4}$$
where m̄_R is the mean number of reference photoelectrons and m̄_S(x,y) is the per-pixel mean number of signal photoelectrons. Note that the shot noise from the reference is not spatially varying, since we have assumed a uniform reference. We neglect quantization noise, since it is typically much less than σ_r² for high-bit-depth FPAs. Other noise sources exist [15,16], but we assume they are also negligible.
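To make the demodulation chain concrete (record, inverse transform, window, shift, forward transform; cf. Fig. 2), the following sketch implements it with NumPy. It is a minimal illustration under stated assumptions, not our MATLAB processing code: the square array size, the carrier location, and the circular-window definition are illustrative.

```python
import numpy as np

def demodulate_hologram(d_h, center, q_i):
    """Demodulate an off-axis digital hologram (cf. Fig. 2).

    d_h    : square N x N array of hologram values (DN)
    center : (col, row) pupil center in the FFT-shifted Fourier plane
    q_i    : image plane sampling quotient (pupil diameters across the
             Fourier plane), so the pupil window diameter is N / q_i pixels
    """
    n = d_h.shape[0]
    # Inverse DFT to the Fourier (pupil) plane, DC moved to the array center.
    f_plane = np.fft.fftshift(np.fft.ifft2(d_h))
    # Circular window of radius N/(2 q_i) about the off-axis pupil center.
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(xx - center[0], yy - center[1])
    window = r <= n / (2.0 * q_i)
    # Shift the windowed pupil back to DC by rolling the array.
    pupil = np.roll(np.roll(f_plane * window, n // 2 - center[1], axis=0),
                    n // 2 - center[0], axis=1)
    # Forward DFT back to the image plane gives the signal estimate.
    return np.fft.fft2(np.fft.ifftshift(pupil))
```

As a quick check, demodulating a synthetic hologram formed from a constant signal and an off-axis plane-wave reference returns the constant signal amplitude across the image plane.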

B. Signal-to-Noise Ratio

In the analysis that follows, we make use of the power definition for the signal-to-noise ratio, S/N, such that

$$S/N(x,y) = \eta_T\,\frac{E\{|\hat{U}_S(x,y)|^2\}}{V\{\hat{U}_S(x,y)\}},\tag{5}$$
where η_T is the total system efficiency, E{·} is the expectation operator, and V{·} is the variance operator. Provided Eq. (2), the numerator follows as
$$E\{|\hat{U}_S(x,y)|^2\} = g_{A/D}^2\,\bar{m}_S(x,y)\,\bar{m}_R,\tag{6}$$
and the denominator follows as
$$V\{\hat{U}_S(x,y)\} = g_{A/D}^2\,\frac{\pi}{4q_I^2}\,\sigma_n^2(x,y).\tag{7}$$
Substituting Eq. (4) into Eq. (7), S/N becomes
$$S/N(x,y) = \eta_T\,\frac{4q_I^2}{\pi}\,\frac{\bar{m}_S(x,y)\,\bar{m}_R}{\bar{m}_R + \bar{m}_S(x,y) + \sigma_r^2}.\tag{8}$$
When we assume a strong reference, m̄_R ≫ m̄_S(x,y) and m̄_R ≫ σ_r², and we reach the shot-noise limit, such that
$$S/N(x,y) \approx \eta_T\,\frac{4q_I^2}{\pi}\,\bar{m}_S(x,y).\tag{9}$$
Similarly, we can derive the ideal, radiometric SNR, S/N_R. Using [13], we obtain the following closed-form expression:
$$S/N_R(x,y) = \frac{\rho}{\pi}\,\lambda^2\,\frac{\tau}{h\nu}\,\frac{P_o(M_T x_o, M_T y_o)}{A_o},\tag{10}$$
where ρ is the surface reflection coefficient, the factor of π accounts for the Lambertian scattering, the factor of λ² accounts for the speckle [7], P_o is the power incident on the object, A_o is the area of the uniform object, and the coordinates (M_T x_o, M_T y_o) are the magnified object-plane coordinates that convert to image-plane coordinates. Note that we can derive Eq. (10) from Eq. (9) with the proper radiometry and geometric- or ray-optics substitutions to satisfy imaging. Also note that we will use Eq. (10) in the analysis that follows as the expected SNR without multiplicative losses. The ratio of our measured SNR, S/N, to S/N_R will then enable us to calculate η_T.
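As a numerical sanity check on Eqs. (8) and (9), the short sketch below evaluates both expressions; the function names are ours, and the default η_T = 1 corresponds to the lossless case.

```python
import numpy as np

def snr_closed_form(m_s, m_r, q_i, sigma_r2, eta_t=1.0):
    """Eq. (8): SNR with reference, signal, and read noise (pe units)."""
    return eta_t * (4.0 * q_i**2 / np.pi) * (m_s * m_r) / (m_r + m_s + sigma_r2)

def snr_shot_limit(m_s, q_i, eta_t=1.0):
    """Eq. (9): strong-reference (shot-noise) limit."""
    return eta_t * (4.0 * q_i**2 / np.pi) * m_s
```

With the values of our system under test (q_I = 2.70, m̄_S ≈ 96 pe, m̄_R ≈ 2500 pe, σ_r² ≈ 5.5 pe²), the full expression in Eq. (8) falls only a few percent below the shot-noise limit of Eq. (9), consistent with the strong-reference assumption.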

C. System Efficiencies

To account for the expected total system efficiency, η_T, we express it as the product of several independent system efficiencies, viz.,

$$\eta_T(x,y) = \eta_t\,\eta_q\,\eta_m\,\eta_R\,\eta_S(x,y).\tag{11}$$
Here, η_t is the transmission efficiency (atmospheric and optical), η_q is the quantum efficiency of the FPA, η_m is the mixing efficiency, η_R is the reference noise efficiency, and η_S(x,y) is the signal noise efficiency. The mixing efficiency represents how well the detected reference and signal interfere to produce the spatial modulation known as fringes. It is the product of two other efficiencies: the polarization efficiency, η_p, and the spatial integration efficiency, η_s. In what follows, we assume ideal temporal coherence between the signal and reference; otherwise, nonideal coherence would factor into η_m.

Recall that the signal becomes fully depolarized via rough-surface scattering from our optically rough object, which we assume is comprised of a dielectric material. Therefore, only half of the signal interferes with the polarized reference [7], so that η_p = 50%. The spatial integration efficiency accounts for the detection of the spatial modulation with finite pixels. Using [17], we can mathematically realize the pixel-spatial integration of the hologram irradiance as a 2D convolution of the hologram irradiance with the spatial extent of the pixel. In the Fourier plane, this convolution turns into a multiplication with the pixel MTF. Given a square pixel, we estimate the pixel MTF with a 2D sinc function, where sinc(x) = 1 when x = 0 and sinc(x) = sin(πx)/(πx) otherwise. We, in turn, approximate η_s as

$$\eta_s = \left\langle w(f_x,f_y)\,\mathrm{sinc}^2(p f_x)\,\mathrm{sinc}^2(p f_y)\right\rangle,\tag{12}$$
where w(f_x, f_y) is the window function for the pupil in the Fourier-plane coordinates (f_x, f_y), and ⟨·⟩ denotes the spatial average. Note that the pixel MTF is squared because of our power definition for the SNR [cf. the sinc-squared terms in Eq. (12)]. Also note that the value of η_s depends on the pupil window size and the pupil location in the Fourier plane. For example, our experiment had q_I = 2.70 and the pupil centered at the normalized frequencies (f_x, f_y) = (x_R/λz_I, y_R/λz_I) = (0.25, 0.26), which yields η_s = 64.4%.
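The spatial average in Eq. (12) lends itself to a direct numerical estimate. The sketch below, with an illustrative grid resolution, averages the squared pixel MTF over a circular pupil window in normalized frequency units u = p·f, so the Fourier plane spans ±1/2 and the window radius is 1/(2q_I). For q_I = 2.70 and a window centered at (0.25, 0.26), this discretized estimate lands in the low-to-mid 60% range, broadly consistent with the value quoted above.

```python
import numpy as np

def spatial_integration_efficiency(q_i, center, n=2001):
    """Estimate eta_s (Eq. 12): mean of the squared pixel MTF over a
    circular pupil window, in normalized frequency units u = p * f."""
    u = np.linspace(-0.5, 0.5, n)
    ux, uy = np.meshgrid(u, u)
    window = np.hypot(ux - center[0], uy - center[1]) <= 1.0 / (2.0 * q_i)
    mtf2 = np.sinc(ux)**2 * np.sinc(uy)**2  # np.sinc(x) = sin(pi x)/(pi x)
    return mtf2[window].mean()

eta_s = spatial_integration_efficiency(2.70, (0.25, 0.26))
```

Note that this ideal-sinc model neglects the microlens behavior of a real FPA, which Section 4.A shows can shift η_s by roughly 10%.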

In Eq. (9), we assume the use of a uniform, strong reference with Poisson-distributed shot noise. However, in practice, the reference is not spatially uniform, and single-mode lasers typically exhibit some excess amplitude noise [18,19]. Thus, we include the reference noise efficiency, η_R, which is the ratio of the reference shot noise (i.e., m̄_R) to the demodulated reference noise.

We also quantify the strong reference approximation in Eq. (9) by introducing the signal noise efficiency, η_S(x,y). At high SNRs, the strong reference approximation becomes less valid as the signal strengthens: not only does the signal shot noise increase, but the amplitude of the pupil autocorrelation in the Fourier plane also increases, part of which is sampled by the window function (cf. Fig. 2) and provides excess signal noise. We quantify this excess signal noise with the signal noise efficiency, η_S, which is the ratio of the demodulated reference noise to the sum of the demodulated reference and demodulated signal noises.

Before moving on to the next section, we first report the initial expected values for the various system efficiencies in Table 1. Assuming ideal coherence between the signal and reference, ηm=32.2%, since ηp=50% and ηs=64.4%. In addition, we assume no excess noise from the reference, signal, or other noise sources, which allows us to simplify the SNR expression [cf. Eq. (8)] to be dependent only on the signal [cf. Eq. (9)]. The same goes for the radiometric SNR [cf. Eq. (10)].


Table 1. Expected System Efficiencies

3. EXPERIMENTAL METHODS AND DATA PROCESSING

This section describes the experimental methods and procedures used in this paper. To compare the recorded data to our model, we performed the appropriate data processing to quantify the system efficiencies, or multiplicative losses, associated with our closed-form expressions for the SNR. Recall that we formulated these expressions and multiplicative losses in the previous section [cf. Eqs. (9)–(11)].

A. Experimental Setup

As depicted in Fig. 3, we implemented a DH system in the off-axis IPRG. The MO laser was a Cobolt Samba 1000 continuous-wave diode-pumped solid-state laser with a wavelength of 532.1 nm, a linewidth of <1 MHz, and an output power of 1 W. The MO laser output beam was nearly Gaussian (M² < 1.1) and collimated (full-angle divergence <1.2 mrad) with a diameter of 700 μm at the laser exit aperture. A Faraday isolator prevented backreflections from re-entering the MO laser, and a half-wave plate (HWP) and polarizing beam splitter (PBS) split a portion of the MO laser light to a beam dump, allowing us to adjust the MO laser's power. Another HWP and PBS split the MO laser light into two paths: the local oscillator (LO) and the illuminator. To create the LO, we rotated the incident linear polarization state with a HWP to match the stress-rod orientation of the polarization-maintaining fiber before coupling with the appropriate optics. We adjusted the illuminator path strength with a neutral density filter and directed it through a 20× beam expander to illuminate a sheet of Labsphere Spectralon with an approximate 4 cm spot diameter. The resulting rough-surface scattering was approximately Lambertian with 99% reflectivity. Finally, to create new speckle realizations, we placed the sheet of Labsphere Spectralon on a tilted rotation stage.


Fig. 3. Experimental setup for our DH system under test. BD, beam dump; BE, beam expander; FC, fiber coupler; FI, Faraday isolator; λ/2, half-wave plate; M, mirror; MO, master oscillator; ND, neutral density filter; PBS, polarizing beam splitter.


A 1 in. lens with a focal length of 350 mm imaged the scattered signal light (with an object distance of 2.46 m) onto a monochrome Point Grey Grasshopper3 camera. The lens had a 532 nm anti-reflective coating and a transmission of 99.7%. We injected our LO near the lens and angled it such that the resulting fiber reference light was centered near the middle of the FPA. This geometry gave a measured q_I = 2.70 with the pupil nearly centered in the top-right quadrant of the Fourier plane at (f_x, f_y) = (x_R/λz_I, y_R/λz_I) = (0.25, 0.26), since (x_R, y_R) = (1.56 cm, 1.62 cm).

To mitigate vibrational effects, we placed the entire experimental setup on a floating optical table. In turn, we verified that the MO laser linewidth was <1 MHz with a Fabry–Perot (FP) interferometer with a free spectral range of 1.5 GHz, a finesse of >1500, and a spectral resolution of <1 MHz. Specifically, we measured the full width at half-maximum of an approximately Lorentzian profile to be about 1.2 MHz. Since the observed line shape is the convolution of the FP and laser line shapes, this measurement agreed with the manufacturer's specified linewidth and coherence length. The path difference between the signal and reference was 0.1 m, and we therefore assumed negligible coherence losses. In addition, a power meter measurement showed that the variation in laser power was <1%. We conducted the same measurement for the fiber reference light and observed similar results. Furthermore, we verified the polarization of the signal and reference. We measured the fiber output polarization with a polarizer and two power meters over an hour, and it was >99% polarized. Measuring the scattered signal light reflected from the Spectralon in the same way, we found that it was >99% unpolarized.

The Grasshopper3 camera (GS3-U3-32S4M-C) had a 2048×1536 CMOS array with a specified 100% fill factor and 3.45 μm square pixels. The camera had two cover glasses, whose transmission losses were incorporated into the η_q specification (cf. Table 1). We used the camera in mode 7, which has η_q = 76%, but we removed the front cover glass to reduce the reference etaloning effects. The removed cover glass had a transmission of 91.7%, which gave an effective quantum efficiency of η_q = 83% for the camera. We recorded frames of data using MATLAB. Along with the floating optical table, we set the camera integration time to 1 ms, which stabilized our SNR measurements. The camera pixel gain was 1/0.17, which converts the received photoelectrons to 16-bit digital numbers on the camera [20]. Mode 7 then converts the 16-bit digital numbers to 12-bit digital numbers, and MATLAB reads the pixel values as 16-bit digital numbers; therefore, the first four bits of the recorded 16-bit digital numbers are padded. This outcome translates to a quantization noise variance of σ_q² = 21.3 DN². Note that the read noise variance for mode 7 was about 5.5 pe² or 189.5 DN², and the pixel well depth was 10,482 pe. We set the reference strength to approximately 2,500 pe, or about a quarter of the pixel well depth.

To perform speckle averaging, we rotated the Labsphere Spectralon stage between the recording of the hologram and signal frames to obtain different speckle realizations. We recorded 20 speckle realizations and 20 shot-noise averaging frames for each speckle realization, totaling 400 hologram and signal frames for each dataset. Additionally, we recorded 100 reference and 100 background frames for each dataset. In total, we collected six datasets with different signal strengths while approximately maintaining the fiber reference light and background light levels.

B. Data Processing

We show example recorded and demodulated frames in Fig. 4. Here, we demodulated each recorded frame according to Fig. 2 in MATLAB, including the background frames. Prior to demodulation, we calculated the frame-averaged background frame [i.e., m̄_B(x,y)] and subtracted it from the reference and signal frames to eliminate background noise. We did not modify the hologram frames with background subtraction, so the spatial modulation was not altered. Then, we converted each frame from DN to pe using the manufacturer's specification for gain: g_{A/D} = 1/0.17. Since the gain cancels out and Eq. (9) is in terms of pe, this conversion provided a better comparison between the recorded frames and the demodulated energies. We did not perform any nonuniformity correction, since the pixel-to-pixel performance variation was likely averaged out over the 3.2 megapixel array. Table 2 shows the pixel- and frame-averaged values for the background, signal, reference, and hologram numbers of photoelectrons (i.e., m̄_B, m̄_S, m̄_R, and m̄_H, respectively).


Fig. 4. Each plot is from dataset 5, where m̄_S = 96 pe, and represents the full frame. Here, (a) shows a single hologram frame, (b) shows the mean hologram energy in the Fourier plane, (c) shows the mean hologram energy in the image plane, Ē_H(x,y), (d) shows the mean number of reference photoelectrons, m̄_R(x,y), (e) shows the mean reference energy in the Fourier plane, (f) shows the mean reference noise energy in the image plane, Ē_R(x,y), (g) shows the mean number of signal photoelectrons, m̄_S(x,y), (h) shows the mean signal energy in the Fourier plane, and (i) shows the mean signal noise energy in the image plane, Ē_S(x,y). Note that the first and third columns have square pixels with a rectangular array, which gives rise to rectangular pixels with a square array in the second column.



Table 2. Pixel- and Frame-Averaged Values with Respect to the Six Datasets

We defined the pupil window in the Fourier plane by using the definition of the image plane sampling quotient, q_I [cf. Eq. (3)]. In addition, we used the same pupil window to demodulate the reference, signal, and background frames. We then averaged the frames associated with the squared magnitude of the demodulated hologram, reference, signal, and background frames, which yielded the mean hologram energy, Ē_H(x,y); the mean reference noise energy, Ē_R(x,y); and the mean signal noise energy, Ē_S(x,y). As such, the mean total noise energy, Ē_N(x,y), was the sum of our mean noise energies, such that Ē_N(x,y) = Ē_R(x,y) + Ē_S(x,y), and Table 2 provides pixel-averaged values for these noise energies. Note that we excluded the background noise energy from Ē_N(x,y) because it was insignificant.

C. Signal Fit

Figure 5 shows the radial profile of the per-pixel mean number of signal photoelectrons, m̄_S(x,y), for the six datasets provided in Table 2. For comparisons to the radiometric SNR [cf. Eq. (10)], we needed a profile for the signal to estimate the power at the object, P_o. For this purpose, we considered the signal's speckle noise. The speckle area was approximately 9.7p², with a measured squared speckle contrast, C² (i.e., the ratio of the pixel variance to the squared mean), of approximately 0.45, which was within 1% of Goodman's theory [21,22]. Speckle averaging reduced the measured C² by approximately 80% to 0.024. Along with the smoothness of Fig. 5, we concluded that speckle averaging was sufficient to fit a profile to m̄_S(x,y).


Fig. 5. Azimuthal average of m¯S for the six datasets.


With the above details in mind, we fit a 2D Gaussian profile, G(x,y), to m̄_S(x,y), such that

$$G(x,y) = A\exp\!\left(-\frac{1}{2}\left[\left(\frac{x-x_c}{\sigma_x}\right)^2 + \left(\frac{y-y_c}{\sigma_y}\right)^2\right]\right),\tag{13}$$
where the fitting parameters were A, the Gaussian amplitude; x_c and y_c, the Gaussian center location; and σ_x and σ_y, the Gaussian widths in the x and y directions. We provide some fit parameters of interest in Table 3, along with the r-squared (r²) fitting metric. The reported uncertainty of each fit parameter was within a few percent. We believe the stark difference in dataset 1's (m̄_S = 4.0 pe) fit parameters was due to the wings of the beam falling below the noise floor of the detector. In addition, we believe the variations between datasets 2–5 were due to realignment of the illuminator and beam expander between dataset collections. Figure 6 shows the relative percent error in the fit for dataset 5, where m̄_S = 96 pe. Here, we observed some structure from an imperfect Gaussian beam, which we believe is due to minor misalignment of the beam expander or to the input beam overfilling the input aperture of the beam expander, but not to significant laser multi-mode behavior.
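A fit like the one in Eq. (13) can be reproduced with SciPy's curve_fit. This is a minimal sketch with our own function names and a crude initial guess, not the exact fitting routine used to produce Table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, a, xc, yc, sx, sy):
    """Eq. (13): 2D Gaussian evaluated on flattened (x, y) coordinates."""
    x, y = coords
    return a * np.exp(-0.5 * (((x - xc) / sx)**2 + ((y - yc) / sy)**2))

def fit_gauss2d(m_s):
    """Fit Eq. (13) to a per-pixel mean signal map m_s (2D array)."""
    ny, nx = m_s.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    coords = (xx.ravel(), yy.ravel())
    # Crude initial guess: peak amplitude, array center, quarter-widths.
    p0 = (m_s.max(), nx / 2.0, ny / 2.0, nx / 4.0, ny / 4.0)
    popt, _ = curve_fit(gauss2d, coords, m_s.ravel(), p0=p0)
    return popt  # a, xc, yc, sigma_x, sigma_y
```

On a noiseless synthetic Gaussian, the fit recovers the generating parameters to high precision; on measured data, the residuals reveal the beam structure discussed above.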


Table 3. Gaussian Fit Results with Respect to the Six Datasets


Fig. 6. Relative percent error of the fit for dataset 5, where m¯S=96pe.


In what follows, we show multiple figures with azimuthally averaged measurements (cf. Fig. 5). We performed the azimuthal averages with respect to the normalized Gaussian radius, r_G, because of the slightly different x and y widths caused by the tilted Labsphere Spectralon stage. Specifically, we normalized the radius to the x and y half-widths at half-maximum (HWHM), viz.,

$$r_G(x,y) = \sqrt{\left(\frac{x-x_c}{\sqrt{2\ln 2}\,\sigma_x}\right)^2 + \left(\frac{y-y_c}{\sqrt{2\ln 2}\,\sigma_y}\right)^2},\tag{14}$$
where the factor of √(2 ln 2) converts σ_{x,y} to the HWHM. Therefore, r_G = 1 corresponds to the Gaussian profile at half its maximum value, which provides a consistent spatial comparison across datasets.
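The azimuthal averaging over r_G can be sketched as a binning operation, assuming the Gaussian fit parameters from Eq. (13) are already in hand; the bin count and radial extent below are illustrative.

```python
import numpy as np

def azimuthal_average(img, xc, yc, sx, sy, nbins=50, r_max=2.5):
    """Azimuthally average img over the normalized Gaussian radius r_G
    (Eq. 14), where r_G = 1 falls at the half-width at half-maximum."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    hwhm = np.sqrt(2.0 * np.log(2.0))  # converts sigma to the HWHM
    r_g = np.hypot((xx - xc) / (hwhm * sx), (yy - yc) / (hwhm * sy))
    edges = np.linspace(0.0, r_max, nbins + 1)
    idx = np.digitize(r_g.ravel(), edges) - 1
    vals = img.ravel()
    prof = np.array([vals[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, prof
```

As a check, averaging a symmetric Gaussian image this way yields a profile that passes through half the peak value near r_G = 1.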

D. Radiometric SNR

To calculate the radiometric SNR, S/N_R, for the measured total system efficiency, we measured the power of the illuminator beam at the tilted Labsphere Spectralon stage over a minute and took a picture of the power-meter location with the FPA. The power meter was a Thorlabs PM100D with an S130C photodiode; it was 6 months into its 2 year calibration lifespan and had a vendor-specified uncertainty of ±3%. In MATLAB, we drew a mask over the detector area in the power-meter picture. We then scaled G(x,y) based on the average power measurement and the mask to obtain the spatial power distribution, P_o(x,y), in Eq. (10). Figure 7 shows the azimuthally averaged S/N_R for each of the six datasets. When compared to Fig. 5, the SNRs reported in Fig. 7 are almost an order of magnitude greater than the signal. This outcome arises from the 4q_I²/π factor in Eq. (9), which was about 9.3.


Fig. 7. Azimuthal average of S/NR for the six datasets.


4. MEASUREMENTS, RESULTS, AND DISCUSSION

In this section, we present three measured efficiencies: the mixing efficiency, η_m, which again is the product of the polarization efficiency, η_p, and the spatial integration efficiency, η_s; the reference noise efficiency, η_R; and the signal noise efficiency, η_S. These measured efficiencies refine our expected total system efficiency for comparison to our measured total system efficiency. Additionally, we present results from a previous experiment [8], where we corrected the camera integration time to 55 μs. Appendix A contains additional details with respect to this previous experiment.

A. Measured Mixing Efficiency ηm

To determine ηm, we calculated the ratio of the hologram energy without noise to the product of the mean number of signal and reference photoelectrons, such that

$$\eta_m(x,y) = \frac{\bar{E}_H(x,y) - \bar{E}_N(x,y)}{\bar{m}_R(x,y)\,\bar{m}_S(x,y)}.\tag{15}$$
As a reminder, ηm represents how well the detected reference and signal interfere and is the product of the polarization efficiency (ηp=50%) and spatial integration efficiency (ηs=64.4%). Additionally, ηm would capture any losses not quantified earlier due to vibration or coherence effects, which we assume to be negligible in the present analysis. Figure 8(a) shows ηm(x,y) for dataset 5, where m¯S=96pe, and Fig. 8(b) shows the radial profile of ηm for the six datasets.
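Equation (15) amounts to a per-pixel ratio of frame-averaged maps; a minimal sketch (with our own function name and a small guard against division by zero) is:

```python
import numpy as np

def mixing_efficiency(e_h, e_n, m_r, m_s, eps=1e-12):
    """Eq. (15): per-pixel mixing efficiency from the mean hologram
    energy e_h, mean total noise energy e_n, and the mean reference and
    signal photoelectron maps m_r, m_s (all in consistent pe units)."""
    return (e_h - e_n) / np.maximum(m_r * m_s, eps)
```

As a sanity check, constructing synthetic maps with Ē_H = η_m m̄_R m̄_S + Ē_N recovers η_m exactly.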


Fig. 8. (a) Mixing efficiency η_m for dataset 5, where m̄_S = 96 pe. (b) The azimuthal average of η_m for each dataset, with the expected value of 32.2% shown as the black dashed line.


In Fig. 8(a), we observed that the measurement was noisy but uniform over the area of illumination. Furthermore, Fig. 8(b) shows the azimuthal average for the six datasets, where we saw that η_m was relatively constant for each dataset except for dataset 1. For dataset 1, m̄_S = 4 pe, and we determined that the FPA's absolute sensitivity threshold (where the camera SNR = 1) was 4 pe, so we recorded those signal frames at the detection limit of the FPA, which explains this deviation. More importantly, datasets 2–6 were 4.6 percentage points greater than the expected η_m of 32.2% from Section 2.C. Recall that we modeled the pixel as an ideal square with an ideal-sinc MTF. Since our FPA used microlenses to achieve a specified 100% pixel fill factor, the actual pixel MTF differed from the ideal-sinc model. This increase in η_m as compared to theory is fortunate, but it shows that the pixel MTF can vary η_s by roughly 10%.

We show a summary of the pixel-averaged η_m in Table 4. Here, we took the pixel average over the area of illumination in the frame. We excluded dataset 1 from the average because of the weak signal, as mentioned before. In addition, we believe the large standard deviations stem from the different speckle realizations between the recorded signal and hologram frames. The average η_m for datasets 5 and 6 (the two highest signal strengths) was about one percentage point less than that of datasets 2–4, which can also be seen in Fig. 8(b). We suspect this outcome was due to pixel nonlinearity in the signal frames. Thus, we used the results from datasets 2–6 to obtain an average mixing efficiency of η_m = 36.8% ± 10.2% to incorporate into our expected total system efficiency for comparison to our measured total system efficiency.


Table 4. Mixing and Reference-Noise Efficiency Measurements

B. Measured Reference Noise Efficiency ηR

We determined the reference noise efficiency, η_R, as

$$\eta_R = \frac{\pi}{4q_I^2}\,\frac{\bar{m}_R}{\bar{E}_R},\tag{16}$$
where the factor of π/(4q_I²) again accounts for the ratio of the window area to the total Fourier-plane area, and the m̄_R and Ē_R quantities are both pixel- and frame-averaged values (cf. Table 2). Recall that η_R represents excess reference noise, which means the measured reference noise is greater than the shot noise. Table 4 contains the results from each dataset.

The average η_R of 74.5% corresponded to the measured reference noise being 34% greater than the shot noise (i.e., 1/η_R). To further investigate the source of this excess reference noise, we measured the per-pixel ratio of the reference light variance, σ_R²(x,y), to m̄_R(x,y), which we refer to as the shot-noise ratio. The average of this ratio across all the pixels was 1.11, which demonstrates that the reference noise was 11% greater than the shot noise. We also verified this outcome in the free-space beam to rule out the LO fiber as the source. However, we found that the shot-noise ratio increased by more than a factor of 2 for integration times shorter than 1 ms, and such a temporal dependence is indicative of laser amplitude noise. For comparison, the other experiment (cf. Appendix A), which used a different MO laser and an integration time of 55 μs, gave η_R = 50% and an average shot-noise ratio of 1.36. However, the difference between our shot-noise ratio and η_R led us to believe that some of the excess reference noise arises from the demodulation process.
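Both diagnostics in this subsection reduce to simple ratios. The sketch below evaluates Eq. (16) from pixel- and frame-averaged values and estimates the shot-noise ratio from a stack of reference frames; the stack layout and function names are our own assumptions, not our MATLAB processing code.

```python
import numpy as np

def reference_noise_efficiency(m_r_bar, e_r_bar, q_i):
    """Eq. (16): ratio of the ideal (shot-noise) demodulated reference
    noise, (pi / 4 q_i^2) * m_r_bar, to the measured demodulated
    reference noise energy e_r_bar (both in pe units)."""
    return (np.pi / (4.0 * q_i**2)) * m_r_bar / e_r_bar

def shot_noise_ratio(ref_frames):
    """Average per-pixel ratio of reference variance to mean for a
    (frames, ny, nx) stack; a value of 1 indicates Poisson-limited
    (shot-noise) behavior, while >1 indicates excess amplitude noise."""
    var = ref_frames.var(axis=0, ddof=1)
    mean = ref_frames.mean(axis=0)
    return (var / mean).mean()
```

For purely Poisson-distributed reference frames, the shot-noise ratio converges to 1; measured values above 1, as reported here, indicate excess reference noise.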

We initially assumed the nonuniformity in the reference was filtered in the Fourier plane by the pupil window because it contained low spatial frequencies outside of the window. However, the etalon pattern was still prominent in E¯R(x,y) [cf. Fig. 4(f)], and therefore we did not filter out all of the reference nonuniformity with our Fourier plane window. We also observed this etalon effect in the other experiment (cf. Fig. 14 in Appendix A). Therefore, we suspected the nonuniform reference compounded the excess reference noise captured by ηR.

These results showed that the shot-noise-limited assumption for the reference [cf. Eq. (4)] can be insufficient. In the worst case, this efficiency loss can be as significant as the polarization loss (50%). This efficiency can be improved with better laser control electronics to reduce the laser amplitude noise, a suitable choice of FPA integration time, anti-reflective coatings on the FPA cover glass, and other such measures to reduce the reference etalon effect.

C. Measured Signal Noise Efficiency ηS

For the signal noise efficiency, ηS, we calculated the ratio of the reference noise energy to the total noise energy, such that

ηS(x,y) = E¯R(x,y) / E¯N(x,y).
As a reminder, ηS represents the excess signal noise present in the signal-dependent SNR expression [cf. Eq. (9)]. Note that if we had significant background noise, detector noise, or other noise sources, then we could include their energies in E¯N(x,y) as well. We show the ηS measurement for dataset 5, where m¯S = 96 pe, in Fig. 9(a) and a radial comparison of the six datasets in Fig. 9(b).
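As a numerical illustration of this ratio (all values hypothetical), a uniform reference noise floor combined with a Gaussian, signal-dependent noise term reproduces the qualitative behavior of ηS:

```python
import numpy as np

# Hypothetical per-pixel mean noise-energy frames (arbitrary units):
# E_R is the reference noise energy; E_S is the excess signal noise energy.
y, x = np.mgrid[-32:32, -32:32]
E_R = np.full((64, 64), 100.0)                 # roughly uniform reference noise
E_S = 400.0 * np.exp(-(x**2 + y**2) / 200.0)   # Gaussian, signal-dependent noise

E_N = E_R + E_S       # total noise energy
eta_S = E_R / E_N     # signal noise efficiency, per the equation above

# eta_S approaches 100% in the wings, where the signal noise is negligible,
# and dips at the beam center, where the signal noise dominates.
print(eta_S.min(), eta_S.max())
```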


Fig. 9. (a) Calculated frame of the signal noise efficiency ηS for dataset 5 where m¯S=96pe. (b) The azimuthal average of ηS for each dataset.


In Fig. 9, we observed that ηS is inversely related to the Gaussian signal profile. When m¯R ≫ m¯S, the signal shot noise was negligible, as in dataset 1, where m¯S = 4 pe and ηS ≈ 100%. In the other datasets, ηS was a minor efficiency loss for datasets 2 and 3 (m¯S = 10 pe and 16 pe, respectively) and became a major efficiency loss at the higher signal strengths (e.g., datasets 4–6, where m¯S ≥ 54 pe).

The majority of the excess noise came from the signal via the partially windowed pupil autocorrelation in the Fourier plane. As the signal shot noise increased in strength, so did the pupil autocorrelation, and the latter increased faster. We show this outcome in Fig. 10, where we performed a power regression. On the log-log plot, we fit a line (y = mx + b) to the log10 of the data (i.e., E¯S versus m¯S), where the average residual was <10%. The slope was m = 2.04 with a y-intercept of b = −1.36. This result corresponded to a quadratic curve y = 0.044x² for the data, which demonstrated that E¯S increased faster than the signal shot noise. This quadratic relationship is sound because the pupil autocorrelation energy is proportional to m¯S², whereas the signal shot noise energy is proportional to m¯S. Consequently, the strong reference assumption used in the closed-form SNR expression in Section 2, where the total noise is dominated by the reference noise, becomes less valid when m¯R ≲ 100 m¯S.
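The power regression reduces to a linear least-squares fit in log-log space. A minimal sketch follows, where the data points are hypothetical and constructed to follow the reported power law exactly:

```python
import numpy as np

# Hypothetical (m_S, E_S) pairs constructed to follow E_S = 0.044 * m_S**2,
# mimicking the quadratic growth of the signal noise energy reported above.
m_S = np.array([4.0, 10.0, 16.0, 54.0, 96.0, 577.0])
E_S = 0.044 * m_S**2

# Power regression: fit y = m*x + b to the log10 of the data.
slope, intercept = np.polyfit(np.log10(m_S), np.log10(E_S), 1)

print(slope)          # ~2: quadratic dependence, unlike shot noise (slope ~1)
print(10**intercept)  # ~0.044: the prefactor of the power law
```

With measured data, the slope recovered this way (2.04) and the prefactor (10^b ≈ 0.044) are read directly from the fit coefficients.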


Fig. 10. Power regression (dashed line) of m¯S versus E¯S showing the signal noise energy E¯S was proportional to the square of the per-pixel mean number of signal photoelectrons m¯S and not linear like shot noise.


D. Measured Signal-to-Noise Ratio S/N

We determined S/N as the ratio of the hologram energy without noise to the total noise energy, viz.,

S/N(x,y) = [E¯H(x,y) − E¯N(x,y)] / E¯N(x,y),
where the measured S/N is equivalent to the theoretical S/N [cf. Eq. (9)] when all of our assumptions are valid. We show S/N(x,y) for dataset 5, where m¯S = 96 pe, in Fig. 11(a) and a radial comparison for the six datasets in Fig. 11(b).
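The measured SNR above is a simple per-pixel operation on the mean energy frames; a sketch (array shapes and values hypothetical):

```python
import numpy as np

def measured_snr(E_H, E_N):
    """Measured SNR per the expression above: subtract the mean noise
    energy from the mean hologram energy, then divide by the noise energy."""
    return (E_H - E_N) / E_N

# Hypothetical energy frames: a flat noise floor of 100 with a hologram
# signal term 5000 above it, giving an SNR of 50 everywhere.
E_N = np.full((4, 4), 100.0)
E_H = E_N + 5000.0
print(measured_snr(E_H, E_N)[0, 0])  # 50.0
```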


Fig. 11. (a) Measured SNR S/N for dataset 5, where m¯S=96pe. (b) The azimuthal average of S/N for each dataset.


We observed the curves in Fig. 11(b) were approximately Gaussian at the lower signal strengths (datasets 1–3) but deviated from the Gaussian shape at the higher signal strengths (datasets 4–6). As seen previously with ηS, E¯S became a considerable noise source between datasets 3 and 4 (m¯S = 16 pe and 54 pe, respectively). This outcome occurred when S/N ≈ 100, and the SNR drop was due to the excess signal noise, as previously discussed in Section 4.C. This outcome illustrates that the excess signal noise imposes an upper limit on the SNR, which was approximately 140 at the peak of dataset 4 in Fig. 11(b). Additionally, ηS could appear to be an overwhelming efficiency loss, but it only becomes significant at high signal strengths where the SNR is already substantial. For example, in dataset 6, m¯S ≈ 577 pe (peak from the fit) and ηS ≈ 3%, but the center of S/N ≈ 50. While the SNR upper limit and ηS could appear to be detrimental, we only need an SNR > 10 for applications like deep-turbulence wavefront sensing [4–6].

E. Measured Total System Efficiency ηT

To determine ηT, we calculated the ratio of the SNRs, such that

ηT(x,y) = [1/ηS(x,y)] · S/N(x,y) / S/NR(x,y).
We included the 1/ηS(x,y) factor in Eq. (19) to counter the spatial and signal-strength dependence in the noise, as seen in the measured SNR (cf. Section 4.D). With this last point in mind, we show ηT(x,y) for dataset 5, where m¯S = 96 pe, in Fig. 12(a). The frame is mostly noisy in the center, but toward the wings of the Gaussian profile some structure exists that is similar to what we observed in the fit residuals (cf. Fig. 6). We did not observe any structure related to the non-Gaussian shape that we saw in Fig. 11(a) with respect to the measured SNR. Recall that the ηS(x,y) factor quantifies the spatial effects of the non-Gaussian shape.
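As a numerical check of Eq. (19), consider the following sketch; the input values are hypothetical, chosen only to be consistent with the magnitudes reported in this section:

```python
def total_system_efficiency(snr_measured, snr_shot_limited, eta_S):
    """Eq. (19): divide out eta_S so that the ratio of the measured SNR
    to the shot-noise-limited SNR isolates the total system efficiency."""
    return (1.0 / eta_S) * snr_measured / snr_shot_limited

# Hypothetical per-pixel values: measured SNR of 100, shot-noise-limited
# SNR of 550, and a signal noise efficiency of 80%.
eta_T_est = total_system_efficiency(snr_measured=100.0,
                                    snr_shot_limited=550.0,
                                    eta_S=0.8)
print(eta_T_est)  # ~0.227, i.e., about the expected 22.7%
```

In the actual measurement, all three inputs are per-pixel frames rather than scalars, so ηT(x,y) is itself a frame.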


Fig. 12. (a) Total system efficiency ηT for dataset 5 where m¯S=96pe. (b) The azimuthal average of ηT for each dataset.


Next, we show a radial comparison across the datasets in Fig. 12(b). In dataset 1, where m¯S = 4 pe, this calculation fell quickly from the Gaussian peak. This outcome was due to the very weak signal, which fell below the camera noise in the wings of the Gaussian profile. For the remaining datasets, the radial average of ηT exhibited a ripple, which appeared consistent with the reference etalon pattern. We had assumed our measured SNR was directly proportional to the signal strength, which implies the same spatial distribution as the signal frame. However, the nonuniform reference affected E¯H, as seen in Fig. 4(b) when compared to the average signal frames in Fig. 4(g). This outcome led us to believe the nonuniform reference had some effect on the SNR, even after subtracting out E¯N and dividing out ηS, which contains the spatial nature of the reference.

For a comparison to ηT, we updated Table 1, which contained our major system efficiencies, with our measurements in Table 5. We excluded the signal noise efficiency ηS because we account for it separately in Eq. (19). Multiplying all the efficiencies together gave an expected ηT = 22.7% ± 6.5%.


Table 5. Updated System Efficiencies

To compare to our expected ηT of 22.7%, we took a spatial average of the ηT(x,y) frames from approximately rG = 0 to 1.5 for datasets 2–6 and rG = 0 to 0.5 for dataset 1, where there was detectable signal for the measurement. We show the results in Fig. 13. The error bars, whose widths are one standard deviation, are around 6%–6.5%. The measured and expected total system efficiencies agree to within half of a standard deviation. Across the six datasets, the average was ηT = 21.1% with σηT = 6.3%. We presumed the minor differences between datasets were due to small laser or system fluctuations throughout the data collection, which occurred over several hours across 2 days.
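The spatial averaging described above amounts to masking the ηT(x,y) frame by the normalized Gaussian radius and taking the first and second moments; a sketch (the radius map and ηT frame below are synthetic):

```python
import numpy as np

def masked_stats(frame, r_G, r_max):
    """Mean and standard deviation of a per-pixel efficiency frame over
    the region where the normalized Gaussian radius r_G <= r_max."""
    vals = frame[r_G <= r_max]
    return vals.mean(), vals.std()

# Synthetic normalized radius map (stand-in for Eq. (14)) and a noisy
# eta_T frame centered on 21% with a 6% standard deviation (hypothetical).
y, x = np.mgrid[-32:32, -32:32]
r_G = np.hypot(x, y) / 20.0
rng = np.random.default_rng(1)
eta_T = 0.21 + 0.06 * rng.standard_normal((64, 64))

avg, sd = masked_stats(eta_T, r_G, r_max=1.5)
print(avg, sd)  # mean near 0.21, standard deviation near 0.06
```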


Fig. 13. Pixel-averaged measured total system efficiency ηT for the six datasets, where the width of the error bars represents the standard deviation, the dashed line represents the expected value of 22.7%, and the dotted lines are ±σηT = 6.5%.


In the previous experiment (cf. Appendix A), we had ηt = 99%, ηq = 50%, ηm = 30.9%, and ηR = 50.0%, which yielded an expected ηT = 7.7% with an uncertainty of 1.1%. We took three datasets at m¯S = 13.9 pe, 32.3 pe, and 39.5 pe. The measured total system efficiencies were ηT = 8.5%, 7.6%, and 7.2%, respectively, which resulted in an average ηT = 7.8% with σηT = 2.9%. The major differences in this experiment were a different vendor for the MO laser and more noise (cf. Table 6 in Appendix A). However, the results from two separate experiments further supported that we have quantified all of the major system efficiencies.


Table 6. Previous Experiment Details

5. CONCLUSION

In this paper, we showed that the major system efficiencies (multiplicative losses) of our DH system under test were the mixing, reference noise, and signal noise efficiencies (i.e., ηm, ηR, and ηS, respectively). From the collected experimental data, we measured the following values for these efficiencies using a DH system in the off-axis IPRG: ηm = 36.8%, ηR = 74.5%, and ηS = 3%–100%. In turn, our measured value of 36.8% for ηm was 4.6 percentage points higher than our expected value of 32.2%, which used an ideal-sinc model for the pixel MTF. We suspected that our pixel MTF differs from an ideal-sinc model because the FPA uses microlenses to achieve a 100% pixel fill factor. Additionally, we found that the reference noise was about 34% greater than Poisson-distributed shot noise. Along with laser amplitude noise, we saw that the reference nonuniformity compounds this excess noise. Next, we observed that the signal noise efficiency was dependent on the signal strength at higher SNRs (e.g., an SNR > 100). Due to sampling of the pupil autocorrelation term in the Fourier plane, the demodulated signal noise was proportional to the square of the signal strength m¯S and thus increased faster than the signal shot noise.

With these efficiency measurements, we revised our expected total system efficiency ηT from 27.5% to 22.7% (cf. Tables 1 and 5). In turn, we measured an average total system efficiency ηT = 21.1%, which was 1.6 percentage points less than expected. These values fall within the error bounds of both the expected and measured total system efficiencies. We also achieved similar results with the data from a previous experiment, where the expected ηT was 7.7% and the measured ηT was 7.8%.

Overall, our results show that reaching the ideal mixing efficiency limit for a DH system is difficult without consideration of the excess signal and reference noise. In practice, we can mitigate excess signal noise by decreasing the Fourier plane sampling (qI < 4) and moving the pupil location further from the Fourier plane center. However, the excess signal noise only becomes significant when the SNR ≳ 100. For deep-turbulence wavefront sensing, for example, we generally need an SNR > 10, and we desire maximum Fourier plane sampling (2 ≤ qI ≤ 4). We should also take precautions to increase the reference uniformity and reduce the laser amplitude noise to maximize the reference noise efficiency. Therefore, the DH system efficiency can approach the ideal mixing efficiency limit under the strong reference assumption, given highly transmissive optics, highly efficient FPAs, and a near-uniform, low-amplitude-noise reference.

APPENDIX A: PREVIOUS EXPERIMENT DETAILS

As summarized in Fig. 14 and Table 6, we carried out another experiment in a similar fashion to the one presented in this paper. The main hardware differences were different vendors for the MO and the FPA, but the majority of the specifications for each were comparable. Additional details can be found in Thornton et al. [8].


Fig. 14. (a) Mean number of reference photoelectrons m¯R(x,y) and (b) mean reference noise energy E¯R(x,y) for the previously conducted experiment.


Acknowledgment

The authors would like to thank D. Mao for his insight into the analysis and feedback.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

REFERENCES AND NOTES

1. J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Höft, “Atmospheric turbulence correction using digital holographic detection: experimental results,” Opt. Express 17, 11638–11651 (2009). [CrossRef]  

2. J. C. Marron and K. S. Schroeder, “Holographic laser radar,” Opt. Lett. 18, 385–387 (1993). [CrossRef]  

3. P. M. Furth, V. Ponnapureddy, S. R. Dundigal, D. G. Voelz, R. Korupolu, A. Garimella, and M. W. Rashid, “Integrated CMOS sensor array for optical heterodyne phase sensing,” IEEE Sens. J. 11, 1516–1521 (2011). [CrossRef]  

4. M. F. Spencer, R. A. Raynor, M. T. Banet, and D. K. Marker, “Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry,” Opt. Eng. 56, 031213 (2016). [CrossRef]  

5. M. T. Banet, M. F. Spencer, and R. A. Raynor, “Digital-holographic detection in the off-axis pupil plane recording geometry for deep-turbulence wavefront sensing,” Appl. Opt. 57, 465–475 (2018). [CrossRef]  

6. D. E. Thornton, M. F. Spencer, and G. P. Perram, “Deep-turbulence wavefront sensing using digital holography in the on-axis phase shifting recording geometry with comparisons to the self-referencing interferometer,” Appl. Opt. 58, A179–A189 (2019). [CrossRef]  

7. A. E. Siegman, “The antenna properties of optical heterodyne receivers,” Appl. Opt. 5, 1588–1594 (1966). [CrossRef]  

8. D. E. Thornton, M. F. Spencer, C. A. Rice, and G. P. Perram, “Efficiency measurements for a digital- holography system,” Proc. SPIE 10650, 1065004 (2018). [CrossRef]  

9. M. C. Teich, “Infrared heterodyne detection,” Proc. IEEE 56, 37–46 (1968). [CrossRef]  

10. J. H. Shapiro, B. A. Capron, and R. C. Harney, “Imaging and target detection with a heterodyne-reception optical radar,” Appl. Opt. 20, 3292–3313 (1981). [CrossRef]  

11. R. Foord, R. Jones, J. M. Vaughan, and D. V. Willetts, “Precise comparison of experimental and theoretical SNRs in CO2 laser heterodyne systems,” Appl. Opt. 22, 3787–3795 (1983). [CrossRef]  

12. M. F. Spencer, “Spatial heterodyne,” in Encyclopedia of Modern Optics, 2nd ed. (Academic, 2017), Vol. IV, pp. 369–400.

13. G. R. Oesch, Optical Detection Theory for Laser Applications (Wiley, 2002).

14. P. Merritt and M. F. Spencer, Beam Control for Laser Systems, 2nd ed. (Directed Energy Professional Society, 2018).

15. E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems (Wiley, 1996).

16. J. R. Janesick, Photon Transfer (SPIE, 2007).

17. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics (Wiley, 1978).

18. P. C. D. Hobbs, “Reaching the shot noise limit for $10,” Opt. Photonics News 2(4), 17–23 (1991). [CrossRef]  

19. R. Paschotta, “Intensity noise,” in Encyclopedia of Laser Physics and Technology (2008).

20. FLIR, “How to evaluate camera sensitivity,” https://www.ptgrey.com/white-paper/id/10912.

21. J. W. Goodman, Speckle Phenomena in Optics (Roberts & Company, 2007).

22. Goodman uses the RMS (or amplitude definition) of SNR and the speckle contrast is the inverse of the SNR (i.e., the standard deviation to the mean). Since we used the power definition of the SNR, C2 enables a better comparison to our measurements.

