## Abstract

Orthogonal frequency division multiplexing (OFDM) has recently gained substantial interest in high-capacity optical fiber communications. Unlike wireless systems, optical OFDM systems are constrained by the limited resolution of the ultra-high-speed digital-to-analog converters (DAC) and analog-to-digital converters (ADC). Additionally, the situation is exacerbated by the large peak-to-average power ratio (PAPR) inherent in OFDM signals. In this paper, we study the effects of clipping and quantization noise on the system performance. We analytically quantify the introduced distortion as a function of bit resolution and clipping ratio, both at the DAC and ADC. With this, we provide a back-to-back signal-to-noise ratio analysis to predict the bit error rate of the system, assuming a fixed received optical power and ideal electrical-optical-electrical conversion. Simulation and experimental results are used to confirm the validity of the expressions.

© 2011 OSA

## 1. Introduction

Recently, orthogonal frequency division multiplexing (OFDM) has been suggested as a viable technology for high-speed optical links [1–4]. In contrast to wireless (or even wired) electrical links, optical fiber offers huge amounts of available bandwidth and low attenuation over long distances. Based on these bandwidth/power budgets, the theoretically predicted transmission capacity is in the Tbit/s range. Traditionally, optical links use single-carrier transmission with small constellations (like on/off keying), whose transmission speeds are mainly limited by linear (e.g. dispersion) and nonlinear impairments, requiring high equalization cost/complexity. OFDM is well suited to equalize dispersive channels at low complexity in the frequency domain and therefore seems highly promising to harvest the huge bandwidth resources of optical fiber links.

Although the feasibility of optical OFDM has been shown in several proof-of-principle real-time implementations [5–12], the complexity and constraints of the digital signal processing (DSP) at ultra-high sampling rates are a concern. In particular, the sampling rates and bit resolution of the required digital-to-analog converters (DAC) and analog-to-digital converters (ADC) are a limiting factor. In [5, 10] a 21.4 GS/s DAC was used with only 4 bit resolution, in [6] 5 bit DAC/ADC were used running at 2.5 GS/s, while in [7, 9] the DAC/ADC ran at 4 GS/s with 8 bit resolution, and in [8] a 10 GS/s DAC was used with 6 bit resolution. Most recently, in [11, 12] DACs with 6 bit resolution and 25 GS/s were used in real-time OFDM transmitters. Because of the trade-off between sampling speeds and bit resolution, optical OFDM systems are limited by the current technologies of electronic signal converters, where the high sampling rates come at the expense of low bit resolution, introducing a significant error floor. Furthermore, OFDM signals have a large peak-to-average power ratio (PAPR), i.e., there will be a few values that are much larger (5–10 dB) than the average. This leads to inefficient utilization of the already limited DAC/ADC resolution. In the cited works [5–10], this is addressed by “clipping” the time-domain signal. This means that peak values will incur a larger quantization error, but in total this will reduce the distortion introduced by the limited DAC/ADC resolution.

Statistical characterization of clipping and quantization errors has been considered previously in [13, 14] and in our preliminary work [15]. Specifically, in [13] the author focused on the clipping and quantization error introduced by the ADC at an OFDM receiver, where also the spectral properties of clipping and quantization errors were studied in case of significant over-sampling of the ADC relative to the OFDM spectrum. In [14] the author focused on the limited DAC resolution at the transmitter, where also the interplay between clipping and increased available transmit power is considered.

In this paper, we consider both the error introduced by the DAC and ADC. As in [13, 14], we start by approximating the time-domain OFDM signal as a Gaussian random process and analytically derive a closed-form expression for the average power of the introduced quantization and clipping error. This expression is only a function of the respective DAC (or ADC) resolution and the “clipping ratio”, which trades off clipping for quantization error. As already published in [15], the found expressions have a clear minimum for a certain clipping ratio; the optimum clipping ratio is independent of the used modulation format (QPSK, QAM) or number of subcarriers in the OFDM signal. In the existing implementations [5–10], this optimum clipping ratio was simply found experimentally by scaling the signal before quantization.

We then conduct a signal-to-noise ratio (SNR) analysis of the optical link, treating the DAC/ADC distortion as additive noise sources, where the DAC noise is filtered by the channel. Motivated by the application of optical links across short distances in data centers [16], we focus on the case where the received optical power is large – making the DAC/ADC distortion the dominant noise source – and neglect the potential improvement in transmitted optical power due to clipping as considered in [14]. We validate the SNR analysis using numerical simulation of the electrical back-to-back performance of a coherent optical OFDM system. The simulation results confirm i) the predicted level of error introduced by the DAC and ADC, ii) that the errors are uniformly distributed across the OFDM subcarriers for low over-sampling, and iii) that the DAC, ADC, and receiver noise can be well modeled as additive noise sources, i.e., the total noise power is the sum of the separate noise components.

Last, we consider an experimental direct-detected OFDM system that uses a sufficient guard band to approximate coherent detection within the signal band (see e.g. [1]). To focus on the DAC resolution of the used waveform generator, we artificially limit it to 3 and 6 bit, while using 8 bit at the ADC of the used sampling scope. We include the measured DAC roll-off in the link SNR analysis by giving each OFDM subcarrier a corresponding channel gain. We compare theoretical link SNR predictions with numerical simulation and experimentally recorded data. In general we find a good match between the link SNR as a function of the clipping ratio, although the experimental results trail the predicted performance by 1–2 dB SNR.

The rest of the paper is organized as follows: in Section 2 we review the coherent OFDM model, in Section 3 we statistically characterize the joint clipping and quantization error, then in Section 4 we use numerical simulation to confirm the theoretic results, and finally we study an experimental setup in Section 5; we conclude in Section 6.

## 2. Optical OFDM

#### 2.1. Transmitter

The OFDM transmitter maps data bits to complex symbols *s*[*k*], which are chosen from a constellation like quadrature phase-shift keying (QPSK) or quadrature amplitude modulation (QAM). These are modulated using an inverse discrete Fourier transform (IDFT) of size *K*, and extended cyclically to include a cyclic prefix (CP) of length *N*_{CP}.
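The transmitter steps above (symbol mapping, IDFT, CP extension) can be sketched in a few lines of Python. The IDFT scaling and the particular choice of active indices below are illustrative assumptions, not the paper's exact conventions:

```python
import numpy as np

def ofdm_symbol(s, K, n_cp, active):
    """Map data symbols s onto the active subcarrier set S_A, modulate
    with a size-K IDFT, and prepend a cyclic prefix of length n_cp."""
    S = np.zeros(K, dtype=complex)
    S[active] = s
    # scaling chosen so that E{|x[n]|^2} = N_A * E_s (an assumption;
    # any fixed IDFT scaling convention works the same way)
    x = np.fft.ifft(S) * K
    return np.concatenate([x[-n_cp:], x])  # CP = copy of the last n_cp samples

# usage with the Section 4 parameters: K = 128, |S_A| = 112, N_CP = 8
rng = np.random.default_rng(0)
active = np.arange(1, 113)                 # hypothetical active-index layout
bits = rng.integers(0, 2, size=(112, 2))
qpsk = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)
sym = ofdm_symbol(qpsk, 128, 8, active)    # 128 + 8 = 136 samples
```

With unit-power QPSK symbols, Parseval's theorem gives the IDFT output an average power of exactly |*S*_{A}| under this scaling.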

To evaluate the effect of the DAC bit-resolution, we assume that the values are quantized before entering the DAC, for which we introduce the quantized signal *x _{q}*[*n*] = *x*[*n*] + *v*_{D}[*n*], where *v*_{D}[*n*] is the quantization error introduced by the DAC. The distortion or mean-squared error (MSE) introduced by the DAC is then *D*_{DAC} = E{|*v*_{D}[*n*]|^{2}}.

The quantized signal *x _{q}*[*n*] can be interpreted as the IDFT of the noisy data symbols *s _{q}*[*k*], which are simply *s _{q}*[*k*] = *s*[*k*] + *ṽ*_{D}[*k*], where the *ṽ*_{D}[*k*] are the discrete Fourier transform (DFT) of the quantization error *v*_{D}[*n*]. Note that the *s _{q}*[*k*] extend beyond the set of active IDFT inputs *S*_{A}.

The DAC converts the discrete signal, *x _{q}*[*n*], to a continuous signal, *x*(*t*). The operation can be mathematically represented as a discrete-time signal driving a low-pass filter (LPF) *g*_{DAC}, i.e., *x*(*t*) = Σ_{*n*} *x _{q}*[*n*] *g*_{DAC}(*t* − *nT _{s}*), where the sampling rate is *B* = 1/*T _{s}* and the LPF passband needs to be chosen appropriately. The (complex) baseband signal is then upconverted to the carrier frequency, *f _{c}*, which in optical systems is usually expressed in terms of wavelength, e.g., 850 to 1550 nm in typical optical OFDM systems.

#### 2.2. Coherent Receiver

In this section we assume that the optical field is perfectly mapped onto the electrical field with no frequency or phase offsets. Any dispersive channel can be described using a linear time invariant (LTI) transfer function, *h*(*τ*), while the receiver noise (optical and electrical) is modeled as an additive signal, *v*_{N}(*t*). At the receiver, the signal passes through the ADC low-pass filter, *g*_{ADC}, and is sampled.

The first *N*_{CP} samples are discarded and the discrete signal is converted to the frequency domain via a DFT, yielding the received symbols for *m* ∈ *S*_{A}, see Eq. (15). The *ṽ*_{N}[*m*] represent the effect of the optical and electrical receiver noise [18], and the *ṽ*_{A}[*m*] are the DFT of the ADC quantization noise, *v*_{A}[*n*], just as the *ṽ*_{D}[*m*] is the DFT of the DAC noise. Finally, the channel coefficients *H*[*m*] contain the channel and all filter effects, where *G*_{DAC}(*f*), *H*(*f*), and *G*_{ADC}(*f*) are the Fourier transforms of *g*_{DAC}(*τ*), *h*(*τ*), and *g*_{ADC}(*τ*), respectively.

## 3. Effect of Quantization Noise

#### 3.1. Signal-to-Noise Ratio

From Eq. (15), we see that there are several noise components; specifically, with this model we can determine the signal-to-noise ratio (SNR) on the *m*-th subcarrier. Note that the DAC noise *ṽ*_{D}[*m*] is weighted by the channel (and filter) coefficients *H*[*m*], while the ADC and receiver noise are not. We next evaluate the average noise power.

The total average noise power at the input of the DFT is related to the noise at the output due to Parseval’s theorem, see e.g. [17]; the same holds for the *ṽ*_{A}[*m*] and *ṽ*_{N}[*m*]. An exact analysis of the distribution of the noise power across frequencies can be found in [13]; instead, we will approximate the quantization errors as zero-mean and statistically uncorrelated.

For easy notation, we define *N*_{0} as the power of the optical and electrical receiver noise in the time domain (at the output of the ADC), so that the SNR on the *m*-th subcarrier can be evaluated as in Eq. (22).

#### 3.2. Gaussian Approximation of Input

We will shortly outline the derivation of the DAC quantization noise, *D*_{DAC}; the same approach can be applied to the ADC quantization noise. At the heart of our approach is the central limit theorem, see e.g. [17], based on which we assume that the baseband samples at the output of the IDFT, *x*[*n*], can be well approximated as complex Gaussian random variables with zero mean, E{*x*[*n*]} = 0, and variance E{|*x*[*n*]|^{2}} = *N _{A}E_{s}*. Here the *s*[*k*] are zero mean, are uncorrelated, and have average power *E _{s}*, and the number of active subcarriers is defined as |*S*_{A}| = *N _{A}*. Based on this characterization of the OFDM signal, the MSE can be simply calculated as the quantization error of a corresponding Gaussian random variable, which we will do in the following.

#### 3.3. Calculating the Distortion

We consider a fixed-point format with *q* bits, which corresponds to a uniform quantizer with *M* = 2^{*q*} levels, as illustrated in Fig. 1. As we model the OFDM signal as zero-mean Gaussian, a mid-rise quantizer that is symmetric around zero is the best choice (although if implementation requires a mid-tread quantizer, the derivation is easily adapted).

We assume that the complex variables *x*[*n*] are distributed as complex Gaussian random variables, with zero mean and variance *N _{A}E_{s}* as shown in Eqs. (23) and (24). Accordingly, the real and imaginary part of each *x*[*n*] are independent real Gaussian random variables, with half the variance, *N _{A}E_{s}*/2, each. We therefore limit our consideration to quantizing the real and imaginary parts identically. With this we can simplify the quantization problem to that of two identical real Gaussian random variables, Eq. (25), where *t* is a real Gaussian random variable, *t _{q}* is its quantized version, and *f*(*t*) is its probability density function, a Gaussian with mean *μ* = 0 and variance *σ*^{2} = *N _{A}E_{s}*/2 (the second factor of two in Eq. (25) accounts for folding the positive/negative interval). While values of *t* smaller than *M*/2 · Δ will only incur a maximum quantization error of ±Δ/2, the error for values beyond *M*/2 · Δ is (theoretically) unbounded; these two regions give rise to the quantization and clipping terms in Eqs. (26) and (27).

We note that to solve Eq. (25) we will have to solve integrals of similar type in Eqs. (26) and (27), which can be expressed in terms of Φ(*t*), the Gaussian cumulative density function.
We see that the distortion in Eq. (31) is a function of the number of quantization levels *M* = 2^{*q*} and the chosen step-size Δ, see also Fig. 1. When choosing the step-size proportional to the standard deviation of the input *x*[*n*], as in Eq. (32), the MSE scales linearly with *N _{A}E_{s}*, but is otherwise independent of this parameter. Looking at the “clipping ratio”, which is commonly defined as the ratio of the maximum to the average output power, we find that it depends only on the proportionality constant *γ* in Eq. (32). So when varying *γ*, we are effectively varying the clipping ratio (the factor 1/2 in the numerator is because the ratio is considered per real or imaginary part).

We evaluate the expression Eq. (31) in Fig. 2, where we vary the number of quantization steps *M* from 2^{3} = 8 to 2^{8} = 256, and the clipping ratio from 0 to 12 dB. As the clipping ratio is defined proportional to the signal, any signal scaling is accounted for when entering the DAC. We therefore arbitrarily assume that the symbol constellations are scaled as *E _{s}* = 1/*N _{A}*; this leads to the samples *x*[*n*] at the input of the DAC having unit power. Clearly, there is an optimum tradeoff between quantization and clipping error that minimizes the distortion or MSE introduced by the DAC. In fact, since the MSE scales only linearly with the value of the input power, the optimum clipping ratio depends only on the number of bits *q* = log_{2} *M*.
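This tradeoff can be reproduced with a short Monte-Carlo sweep. The mid-rise quantizer and the clipping-ratio convention below (CR = *A*^{2}/(2 · var), i.e., the factor 1/2 per real/imaginary part) are our reading of Fig. 1 and the definition above, so treat them as assumptions:

```python
import numpy as np

def midrise_quantize(t, q, cr_db, var=0.5):
    """Mid-rise uniform quantizer with M = 2**q levels applied to a real
    Gaussian input of variance var (= N_A*E_s/2 per real/imaginary part).
    The clipping ratio CR in dB fixes the saturation level A through the
    assumed convention CR = A**2 / (2*var)."""
    M = 2**q
    A = np.sqrt(2 * var) * 10**(cr_db / 20)
    delta = 2 * A / M
    tc = np.clip(t, -A, A - 1e-12)          # clipping (saturation) at +/- A
    return (np.floor(tc / delta) + 0.5) * delta

# Monte-Carlo estimate of the MSE over the clipping ratio, 0..12 dB
rng = np.random.default_rng(2)
t = rng.normal(scale=np.sqrt(0.5), size=400_000)
crs = np.arange(0.0, 12.1, 0.5)
best = {}
for q in (3, 4, 5):
    mse = [np.mean((midrise_quantize(t, q, c) - t)**2) for c in crs]
    best[q] = crs[int(np.argmin(mse))]      # optimum clipping ratio for q bits
print(best)
```

The minima move to larger clipping ratios as the resolution *q* grows, matching the qualitative behavior of Fig. 2.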

In the next section, we use numerical simulation to check several of the approximations and assumptions that were made in deriving Eq. (31) in a well controlled environment. Following this, we will consider an experimental system to see the practicality of the analysis.

## 4. Simulation of Coherent OFDM

#### 4.1. Simulation Setup

For numerical simulation, we consider DAC/ADCs with 3–8 bits resolution, as beyond this the effect of DAC/ADC resolution becomes negligible according to our prediction. We assume a sampling rate of 21.4 GS/s as in [5, 10], which corresponds to a baseband bandwidth of *B* = 21.4 GHz.

To simulate the DAC in Eq. (8) and ADC in Eq. (12) we upsample the discrete
signal by a factor of five and apply a truncated raised-cosine LPF,
*g*
_{DAC}(*τ*) =
*g*
_{ADC}(*τ*) =
*g*
_{LPF}(*τ*); the filter
frequency response is plotted in Fig. 3.
This is deemed sufficient to model most effects of D/A and A/D conversion, like
aliasing or filter flanks that extend outside the baseband bandwidth
*B*. The simulation corresponds to electrical back-to-back
performance, and the channel *h*(*τ*) is
simply a time delay. The additive noise power is set to
*N*
_{0} = 10^{−4}. We
consider a DFT size of *K* = 128; to avoid the roll-off region of the filter, we deactivate one eighth of the subcarriers, leaving |*S*_{A}| = 112 active. The CP has a length of *N*_{CP} = 8.
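The DAC/ADC model of this setup can be sketched as zero-stuffed upsampling followed by a truncated raised-cosine LPF; the roll-off factor and filter length below are illustrative assumptions, not the exact filter of Fig. 3:

```python
import numpy as np

def g_lpf(n_taps, up, beta=0.1):
    """Truncated raised-cosine impulse response g_LPF(tau), sampled on a
    grid of 'up' points per sample period T (beta is an assumed roll-off)."""
    t = (np.arange(n_taps) - (n_taps - 1) / 2) / up   # time in units of T
    denom = 1.0 - (2.0 * beta * t)**2
    denom[np.abs(denom) < 1e-12] = 1e-12              # guard the 0/0 points
    return np.sinc(t) * np.cos(np.pi * beta * t) / denom

def dac_model(xq, up=5, n_taps=101):
    """DAC as a filter driven by the discrete signal: x(t) is the sum of
    x_q[n] g_LPF(t - n*T), evaluated with 'up'-fold oversampling."""
    x_up = np.zeros(len(xq) * up, dtype=complex)
    x_up[::up] = xq                                   # zero-stuffing
    return np.convolve(x_up, g_lpf(n_taps, up), mode="same")

# sanity check: at the original sample instants the interpolation
# returns x_q[n], since g_LPF vanishes at nonzero integer multiples of T
rng = np.random.default_rng(3)
xq = rng.normal(size=200) + 1j * rng.normal(size=200)
x_t = dac_model(xq)
```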

#### 4.2. SNR Analysis

To reduce one free parameter, we set the power of the time-domain signal to unity, *N _{A}E_{s}* = 1; this is achieved by scaling the constellation symbols such that *E _{s}* = 1/*N _{A}*. With this, the SNR at the *m*-th subcarrier is given by Eq. (35) (assuming |*H*[*m*]|^{2} = 1), since the time-domain samples, *x*[*n*], are of unit power, allocating *K*/*N _{A}* = 0.58 dB amplification per subcarrier. We compare the predicted values with a numerical simulation, where we plot numerator and denominator separately.

_{A}We first neglect the ADC error and focus on the resolution of the DAC. For 4–6 bits the error caused by the DAC is much larger than the effect of the electrical and optical noise. Only at 8 bit the DAC MSE is on the same order as *N*
_{0} leading to about 2 · 10^{−4} ≈ −37 dB. In case of limited DAC and ADC resolution, see Fig. 4(b), the error is simply doubled (3 dB increase), while at 8 bit the combined error is roughly 3 · 10^{−4} ≈ 35 dB. Although the assumed LPF in Fig. 3 is basically ideal within the used frequency band, its flanks can be observed in the DAC noise in Fig. 4(a). This will be more obvious in the experimental section where the used DAC has significant roll-off also within the OFDM signal band and Eq. (35) will have to be replaced with the more detailed Eq. (22) to account for this amplification and/or attenuation of the DAC noise.

In summary, Figs. 4(a) and 4(b) confirmed three things:

- The assumption of the OFDM signal being Gaussian distributed at the input of the DAC or ADC leads to an accurate measure of incurred quantization and clipping error in Eq. (31).
- We observe that the DAC and ADC noise has uniform power in the frequency domain, validating our assumption in Eq. (19) that the errors are uncorrelated in the time domain, for low over-sampling.
- The DAC noise is filtered by the channel and additive with ADC and receiver noise.

The last point is not necessarily intuitive, as often one would expect that clipping at the transmitter might reduce the incurred clipping error at the receiver.

#### 4.3. BER Prediction

Although we confirmed that the level of MSE at the DFT output matches our prediction, we would also like to use this prediction to determine bit error rate (BER) performance. We therefore introduce the additional assumption that the distortion at the output of the DFT can be approximated as zero-mean additive Gaussian noise of matching variance. The BER can then be calculated for square constellations in terms of the Gaussian cumulative density function, as a function of SNR; see Eqs. (37) and (38).
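A hedged stand-in for this SNR-to-BER mapping is the standard nearest-neighbour approximation for Gray-coded square QAM (the paper's exact Eqs. (37)–(38) may differ in form):

```python
import math

def qam_ber(snr_db, M):
    """Gray-coded BER approximation for square M-QAM in Gaussian noise:
    BER ~ (4/log2 M)(1 - 1/sqrt(M)) * Q(sqrt(3*SNR/(M-1))), with
    Q(x) = 0.5*erfc(x/sqrt(2)) the Gaussian tail function."""
    snr = 10.0**(snr_db / 10.0)
    Q = 0.5 * math.erfc(math.sqrt(3.0 * snr / (M - 1)) / math.sqrt(2.0))
    return (4.0 / math.log2(M)) * (1.0 - 1.0 / math.sqrt(M)) * Q

# e.g. mapping a hypothetical link SNR of 20 dB to BER for 16- and 64-QAM
for M in (16, 64):
    print(M, qam_ber(20.0, M))
```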

For the *B* = 21.4 GHz bandwidth (a 21.4 GS/s sampling rate) and *M*-ary modulation, the data rate accounting for the overhead caused by the CP and oversampling is given by Eq. (39).

In Fig. 5 we compare the predicted and simulated BER; the curves match well down to a BER of about 10^{−4}. This indicates that the clipping noise is approximately Gaussian up to about three standard deviations around the mean, but then the tail behavior is noticeably different. In any case, the optimum clipping ratio in terms of MSE either matches exactly the one based on BER or is very close (although the actual BER can deviate somewhat).

## 5. Experimental Direct-Detected OFDM System

#### 5.1. Experimental Setup

We now compare both simulation results and predicted performance to an experimental direct-detected OFDM system as shown in Fig. 6. The considered FFT size is *K* = 1024; since we assumed coherent detected OFDM in the derivation, we insert sufficient guard bands to avoid intermodulation distortion (IMD) within the OFDM signal band (see, e.g., [1]). The number of active subcarriers is |*S*_{A}| = 400; to generate a completely real discrete multi-tone (DMT) signal, we assign the subcarriers with negative indices the complex conjugate values of the subcarriers with positive indices. A bias of 6 dB is also applied to the signal after passing through the DAC, where 6 dB is the ratio of the bias power relative to the total OFDM double-sideband power. This will effectively reduce the observed SNR at the receiver by about 6 dB, since this amount of optical power is not available for the OFDM signal.
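The Hermitian-symmetric subcarrier assignment that produces the real DMT signal can be sketched as follows (the exact index layout is an assumption; here subcarriers 1 to 400 are modulated, DC and Nyquist left empty):

```python
import numpy as np

# DMT sketch: conjugate symmetry across the IDFT inputs yields a real signal
K = 1024
rng = np.random.default_rng(4)
S = np.zeros(K, dtype=complex)
pos = np.arange(1, 401)                   # hypothetical positive-index band
S[pos] = (rng.choice([-1.0, 1.0], 400) + 1j * rng.choice([-1.0, 1.0], 400)) / np.sqrt(2)
S[-pos] = np.conj(S[pos])                 # negative indices get complex conjugates
x = np.fft.ifft(S) * K                    # imaginary part vanishes (up to roundoff)
```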

The DAC in the used waveform generator has 6 bit nominal resolution, but in practice the least-significant bit will not be fully accurate. So we will first limit the resolution intentionally to 3 bit, by only using the three most-significant bits, which guarantees close to ideal behavior. The DAC also shows a significant roll-off, see Fig. 7: within the OFDM band alone, the frequency response diminishes by about 5 dB. This will affect both the OFDM signal and the DAC noise, since clipping and scaling are performed before the discrete values enter the DAC. To achieve constant signal power within the OFDM band, the symbols on the subcarriers are scaled to pre-equalize the DAC roll-off, see Fig. 6. Denoting the DAC roll-off on the *m*-th subcarrier as *G*_{DAC}[*m*], the average signal power of the *m*-th subcarrier at the input of the DAC is set inversely proportional to the roll-off, where the normalization constant *α* is chosen such that *N _{A}E_{s}* = 1 when evaluating Eq. (31). Since the OFDM sideband is attenuated going through the DAC, it is amplified with a factor *α*/*N _{A}* before entering the optical modulator and applying the bias, which keeps the transmitted optical power constant (and amplifies the DAC noise). The receiver noise figure is about *N*_{0} = 10^{−3}, and the used sampling scope has 8 bit ADC resolution and a mostly ideal filter characteristic, *G*_{ADC}(*f*) ≈ 1.

With this, the SNR on the *m*-th subcarrier is given by Eq. (43), where the amplification factor is *α*/*N _{A}* ≈ 1 dB.

_{A}#### 5.2. SNR Analysis

We first focus on intentionally limiting the DAC resolution to 3 bit. The MSE is measured at the output of the receiver FFT, and we fix the DAC clipping ratio at 4.5 dB. We evaluate the numerator and denominator of Eq. (43) as before. As the input to the FFT is purely real, the negative frequencies do not contain any additional information and are only plotted for consistency with the previous plots.

In Fig. 8(a) we plot the predicted distortion as given in the denominator of Eq. (43) and the results of numerical simulation of the direct-detected OFDM system. The predicted distortion matches reasonably well within the OFDM band, but at lower frequencies, |*m*| < 200, significant IMD of the OFDM band increases the noise level (which is not considered in the prediction). Although the DAC roll-off was precompensated and we achieve uniform spectral power within the OFDM band, the SNR across the modulated subcarriers is not constant, as the dominant DAC noise reflects the DAC roll-off. The match between prediction and simulation is not as close as in Fig. 4(a), which we attribute to unaccounted behavior of direct-detection, e.g., IMD between the OFDM band and parts of the DAC noise that extend into the guard band |*m*| < 256.

Considering the experimentally recorded results in Fig. 8(b), we see largely similar behavior. The curves are noticeably noisier, since instead of having 10^{3} OFDM symbols to estimate the noise level as in the simulation, we use less than a hundred in the experimental setup. We also notice that the measured DAC roll-off in Fig. 7 is not highly precise, as the precompensation does not achieve fully uniform power across the OFDM band. Generally though, the relative levels of noise are fairly accurate.

We next vary the clipping ratio and plot the resulting SNR based on prediction, simulation, and experimentally recorded results, for both 3 and 6 bit DAC resolution. The SNR is averaged across the OFDM band, although we saw in Figs. 8(a) and 8(b) that subcarriers with larger index, *m*, have better performance.

For 3 bit DAC resolution in Fig. 9(a), both simulation and experimental results have an optimum clipping ratio slightly larger than predicted (similar to Fig. 5). Also, both simulation and experimental results fall short of the predicted SNR by almost 2 dB. We attribute this to the aforementioned IMD between the DAC noise and the OFDM band, which is not considered in the model but is captured in the numerical simulation.

For 6 bit DAC resolution, the deviation from an ideal DAC adds some additional noise, as effectively the least-significant bit is not fully accurate. To match this, we increase the receiver noise figure in both the predicted results and the simulation by 5 dB to *N*_{0} = 3 · 10^{−3}. With this, including bias and oversampling, the SNR is receiver-noise limited to below 24 dB. Based on this, the match in Fig. 9(b) is quite reasonable overall. Compared to Fig. 9(a), the DAC noise is smaller, so the IMD effects are also less present. This leads to a better match between simulation and prediction (which both assume ideal DAC behavior), where before there was a better match between simulation and experiment.

#### 5.3. BER Results

Last, we briefly consider the BER performance, where we use 16-QAM and 64-QAM for 3 and 6 bit DAC resolution respectively. For simulation and experimental results the BER is measured directly, and prediction is based on Eq. (37) and Eq. (38). The data rates correspond to roughly 17 and 25 Gbit/s for 16-QAM and 64-QAM respectively, which are based on Eq. (39) and include an additional factor of 1/2 for DMT modulation.
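The quoted rates can be cross-checked under stated assumptions (a sampling rate of 21.4 GS/s as in Section 4, negligible CP overhead); this is a plausibility check, not the paper's exact Eq. (39):

```python
import math

# Cross-check of the quoted rates with the Section 5 values K = 1024 and
# |S_A| = 400; the 21.4 GS/s sampling rate is an assumption carried over
# from Section 4, and the CP overhead is neglected.
def dmt_rate(B, M, K, n_active):
    # bits per QAM symbol, times the fraction of modulated subcarriers,
    # times 1/2 because the real-valued DMT signal carries one sideband
    return B * math.log2(M) * (n_active / K) / 2

print(round(dmt_rate(21.4e9, 16, 1024, 400) / 1e9, 1))  # 16-QAM -> 16.7 Gbit/s
print(round(dmt_rate(21.4e9, 64, 1024, 400) / 1e9, 1))  # 64-QAM -> 25.1 Gbit/s
```

Both values round to the "roughly 17 and 25 Gbit/s" stated above, which supports the assumed parameters.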

In Figs. 10(a) and 10(b) we show BER performance based on 16-QAM and 64-QAM; overall we see a good match for Fig. 10(a) and reasonably so for Fig. 10(b). Beyond this, it is interesting to observe that the relative position of the curves shifts slightly. Although BER is a strictly monotonic function of SNR, this can happen since the SNR across subcarriers is not constant. When averaging SNR, it will not necessarily match the result based on averaging BER.

## 6. Conclusion

In this paper we statistically characterized the error introduced by limited DAC/ADC resolution as a function of bit resolution and clipping ratio. We then used numerical simulation to confirm that the introduced distortion, viewed at the receiver, is white across the signal bandwidth and has average power matching our theoretic prediction. Furthermore, the distortions introduced by the DAC and ADC are additive, and at the output of the DFT can be well approximated as Gaussian. We also compared the predicted performance to results collected experimentally using a direct-detected OFDM setup; we find that the statistical characterization of the clipping and quantization noise can be used reasonably to predict the BER performance of an actual OFDM system.

## References and links

**1. **B. Schmidt, A. Lowery, and J. Armstrong, “Experimental demonstration of electronic dispersion compensation for long-haul transmission using direct-detection optical OFDM,” J. Lightwave Technol. **26**, 196–203 (2008). [CrossRef]

**2. **S. Jansen, I. Morita, T. Schenk, N. Takeda, and H. Tanaka, “Coherent optical 25.8-Gb/s OFDM transmission over 4160-km SSMF,” J. Lightwave Technol. **1**, 6–15 (2008). [CrossRef]

**3. **X. Jin, J. Tang, K. Qiu, and P. Spencer, “Statistical investigation of the transmission performance of adaptively modulated optical OFDM signals in multimode fiber links,” J. Lightwave Technol. **18**, 3216–3224 (2008). [CrossRef]

**4. **J. Armstrong, “OFDM for optical communications,” J. Lightwave Technol. **27**, 189–204 (2009). [CrossRef]

**5. **Y. Benlachtar, P. Watts, R. Bouziane, P. Milder, D. Rangaraj, A. Cartolano, R. Koutsoyannis, J. Hoe, M. Püschel, M. Glick, and R. Killey, “Generation of optical OFDM signals using 21.4 GS/s real time digital signal processing,” Opt. Express **17**, 17658–17668 (2009). [CrossRef] [PubMed]

**6. **N. Kaneda, Q. Yang, X. Liu, S. Chandrasekhar, W. Shieh, and Y.-K. Chen, “Real-time 2.5 GS/s coherent optical receiver for 53.3-Gb/s sub-banded OFDM,” J. Lightwave Technol. **28**, 494–501 (2010). [CrossRef]

**7. **R. Giddings, E. Hugues-Salas, X. Jin, J. Wei, and J. Tang, “Experimental demonstration of real-time optical OFDM transmission at 7.5 Gb/s over 25-km SSMF using a 1-GHz RSOA,” IEEE Photon. Technol. Lett. **22**, 745–747 (2010). [CrossRef]

**8. **F. Buchali, R. Dischler, A. Klekamp, M. Bernhard, and Y. Ma, “Statistical transmission experiments using a real-time 12.1 Gb/s OFDM transmitter,” in *Optical Fiber Communication Conference and Exposition (OFC)* (San Diego, CA, 2010).

**9. **R. Giddings, X. Jin, E. Hugues-Salas, E. Giacoumidis, J. Wei, and J. Tang, “Experimental demonstration of a record high 11.25 Gb/s real-time optical OFDM transceiver supporting 25 km SMF end-to-end transmission in simple IMDD systems,” Opt. Express **18**, 5541–5555 (2010). [CrossRef] [PubMed]

**10. **Y. Benlachtar, P. Watts, R. Bouziane, P. Milder, R. Koutsoyannis, J. Hoe, M. Püschel, M. Glick, and R. Killey, “Real-time digital signal processing for the generation of optical orthogonal frequency-division-multiplexed signals,” IEEE J. Sel. Top. Quantum Electron. **16**, 1235–1244 (2010). [CrossRef]

**11. **B. Inan, O. Karakaya, P. Kainzmaier, S. Adhikari, S. Calabro, V. A. Sleiffer, N. Hanik, and S. L. Jansen, “Realization of a 23.9 Gb/s real time optical-OFDM transmitter with a 1024 point IFFT,” in *Optical Fiber Communication Conference and Exposition (OFC)* (Los Angeles, CA, 2011).

**12. **R. Schmogrow, M. Winter, B. Nebendahl, D. Hillerkuss, J. Meyer, M. Dreschmann, M. Huebner, J. Becker, C. Koos, W. Freude, and J. Leuthold, “101.5 Gbit/s real-time OFDM transmitter with 16QAM modulated sub-carriers,” in *Optical Fiber Communication Conference and Exposition (OFC)* (Los Angeles, CA, 2011).

**13. **D. Dardari, “Joint clip and quantization effects characterization in OFDM receivers,” IEEE Trans. Circ. Syst. I **53**, 1741–1748 (2006). [CrossRef]

**14. **E. Vanin, “Performance evaluation of intensity modulated optical OFDM system with digital baseband distortion,” Opt. Express **19**, 4280–4293 (2011). [CrossRef] [PubMed]

**15. **C. R. Berger, Y. Benlachtar, and R. I. Killey, “Optimum clipping for optical OFDM with limited resolution DAC/ADC,” in *Proc. OSA Advanced Photonics Congress* (Toronto, CA, 2011).

**16. **Y. Benlachtar, R. Bouziane, R. Killey, C. Berger, P. Milder, R. Koutsoyannis, J. Hoe, M. Püschel, and M. Glick, “Optical OFDM in the data center,” in “*Proc. of Intl. Conf. on Transparent Optical Networks*,” (Munich, Germany, 2010). [CrossRef]

**17. **A. Papoulis and S. U. Pillai, *Probability, Random Variables and Stochastic Processes* (McGraw-Hill, 2002).

**18. **For simplicity we neglect the effect of the receive filters and ADC conversion on this noise term.

**19. **J. G. Proakis, *Digital Communications*, 4th ed. (McGraw-Hill, 2001).