## Abstract

An iterative Fourier-transform-based deconvolution method for resolution enhancement is presented. The method makes use of the *a priori* information that the data are real and positive. It is robust in the presence of noise and is especially efficient for large data sets, since the fast Fourier transform can be employed.

© 2009 Optical Society of America

## 1. INTRODUCTION

Limitations on the resolution of physical measurements due to convolution with an instrumental response function, diffraction, or other causes are ubiquitous in the physical sciences. A considerable amount of research has been devoted to methods for deconvolving experimental data to obtain a more accurate representation of the original data. Deconvolution allows more information to be extracted from the data, or the same amount of information to be obtained at lower resolution, permitting the use of simpler instrumentation. The approach employed here makes use of the *a priori* information that, in many cases of interest, the measured quantities are real and positive, a constraint also exploited by several of the better-known methods [1–5].

## 2. THEORY

Convolution can be performed by multiplying the Fourier transform (FT) of the data by the FT of the instrument response (convolving) function and then applying the inverse Fourier transform (IFT). Let $D(\tau)$ represent the original data, consisting of $N$ data points, and let $\tilde{D}(\nu)$ be the FT coefficients of the data; the tilde symbol will be used throughout to distinguish FT coefficients from the corresponding data series. $C(\tau)$ denotes these data after convolution with an instrumental response function $R(\tau)$; these are the data actually recorded by the instrument. The convolved data $C(\tau)$ are mathematically equivalent to the IFT of the product of the $\tilde{D}(\nu)$ coefficients and the $\tilde{R}(\nu)$ coefficients:

$$C(\tau)=\mathrm{IFT}\left[\tilde{D}(\nu)\,\tilde{R}(\nu)\right].$$

Inverse filtering, that is, division of the $\tilde{C}(\nu)$ coefficients by the $\tilde{R}(\nu)$ coefficients, recovers $\tilde{D}(\nu)$ only at frequencies where $\tilde{R}(\nu)$ is sufficiently large. The method presented here uses the *a priori* knowledge that the original data do not have negative values to restore the missing frequency coefficients. Let $A(\tau)$ denote the inverse-filtered data and $M$ be the minimum magnitude of $\tilde{R}(\nu)$, determined by the signal-to-noise ratio, below which meaningful inverse filtering is not feasible:

$$\tilde{A}(\nu)=\begin{cases}\tilde{C}(\nu)/\tilde{R}(\nu), & |\tilde{R}(\nu)|>M,\\ 0, & |\tilde{R}(\nu)|\le M.\end{cases}$$
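The two operations above can be illustrated with a minimal NumPy sketch; the function names and the scalar threshold `m` are illustrative choices, not code from the original work:

```python
import numpy as np

def convolve_fft(d, r):
    """Circular convolution of data d with response r via the
    convolution theorem: the IFT of the product of the FTs."""
    return np.fft.ifft(np.fft.fft(d) * np.fft.fft(r)).real

def inverse_filter(c, r, m):
    """Divide the FT of the recorded data c by the FT of the
    response r wherever |FT(r)| exceeds the noise threshold m;
    the remaining, unrecoverable coefficients are set to zero."""
    r_ft = np.fft.fft(r)
    mask = np.abs(r_ft) > m
    a_ft = np.zeros(len(c), dtype=complex)
    a_ft[mask] = np.fft.fft(c)[mask] / r_ft[mask]
    return np.fft.ifft(a_ft).real, mask
```

When the response is well conditioned (no coefficient below the threshold) and the data are noise-free, the inverse filter recovers the original data essentially exactly; the zeroed coefficients are what the iterative method of this paper sets out to restore.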

Note that for conceptual simplicity the $\tilde{A}(\nu)$ coefficients can be regarded as nonzero below some cutoff frequency and the $\tilde{B}(\nu)$ coefficients as nonzero above it. The method is not restricted to this case, however; the convolving-function Fourier coefficients could equal zero at various frequencies or ranges of frequencies that are not necessarily sequential, as long as Eq. (8) holds. Ideally the restoration process should result in $D(\tau)=A(\tau)+B(\tau)$. The FT coefficients of $A(\tau)$ are considered to be known from Eq. (5), while the FT coefficients of $B(\tau)$ must be estimated. $A(\tau)+B(\tau)$ may take negative values as a result of errors in the estimated $\tilde{B}(\nu)$ coefficients, and this error will be used to improve the estimate. The basic idea behind the method is to restore as much of the missing information as possible by making use of *a priori* information about the data, in particular the positivity of many physically measurable quantities such as the intensity of spectral lines. This is accomplished by minimizing the sum of the squares of the negative portion of the deconvolved data. Proceeding in a manner similar to the approach of Howard [6], the negative portion is isolated with a smooth approximation to the Heaviside step function; $K$ in this approximation is an arbitrary parameter that in the limit $K\to\infty$ reproduces the behavior of the exact step function. The sum of the squares of the negative values is then

$$\sum_{\tau}\left\{{[A(\tau)+B(\tau)]}_{-}\right\}^{2},$$

where ${[x]}_{-}$ denotes the negative portion of $x$ (${[x]}_{-}=x$ for $x<0$ and zero otherwise).

Minimizing this sum with respect to the coefficient of a particular unknown frequency ${\nu}_{m}$, where the subscript $m$ denotes a frequency $\nu$ missing from the measured data, the FT expansion gives Eq. (17).

Since the subscript $m$ refers to the coefficients of the missing frequencies, Eq. (8) and the orthogonality of the summation eliminate $A(\tau)$ from the left-hand side of Eq. (17), giving the $\tilde{B}({\nu}_{m})$ coefficients in terms of the FT of the negative portion ${[A(\tau)+B(\tau)]}_{-}$.

An equivalent procedure that follows from Eq. (16) is to obtain the $\tilde{B}(\nu)$ coefficients by subtracting ${[A(\tau)+B(\tau)]}_{-}$ from the estimate of $A(\tau)+B(\tau)$ and utilizing these in the IFT. The advantage is that the term can be subtracted with a coefficient greater than unity, resulting in faster convergence, as will be discussed below.
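The iteration just described (subtract the scaled negative portion of the current estimate, transform, restore the coefficients fixed by inverse filtering, and transform back) can be sketched as follows. This is a minimal NumPy illustration under the stated assumptions; `restore_positive` and its arguments are illustrative names, not the author's code:

```python
import numpy as np

def restore_positive(a_ft, known, n_iter=100, relax=1.0):
    """Iteratively estimate the missing Fourier coefficients of
    inverse-filtered data using the positivity constraint.

    a_ft  : FT coefficients after inverse filtering (zero where unknown)
    known : boolean mask of the coefficients kept from inverse filtering
    relax : factor multiplying the negative portion before subtraction;
            1.0 is the basic scheme (larger values converge faster,
            as discussed in Section 3)
    """
    est_ft = a_ft.astype(complex).copy()
    for _ in range(n_iter):
        est = np.fft.ifft(est_ft).real
        est -= relax * np.minimum(est, 0.0)   # subtract scaled negative part
        est_ft = np.fft.fft(est)
        est_ft[known] = a_ft[known]           # keep the measured coefficients
    return np.fft.ifft(est_ft).real
```

With `relax=1.0` each pass simply clips the negative excursions before re-imposing the known coefficients, so the sum of squared negative values cannot increase from one iteration to the next.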

## 3. DATA AND ANALYSIS

The method was tested by examining the isotope shift of the red line of a deuterium-hydrogen gas mixture. The 512-point spectrum was collected on a Jobin Yvon/Spex Triax 550 monochromator with the slits set at $0.08\ \mathrm{mm}$ and is shown as the upper trace in Fig. 1.

The spectrum was Fourier transformed and inverse filtered by multiplying the frequency coefficients by a smoothly increasing function up to a cutoff at the 71st complex coefficient, beyond which all Fourier coefficients were set equal to zero. This function was obtained from the inverse of the convolving function of the spectrometer, estimated from the profile of an isolated spectral line. The inverse-filtered spectrum is shown in the lower trace of Fig. 1. Although the resolution is enhanced, there are significant negative excursions and spurious peaks.

The results of the method after various numbers of iterations are shown in Fig. 2. The method gives a fairly constant improvement in resolution with each iteration up to some limit that depends on the details of the data and convolving function.

The results of the method after 100 iterations are shown as the upper trace in Fig. 3, and the middle trace is the spectrum recorded with a $0.01\ \mathrm{mm}$ slit width for comparison. The width of the lines shows a fourfold improvement over the original data, and the method succeeded in greatly reducing the oscillations and negative excursions. The deconvolved spectrum compares very well with the data recorded with an eight times smaller slit width. The separation of the deconvolved peaks agrees with the values obtained from the higher-resolution data to within the experimental uncertainty. The area of the smaller peak relative to the larger was overestimated by approximately 40%; however, extensive experimentation with recorded and simulated data has shown that this error is not typical and is due to inaccuracy in estimating the convolving function for inverse filtering rather than to a defect in the method itself.

Since the missing frequencies are identified by their negative contribution to the spectrum, this underestimates their magnitude by a factor of 2 since only the negative area of each is taken into consideration. Experimentation bears this out, and multiplying ${[A\left(\tau \right)+B\left(\tau \right)]}_{-}$ by a factor of 2 before subtracting results in an algorithm that converges to the same solution in fewer iterations. Factors larger than 2 speed up the algorithm even more but can result in noise and spurious peaks, which ultimately can cause the method to diverge.

Another technique that reduced the number of iterations for a given increase in resolution was to subtract a constant offset from the estimate of $[A(\tau)+B(\tau)]$. The price of this increase in speed was again the introduction of more noise and small spurious peaks. The most efficient approach was to use the offset for most of the iterations, for a quicker increase in resolution, and then perform a few more iterations without the offset, which eliminated much of the noise and spurious peaks. The optimum offset depends on details such as the accuracy of the inverse filtering and the signal-to-noise ratio. The progress of the method was monitored visually by plotting the new spectrum after each iteration, and the process was stopped when little further improvement occurred. This could just as easily have been automated by terminating the process when the change after each iteration fell below a preestablished tolerance. However, it was found that visually monitoring the progress provided insight into how well the method was working and exposed potential problems when the deconvolution was pursued too aggressively.
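An automated stopping rule of the kind mentioned above can be folded into the iteration. The sketch below, again in NumPy with illustrative names, terminates when the RMS change between successive iterations falls below a tolerance; the `offset` argument implements one plausible reading of the constant-offset acceleration (subtract the offset before removing the negative portion), and is an assumption rather than the author's exact procedure:

```python
import numpy as np

def restore_until_converged(a_ft, known, tol=1e-4, max_iter=500, offset=0.0):
    """Positivity-constrained restoration with automated termination.
    Stops when the RMS change per iteration drops below `tol`;
    `offset` is subtracted before clipping (optional acceleration)."""
    est_ft = a_ft.astype(complex).copy()
    prev = np.fft.ifft(est_ft).real
    for i in range(max_iter):
        est = prev - offset                  # optional acceleration offset
        est -= np.minimum(est, 0.0)          # remove the negative portion
        est_ft = np.fft.fft(est)
        est_ft[known] = a_ft[known]          # keep the measured coefficients
        cur = np.fft.ifft(est_ft).real
        if np.sqrt(np.mean((cur - prev) ** 2)) < tol:
            return cur, i + 1
        prev = cur
    return prev, max_iter
```

As noted in the text, a few final iterations with `offset=0.0` can be appended to remove the extra noise and spurious peaks the offset introduces.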

This method was also evaluated by comparing it with a Landweber method modified to include a nonnegativity constraint as implemented by Tadrous [7]. This method gave similar results with somewhat higher resolution, although at the price of producing two spurious peaks. The height of the smaller peak was overestimated in both methods, but experimentation on simulated data with both methods showed that this was caused by difficulty in establishing the exact form of the convolving function rather than by the methods themselves.

## 4. CONCLUSIONS

The method detailed in this paper is computationally efficient, especially for large data sets, because the fast Fourier transform can be employed. The most critical factor in obtaining a good restoration is the choice of the initial convolving function; a poor choice can introduce spurious peaks and noise. With a reasonable choice for this function, the method greatly reduces spurious peaks, negative excursions, and noise while providing a further increase in resolution. Even with some uncertainty in the choice of this function and in the presence of noise, the method proves robust and converges to a consistent and reasonably accurate representation of the original data. In applying the method it may be necessary to establish a local baseline, for example, when a spectrum is superimposed on an interference pattern from a window. The same is true in applying the method to other areas such as image deconvolution, where text might be superimposed on a uniform background. As with any deconvolution method, the signal-to-noise ratio is very important in determining the ultimate level of resolution enhancement that is achievable. An advantage of this method is the simplicity of its implementation and the ease with which other *a priori* information can be incorporated. Conditions such as finite extent [8] (also known as compact support) and bounded upper limits can be handled in exactly the same way.

## ACKNOWLEDGMENTS

The author thanks Efstratios Manousakis, Cecilia Barnbaum, and Sean Barton for their suggestions for improving this manuscript.

**1. **Y. Biraud, “A new approach for increasing the resolving power by data processing,” Astron. Astrophys. **1**, 124–127 (1969).

**2. **P. A. Jansson, R. H. Hunt, and E. K. Plyler, “Resolution enhancement of spectra,” J. Opt. Soc. Am. **60**, 596–599 (1970).

**3. **B. R. Frieden, “Restoring with maximum likelihood and maximum entropy,” J. Opt. Soc. Am. **62**, 511–517 (1972).

**4. **B. R. Frieden and J. J. Burke, “Restoring with maximum entropy, II: superresolution of photographs of diffraction-blurred impulses,” J. Opt. Soc. Am. **62**, 1202–1210 (1972).

**5. **P. A. Jansson, *Deconvolution of Images and Spectra* (Academic, 1997).

**6. **S. J. Howard, “Continuation of discrete Fourier spectra using a minimum-negativity constraint,” J. Opt. Soc. Am. **71**, 819–824 (1981).

**7. **P. J. Tadrous, BiaQIm image processing software, Version 21.01, 2008, URL http://www.bialith.com.

**8. **S. J. Howard, “Method for continuing Fourier spectra given by the fast Fourier transform,” J. Opt. Soc. Am. **71**, 95–98 (1981).