A method for deconvolution of the true image of an object from two recorded images is proposed. These two images must be recorded by an imaging system with two different but interconnected kernels. The method is formulated as a system of Fredholm equations of the first kind reduced to a single functional equation in Fourier space. Both the kernels of the system and the true image of the object are found from the same recorded images.
© 2013 Optical Society of America
The problem of deconvolution of the true image of an object from a recorded image of the object is equivalent (for linear imaging systems) to solving a 2-D Fredholm integral equation of the first kind:

I(x0, y0) = ∫∫ K[x − x0, y − y0] F(x, y) dx dy,   (1)

where I(x0, y0) is the recorded image, F(x, y) is the true image, and K[x − x0, y − y0] is the kernel of the imaging system. A notable exception is the blind-deconvolution approach [1], which under some assumptions does not require knowledge of the kernel to restore the true image of an object. However, in general, due to the ill-posed character of the problem, it is important to know the kernel with high accuracy [2]. Usually, the kernel is theoretically calculated or experimentally measured. After that, well-known techniques like Wiener filtering [3], Richardson-Lucy iterations [4,5], Tikhonov regularization [6], or their modifications are used. A drawback of these methods is that they do not guarantee that the kernel used in restoration calculations is equal to the kernel employed during image recording.
Rather than performing passive measurements/calculations of the kernel, it is possible to affect the kernel physically to achieve better deconvolution results. For 1-D images (signals), we proposed this idea in [7] and implemented it in [8] with the purpose of simultaneously calculating the kernel and the true signal. For 2-D images, the coded exposure [9] and coded aperture [10] techniques are used in order to make the deconvolution problem better posed. 3-D images can be restored by changing the focus settings of an imaging system, as described in [11]; a scene and a depth map can be estimated from several low-resolution defocused images [12].
As opposed to depth from focus/defocus methods, e.g., [11], where the shape of the kernel is assumed to be known while the width of the kernel depends on a 3-D surface and focal settings, in the proposed method neither the shape nor the width of the kernel is known in advance; the entire point of the method is to determine the kernel precisely. After the kernel is calculated, the corresponding deconvolution can be found by known methods for solving ill-posed problems. For LIDAR imaging systems based on a 2-D antenna array, the result of such deconvolution is the true topographical image of a scene.
The method is based on the following manipulation of the kernel of Eq. (1). The pattern of a LIDAR two-dimensional antenna array can be stretched in two directions without changing the shape of the pattern [14]. Such a transformation can be achieved, for example, by using the same LIDAR but working with a different wavelength of radiation.
The idea of the method is to use two images of an object so that the second image is recorded by the same system but with the stretched kernel. The equation describing the relationship between these two unknown kernels along with two Fredholm equations describing the recorded images form a system of three equations. The kernels and the true image F(x, y) of the object are found from this system of equations. No specific assumptions regarding the shapes or the widths of the kernels are required. The method works if the kernel of Eq. (1) is separable and the Fourier spectrum of the recorded image is differentiable and bounded at frequency equal to zero.
2. 1-D algorithm
For 1-D images, a recorded image I(x0) can be expressed as

I(x0) = ∫ A(x − x0) S(x) dx,   (2)

where S(x) is the true 1-D image and A(x − x0) is the kernel.
According to the proposed algorithm, in order to get the true image of an object, at least two images have to be recorded by an imaging system. The first image is recorded with the kernel A1(x - x0); the second – with the kernel A2(x - x0) = A1[(x - x0)/m], where m > 1 (m is not necessarily an integer). Figure 1 illustrates the relationship between two kernels when m = 2.
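The relationship between the two kernels can be sketched numerically. The Gaussian kernel below is an assumption for illustration only; the method itself makes no assumption about the kernel shape:

```python
import numpy as np

m = 2.0
A1 = lambda x: np.exp(-x**2)      # assumed kernel shape (illustration only)
A2 = lambda x: A1(x / m)          # stretched kernel: same shape, m times wider

x = np.linspace(-4, 4, 81)

def fwhm(y, x):
    """Rough full width at half maximum on a grid."""
    above = x[y >= 0.5 * y.max()]
    return above.max() - above.min()

# Stretching by m multiplies the kernel width by m without changing its shape
assert np.isclose(fwhm(A2(x), x) / fwhm(A1(x), x), m, rtol=0.05)
```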
Mathematically, the problem of deconvolution of images obtained by a 1-D imaging system is equivalent to solving the system of the following two integral equations:

I1(x0) = ∫ A1(x − x0) S(x) dx,   (3)

I2(x0) = ∫ A1[(x − x0)/m] S(x) dx.   (4)

In Fourier space, Eqs. (3) and (4) become

i1(w) = a(w)s(w),   (5)

i2(w) = m a(mw) s(w),   (6)

where a(w), s(w), i1(w), and i2(w) are the Fourier transforms of A1, S, I1, and I2, respectively. Rewriting Eq. (6) in the form i2(w/m) = ma(w)s(w/m) and dividing Eq. (5) by the rewritten Eq. (6) gives a single functional equation with one unknown function s(w):

s(w) = m s(w/m) i1(w)/i2(w/m).   (7)
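The functional equation (7) can be checked numerically on a closed-form example; the Gaussian spectra below are an assumption for illustration only:

```python
import numpy as np

m = 2.0
w = np.linspace(-5, 5, 201)

# Assumed closed-form spectra (illustration only):
# a(w) is the kernel spectrum, s(w) the true-image spectrum with s(0) = 1.
a = lambda w: np.exp(-w**2 / 2)
s = lambda w: np.exp(-w**2)

i1 = lambda w: a(w) * s(w)            # Eq. (5): spectrum of the first image
i2 = lambda w: m * a(m * w) * s(w)    # stretched kernel A1(x/m) transforms to m*a(mw)

# Eq. (7): s(w) = m s(w/m) i1(w) / i2(w/m) holds identically in w
assert np.allclose(m * s(w / m) * i1(w) / i2(w / m), s(w))
```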
The solution s(w) of Eq. (7) is not unique: s(w) multiplied by a constant is also a solution. At the same time, s(w) is the unique solution among all functions that are equal to 1 when w = 0. To prove this fact, suppose that there is a different solution r(w) of Eq. (7):

r(w) = m r(w/m) i1(w)/i2(w/m).   (8)

Dividing Eq. (8) by Eq. (7) gives

r(w)/s(w) = r(w/m)/s(w/m).   (9)

Applying Eq. (9) repeatedly and letting the argument tend to zero gives r(w)/s(w) = r(0)/s(0) = 1, so r(w) = s(w).
Equation (7) can be rewritten as

s(w)/s(w/m) = m i1(w)/i2(w/m).   (10)

Introducing the infinite products

Y(w) = ∏k≥0 i1(w/m^k),   (11)

H(w) = ∏k≥1 i2(w/m^k)/m,   (12)

Eq. (10), applied repeatedly, becomes

H(w)s(w) = Y(w).   (13)
The substitution of Eqs. (11) and (12) into Eq. (13), using Eq. (7) N times in the result, and letting N → ∞ gives an identity, which shows that Eq. (13) holds for any w. This means that Eq. (13) has the same solution s(w) as the original system of Eqs. (5) and (6). Equation (13) remains an ill-posed problem: if H(w) has a zero value, the exact recovery of S(x) is impossible. However, Eq. (13) now has H(w) expressed in terms of the experimentally known function i2(w) and Y(w) expressed in terms of the experimentally known function i1(w). There is no longer a need to measure the kernel H(w) or to make a theoretical assumption regarding its shape.
To illustrate the fact that the product (11) converges, consider an example where m = 2 and i1(w) = exp(−3w²). Then

Y(w) = ∏k≥0 exp(−3w²/4^k) = exp[−3w²(1 + 1/4 + 1/16 + …)] = exp(−4w²).
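A short numerical check of this example: for i1(w) = exp(−3w²) and m = 2, the geometric sum 3(1 + 1/4 + 1/16 + …) = 4 gives the closed form exp(−4w²) for the product (11):

```python
import numpy as np

w = np.linspace(-2, 2, 101)
i1 = lambda w: np.exp(-3 * w**2)

# Partial product of Eq. (11) with m = 2: Y(w) = i1(w) i1(w/2) i1(w/4) ...
Y = np.ones_like(w)
for k in range(40):
    Y *= i1(w / 2**k)

# The truncated product already matches the closed form exp(-4 w^2)
assert np.allclose(Y, np.exp(-4 * w**2))
```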
Below is the proof that the product (11) converges under the assumptions that i1(w) has a first derivative, that this derivative is bounded at w = 0, and that i1(w) is normalized so that i1(w) → 1 as w → 0. The last condition means that, starting from some number k, all terms of the product are close to 1.
To prove the convergence of a product, it is sufficient to prove the convergence of a sum [15], which for the product (11) is

∑k≥0 [1 − i1(w/m^k)].

Since the derivative of i1(w) is bounded at w = 0 and i1(0) = 1, there is a constant M such that |1 − i1(w/m^k)| ≤ M|w|/m^k for sufficiently large k; the sum is therefore majorized by a convergent geometric series.
Similarly, the product (12) converges under the assumptions that i2(w) has a first derivative, that this derivative is bounded at w = 0, and that i2(w) is normalized so that i2(w) → m as w → 0. The regularized solution of Eq. (13) can then be written as

s(w) = Y(w)H(w)/[H²(w) + h],   (14)

where Y(w) is defined by Eq. (11), H(w) is defined by Eq. (12), and the regularization parameter h can be estimated as described in [16].
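The whole 1-D recovery can be sketched end to end on closed-form Gaussian spectra. Everything below is an assumption for illustration: the spectra i1, i2, and the Tikhonov-style regularized division with the parameter h mentioned in the text:

```python
import numpy as np

m, h, N = 2.0, 1e-6, 40
w = np.linspace(-3, 3, 121)

# Assumed recorded spectra for a Gaussian example (illustration only):
# i1(w) = a(w)s(w) with a(w) = exp(-w^2/2) and s(w) = exp(-w^2);
# i2(w) = m a(mw) s(w) = m exp(-3w^2) for m = 2.
i1 = lambda w: np.exp(-1.5 * w**2)
i2 = lambda w: m * np.exp(-3 * w**2)

# Products (11) and (12), truncated at N terms
Y = np.prod([i1(w / m**k) for k in range(N)], axis=0)
H = np.prod([i2(w / m**k) / m for k in range(1, N + 1)], axis=0)

# Assumed Tikhonov-style regularized solution of Eq. (13)
s_rec = Y * H / (H**2 + h)

# Recovers the true spectrum s(w) = exp(-w^2) up to the regularization error
assert np.allclose(s_rec, np.exp(-w**2), atol=1e-3)
```

Note that neither the kernel spectrum a(w) nor the true spectrum s(w) is used in the recovery step: only i1 and i2 enter the products, which is the point of the method.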
3. 2-D algorithm for a LIDAR imaging system
In a LIDAR imaging system, images are recorded by a scanning LIDAR beam. Each pixel of an image is produced by at least one LIDAR pulse. The objective of the proposed algorithm is to resolve objects that are smaller than the diameter of the LIDAR beam.
For a LIDAR based on a 2-D antenna array, the radiation pattern depends on the ratio of the wavelength of radiation to the grating pitch [14] and can be manipulated in both the x- and y-directions.
According to the algorithm, in order to get the true image of an object, at least two images have to be recorded by an imaging system. The first image I1(x0, y0) is recorded with a beam of photons having a wavelength λ1; the second image I2(x0, y0) – with a beam of photons having a wavelength λ2. The image I1(x0, y0) represents the average travel time of photons with the wavelength λ1 from their source to the target point (x0, y0) and back; the image I2(x0, y0) represents the average travel time of photons with a wavelength λ2. Because these wavelengths are different, the diameters of the beams are also different; one beam illuminates an area which is greater than an area illuminated by the other beam, and hence the areas have different topographies.
It is assumed that there are no infinite-depth holes in the object. The number of photons returning from a point (x, y) of the object is proportional to the intensity of illumination of this point. This intensity is proportional to the radiation pattern K[x - x0, y - y0]; so K[x - x0, y - y0] serves as a weight in calculating the average return time of the photons. The return time of photons from a point (x, y) located at the distance d = F(x, y) from the source of photons depends on d linearly: it is equal to 2d/c, where c is the speed of light. So Eq. (1) represents the average return time measured at the moment when the LIDAR beam is aimed at the point (x0, y0); the coefficient 2/c is omitted as unimportant.
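The pixel-formation model just described can be sketched as a discrete weighted average. The Gaussian beam pattern and the flat target below are hypothetical, and the pattern is normalized inside the function so that the pixel value is the stated average return time:

```python
import numpy as np

def lidar_pixel(F, K, x0, y0, xs, ys):
    """Average photon return time for the beam aimed at (x0, y0).

    F(x, y) is the target distance (the factor 2/c is dropped, as in the
    text), and the radiation pattern K serves as the averaging weight.
    """
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    weights = K(X - x0, Y - y0)
    return np.sum(weights * F(X, Y)) / np.sum(weights)

xs = ys = np.linspace(-5, 5, 101)
beam = lambda x, y: np.exp(-(x**2 + y**2))   # hypothetical beam pattern
flat = lambda x, y: 5.0 + 0.0 * x            # flat target at distance 5

# A flat target returns its distance regardless of the beam width
assert np.isclose(lidar_pixel(flat, beam, 0.0, 0.0, xs, ys), 5.0)
```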
The described way of image registration is based on travel time of photons, so the colors of illuminated objects are irrelevant.
For the first image, Eq. (1) can be expressed as

I1(x0, y0) = ∫∫ A(x − x0)B(y − y0)F(x, y) dx dy.   (20)

For the second image, recorded with the stretched kernel, Eq. (1) can be expressed as

I2(x0, y0) = ∫∫ A[(x − x0)/m]B[(y − y0)/m]F(x, y) dx dy.   (21)

Eq. (20) and Eq. (21) can be rewritten in Fourier space as

i1(w, v) = a(w)b(v)f(w, v),   (22)

i2(w, v) = m²a(mw)b(mv)f(w, v),   (23)

where a, b, and f are the Fourier transforms of A, B, and F, respectively.
This system of two functional equations can be easily reduced to a single functional equation

f(w, v) = m² f(w/m, v/m) i1(w, v)/i2(w/m, v/m),   (24)

which is similar to Eq. (7) but repeated twice – first for v and then for w.
Equation (24) can be rewritten as

H(w, v)f(w, v) = Y(w, v),   (25)

where Y(w, v) = ∏k≥0 i1(w/m^k, v/m^k) and H(w, v) = ∏k≥1 i2(w/m^k, v/m^k)/m². Repeating the transformations of Eq. (10)–Eq. (13) gives the regularized solution

f(w, v) = Y(w, v)H(w, v)/[H²(w, v) + φ²].   (26)
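By analogy with the 1-D case, the 2-D recovery can be sketched on closed-form separable Gaussian spectra. The spectra and the Tikhonov-style regularized division with the parameter φ² mentioned in Section 4 are assumptions for illustration:

```python
import numpy as np

m, phi2, N = 2.0, 1e-6, 40
w = np.linspace(-2, 2, 41)
W, V = np.meshgrid(w, w, indexing="ij")

# Assumed separable recorded spectra (illustration only), built so that the
# true spectrum is f(w, v) = exp(-(w^2 + v^2)):
i1 = lambda w, v: np.exp(-1.5 * (w**2 + v**2))
i2 = lambda w, v: m**2 * np.exp(-3 * (w**2 + v**2))

# 2-D analogs of the products (11) and (12), truncated at N terms
Y = np.prod([i1(W / m**k, V / m**k) for k in range(N)], axis=0)
H = np.prod([i2(W / m**k, V / m**k) / m**2 for k in range(1, N + 1)], axis=0)

# Assumed Tikhonov-style regularized division with parameter phi^2
f_rec = Y * H / (H**2 + phi2)

# Recovers the true spectrum up to the regularization error
assert np.allclose(f_rec, np.exp(-(W**2 + V**2)), atol=1e-3)
```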
4. Numerical simulation of image deconvolution
The true image of an object is simulated on the rectangle 0 ≤ x < 80, 0 ≤ y < 80 by a function shown in Fig. 2. This true image F(x, y), the recorded images I1 and I2, and the recovered image Frecovered are presented in Fig. 2 as bitmaps of 256 by 256 pixels.
The images I1 and I2 are calculated as the convolutions of F(x, y) with kernels A and B defined by Eqs. (27) and (28); the recovered image is calculated by formula (26). The regularization parameter φ² is equal to 0.00003. To remove high-frequency components from the recovered image, only the first 20 harmonics are used, and intensities of the recovered image less than 50% of its maximum intensity are set to zero.
The process of deconvolution is sensitive not only to noise but to imprecision in the calculation of the kernels as well. The sensitivity to imprecision is estimated by changing the number N used in Eqs. (27) and (28). For N = 8, the error in the calculation of the kernels’ values at half of their heights is equal to 0.003%; for N = 2 – to 3%. At the same time, the visual presentation of Frecovered for N = 8 is practically the same as for N = 2.
The proposed method allows finding the true image of an object from its two recorded images expressed as a system of two Fredholm equations. The importance of the method is that it requires neither experimental nor theoretical information regarding the kernels of these equations. The method can be used when other known methods are not applicable.
To use the method, it is necessary to know the relationship between these two kernels; this requirement is easy to satisfy. The method also requires the kernels to be separable functions, which is valid for a wide class of imaging systems, e.g., systems based on Gaussian beams or on 2-D antenna arrays.
The mathematical requirement for convergence of the proposed algorithm is that the derivatives di1(w)/dw and di2(w)/dw have to be bounded at w = 0. In other words, d[a(w)s(w)]/dw has to be bounded at w = 0. For some kernels a(w), this requirement cannot be satisfied. At the same time, most practically used kernels, for example sombrero kernels, have derivatives that are bounded and equal to zero at w = 0.
The current presentation of the method is limited to a conceptual demonstration. Future research should include development of direct methods for solving Eq. (24) and utilization of the method for real-life imaging systems.
The main advantage of the proposed method is that in order to recover the true image of an object, there is no need to know in advance the kernel of an imaging system. The method resolves a logical contradiction of the standard deconvolution approach, where the kernel of an imaging system and an image of an object are obtained in different experimental conditions.
As opposed to known reconstruction techniques, the method yields a recovered image of an object with resolution better than the resolution of the recorded images of the object.
The method can be used in scanning microscopy, remote sensing, and beam profiling.
References and links
1. M. Demenikov and A. R. Harvey, “Parametric blind-deconvolution algorithm to remove image artifacts in hybrid imaging systems,” Opt. Express 18(17), 18035–18040 (2010). [PubMed]
2. W. Dong, H. Feng, Z. Xu, and Q. Li, “A piecewise local regularized Richardson–Lucy algorithm for remote sensing image deconvolution,” Opt. Laser Technol. 43(5), 926–933 (2011). [CrossRef]
3. N. Wiener, The Extrapolation, Interpolation and Smoothing of Stationary Time Series (John Wiley & Sons, 1949).
4. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62(1), 55–59 (1972). [CrossRef]
5. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974). [CrossRef]
6. A. N. Tikhonov and V. Y. Arsenin, Methods for Solution of Ill-Posed Problems (Nauka, 1986).
7. V. A. Gorelik, “Method of recovering the fine structure of a spectrum without measuring the instrument function of the spectrometer,” Tech. Phys. 39, 444–446 (1994).
8. V. A. Gorelik and A. V. Yakovenko, “Fine structure extraction without analyser function measurement,” J. Electron Spectrosc. Relat. Phenom. 73(1), R1–R3 (1995). [CrossRef]
9. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. 25(3), 795–804 (2006). [CrossRef]
10. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 69 (2007). [CrossRef]
11. Y. Y. Schechner and N. Kiryati, “Depth from Defocus vs. Stereo: How Different Really Are They?” Int. J. Comput. Vis. 39(2), 141–162 (2000). [CrossRef]
12. D. Rajan and S. Chaudhuri, “Simultaneous Estimation of Super-Resolved Scene and Depth Map from Low Resolution Defocused Observations,” IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1102–1117 (2003). [CrossRef]
14. J. K. Doylend, M. J. R. Heck, J. T. Bovington, J. D. Peters, L. A. Coldren, and J. E. Bowers, “Two-dimensional free-space beam steering with an optical phased array on silicon-on-insulator,” Opt. Express 19(22), 21595–21604 (2011). [CrossRef] [PubMed]
15. A. R. Vasishtha, Complex Analysis, 11th ed. (Krishna Prakashan Media, 2010).
16. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, 1992), Chap. 13.