Blind multi-frame deconvolution by tangential iterative projections (TIP)


Abstract

A methodology for retrieving the unknown object distribution and point-spread functions (PSFs) from a set of images acquired in the presence of temporal phase aberrations is presented in this paper. The method works by finding optimal complementary linear filters for multi-frame deconvolution. The algorithm uses undemanding computational operations and little a priori information, making it simple, fast, and robust even at low signal-to-noise ratios. Results of numerical simulations and experimental tests are given as empirical proof, alongside comparisons with other algorithms found in the literature.

© 2017 Optical Society of America

1. Introduction

Scientific imaging systems used in real-world applications rarely produce images that are comparable with their theoretical optimum — the diffraction limit. The presence of media-induced aberrations, both in phase and amplitude, produces a quantitative reduction in the amount of information that is recorded by the system. Incoherent imaging may, however, be considered as a convolution between two functions, and if the blurring kernel, or point-spread function, can be obtained, it is theoretically possible to extract the object distribution from the recorded image data. The problem is not immediately tractable, as this deconvolution is a classic ill-posed inverse problem: if nothing is known a priori about the two functions, then for any recorded image there are infinitely many possible combinations of object and point-spread function.

To begin the story of how these inverse problems may be solved, one must go back several decades to the pioneering work of Gerchberg and Saxton [1]. They made great practical progress in solving these inverse problems with the application of an alternating projection (AP) framework. Later, the work of Fienup [2] refined these initial discoveries and was significant in finding practical and robust solutions to the phase retrieval problem, along with a more thorough mathematical foundation for the operation of AP algorithms.

Gonsalves [3] worked on a related problem, known as phase diversity, approaching it with a similar alternating framework. In phase diversity, one adds a known phase aberration to the optical system and subsequently solves an optimisation problem in order to identify any other unknown aberration. The work of Paxman et al. [4] shows the move towards a more explicit optimisation framework, where a metric is explicitly evaluated and used to inform steps towards a solution. A method for the identification of both the object and the unknown phase was presented and marks the branching of methodology for solving inverse problems: implicit optimisation schemes such as AP and those explicitly evaluating an optimisation metric.

Whilst adding known phases to an optical system is possible, it is not very practical in many cases. To resolve this problem, the notion of iterative blind deconvolution was presented by Ayers and Dainty [5]. This work again returned to the AP framework and provided a way to identify the object and the point-spread function through the application of a priori constraints on both.

Aberrations cause certain spatial frequencies in the object not to be transmitted to the imaging sensor, meaning that a complete reconstruction of the object from a single aberrated observation is impossible. The way to solve this problem is by the acquisition of multiple frames with different aberrations. If there is sufficient diversity, i.e. all frequencies have been transmitted somewhere in the sequence, these images can be used to recover a better estimate of the object than would be possible with a single image, by using a blind multi-frame deconvolution algorithm.

Blind multi-frame deconvolution (BMFD) was first truly demonstrated in a practical setting by Schulz [6] and later by Yaroslavsky et al. [7]. Since then, the number of publications about types and variations of blind multi-frame algorithms has multiplied beyond what may be summarised here [8–11]. They have become a subset of a host of general deconvolution methods that can be broadly subdivided into two main groups: linear and non-linear methods. The linear methods, e.g. Wiener-like filters [12], are fast and robust, but cannot restore information lost during image registration and have a propensity to produce non-physical solutions (e.g. negative object intensity). Non-linear methods [13], on the other hand, use the constraints imposed by the physical conditions, but are generally less robust and slower to implement.

Another current method for blind multi-frame deconvolution is Maximum Likelihood Estimation (MLE) [14, 15], which uses a Bayesian framework to find the most likely object (and PSFs) satisfying all of the a priori information. Multi-frame deconvolution may also be referred to as multichannel deconvolution [16] and is functionally equivalent. Most of these algorithms may be reduced to the alternating minimisation (AM) class of methods or otherwise to Iterative Shrinkage [13, 17]. AM methods subdivide the problem into several individual parts (usually two) and proceed by iteratively updating each of the parts, assuming the other parts to be fixed and given by the previous iteration step; they have been well studied as a methodology for solving such inverse problems [18, 19].

The drawback with most inverse methods, for blind deconvolution specifically, is their reliance on a priori information (or, more specifically, tunable a priori information) to work successfully. It is not possible to assume that all the necessary information is known to the investigator. The second, and perhaps more significant, problem is that these techniques do not work reliably on data at low signal-to-noise ratios without significant investment of time to tune and denoise the data.

In this paper, the tangential iterative projections (TIP) algorithm is presented as an approach to the aforementioned problems, and as a widely applicable algorithm that may be employed in imaging scenarios where temporal variation of the phase aberration is present. It is shown that, with minimal a priori information (only the support size for the point-spread function and its real non-negativity), blind multi-frame deconvolution can be performed robustly at low signal-to-noise ratios. To verify the claims, thorough qualitative and quantitative tests are performed on numerically simulated images and on images acquired with a telescope.

The outline of this paper is as follows: firstly, the formalisation of blind multi-frame (or multi-channel) deconvolution is given; secondly, the framework of the TIP algorithm is explained together with a look at its implementation; then the experimental results are presented, followed by a discussion and the conclusions of the article.

2. Blind multi-frame (or multi-channel) deconvolution

One has an incoherent imaging system through which it is possible to acquire a set of N two-dimensional images {in}, n ∊ ℕ and 1 ≤ n ≤ N, of an unknown and constant source object o with changing isoplanatic aberrations, giving rise to different point-spread functions (PSFs) {hn} for each of these recorded images {in} with an additive noise component wn. These images are sampled on a discrete evenly spaced grid with M × M pixels.

In this blind case, it is assumed that one neither knows the object nor the PSFs, and both these variables must be jointly estimated. The mathematical model for the nth image formation is:

$$i_n = o * h_n + w_n, \qquad i_n, o, h_n \in \mathbb{R}_+^{M\times M}, \tag{1}$$
where * is the convolution operation and $\mathbb{R}_+^{M\times M}$ defines that the images, the object and the point-spread functions are all discrete real non-negative (M × M)-dimensional matrices. It is assumed that within an optimisation framework, there exists a set of $\{\hat{h}_n\}$ and $\hat{o}$ that may be used to accurately approximate the $\{i_n\}$ in the least-squares sense:
$$\{\hat{h}_n, \hat{o}\} = \operatorname*{arg\,min}_{h_n,\, o} \sum_{n=1}^{N} \left\lVert i_n - o * h_n \right\rVert^2, \quad \text{s.t. } o, h_n \in \mathbb{R}_+^{M\times M}, \tag{2}$$
where $\lVert\cdot\rVert$ denotes the $L_2$ norm. This form of the cost function ensures fidelity to the original image data, but does not provide any constraints on what the PSFs or the object should be, and it has seemingly trivial solutions, such as:
$$\hat{h}_n = i_n, \quad \hat{o} = \delta_{M\times M}, \qquad i_n = i_n * \delta_{M\times M}, \tag{3}$$
where $\delta_{M\times M}$ is the delta function on $\mathbb{R}_+^{M\times M}$. In order to obtain a solution that is closer to the real values of $\{h_n\}$ and $o$, additional terms corresponding to the solution smoothness, reality, or regularisation may be added to the cost function, or the same effect may be achieved by imposing constraints on the solutions instead. This point will be returned to later.

By exploiting the properties of the Fourier transform, the relationship between the recorded spectral images $I_n = \mathcal{F}(i_n)$, the optical transfer functions (OTFs) $H_n = \mathcal{F}(h_n)$, and the object spectrum $O = \mathcal{F}(o)$ is given by:

$$I_n(\nu_m) = H_n(\nu_m) \cdot O(\nu_m) + W_n(\nu_m), \tag{4}$$
where $\nu_m$, $m = 1, \ldots, M^2$, are the spectral sampling points and $\cdot$ represents element-wise multiplication. Here $W_n$ denotes the unknown noise spectrum. With this notation, Eq. (2) can be represented as:
$$\{\hat{H}_n, \hat{O}\} = \operatorname*{arg\,min}_{H_n,\, O} \sum_{m=1}^{M^2} \sum_{n=1}^{N} \left| I_n(\nu_m) - O(\nu_m) H_n(\nu_m) \right|^2, \quad \text{s.t. } O, H_n \in \{\mathcal{F}(f)\ \text{with}\ f \in \mathbb{R}_+^{M\times M}\}. \tag{5}$$

The problem of Eq. (5) is a constrained bilinear problem, with a data model that is linear in each of the two variables O and Hn when the other is fixed, and it can be solved by alternating minimisation, fixing one variable at a time [20]. From here on, the m index will be omitted for notational clarity.

For a set of known OTFs {Hn} the solution to Eq. (5) is given by the linear multi-frame deconvolution filter [7]:

$$\hat{O} = \frac{\sum_{n=1}^{N} H_n^{*}\, I_n}{\sum_{n=1}^{N} |H_n|^2}. \tag{6}$$

This object estimate may be seen as a Wiener filter with an infinite signal-to-noise ratio (SNR), whose general form for known PSFs is expressed as:

$$\hat{O} = \frac{\sum_{n=1}^{N} H_n^{*}\, I_n}{\sum_{n=1}^{N} |H_n|^2 + \dfrac{1}{\mathrm{SNR}}}. \tag{7}$$
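
Below is a minimal sketch, in Python with NumPy, of how this linear multi-frame filter can be applied when the PSFs are known; the function name, the PSF centring convention and the snr parameter are illustrative assumptions rather than part of the published code.

```python
# Minimal sketch of the linear multi-frame deconvolution filter of Eqs. (6)-(7);
# frames and PSFs are assumed to be real M-by-M NumPy arrays, with each PSF
# centred in its array. snr=inf reproduces Eq. (6), a finite value gives Eq. (7).
import numpy as np

def linear_multiframe_deconvolution(frames, psfs, snr=np.inf):
    numerator = np.zeros(frames[0].shape, dtype=complex)
    denominator = np.zeros(frames[0].shape)
    for i_n, h_n in zip(frames, psfs):
        I_n = np.fft.fft2(i_n)
        H_n = np.fft.fft2(np.fft.ifftshift(h_n))  # OTF of the centred PSF
        numerator += np.conj(H_n) * I_n           # sum_n H_n^* I_n
        denominator += np.abs(H_n) ** 2           # sum_n |H_n|^2
    O_hat = numerator / (denominator + 1.0 / snr)
    return np.real(np.fft.ifft2(O_hat))           # estimated object, spatial domain
```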

As shown in Jansson et al. [13] and elsewhere, even for the case of a single image (N = 1) and a known PSF h, unconstrained linear deconvolution results in noise amplification and loss of information at points where H is close to zero. This can be addressed by imposing a non-negativity constraint on the object (and/or the PSF), which plays an important role in establishing convergence to a solution. With this in mind, the problem stated in Eq. (5) can be augmented by applying any other a priori knowledge in terms of the feasible sets of objects and PSFs, $\mathcal{O}$ and $\mathcal{H}$ respectively, and the optimisation problem becomes:

$$\{\hat{H}_n, \hat{O}\} = \operatorname*{arg\,min}_{H_n,\, O} \sum_{n=1}^{N} \left\lVert I_n - O \cdot H_n \right\rVert^2, \quad \text{s.t. } O \in \mathcal{O},\; H_n \in \mathcal{H}. \tag{8}$$

For completeness, following one major family of solutions to the BMFD problem: this problem of jointly estimating the object and the PSFs belonging to the sets $\mathcal{O}$ and $\mathcal{H}$ has alternatively been formulated as a regularised least-squares cost function [16, 17]:

$$\{\hat{H}_n, \hat{O}\} = \operatorname*{arg\,min}_{H_n,\, O} \sum_{n=1}^{N} \left\lVert I_n - O \cdot H_n \right\rVert^2 + \lambda_o Q(O) + \lambda_h R(\{H_n\}). \tag{9}$$
Here the terms Q(O) and R({Hn}) are regularisation terms on the object and the PSFs respectively. Many different variations have been proposed over the last few decades; for the interested reader, the book [17] gives an overview and (convergence) analysis of some numerical solutions to this problem based on alternating minimisation (AM). The convergence analysis in [17] is based on the paper by Chan and Wong [21], in which quadratic smoothness regularisers are used for both the image and the PSFs. This type of regularisation tends to favour very smooth solutions [17] (as will be seen later). In order to avoid such smoothing, the total variation (TV) of both the object and the PSFs has been proposed elsewhere as an alternative regularisation metric [17].

3. TIP algorithm

The problem stated by Eq. (8) is a non-convex optimisation problem, leading to non-unique solutions and multiple local minima [21]. The TIP algorithm does not attempt to minimise this metric. Instead, the approach of the TIP algorithm is to modify the problem of Eq. (8) in such a way that it is possible to obtain a close enough estimate of the object using as little a priori information as possible and with operations that are as simple as possible.

Following this train of thought, TIP uses the linear deconvolution estimate for the object, Eq. (6), at every iteration step. By analysing the denominator of the formula, one can note that, unlike the case of single-image deconvolution, the noise amplification issue will be less prominent if the OTFs $\{\hat{H}_n\}$ do not have common zeros. Note that noise will still be amplified at high frequencies, where the OTFs are small because of diffraction; this issue is addressed by apodisation, as explained in the Discussion. Where there are no common zeros, however, a close enough estimate of the object is given by the simple and fast operation of linear deconvolution, provided the OTF estimates are close enough to their true values.

To formalise, the k-th estimate for the object spectrum $\hat{O}^{(k)}$ is obtained as the projection onto the feasible set of the result of the linear deconvolution filter:

$$\hat{O}^{(k)} = P_{\mathcal{O}} \operatorname*{arg\,min}_{O \in \mathbb{C}^{M\times M}} \sum_{n=1}^{N} \left\lVert I_n - O \cdot \hat{H}_n^{(k-1)} \right\rVert^2. \tag{10}$$

To update the estimates of the OTFs $\{\hat{H}_n\}$, a natural approach would be to use the same Eq. (8), now for a fixed object spectrum estimate $\hat{O}$, which is equivalent to finding the projection of the point (I1, …, IN) onto the convex set $\hat{O}\cdot\mathcal{H}$ (that is, a non-uniformly scaled copy of $\mathcal{H}$). Despite its convexity, this problem is difficult to solve as one needs to update the projection operators at every iteration step, which contradicts the desire for simplicity. Instead, TIP again uses the linear deconvolution filter $I_n / \hat{O}$ with a subsequent projection onto the feasibility set $\mathcal{H}$, which remains constant at every iteration step. To formalise:

$$\hat{H}_n^{(k)} = P_{\mathcal{H}} \frac{I_n}{\hat{O}^{(k)}}, \qquad n = 1, \ldots, N. \tag{11}$$

At first glance, it is not clear whether this formula is relevant to the initial problem. It will now be illustrated, in a non-rigorous manner, that it makes good sense for some types of constraints. Firstly, one notes that in the case of non-negative objects and PSFs, the $L_1$ norm of the convolution is equal to the product of the norms of its terms, that is:

$$\lVert o * h_n \rVert_1 = \lVert o \rVert_1\, \lVert h_n \rVert_1. \tag{12}$$

Here one can assume all the norms have unit value by adding the corresponding constraint to the definitions of $\mathcal{O}$ and $\mathcal{H}$, and including a re-normalisation step in each iteration of the algorithm.

Secondly, if one restricts $\mathcal{H}$ to the case of a limited support constraint, $\mathcal{H} = \{h \mid h(x) = 0,\ x \notin X\}$ for some set X, one can represent any discrete PSF h as the sum of two non-negative components, which shall be denoted as the tangential and normal components $h_{\parallel}$ and $h_{\perp}$, the former made of the pixels from X and the latter of the rest of the pixels:

$$h = h_{\parallel} + h_{\perp}, \qquad h_{\parallel}(x) = 0 \text{ if } x \notin X, \qquad h_{\perp}(x) = 0 \text{ if } x \in X. \tag{13}$$
Non-negativity implies that $\lVert h \rVert_1 = \lVert h_{\parallel} \rVert_1 + \lVert h_{\perp} \rVert_1$, and thus for a noiseless image one obtains from Eq. (12) and Eq. (13):
$$\lVert i \rVert_1 = \lVert o * h \rVert_1 = \lVert o \rVert_1 \lVert h_{\parallel} \rVert_1 + \lVert o \rVert_1 \lVert h_{\perp} \rVert_1. \tag{14}$$
If, as an a priori assumption, one takes that (most of) the PSF is expected to be confined to the region X, the last summand in Eq. (14) can be considered as a fitting error and should be minimised. This is equivalent to maximising the tangential part of the PSF. It may then be seen that the tangential part of the PSF is given by projecting the result of the linear deconvolution onto $\mathcal{H}$:
$$h_{\parallel} = P_{\mathcal{H}}\, \mathcal{F}^{-1} \frac{I}{O}, \tag{15}$$
and from this one gets the motivation for the use of Eq. (11). In the presence of noise, any tangential component of the noise deconvolved with the object is indiscernible from the possible PSF and will add to the estimate:
$$i/o = h + w/o = \left( h_{\parallel} + (w/o)_{\parallel} \right) + \left( h_{\perp} + (w/o)_{\perp} \right), \tag{16}$$
where the notation a/b is abused to denote the result of the linear deconvolution of a with b. This might result in the convergence of the algorithm to a wrong solution, which nevertheless satisfies the imposed constraints.
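
As a concrete illustration of Eqs. (13)–(15), the sketch below (an assumption-laden example, not the published code) splits a PSF estimate into its tangential and normal parts with respect to a Boolean support mask X and applies the projection $P_{\mathcal{H}}$ by keeping only the non-negative, in-support part and re-normalising it to unit $L_1$ norm.

```python
# Illustrative decomposition of a PSF estimate into tangential (inside X) and
# normal (outside X) components, Eq. (13), and the projection P_H of Eq. (15).
import numpy as np

def split_tangential_normal(h, support_mask):
    h_tan = np.where(support_mask, h, 0.0)   # pixels belonging to X
    h_norm = np.where(support_mask, 0.0, h)  # the remaining pixels
    return h_tan, h_norm

def project_psf(h, support_mask):
    h_tan, _ = split_tangential_normal(np.maximum(h, 0.0), support_mask)
    s = h_tan.sum()
    return h_tan / s if s > 0 else h_tan     # unit L1 norm, cf. Eq. (12)
```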

By this novel alternating minimisation of the TIP algorithm, one ensures that when the image data is linearly deconvolved by the estimated OTFs $\{\hat{H}_n\}$, the object spectrum $\hat{O}$ is closest to its feasible set. The object spectrum and the OTFs have a mutual fidelity with regard to each other. In contrast, to the authors' knowledge, the majority of blind multi-frame deconvolution methods focus on maximising the fidelity of $H_n \cdot O$ to the image data $I_n$ within a regularisation framework that minimises the contribution of $W_n$, vis-à-vis Eq. (9).

Figure 1 is a flow chart of the algorithm. In it, the process of the TIP algorithm is given by four operators P1–P4. The details of these operators are given in Table 1, and X continues to denote the finite support of the PSFs, which may be a circle or a square of a tunable size. The algorithm starts with step P1 and uses δ-functions as the initial set of unknown PSFs $\{\hat{H}_n^{(0)}\}$, corresponding to minimal a priori knowledge about the PSFs.
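
For orientation, the following sketch strings the operators P1–P4 together in Python; it follows the flow chart under the assumptions stated above (non-negative, unit-norm PSFs on a centred support mask) and omits refinements such as apodisation, so it should be read as an outline rather than as the published Code 1 [26].

```python
# Outline of the TIP loop of Fig. 1: P1 multi-frame linear deconvolution,
# P2 projection of the object onto non-negativity, P3 per-frame linear
# deconvolution, P4 projection of the PSFs onto support/non-negativity/unit norm.
import numpy as np

def tip(frames, support_mask, n_iter=10, eps=1e-12):
    I = [np.fft.fft2(f) for f in frames]                  # image spectra I_n
    shape = frames[0].shape
    H = [np.ones(shape, dtype=complex) for _ in frames]   # delta-function PSFs (start of P1)
    for _ in range(n_iter):
        # P1: linear multi-frame deconvolution, the argmin of Eq. (10)
        O = sum(np.conj(Hn) * In for Hn, In in zip(H, I)) \
            / (sum(np.abs(Hn) ** 2 for Hn in H) + eps)
        # P2: project the object onto the feasible set (real, non-negative)
        o = np.maximum(np.real(np.fft.ifft2(O)), 0.0)
        O = np.fft.fft2(o)
        # P3 and P4: per-frame linear deconvolution, Eq. (11), then PSF projection
        psfs, H = [], []
        for In in I:
            h = np.fft.fftshift(np.real(np.fft.ifft2(In / (O + eps))))
            h = project_psf(h, support_mask)              # sketch defined above
            psfs.append(h)
            H.append(np.fft.fft2(np.fft.ifftshift(h)))
    return o, psfs
```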


Fig. 1 An algorithm flowchart showing the steps of TIP. The different projection steps are shown in different colours and the explicit operations are given by the lines between the boxes that contain the data. P1 is multi-frame linear deconvolution, P2 is the projection to the feasible set O, P3 is linear deconvolution for every frame, and P4 is the projection to the feasible set . The details are shown in Table 1.



Table 1. Steps of the TIP algorithm with their descriptions

4. Experiments

In this section, a set of experimental tests will be run to demonstrate the properties of the TIP algorithm. Firstly, the deconvolution results from the TIP algorithm will be presented to give an overview of its general performance and behaviour. Secondly, it will be compared with other blind multi-frame deconvolution algorithms and they shall be analysed in terms of their noise robustness and performance on experimental data.

4.1. Object and PSF reconstruction

The test image of the cameraman is chosen for use as it gives a grayscale image with some features that are quickly lost in the presence of aberrations. This image (512 × 512 px, 16-bit) is shown in Fig. 2 and is aberrated using randomly generated PSFs, based on 20 Zernike modes, to give 4 images created by multiplication of the object spectrum with the OTFs. A point to note here is that the diffraction-limited image of the scene is of lower resolution than the object in Fig. 2 due to the size of the PSF; therefore, it should not be expected that the algorithm reproduces the object to the same resolution, due to the band limitation. (In these images the loss is imperceptible.)


Fig. 2 On the left, the full-frame object is shown; the algorithm processes the image at this size. For clarity, a smaller region-of-interest (ROI) is reproduced on the right. This example shows the reconstruction of the TIP algorithm is very close to the object for this data set and provides an improvement over the input data. The PSFs for this dataset are shown in Fig. 3.


The result shown in Fig. 2 demonstrates that the TIP algorithm is able to correctly reconstruct the object o from this input data; furthermore, in Fig. 3 the reconstructions of the PSFs $\{\hat{h}_n\}$ show high fidelity to those used to generate the input images. The identification of the PSFs is not pixel perfect; however, it is good enough to produce a qualitatively good reconstruction of the object via the least-squares deconvolution (Eq. (6)). If one closely observes the edges in the object reconstruction, it is possible to see the effect of the errors in the PSF in the form of a halo.


Fig. 3 Comparison between the real PSFs, top row, used to generate the images seen in Fig. 2 and those retrieved by the TIP algorithm, bottom row. The size of the support is shown on the image, a circle with radius 11px.


4.2. Image types

The TIP result is better than any of the original observations and does not introduce any amplification of unwanted noise — the nemesis of linear deconvolution procedures. To demonstrate that this is not a fluke of a single image, the same generation process with new PSFs is used for different objects, and the results are shown in Fig. 4. Figure 4 shows that the algorithm works on the following images: a cell from fluorescence microscopy (bright features on a dark field); a photograph of a tulip (dark features on a light field); and the 1951 USAF test chart (binary). It can be concluded from this that the TIP algorithm is no better suited to one particular type of image than another; making use of minimal a priori information about the system, the algorithm is widely applicable. However, it is better suited to extended sources than to point-source images.


Fig. 4 Comparison between the object (top), an acquired frame (middle), and the TIP reconstructions (bottom) for different image types.


4.3. Empirical convergence

In this paper, the TIP algorithm has not been proven mathematically to converge; this is still a matter of ongoing research for the authors. What can be shown empirically, however, is that the algorithm does converge. To demonstrate this, 100 trial datasets of 16 images each are generated with different aberrations and no additive noise. These are processed using the TIP algorithm and the peak signal-to-noise ratio is recorded; here it is defined in dB for images normalised to the range [0, 1] at iteration k as:

$$\mathrm{PSNR}_k = -10 \log_{10}\!\left( \lVert o - \hat{o}_k \rVert_2^2 / M^2 \right), \tag{17}$$
where $\hat{o}$ follows the previous notation of being the estimated object. This is essentially a measure of the difference between the estimated object and the real object; the higher the number, the closer the result is to the correct solution.
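
For reference, this metric can be computed as in the short sketch below, assuming o and $\hat{o}$ are M × M arrays normalised to [0, 1].

```python
# PSNR of Eq. (17) for images normalised to [0, 1]; higher values mean a
# reconstruction closer to the true object.
import numpy as np

def psnr(o, o_hat):
    mse = np.mean((o - o_hat) ** 2)     # ||o - o_hat||_2^2 / M^2
    return -10.0 * np.log10(mse)
```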

A plot of these 100 datasets is shown in Fig. 5. Here all of the trialled datasets are shown to converge rapidly to a solution; this supports the claim that 10 iterations are enough to produce the solution, since the difference between 10 and 1000 iterations is minimal in linear space. What cannot be claimed, however, is that the algorithm always converges to the correct solution — even if for the given a priori information it is the optimal solution. The reason for this is simple: it is an inverse problem in a highly multidimensional space, which may have non-unique or trivial solutions. The goal of the constraints used is to restrict which of these solutions the algorithm is able to converge to. Once again, empirically one observes that if the a priori constraints are suitable, the TIP algorithm produces a good deconvolution result — how this can be asserted is tackled in the following section.


Fig. 5 Demonstration of the convergence of the TIP algorithm by comparison of the PSNR of the estimated object at each iteration step. The algorithm shows empirical convergence in the majority of cases. Occasionally, the algorithm switches to another behaviour, cf. the top black line; the cause of this numerical instability is still unknown.


4.4. Algorithm comparisons

In this sub-section, the performance of the TIP algorithm will be compared with existing methods found in the literature. Comparisons have been made here with: the algorithm described by Pakhomov and Losin [22, 23], referred to as PL, an alternating projection algorithm; a regularisation-based approach from Sroubek and Milanfar [16], written as MCD; and a Bayesian algorithm found in Katkovnik et al. [15], written as MLE.

Fair comparison with the work of others, especially for image-processing algorithms, is a difficult or even impossible task. As has been previously stated, many algorithms require or can accept extra a priori information. By tuning these hyper-parameters, the results of the same algorithm can vary hugely on the same datasets. For these reasons, to make these comparisons as fair and scientific as possible, only algorithms with available implementations not written by the authors have been used. The exception here is the Pakhomov and Losin algorithm, an extension of which was used in the authors' previous work [24, 25].

Care has been taken in this process to ensure the algorithms compared against performed as well as possible on the datasets used, by tuning their hyper-parameters to the best of the authors' knowledge and ability. The goal of these comparisons has been to show that, with the same minimal a priori information for all the algorithms, i.e. the support size of the PSF, TIP performs better. This does not imply that the others are not excellent deconvolution algorithms, or that they are not better than TIP under other conditions or with extra a priori information. Please note that some algorithms, such as MLE, have built-in a priori assumptions, which makes the comparison even more difficult.

4.4.1. Solution quality

Returning to the image of the cameraman as an object, 16 images are generated with additive Gaussian noise at one quarter ($2^{14}/2^{16}$) of the signal level, along with Poisson noise. To quantify the quality of the incident aberrated images, a blurred signal-to-noise ratio (BSNR) can be defined as:

$$\mathrm{BSNR} = 10 \log_{10}\!\left( \frac{\lVert S - \mu(S) \rVert^2}{\lVert W - \mu(W) \rVert^2} \right), \tag{18}$$
where S and W refer to the signal and the noise respectively, μ denotes the mean, and the units of BSNR are decibels (dB). This information is not accessible for the output images; therefore, to analyse these results quantitatively, one can compare the output PSNR. As further preparation, the images are apodised to remove the bright edges, and the images are given to the four algorithms with no additional processing. The results of these deconvolutions for a sub-region are shown in Fig. 6.
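
Since this quantity is only used to characterise the simulated inputs, a direct transcription of Eq. (18) suffices; the sketch below assumes the noise-free blurred signal and the realised noise are available as separate arrays.

```python
# BSNR of Eq. (18) in dB; `signal` is the noise-free blurred frame h_n * o and
# `noise` is the realised additive noise component.
import numpy as np

def bsnr(signal, noise):
    num = np.sum((signal - signal.mean()) ** 2)
    den = np.sum((noise - noise.mean()) ** 2)
    return 10.0 * np.log10(num / den)
```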


Fig. 6 Comparison between the object reconstructions for different algorithms with the same a priori information. The sub-region of the first input image is shown on the left-hand side and the object on the right-hand side.


In these reconstructions one does not achieve the same result with each of the algorithms. The difference between these reconstructions lies in the algorithms' ability to be robust to the noise present in the images. In this example, the noise contributions have been added at a level that ensures the other algorithms struggle and fail to correctly reconstruct the object. In this way, the superiority of the TIP algorithm for this task is demonstrated. It can be asserted without much argument that the TIP algorithm here produces the qualitatively and quantitatively better image, in a visual and in a PSNR sense respectively; see Table 2. The MLE and MCD solutions are corrupted by noise that destroys their dynamic range, and the PL algorithm has strong sinusoidal components that result in interference-like effects in its reconstruction. The TIP reconstruction, on the other hand, has a smoothness and fidelity to the object that is not seen in the other algorithms.


Table 2. Comparison between the PSNR for the images shown in Fig. 6. *BSNR is shown for the image.

4.4.2. Effect of additive noise

Whilst in the previous section it was demonstrated on one dataset that the TIP algorithm performed better with regard to noise, it is necessary to engage with this point further.

A similar plot of the convergence has been made in the presence of noise; it shows similar behaviour and has been omitted for the sake of brevity.

The other algorithms tested are very good blind multi-frame deconvolution algorithms; to show this, a comparison is now made over a range of BSNR values to identify the point at which TIP becomes the better option. In each case, the a priori PSF support size is given as the same.

A set of simulated fluorescence microscopy images, four images with different aberrations, is used for this purpose. These are generated in the same manner as previously, but are modified by changing the signal-to-noise ratio. Each of the aberrated images is generated so that the signal fills the full 16-bit range. To this, Poisson noise is added, and then Gaussian noise is added with a standard deviation of increasing bit levels, i.e. σ = $2^b$ where b = 0, 1, …, 15. The result is then renormalised to use the full 16-bit range again, such that signal information is lost by casting to the nearest integer value. Explicitly, this is given by:

$$i_n = (2^{16}-1)\,\frac{h_n * o}{\max(h_n * o)} + \mathrm{randp}\!\left(\lambda = (2^{16}-1)\,\frac{h_n * o}{\max(h_n * o)}\right) + \mathrm{randn}\!\left(\mu = 0, \sigma = 2^{b}\right), \qquad i_n \leftarrow (2^{16}-1)\,\frac{i_n}{\max(i_n)}. \tag{19}$$

As a result of this process a set of images is produced with decreasing signal-to-noise ratio, such that the signal is encoded in a decreasing number of bits, i.e. $2^{16-b}$. Essentially, for the last set, the signal has a binary encoding. These values may also be converted easily into BSNR values.
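
A possible transcription of this degradation pipeline is sketched below; the random-number interface and the use of the image maximum for normalisation are assumptions made for illustration.

```python
# Sketch of the degradation of Eq. (19): scale the blurred object to the 16-bit
# range, add Poisson and Gaussian (sigma = 2**b) noise, renormalise and quantise.
import numpy as np

def degrade(blurred, b, rng=None):
    rng = rng or np.random.default_rng()
    full = 2 ** 16 - 1
    scaled = full * blurred / blurred.max()
    noisy = scaled + rng.poisson(scaled) + rng.normal(0.0, 2 ** b, blurred.shape)
    noisy = np.clip(noisy, 0, None)
    return np.round(full * noisy / noisy.max()).astype(np.uint16)
```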

The algorithms are run on these sets of four images for the different noise levels and the results are shown in Fig. 7. Here the first column shows the converted BSNR value for the bit encoding. The second corresponds to the first image in the series, the third to PL, the fourth to MLE, the fifth to MCD, and the last to TIP. The even values of b = 0, 2, …, 12 are represented here, discarding b = 14, in order to reduce the space, but what is displayed is sufficient to see the trends and performance of the algorithms. In the top right corner the object is shown for comparison with the solutions.


Fig. 7 Comparison of algorithm performance vs. increasing noise level for four input frames with the same a priori information. The first column is the first image from the set i1. The second column is PL, the third MLE, the fourth MCD, and the final column is the TIP algorithm. The object is shown in the top right-hand corner.


As with the previous section, one observes that the TIP algorithm is the most robust to the noise conditions and consistently produces the best reconstruction under all noise conditions when compared with the other algorithms. Even at the high signal-to-noise ratios the other algorithms fail to reproduce the object, as seen in the figure; even though MLE produces a sharp image, it does not accord as accurately with the object used to generate the images as TIP does.

4.4.3. Telescope tests

Up to this point, the TIP algorithm has been shown to work on different simulated images under different noise conditions. As the final verification of the TIP methodology, the performance of the algorithm is compared with the other algorithms on experimental data from an amateur telescope (20 cm diameter). These images differ from the model assumed in Eq. (1), as in astronomical observation the aberration is mostly not isoplanatic.

The object being observed is Jupiter and its moons through the turbulent atmosphere. The dataset comprises 20 images acquired in a time lapse, resulting in a drift in the position of the object. This is corrected beforehand with cross-correlation and is the same for all algorithms. After this registration procedure the images are processed by the PL, MLE, and TIP algorithms, using ≤10 iteration steps for each. The results of the reconstructions are shown in Fig. 8 along with example frames from the sequence of images. A small region (128 × 128 px) of the original processed field-of-view (512 × 512 px) has been reproduced to allow the observation of the details. One common feature for all the algorithms is the amplification of noise when processing images such as this. For this reason, the MLE algorithm was stopped once it started to diverge and produce worse estimates of the object. It is clear from this test that on this dataset the TIP algorithm performs better than the others and produces an improvement over the best or 'lucky' image of the sequence.


Fig. 8 The performance of the algorithm on test images of Jupiter and its moons from a small amateur telescope. The top row shows four frames from the acquisition; the bottom row shows the PL reconstruction, the TIP reconstruction, the best frame from the sequence, and the MLE reconstruction. The TIP algorithm performs better under these conditions than the other algorithms tested.


One concludes that the TIP framework produces the best results on this dataset, thereby experimentally demonstrating the robustness of the algorithm to the sources of noise found in scientific imaging applications, that is, those with low light and low signal-to-noise ratios.

As a final test, the results of the TIP algorithm are compared with MCD on experimental data. The dataset is acquired with a telescope through horizontal turbulence and is of a construction crane. Of this larger dataset, 16 frames are chosen in which the object is severely distorted by the turbulence, but which are sequential in time. One of the best images in the sequence is taken as a reference and is shown in Fig. 9. The first frame of the sequence is reproduced too, along with the reconstructions for the MCD and the TIP algorithms. The algorithms are provided with the same a priori information, i.e. the PSF support is a box of 9 × 9 pixels. The images have been apodised for the sake of the Fourier-domain processing in both algorithms.


Fig. 9 The performance of the algorithm on experimental images of a crane through horizontal turbulence. The top row shows the first image from the sequence of 16 frames with high turbulence, the middle shows the MCD solution, and the right-hand side the TIP solution. The second row shows a low-turbulence frame, not included in the processing, and a zoomed region-of-interest outlined by the yellow box.


If one compares the results, the MCD solution is heavily filtered to suppress the effect of the noise in the images; it is clear that this involves information loss that has been restored in the TIP solution. The drawback of the TIP methodology is that there is a correlation with the noise, which may be reduced with more observation frames. The key result here is that features of the crane that are not visible in the unprocessed image sequence have been restored, as can be seen by comparison with the low-turbulence image.

5. Discussion

In this section, the motivation, limitations and strengths of the TIP algorithm will be explored, provided as a contrast to the description given in Section 3. To begin, TIP does not yet fit into a mathematically neat and tidy framework; however, it is stressed that determining why the algorithm works as observed in the experimental results is an ongoing piece of research. What can be discussed has been discovered empirically, but not yet rigorously proven mathematically.

TIP was designed to act as an alternative to adaptive optics in wide-field microscopic imaging. In this end application, near real-time deconvolution of the images is desired, and one assumes that it is possible to acquire more frames at will, albeit at some cost. For this reason, TIP is designed to work on images of low signal-to-noise ratio extended objects, where the temporal dynamics of the medium are much faster than the object's and where an upper bound on the size of the PSF can be estimated. Furthermore, the PSF distortion is assumed to be caused by the presence of non-uniform media. TIP was not designed to treat motion blur, for example, and will not work when there are large displacements of the object. In fact, for optimal performance, images should be pre-registered before processing with TIP.

The application gives rise to the two a priori assumptions that have been used in TIP. Firstly, one can be reasonably certain that most of the energy of the PSF is contained within a small region determined by the optics and the phase aberration statistics. The residual outside this region can be safely regarded as having a negligible effect on the final image. This is a highly advantageous a priori assumption as it dramatically reduces the number of unknowns within the optimisation problem. Secondly, the object's temporal dynamics are assumed to be negligible over the period of acquisition; this is justifiable since the frame rate of cameras can run much faster than many dynamical processes found in nature.

What these two a priori assumptions correspond to mathematically is a constraint of finite support on both the object and the PSFs. The support for the object is along the temporal dimension and the support for the PSF is along the spatial dimension. The motivation for finite support has been physically justified above; however, in terms of signal processing this has the intrinsic problem that no finitely supported function is bandlimited, whereas in fact the image and PSFs are bandlimited. In practice, one finds that if the support is sufficient, the PSF has decayed such that there is no hard edge and no ill effect is observed.

The authors hope that the algorithm will be useful in other imaging modalities, on which it has not yet been possible for the authors to test it. The version of the algorithm used in the experiments in this paper, implemented in both Matlab and Python, can be found with examples in Code 1 [26]. It may be noted that, since the image spectra are produced from the images by discrete Fourier transforms (DFTs), zero-padding may be used to reduce any edge effects. Moreover, it is often helpful to apodise the images, optically or computationally, to further reduce these limited-domain problems and the noise amplification at high spatial frequencies. Apodisation imposes spatial finite support on the object and is a consequence of the finite information content of an image without extra a priori information.
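
As an example of the computational route, one simple option (an illustrative choice, not necessarily the window used in Code 1) is to multiply each frame by a separable Tukey window before taking the DFTs.

```python
# One possible computational apodisation: a separable Tukey window tapering the
# image edges to zero so that the periodic DFT sees no hard boundary.
import numpy as np
from scipy.signal.windows import tukey

def apodise(image, alpha=0.2):
    wy = tukey(image.shape[0], alpha)
    wx = tukey(image.shape[1], alpha)
    return image * np.outer(wy, wx)
```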

To start addressing the limitations, the TIP algorithm is specifically and exclusively a multi-frame (MF) algorithm. It does not work with a single frame (SF), that is, the case of N = 1. The reason it does not work is that it is designed for situations where one has minimal a priori information. One provides only the PSF support information to the algorithm; this implies that all the other necessary information must be present in the multiple frames acquired. TIP cannot interpolate, extrapolate or supplement the information in the datasets using external a priori information provided by the user. This naturally imposes limitations on the performance of the algorithm, leading to datasets where the algorithm will not be successful in finding the correct $\{\hat{H}_n, \hat{O}\}$.

In Fig. 10, it is demonstrated that the TIP algorithm, starting from a δ-function PSF, has no information with which to generate the true PSF; therefore, it remains a δ-function. The output object is then the same as the input image. However, one observes that when the dataset is extended by three more frames, making it multi-frame, the algorithm is able to extract the object information better. It should be noted that the SF solution found does satisfy the constraints, but it is not the correct solution.


Fig. 10 A demonstration that single-frame use of the TIP algorithm yields the trivial δ-function PSF due to lack of information in the dataset. Images have been cropped.



Fig. 11 An example deconvolution problem where the TIP algorithm does not perform better than another algorithm IDAC. This stellar deconvolution problem of 9 frames from [27] cannot be reduced to δ-functions with the TIP algorithm.


This is the first and primary limitation of the algorithm: N > 1. All the other limitations of the algorithm can be related to this first one, that is, to the information content of the dataset. This condition may be formalised; for the algorithm to work optimally, the summed OTF spectrum must cover all frequencies within the bandlimit of the system and have no common zeros:

$$\sum_{n=1}^{N} |H_n(\nu_m)| > 0 \quad \forall\, |\nu_m| \leq \nu_{\max}, \tag{20}$$
where $\nu_{\max}$ is the bandlimit of the signal. If these conditions are not met, the algorithm will not be able to separate the zeros of the OTFs from the zeros of the object spectrum, resulting in a poor estimate of the object, as extra zeros in the spectrum correspond to missing information. In this case, more frames should be added to the dataset. The number of frames N required for this condition to be met depends on the PSF support size chosen, the temporal aberration statistics, and the signal-to-noise ratio. It is not easy to quantify this relationship; therefore, when used experimentally, one can simply acquire more images until the information content is high enough to produce a good result. It should be noted, however, that if there is a static component of the aberration, not removed by the temporal dynamics, then TIP can never remove this static component.
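
In practice, Eq. (20) can be checked numerically for a candidate set of PSF estimates, as in the sketch below; the threshold and the pixel-unit bandlimit radius are assumed inputs chosen by the user.

```python
# Numerical check of the coverage condition of Eq. (20): within the bandlimit
# the summed OTF magnitude should have no (near-)zeros.
import numpy as np

def otfs_cover_bandlimit(psfs, radius, threshold=1e-3):
    shape = psfs[0].shape
    total = sum(np.abs(np.fft.fft2(np.fft.ifftshift(h))) for h in psfs)
    fy = np.fft.fftfreq(shape[0]) * shape[0]      # integer frequency coordinates
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    inside = (fy[:, None] ** 2 + fx[None, :] ** 2) <= radius ** 2
    return bool(np.all(total[inside] > threshold))
```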

Another case where the algorithm will not perform well is with sparse images, an example of which is stellar images. Here one is looking at point sources of differing intensity. Due to the simplicity of the object spectrum, the TIP algorithm struggles under these conditions to separate the zeros between the spectra. To demonstrate this, a comparison is made with some data found on the IDAC website [27], as seen in Fig. 11. The bandwidth of the object is much greater than the OTF bandwidth, and TIP cannot restore this information in the manner of some other deconvolution algorithms such as IDAC. This leads to another limitation of the TIP algorithm: the following condition must be met:

$$\{H_n(\nu_m)\}_{\epsilon > 0} \supseteq \{O(\nu_m)\}_{\epsilon > 0}, \tag{21}$$
where ϵ is some small threshold applied to the spectra. Another limitation this implies is that TIP cannot super-resolve features beyond the optical system's bandlimit, as the OTF is zero in this region.

Many of the strengths of TIP have been discussed throughout the article and what is placed here serves as a clarification of important points. Firstly, the quality of the solution, in a PSNR sense, produced by the TIP algorithm mainly arises from the lack of noise amplification due to the choice of the P3 and P4 projections — this is the "tangential projection." This robustness to noise arises because the algorithm, in its operation, indirectly filters the noise by reducing its normal component. In this sense, the reconstructed images $\hat{O}\cdot\hat{H}_n$ produced from this pair of $\hat{O}$ and $\hat{H}_n$ with the aforementioned mutual fidelity can be seen as de-noised versions of the input images $I_n$, reconstructed with the minimum possible a priori knowledge. They could be used further as an input for any other algorithm making use of other a priori information. There is no process that can remove the tangential component of the noise without further a priori knowledge about the noise.

Secondly, TIP's operations are simple and fast, both in the sense of requiring ≈ 10 iterations before producing a usable estimate and in the number of floating-point operations required. For example, for a sufficiently small number of pixels it is possible to run this algorithm for real-time image acquisition. The unoptimised Python and MATLAB implementations used to generate the figures in this paper processed at an average speed of $2.2 \times 10^{-7}$ MNK seconds, i.e. per pixel per frame per iteration — for example, $2.2 \times 10^{-7} \times 256 \times 256 \times 4 \times 10 = 0.58$ seconds. Even with larger images, the speed may be greatly increased if only a sub-section of the image is used to identify the PSFs before deconvolving the whole dataset using the identified optimal linear filter; this possibility depends on the SNR.
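
The sub-section strategy can be sketched as follows: run TIP on a crop to identify the PSFs, embed them in full-size arrays, and apply the linear filter of Eq. (6) once to the full frames. The crop location and the helper names reuse the earlier sketches and are illustrative assumptions, not the published implementation.

```python
# Identify the PSFs on a small crop, then deconvolve the full frames once with
# the linear multi-frame filter; `tip`, `project_psf` and
# `linear_multiframe_deconvolution` refer to the sketches above.
import numpy as np

def deconvolve_with_subsection(frames, crop_support_mask, crop=np.s_[:256, :256]):
    crops = [f[crop] for f in frames]
    _, small_psfs = tip(crops, crop_support_mask)          # crop-sized, centred PSFs
    full_psfs = []
    for h in small_psfs:
        full = np.zeros(frames[0].shape)
        y0 = (full.shape[0] - h.shape[0]) // 2
        x0 = (full.shape[1] - h.shape[1]) // 2
        full[y0:y0 + h.shape[0], x0:x0 + h.shape[1]] = h   # embed the centred PSF
        full_psfs.append(full)
    return linear_multiframe_deconvolution(frames, full_psfs)
```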

Finally, whilst the convergence of the algorithm has not yet been proved, the TIP algorithm is not seen to diverge empirically in the majority of cases tested. This is attested to by the measurements made in this paper; see Section 4.3. The solution is stable under a high number of iterations (up to 1000 have been tested) and under the addition of more input frames, which for TIP only serves to increase the quality of the reconstruction by boosting the overall SNR.

To conclude, in this paper the TIP algorithm has been presented as a multi-frame blind deconvolution algorithm that uses minimal a priori information. It has been found to have several key advantages when compared with other algorithms under these conditions and this comes in the form of: the quality of final solution (Section 4.4.1), its robustness to noise (Section 4.4.2), and its real-world applicability (Section 4.4.3).

This technique is, however, not perfect and improvements are possible, especially with regard to the final image reconstruction once the point-spread functions have been elucidated. Whilst the process of tangential projections provides the optimum point-spread functions, these can still be affected by the tangential noise component; therefore, it would be possible to find a better non-blind deconvolution method than the one currently used in the algorithm. This would remove, or at least reduce, the correlation with the noise that is still sometimes present in the final reconstruction, as may be seen in Fig. 8 and Fig. 9.

The authors hope that further research will yield a satisfying mathematical proof of the operation and convergence of the algorithm to augment this article’s empirical results along with further experimental validation of this technique for the correction of phase aberrations in scientific imaging applications.

Funding

European Union’s Seventh Framework Programme FP7/2007-2013 (339681); Russian Ministry of Education (“5 in 100” Programme).

Acknowledgments

We were kindly given permission by M. Loktev to use the experimental image sequences from the telescope. We would also like to thank W.J.M. van Geest and C.J. Slinkman for their technical support, and Flexible Optical B.V. for their ongoing contributions to our work.

References and links

1. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

3. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21, 215829 (1982). [CrossRef]  

4. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992). [CrossRef]  

5. G. Ayers and J. C. Dainty, “Iterative blind deconvolution method and its applications,” Opt. Lett. 13, 547–549 (1988). [CrossRef]  

6. T. J. Schulz, “Multiframe blind deconvolution of astronomical images,” J. Opt. Soc. Am. A 10, 1064–1073 (1993). [CrossRef]  

7. L. P. Yaroslavsky and H. J. Caulfield, “Deconvolution of multiple images of the same object,” Appl. Opt. 33, 2157 (1994). [CrossRef]   [PubMed]  

8. T. Zhang, H. Hong, and J. Shen, “Restoration algorithms for turbulence-degraded images based on optimized estimation of discrete values of overall point spread functions,” Opt. Eng. 44, 017005 (2005). [CrossRef]  

9. C. R. Vogel, T. Chan, and R. Plemmons, “Fast algorithms for phase diversity-based blind deconvolution,” Adaptive Optical System Technologies 3353, 994–1005 (1998). [CrossRef]  

10. D. Turaga and T. E. Holy, “Image-based calibration of a deformable mirror in wide-field microscopy,” Appl. Opt. 49, 2030–2040 (2010). [CrossRef]   [PubMed]  

11. P. M. Johnson, M. E. Goda, and V. L. Gamiz, “Multiframe phase-diversity algorithm for active imaging,” J. Opt. Soc. Am. A 24, 1894–1900 (2007). [CrossRef]  

12. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series vol. 7 (MIT, 1949).

13. P. Jansson, Deconvolution of Images and Spectra (Academic, 1997).

14. J. Idier, ed., Bayesian Approach to Inverse Problems (Wiley, 2008). [CrossRef]  

15. V. Katkovnik, D. Paliy, K. Egiazarian, and J. Astola, "Frequency domain blind deconvolution in multiframe imaging using anisotropic spatially-adaptive denoising," in Proceedings of the 14th European Signal Processing Conference (IEEE, 2006), pp. 1–5.

16. F. Sroubek and P. Milanfar, “Robust multichannel blind deconvolution via fast alternating minimization,” IEEE Trans. Imag. Process. 21, 1687–1700 (2012). [CrossRef]  

17. S. Chaudhuri, R. Velmurugan, and R. Rameshan, “Blind deconvolution methods: A review,” in “Blind Image Deconvolution,” (Springer, 2014), pp. 37–60.

18. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19, 1334–1345 (2002). [CrossRef]  

19. D. R. Luke, “Finding best approximation pairs relative to a convex and prox-regular set in a hilbert space,” SIAM J. Optim. 19, 714–739 (2008). [CrossRef]  

20. G. Li, C. Wen, and A. Zhang, “Fixed point iteration in identifying bilinear models,” Syst. Control Lett. 83, 28–37 (2015). [CrossRef]  

21. T. F. Chan and C.-K. Wong, “Convergence of the alternating minimization algorithm for blind deconvolution,” Linear Algebra Appl. 316, 259–285 (2000). [CrossRef]  

22. A. Pakhomov and K. Losin, "Processing of short sets of bright speckle images distorted by the turbulent Earth's atmosphere," Opt. Commun. 125, 5–12 (1996). [CrossRef]  

23. A. Pakhomov, “Fast digital processing of images of artificial earth satellites,” J. Commun. Technol. Electron. 52, 1114–1124 (2007). [CrossRef]  

24. M. Loktev, G. Vdovin, O. Soloviev, and S. Savenko, “Experiments on speckle imaging using projection methods,” in “SPIE Optical Engineering+ Applications,” J. J. Dolne, T. J. Karr, V. L. Gamiz, S. Rogers, and D. P. Casasent, eds. (International Society for Optics and Photonics, 2011), pp. 81650M.

25. M. Loktev, O. Soloviev, S. Savenko, and G. Vdovin, “Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation,” Opt. Lett. 36, 2656–2658 (2011). [CrossRef]   [PubMed]  

26. D. Wilding, O. Soloviev, P. Pozzi, G. Vdovin, and M. Verhaegen, “Tangential Iterative Projections (TIP) Source Code,” figshare (2017) [retrieved 7 August 2017], https://doi.org/10.6084/m9.figshare.5281087.

27. J. Christou and K. Hege, “Iterative Deconvolution Algorithm in C (IDAC)” (Arizona Board of Regents, 2000). http://cfao.ucolick.org/software/idac/

Supplementary Material

Code 1: TIP Algorithm Source Code
