Super resolution imaging with noise reduction and aberration elimination via random structured illumination and processing

Open Access

Abstract

We describe an improved aberration-free super resolution imaging system that uses binary speckle patterns (BSP) for encoding and decoding. According to the scheme, the object is multiplied by a fine random speckle pattern and, subsequently, the poorly resolved acquired image is multiplied by the same pattern, stored digitally. Summing many samples of such operations restores a high quality image. In this paper we define encoding and decoding BSPs that form complementary sets. Contrary to using totally random BSPs, the use of complementary sets reduces the noise of the resulting image significantly. Moreover, they allow reducing the number of operations necessary to restore the image. We found that using sparse BSPs, followed by a stochastic noise reduction algorithm fitted to the decoding process, reduces the noise significantly and allows the use of even a single set of complementary BSPs for obtaining excellent results. Simulation results are presented both for binary images and for gray level objects.

©2007 Optical Society of America

1. Introduction

The design of conventional high quality imaging systems involves much effort spent on aberration balancing to achieve an extended field of view. Once the corrected lens, which usually consists of several components, is assembled, its performance is usually limited by defocus, which defines the allowed axial shift of the optical system elements. Thus, if an extended depth of field (DOF) in object space is required, the resolution performance is inevitably reduced.

The effects of aberrations on optical imaging have been extensively covered in the last decades [1-3]. As for defocus aberration, increasing the DOF of imaging systems has been the subject of many recent studies [4-11]. The oldest ways to increase the DOF of an imaging system are by stopping down the aperture or by using an annular ring as a pupil mask [1]. The main disadvantage of both ways is the reduced light throughput that is associated with the smaller transparent aperture area. The latter method, however, allows one to obtain any desired resolution for an arbitrary DOF, albeit with poor contrast. Modern optical processing methods, such as using diffractive optical elements as pupil masks, were employed in recent years in order to achieve extended DOF while maintaining resolution as high as possible. Such solutions are based on attaching a mask to the system aperture without reducing its size [4-11]. When the mask is a phase mask, light throughput is essentially unchanged. Nevertheless, resolution is always reduced with respect to the diffraction limit in the case of extended DOF imaging. Attempts to restore some of the lost resolution with combined optical-digital hybrid imaging systems were made recently [9-11], and improved performance has indeed been achieved. In such hybrid systems, the optically acquired image is corrupted in a known way, designed to be relatively robust to defocus inaccuracies, so that a single digital restoration filter may provide a digital output image with diffraction-limited performance, at least in the case of a noise-free system [10-11].

In parallel, during the last decades, concepts imported from information theory were applied in several studies regarding the information that an imaging system can transmit, which finally allowed an improvement of the spatial resolution of an imaging system beyond the classical limit. The idea is based on the fact that the number of degrees of freedom of an optical signal that an imaging system can transmit is constant. Having some a-priori knowledge about the signal, one can use redundant degrees of freedom to increase the spatial bandwidth, while the total number of degrees of freedom remains unchanged [12-19]. The a-priori known information can be associated with several classes, such as object shape [14], temporally restricted signals [15-17], wavelength restricted signals [18], and polarization restricted signals [19]. Recently, a novel approach for temporally restricted signals, based on speckle pattern encoding and decoding instead of the common usage of periodic gratings, was introduced [20]. There, an encoding speckle pattern was created optically on an object. Several images of the object, multiplied by spatially shifted versions of the speckle, were imaged through a low resolution imaging system. Each acquired image was multiplied by the exact digital image of the encoding shifted speckle and all images were thereafter summed together, to finally provide improved image resolution. The procedure was shown to work only under an in-focus condition. Moreover, exact knowledge of the actual encoding speckle pattern, created on-line by a light source, was required for the decoding, thus making this approach impractical. Recently, an imaging system that combines extended DOF and aberration independence, as well as resolution beyond the classical limit, has been presented [21]. There, the resulting digital image is independent of the system aperture, and the finest detail is defined by the grain size of the speckle. Nevertheless, achieving a noise free output image with a resolution beyond the diffraction limit, which is also aberration independent, would require a very large number of encoding speckle patterns. In that study, we had to use 400 binary speckle patterns to obtain acceptable imagery. This was cumbersome to carry out, and the resulting output image was still noisy. For a finite number of binary speckle patterns used for encoding and decoding, the larger the aberration, the higher the noise level in the output image.

In the present work we analyze the characteristics of the noise that is superposed onto the output image when one uses a modified finite binary speckle pattern set, followed by a noise reduction algorithm. Such an approach reduces the noise level in the output image even if a significantly smaller number of speckle encoding patterns is used, and it still provides super resolution in the presence of a wide aberration range. The imaging system described in this paper is practical in particular for microscopy, where the resolving power of the objective lens is increased.

2. Theory

2.1. Aberration-free superresolution imaging system

We will first review briefly the theory of achieving superresolution in imperfect imaging systems, in the presence of aberrations, via encoding the object and decoding the acquired optical image with a binary speckle pattern (BSP) of high resolution. The resolution of the BSP mask will finally determine the image resolution. The BSP consists of pixels that may be either fully transparent, with a probability of p, or fully opaque, with a probability of 1-p. A detailed description as well as experimental results were presented recently [21]. The main features are repeated below, as deemed necessary. A block diagram of the imaging system is shown in Fig. 1.

Fig. 1. Imaging System Block diagram

Assuming spatially incoherent illumination, one sequentially multiplies the object by each one of N different BSPs, one at a time, e.g. by a spatial light modulator (SLM) positioned next to the object plane, to obtain N different encoded objects:

$$I_{en}(x;n) = I_{obj}(x)\, S_n(x) \tag{1}$$

Here, Iobj(x) is the object intensity, Sn(x) is the n-th BSP distribution and Ien(x,n) is the n-th encoded object. Restricting our discussion to isoplanatic imaging systems, the image of each encoded object is expressed, in case of quasi-monochromatic illumination, by:

$$I_F(x;n) = \kappa \int I_{en}\!\left(\frac{x'}{M};n\right) \left|h(x-x')\right|^2 dx' + \xi_n(x) \tag{2}$$

where h(x) is the point spread function (PSF) exhibited by the aberrated imaging system, κ is a proportionality coefficient, M is the lateral magnification and ξn(x) is a zero mean additive Gaussian detector noise, associated with the n-th acquired image at pixel x in the image plane. Let us further assume that the variance of the Gaussian detector noise is equal to σ². The acquired image is blurred since the PSF region of support is assumed to be wider than the object's finest details as well as wider than the fine details of the BSP distribution. Thus, the finest details of the BSP as well as of the object are not resolved in the N captured images, and digital decoding must be used in order to attempt the recovery of image information.
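To make the encoding and acquisition steps concrete, the following minimal one-dimensional sketch (in Python/NumPy) implements Eq. (1) and Eq. (2): the object is multiplied by one random BSP, and the encoded object is then blurred by |h|² and corrupted by additive Gaussian detector noise. The object, the rectangular intensity PSF and all parameter values are illustrative assumptions of ours, not taken from the paper; unit magnification is assumed.

import numpy as np

rng = np.random.default_rng(0)

n_pix = 256                       # number of object/image pixels (illustrative)
p = 0.25                          # probability of a transparent BSP pixel
sigma_det = 0.01                  # detector noise STD (sigma in the text)

i_obj = np.zeros(n_pix)           # simple binary object: two narrow bars
i_obj[100:104] = 1.0
i_obj[110:114] = 1.0

# Wide intensity PSF |h|^2 (a normalized 9-pixel rectangle), mimicking an
# aberrated system whose support exceeds the finest object and BSP details.
psf = np.zeros(n_pix)
psf[:9] = 1.0 / 9.0
psf = np.roll(psf, -4)            # center the kernel at index 0 for circular convolution

def acquire(i_obj, s_n):
    """Encode the object with one BSP (Eq. (1)) and image it through the blur (Eq. (2))."""
    i_en = i_obj * s_n                                        # Eq. (1), unit magnification assumed
    i_blurred = np.real(np.fft.ifft(np.fft.fft(i_en) * np.fft.fft(psf)))
    return i_blurred + rng.normal(0.0, sigma_det, n_pix)      # add detector noise xi_n(x)

s_1 = (rng.random(n_pix) < p).astype(float)                   # one random BSP, S_1(x)
i_f_1 = acquire(i_obj, s_1)                                   # one poorly resolved frame, I_F(x;1)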

The decoding is carried out by multiplying each one of the N captured images, IF(x;n), by the exact corresponding image of the encoding BSP, and averaging over all multiplications, i.e.:

$$I_{out}(x) = \frac{1}{N}\sum_{n=1}^{N} I_F(x;n)\, S_n\!\left(\frac{x}{M}\right) \tag{3}$$

Explicitly, one obtains:

$$I_{out}(x) = \kappa \int I_{obj}\!\left(\frac{x'}{M}\right) \left|h(x-x')\right|^2 \left[\frac{1}{N}\sum_{n=1}^{N} S_n\!\left(\frac{x'}{M}\right) S_n\!\left(\frac{x}{M}\right)\right] dx' + r(x) \tag{4}$$

where r(x) is defined as:

$$r(x) \equiv \frac{1}{N}\sum_{n=1}^{N} \xi_n(x)\, S_n\!\left(\frac{x}{M}\right)$$

Let us first ignore the term r(x) and examine only the term in brackets in Eq. (4), which is essentially the correlation function of the BSP distribution that couples the spatial variable x in the image plane with x' in the object plane [21]. For large N, the correlation function approaches the limit:

$$\lim_{N\to\infty}\left(\frac{1}{N}\sum_{n=1}^{N} S_n\!\left(\frac{x'}{M}\right) S_n\!\left(\frac{x}{M}\right)\right) = \Gamma(x-x') = p^2 + p(1-p)\,\Lambda\!\left(\frac{x-x'}{B}\right) \tag{5}$$

where p is the probability of a pixel to be transparent in the BSP, B is the lateral dimension of a BSP pixel and Λ(x)=1-∣x∣ for ∣x∣<1 and zero elsewhere. The term p² in Eq. (5) is a constant "pedestal", responsible for displaying the blur image, as acquired in a conventional imaging system. The second term, with the narrow triangle function whose region of support is twice the size of the BSP finest detail, multiplies the ∣h(x-x')∣² term and thus generates a new narrow PSF, independent of blur and aberration, that has the ability to faithfully restore the original image. As in our previous work [21], the blur image is calculated on-line by averaging all captured images:

$$I_{blur}(x) = \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N} I_F(x,n) = \int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 dx' \tag{6}$$

where for a large enough N value (N>50) the statistical average over the additive Gaussian detector noise is assumed to be zero. This blur image is then subtracted from the output image [Eq. (4)] to retain the restored aberration free high resolution image as well as the noisy term r(x) that is carried through:

$$I_{hr;\infty}(x) = I_{out}(x) - p\, I_{blur}(x) = (1-p)\int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 \Lambda\!\left(\frac{x-x'}{B}\right) dx' + r(x) \tag{7a}$$

Since the stochastic processes ξn(x) and Sn(x/M) are independent, the statistical average over r(x) is readily calculated to be zero. Moreover, since the additive Gaussian noise patterns in different acquired images (n≠n') are uncorrelated, the variance of r(x) is calculated as follows:

$$\left\langle r^2(x)\right\rangle = \frac{1}{N^2}\sum_{n}\sum_{n'} \left\langle S_n\!\left(\tfrac{x}{M}\right) S_{n'}\!\left(\tfrac{x}{M}\right)\right\rangle \left\langle \xi_n(x)\,\xi_{n'}(x)\right\rangle = \frac{1}{N^2}\sum_{n=n'} \left\langle S_n\!\left(\tfrac{x}{M}\right) S_{n'}\!\left(\tfrac{x}{M}\right)\right\rangle \left\langle \xi_n(x)\,\xi_{n'}(x)\right\rangle \le \frac{1}{N^2}\sum_{n} \left\langle \xi_n^2(x)\right\rangle = \frac{\sigma^2}{N} \tag{7b}$$

Here we used the fact that the product S_n²(x/M) takes binary values of zero or one and thus its statistical average is bounded by one. The variance of r(x) is actually bounded by the variance of the accumulated additive noise obtained if no digital BSPs are used. Since throughout this work we use values of N higher than 50 and well illuminated scenes are assumed, the term r(x) will be neglected in the rest of the article, unless explicitly specified. Thus, using the above assumptions, Eq. (7a) becomes:

$$I_{hr;\infty}(x) = I_{out}(x) - p\, I_{blur}(x) = (1-p)\int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 \Lambda\!\left(\frac{x-x'}{B}\right) dx' \tag{7c}$$

This procedure fails in the presence of defocus aberration, since in such a case the magnification is unknown, as it depends on the object's exact location within the depth of field range. In the presence of defocus, the digital decoding process shown in Eq. (3) - Eq. (7c) should be carried out in several parallel channels, each corresponding to a certain assumed location within the DOF. Finally, one can determine the "winning channel" by a hard decision process [21].

The theoretical analysis presented in Eq. (5) - Eq. (7c) is an idealized calculation that assumes an infinite number of BSPs. In such an idealized experiment the resulting equivalent PSF region of support is determined by the width of the correlation peak, which is twice the BSP pixel dimension. Therefore, the image is indeed aberration free, since the image is formed from the encoded original object, independent of aberrations introduced by the optical system. However, since in practice only finite N values are used, the output quality deteriorates as the PSF region of support increases, as was demonstrated experimentally [21]. Therefore, it is important to investigate the statistics of the BSPs in order to establish whether a reduced number of BSPs may be sufficient to maintain a certain output image quality.
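The complete encode/decode cycle of this section can be summarized by a short one-dimensional simulation. The sketch below (Python/NumPy) accumulates the decoded average of Eq. (3) and the on-line blur estimate of Eq. (6), and then subtracts the pedestal term as in Eq. (7c). The object, PSF, p and N are illustrative choices of ours, unit magnification is assumed, and detector noise is omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)
n_pix, p, n_frames = 256, 0.25, 400        # illustrative values

i_obj = np.zeros(n_pix)                    # binary object: two narrow bars
i_obj[100:104] = 1.0
i_obj[110:114] = 1.0

# Normalized 9-pixel rectangular intensity PSF, centered for circular convolution.
psf = np.roll(np.concatenate([np.full(9, 1 / 9.0), np.zeros(n_pix - 9)]), -4)
blur = lambda f: np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(psf)))

i_out = np.zeros(n_pix)                    # accumulator for Eq. (3)
i_blur = np.zeros(n_pix)                   # accumulator for Eq. (6)
for _ in range(n_frames):
    s_n = (rng.random(n_pix) < p).astype(float)   # random encoding BSP, S_n(x)
    i_f = blur(i_obj * s_n)                       # acquired frame, Eq. (2) without noise
    i_out += i_f * s_n / n_frames                 # decode: multiply by the same BSP
    i_blur += i_f / n_frames                      # running blur-image estimate

i_hr = i_out - p * i_blur                  # Eq. (7c): subtract the pedestal contribution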

2.2. Averaged binary speckle patterns statistics.

In this section, we calculate the mean value as well as the standard deviation (STD) for each pixel in the pattern SN(x), which is an average of N different BSP's:

$$S_N(x) = \frac{1}{N}\sum_{n=1}^{N} S_n(x)\,; \qquad n\in\{1 \ldots N\} \tag{8}$$

where Sn(x) is one of the BSP distributions.

The value of each pixel is one of the N+1 possible entries of the set Ak=k/N, where k∈{0..N}. The probability of acquiring a certain value Ak is given by the binomial distribution:

$$p_k = \binom{N}{k} p^k (1-p)^{N-k}\,; \qquad \text{where} \quad \binom{N}{k} \equiv \frac{N!}{(N-k)!\,k!} \quad \text{and} \quad k\in\{0 \ldots N\} \tag{9}$$

Therefore, the average value of each pixel in SN(x) may be calculated immediately:

$$\left\langle S_N(x)\right\rangle = \sum_{k=0}^{N} A_k\, p_k = p \tag{10}$$

Similarly, it is straightforward to find the second moment of each pixel, as well as the variance, VN, and the standard deviation (STD), σN, provided by the following expressions:

$$\left\langle S_N^2(x)\right\rangle = \sum_{k=0}^{N} A_k^2\, p_k = p^2\,\frac{N-1}{N} + \frac{p}{N}$$
$$V_N = \left\langle S_N^2(x)\right\rangle - \left\langle S_N(x)\right\rangle^2 = \frac{p-p^2}{N} = \frac{p(1-p)}{N}$$
$$\sigma_N = \sqrt{V_N} = \sqrt{\frac{p(1-p)}{N}} \tag{11}$$

One notes that the variance decreases at the extreme positions (p→0 or p→1) and it is maximal at p=1/2. It is shown later that operation with small values of p is advantageous.
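These moments are easy to verify numerically. The short Monte-Carlo check below (Python/NumPy; p, N and the number of trials are arbitrary test values of ours) draws many realizations of a single averaged pixel and compares the sample mean and variance with Eq. (10) and Eq. (11).

import numpy as np

rng = np.random.default_rng(2)
p, n_masks, n_trials = 0.2, 50, 20000      # illustrative test values

# Each entry is one realization of the average of N Bernoulli(p) pixels, i.e. k/N.
s_bar = rng.binomial(n_masks, p, size=n_trials) / n_masks

print("mean:    ", s_bar.mean(), " (theory:", p, ")")
print("variance:", s_bar.var(),  " (theory:", p * (1 - p) / n_masks, ")")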

2.3. Statistics of the correlation function, calculated for finite N value.

The correlation function Γ(x-x'), provided in Eq. (5), assumes infinite N and therefore it is a function only of the difference between x and x'. However, when N is finite, the correlation peak and the pedestal around it possess random values that vary with x and x'. Similar to the derivation of Eq. (5), the correlation function now becomes:

$$\Gamma(x;x') = \frac{1}{N}\sum_{n=1}^{N} S_n\!\left(\frac{x'}{M}\right) S_n\!\left(\frac{x}{M}\right) = \Delta(x,x')\Big|_{|x-x'|>B} + \Delta\Big|_{|x-x'|\le B}\,\Lambda\!\left(\frac{x-x'}{B}\right) \tag{12}$$

where Δ(x,x')∣x-x'∣>B is the random pedestal value and Δ∣x-x'∣≤B is the random peak value of the correlation. The mean and STD of these two random processes are calculated in this section. According to the definitions of Eq. (12), both random processes may be written as:

$$\Delta(x,x')\Big|_{|x-x'|\le B} \equiv \frac{1}{N}\sum_{n=1}^{N} S_n\!\left(\frac{x}{M}\right) S_n\!\left(\frac{x}{M}\right), \tag{13a}$$
$$\Delta(x,x')\Big|_{|x-x'|>B} \equiv \frac{1}{N}\sum_{n=1}^{N} S_n\!\left(\frac{x'}{M}\right) S_n\!\left(\frac{x}{M}\right), \tag{13b}$$

where x and x' are different pixels belonging to the same BSP. Eq. (13a) represents the correlation peak value, while Eq. (13b) represents the pedestal value. Since the BSPs possess only the values zero and one, Eq. (13a) represents the average over N different BSPs. It is thus identical to that of Eq. (8), and the results expressed in Eq. (11) hold here as well. The second expression, Eq. (13b), is different, since the probability to achieve the value of 1 in a certain pixel is p² rather than p as in the previous case. The statistics for the first and second moments (Eq. (9) - Eq. (11)) are valid for this case too if p is replaced with p². The mean and the STD of the pedestal are now provided by the expressions:

$$\left\langle \Delta(x,x')\Big|_{|x-x'|>B} \right\rangle = p^2 \tag{14a}$$
$$\sigma_{(x-x')_N} = \sqrt{\frac{p^2(1-p^2)}{N}} \tag{14b}$$

We will now show that the randomness in the correlation peak, as well as the noise in the averaged blur image given by Eq. (6), can be eliminated. Let us define a set of N1 "complementary BSPs", where N1=1/p, such that any two members of the set are mutually exclusive. The entire set of N1 masks will automatically sample all pixels, such that each pixel is sampled only once. When one uses a total of N different BSPs to multiply the object, we will divide N into L0 different subsets of N1 complementary BSPs, so that N=L0N1. An illustration of one complementary BSP subset with p=0.2, i.e. with N1=5, is shown in Fig. 2, as well as its summation (lower-right corner). Note that here only a small portion of 25 pixels is shown.

Fig. 2. One subset of five complementary BSPs with p=0.2 (N1=5). The image in the lower-right corner is their summation.

One should note that in this example the whole BSP sequence consists of N=5L0 BSPs and that each subset of 5 complementary BSPs, like the one in Fig. 2, is independent of the other subsets and sums to unity. Using complementary BSPs, the noise in Eq. (8) as well as in Eq. (13a) is totally eliminated, since the averaged pattern now becomes:

$$S_N(x) = \frac{1}{L_0 N_1}\sum_{n=1}^{L_0}\sum_{k=1}^{N_1} S_{n;k}(x) \tag{15a}$$

where Sn;k(x) is the kth BSP in the nth subset. The summation of all the BSP's of each subset fully covers the entire input matrix, i.e.:

$$\sum_{k=1}^{N_1} S_{n;k}(x) = \mathbf{1}(x) \equiv 1 \quad \text{everywhere} \tag{15b}$$

Equation (15a) now becomes:

$$S_N(x) = \frac{1}{L_0 N_1}\sum_{n=1}^{L_0} \mathbf{1}(x) = \frac{1}{N_1} = p \tag{15c}$$

For such a "complementary" coding scheme with N>50 and the assumptions made in Section 2.1, the blur image, given by Eq. (6), may be considered noise-free and independent of the value of L0, as long as N1=1/p is an integer:

$$I_{blur}(x) = \frac{1}{N}\sum_{n=1}^{N} I_F(x,n) = \int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 dx' \tag{16}$$

Moreover, using the results of Eq. (15a) - Eq. (15c), it is easily seen that the peak value, Δ∣x-x'∣≤B, defined in Eq. (13a), has a value of p with no fluctuations whatsoever, and thus the correlation function becomes:

$$\Gamma(x;x') = \Delta(x,x')\Big|_{|x-x'|>B} + p\,\Lambda\!\left(\frac{x-x'}{B}\right) \tag{17}$$

The statistics of Δ∣x-x'∣>B remain unchanged in this modified coding scheme, since these BSPs are still pseudorandom. This is tested in Fig. 3, where simulation results of the pedestal's average and STD values, estimated for complementary BSP coding sets, are found to fit the theory. A probability value of p=0.2 was chosen for this comparison.

Fig. 3. The average and standard deviation of the pedestal value. Theoretical and simulated statistical values are compared. (a)- average and (b)- standard deviation of the pedestal level for p=0.2, achieved with different numbers of elements in the BSP set, N.

A comparison between the output image qualities, obtained with these two coding methods, is presented in Fig. 4 as well as in the results section.
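One simple way to construct such a complementary subset is to draw, for every pixel, a uniform random label that selects which of the N1 masks samples it; the masks are then mutually exclusive by construction and sum to one everywhere, as required by Eq. (15b). The sketch below (Python/NumPy; the function name, mask size and p are illustrative assumptions of ours) follows this recipe.

import numpy as np

def complementary_subset(shape, p, rng):
    """One subset of N1 = 1/p mutually exclusive binary masks that together
    sample every pixel exactly once (cf. Eq. (15b))."""
    n1 = int(round(1.0 / p))                      # number of masks in the subset
    labels = rng.integers(0, n1, size=shape)      # which mask samples each pixel
    return np.stack([(labels == k).astype(float) for k in range(n1)])

rng = np.random.default_rng(3)
subset = complementary_subset((64, 64), p=0.2, rng=rng)   # 5 masks of 64x64 pixels
assert np.all(subset.sum(axis=0) == 1.0)          # full coverage, each pixel sampled once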

2.4. The origin of stochastic pattern noise

The images obtained by the method described in our earlier work [21] were quite noisy. There, high stochastic pattern noise (SPN) levels were exhibited both "inside" the object, chosen to be the letter U (areas that are supposed to be constant), and in the surrounding areas along the edges of the letter. This SPN originates from two mechanisms: variations of the correlation peak and fluctuations associated with the pedestal of the correlation function. The variations in the correlation peak are totally eliminated when one uses the complementary coding BSP sets, whose correlation function is provided by Eq. (17). However, the pedestal fluctuations that remain after one subtracts the average blur image [Eq. (16)] from the pedestal may result in a high SPN level in the case of a severe aberration, since the variations are related directly to the extent of the PSF region of support. We deliberately neglect the additive detector noise term r(x) here, since we now investigate the statistics of the SPN alone. For a complementary BSP coding set and a finite number of samples N=L0N1, the high resolution restored signal is provided by:

$$I_{hr}(x) = \kappa\int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 \left[(1-p)\,p\,\Lambda\!\left(\frac{x-x'}{B}\right) + spn(x,x')\Big|_{|x-x'|>B}\right] dx' \tag{18}$$

where spn(x,x')∣x-x'∣>B = Δ(x,x')∣x-x'∣>B - p². Thus, the signal is composed of two terms:

$$I_{hr}(x) = I_{hr;\infty}(x) + spn(x) \tag{19a}$$

where:

$$I_{hr;\infty}(x) \equiv \kappa(1-p)\,p\int I_{obj}\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 \Lambda\!\left(\frac{x-x'}{B}\right) dx' \tag{19b}$$
$$spn(x) \equiv \kappa\int I_{obj}\!\left(\frac{x'}{M}\right) spn(x,x')\Big|_{|x-x'|>B}\,\left|h(x-x')\right|^2 dx' \tag{19c}$$

The first term, Ihr;∞(x), is identical to the restored high resolution signal, as expressed in Eq. (7c), when an infinite N value is assumed. The SPN, denoted as spn(x), is a result of the convolution of the low resolution PSF with the corrupted noisy object Iobj(x'/M)spn(x,x')∣x-x'∣>B. As the PSF region of support increases, for instance in the presence of an increased defocus condition or a low resolution imaging system, the SPN [Eq. (19c)] increases as well. Moreover, the SPN is strongly dependent on the input object shape and is not wide sense stationary (WSS). In fact, it is absent within opaque object regions, but appears in illuminated regions and around edges. Figure 4 shows the object (a), the letter U and four pairs of lines next to it, its low resolution image [Fig. 4(b)], the restored image when a random BSP sequence is used [Fig. 4(c)] and the restored image under the same conditions, but with a complementary BSP sequence [Fig. 4(d)]. A PSF circ function with a diameter of 8 pixels was considered in these simulations to mimic a defocus condition. The width of the parallel lines, as well as the separation between them, is 2 pixels, such that the extent of the PSF totally obliterates them [Fig. 4(b)]. The achieved resolution is higher by a factor of four with respect to the original image. In this simulation p was taken to be 0.25 and N=300. Figures 4(c) and 4(d) clearly show that the "body" of the letter "U" is noisy, since uniform intensity regions of the object are not properly restored. However, the image quality is improved when one uses the complementary encoding rather than the totally random encoding, due to the lower SPN level achieved.

2.5. SPN reduction by using sparse complementary BSP sets

The results obtained with the complementary BSP coding, as shown in Fig. 4(d), identified two major disadvantages that need improvement: the need to reduce the stochastic pattern noise and the need to eliminate, if possible, the subtraction of the blur image. Our first requirement, i.e. to reduce the SPN in the output image, can be achieved if the correlation function, provided by Eq. (17), has a low pedestal noise value [Eq. (14b)]. Our second requirement, i.e. to avoid the blur image subtraction step, is achieved if the ratio between the value of the correlation peak amplitude above the pedestal, p-p², and the pedestal mean value, p², is high enough. Both requirements can be satisfied by shrinking the value of p, i.e. by imposing p≪1.

Therefore, we concentrated on analyzing the complementary coding scheme with a probability value of p=1/64. We found that in order to achieve a good output image quality, no more than two complementary sets are required and even one set provides an acceptable quality, i.e. N=64 or 128 instead of 400 as in our previous work [21]. In addition, there is no need to subtract the blur image since p is small.

Fig. 4. Computer simulations of input object (a) and three output images (b-d). (a)- Input object. (b)- Low resolution image. (c)- Restored object using a random sequence of BSPs. (d)- Restored object using complementary BSP sequences. Note improved quality in (d) with respect to (c). In this simulation, the PSF is a circ function with diameter of 8 pixels, the BSP's had a probability p=0.25 and the number of images used was N=300.

Moreover, to reduce the SPN even further, each BSP was divided into mutually exclusive square sub-areas of 8×8 pixels that cover the whole BSP area. In each square, only one random pixel was allowed to be transparent, since we chose p=1/64. Once again, the BSP set was complementary, consisting of N=64 terms, i.e. each pixel is transparent only once and the summation over all the BSPs in the sequence provides a constant value with no opaque holes. The advantage of this 8×8 subdivision is that it reduces the possibility of clustering of the sampling pixels. Indeed, clusters of transparent pixels, as seen for instance in Fig. 2, clearly deteriorate the resolution in their vicinity and provide added noise as well. One should note that the choice of an 8×8 square subdivision, associated with p=1/64, indicates that in any BSP only one pixel in any 8×8 block will be chosen. For such a small probability value there is no need to subtract the blur image. Similar results will be obtained for 7×7 and even 6×6 squares, associated with probabilities of 1/49 and 1/36 respectively, but with the penalty of higher SPN due to the fact that clusters are more likely to occur. Of course, 9×9 and even larger squares may be used if one agrees to work with higher N values, or, respectively, lower values of p.
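A sparse complementary subset with this 8×8 block structure can be generated by assigning, independently for every block, a random permutation of the 64 pixel positions to the 64 masks, so that each mask opens exactly one pixel per block and each pixel is opened by exactly one mask of the subset. The sketch below (Python/NumPy; names and sizes are illustrative, and the permutation rule is our own assumed recipe) implements this idea.

import numpy as np

def sparse_complementary_subset(blocks_y, blocks_x, rng, b=8):
    """64 masks (p = 1/64): one transparent pixel per 8x8 block in each mask,
    and each pixel transparent exactly once across the whole subset."""
    n1 = b * b                                   # number of masks in the subset
    masks = np.zeros((n1, blocks_y * b, blocks_x * b))
    for by in range(blocks_y):
        for bx in range(blocks_x):
            order = rng.permutation(n1)          # which mask opens which block pixel
            for k, idx in enumerate(order):
                masks[idx, by * b + k // b, bx * b + k % b] = 1.0
    return masks

rng = np.random.default_rng(4)
masks = sparse_complementary_subset(4, 4, rng)   # 64 masks of 32x32 pixels
assert np.all(masks.sum(axis=0) == 1.0)          # each pixel sampled exactly once
assert masks.reshape(64, 4, 8, 4, 8).sum(axis=(2, 4)).max() == 1  # one pixel per 8x8 block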

2.6. SPN reduction in uniform intensity regions

The SPN in the output image increases with the PSF region of support, as predicted by Eq. (19c). We will show in this section that in uniform intensity regions the noise can be reduced significantly, provided that one has a-priori information on the extent of the PSF. This can be extracted, for instance, from inspection of the image of an isolated object point.

First, we investigate the origin of these variations in the image. We consider a uniform transparent object, i.e. one with no information, that one tries to retrieve using the complementary coding method. Multiplying this object by a certain BSP retains just the BSP itself. The image of such a BSP, denoted by IF(x,n) in Eq. (2), contains the superposition of every pixel, now spread over a region determined by the extent of the PSF. As long as the PSF region of support is narrower than the minimal distance between the sampling pixels, no noise occurs and complete restoration is possible. However, as the PSF size increases, cross talk between adjacent pixels will occur, and therefore the acquired intensity at some locations is higher than the intensity obtained for isolated BSP pixels with no overlap. In the decoding process, such an image, when multiplied by the same BSP, results in a pattern that is no longer binary: regions of clusters will exhibit higher values than regions with isolated pixels. Summing all those patterns together, according to Eq. (3), will thus no longer result in the constant function expected for the uniform object under consideration.

We will now consider a modification of the decoding algorithm that alleviates this problem. Let SF(x,n) be the intensity of a certain BSP, as acquired by the aberrated imaging system, i.e.:

$$S_F(x,n) \equiv \kappa\int S_n\!\left(\frac{x'}{M}\right)\left|h(x-x')\right|^2 dx' \tag{20}$$

where h is the PSF. In the decoding stage, we now multiply by a modified digital decoding speckle, S'F(x,n), defined as:

$$S'_F(x,n) = \begin{cases} \dfrac{S_n\!\left(\frac{x}{M}\right)}{S_F(x,n)} & \text{if } S_n\!\left(\frac{x}{M}\right) \ne 0 \\[1.5ex] 0 & \text{otherwise} \end{cases} \tag{21}$$

rather than by Sn(x/M) only. Summing thereafter over all acquired images, one now gets:

$$I_{out}(x) = \frac{1}{N}\sum_{n=1}^{N} I_F(x;n)\, S'_F(x,n) \tag{22}$$

This method improves the uniformity of constant intensity regions, as shown in Fig. 5, where the image of the same letter U used in Fig. 4 was calculated and displayed. In this simulation only one complementary BSP set, with a probability of p=1/64 and 8×8 subdivisions, was assumed. Due to the small value of p, there is no need to subtract the blur image. Similarly to the case shown in Fig. 4, the blurring circ PSF had a diameter of 8 pixels. One readily observes that the finest details of the object are restored, while the SPN in uniform intensity regions is almost totally eliminated with just a single set of 64 BSPs. Note the improvement in noise reduction with respect to Fig. 4(d); there, 300 BSPs were used and the blur image had to be subtracted.

Fig. 5. Stochastic Pattern Noise (SPN) reduction algorithm. (a)- Low resolution image. (b)- Restored object using one complementary BSP set with application of SPN reduction algorithm for uniform intensity regions. In this simulation, the PSF was a circ function with diameter of 8 pixels, the chosen BSP's had a probability of p=1/64 with a subdivision of 8×8 pixels. Restored image (b) was achieved with N=64 only.

The reader should note, however, that this correction method [Eq. (21)] can be applied only when small values of p are used in the coding process. The reason is that the blur image cannot be subtracted at all after using the modified decoding speckle S'F(x,n) since the decoding speckle is actually dependent on the PSF distribution, i.e. it depends on the aberration. Therefore, one may use this method safely only if the energy of the blur image is negligible with respect to the energy of the high resolution signal, as is the case for small probability values.
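A compact one-dimensional sketch of this modified decoding is given below (Python/NumPy). The masks form one complementary subset with p=1/64, the object is a step, and the PSF, sizes and mask-generation rule are illustrative assumptions of ours; the key point is that each acquired frame is multiplied by S'F(x,n) of Eq. (21), i.e. by the decoding BSP divided by its own blurred image, before the averaging of Eq. (22).

import numpy as np

rng = np.random.default_rng(5)
n_pix, n1 = 256, 64                               # p = 1/64, one complementary subset
labels = rng.integers(0, n1, size=n_pix)          # mutually exclusive pixel assignment

# Normalized 9-pixel rectangular intensity PSF, centered for circular convolution.
psf = np.roll(np.concatenate([np.full(9, 1 / 9.0), np.zeros(n_pix - 9)]), -4)
blur = lambda f: np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(psf)))

i_obj = np.ones(n_pix)                            # step object: uniform region plus an edge
i_obj[:64] = 0.0

i_out = np.zeros(n_pix)
for k in range(n1):
    s_n = (labels == k).astype(float)             # k-th BSP of the complementary subset
    i_f = blur(i_obj * s_n)                       # acquired encoded frame, Eq. (2) without noise
    s_f = blur(s_n)                               # acquired image of the BSP itself, Eq. (20)
    s_f_mod = np.where((s_n != 0) & (s_f > 0),    # modified decoding speckle, Eq. (21)
                       s_n / np.maximum(s_f, 1e-12), 0.0)
    i_out += i_f * s_f_mod / n1                   # Eq. (22)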

2.7. Random versus periodic encoding-decoding BSPs

Throughout this work we deal with random BSPs. However, encoding and decoding with gratings or other deterministic periodic patterns have already been used in the literature [12,16]. One may wonder about the advantage of using BSPs, as well as about the necessity of randomness.

The main disadvantage of grating encoding-decoding schemes is that the resulting transfer function is actually a weighted summation of shifted transfer functions associated with the low resolution imager [12,16]. Thus, when the optical transfer function of the low resolution imaging system is corrupted from the beginning and exhibits frequencies with zero contrast values as well as frequency regions of negative contrast values, as often happens in the case of defocus, the transfer function that results from this weighted summation is poor, and attempts to obtain super resolution under such conditions fail. Similar effects will occur for periodic BSPs, as well as for any periodic structure, so that superresolution is not attainable. This is due to the fact that the correlation function is periodic too, thus having a multitude of periodic peaks rather than just one. Convolution with the equivalent PSF, which is provided by a multiplication between the correlation function and the PSF of the low resolution imager, provides ghost images. The separation between those ghost images is exactly the BSP period, and thus fine details on the order of the period are averaged out.

Ghost images are totally eliminated when one uses random BSPs. In fact, if an infinite set of random BSPs is allowed, only one peak is exhibited in the correlation function, as the right hand side of Eq. (5) states. Thus, the extent of the low resolution PSF has no significance, since the equivalent PSF width is determined by the correlation peak. This is the reason why our method may be considered to provide aberration free imaging. Experimental results that achieved super resolution in the case of a severe defocus condition were presented in our previous work [21]. Moreover, for an infinite value of N, the SPN does not exist due to the perfectly smooth pedestal, as Eq. (14b) reveals, and the detector additive noise is totally eliminated by the accumulation, as shown in Eq. (7b).

Nevertheless, as the value of N is reduced, as happens in any practical case, and a complementary BSP set is used, the resulting correlation function is provided by Eq. (17), and therefore the output image possesses SPN as well as some additive detector noise. Thus, the performance of our super resolution imaging system in the case of low quality imaging, due to a high f-number or aberrations, is limited by the amount of SPN in the output image, which increases with the PSF region of support.

Therefore, contrary to the ghost images obtained in the case of periodic BSPs, which ruin the high resolution details in the output image, the finest details in the case of random BSP encoding are preserved, as our results clearly show, albeit with the penalty of some noise. This fact makes random BSPs a better choice for the design of a high resolution extended DOF imaging system when inexpensive imaging lenses that cannot provide diffraction-limited resolution are used.

3. Simulation Results

In this section we compare the image quality for two cases: when coding is carried out with a random set of N different BSPs vs. the case of coding carried out with complementary mutually exclusive BSPs, composed of L0 different subsets of spatially mutually exclusive BSPs, providing N patterns in total. The chosen quality criterion was the Mean Square Error (MSE) between the output high resolution restored pattern and the input object. Zero additive Gaussian detector noise was assumed. We first considered a blurring PSF circ function having a radius of 6 pixels. Thereafter, we repeated the simulation for the case of a 10 pixel radius. The energies of the recovered high resolution pattern and the input object were normalized and the MSE results were averaged over different input images. Figure 6 plots the averaged MSE curves for the case of a totally random encoding set (dashed line) as well as for complementary mutually exclusive subsets (solid line), all for p=0.25. It is clear that coding with complementary mutually exclusive BSPs provides lower MSE values. Moreover, Fig. 6 reveals that using a set of 100 complementary BSPs provides an MSE lower than or comparable to that obtained with a random set of 1000 encoding BSPs. This is not surprising, due to the elimination of the variations in the correlation peak as well as in the blurred image. The remnant blur image, provided by the pedestal, is responsible for the noisy appearance in both cases.
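For reference, the quality metric used here can be written as the following short helper (Python/NumPy), under our assumed reading of the normalization: both images are scaled to unit energy before the pixel-wise mean square error is taken.

import numpy as np

def normalized_mse(restored, reference):
    """MSE between energy-normalized images (an assumed reading of the text)."""
    r = restored / np.sqrt(np.sum(restored**2))
    g = reference / np.sqrt(np.sum(reference**2))
    return np.mean((r - g)**2)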

Fig. 6. Mean Square Error (MSE) vs. the number of coding BSP patterns, N, for a totally random sequence (dashed) and a complementary sequence consisting of N/4 subsets (solid). The PSF blurring circ function has a radius of (a) 6 pixels; (b) 10 pixels.

As described earlier, reducing the value of the probability p has the advantage of SPN reduction, and it also eliminates the need for blur image subtraction. We plotted in Fig. 7 the standard deviation (STD) of the SPN for a uniform unity input encoded by complementary BSP sequences with p=1/64, for L0 subsets. L0 was allowed to vary from 1 to 20 and the Gaussian PSF blur standard deviation was set to 3, 7, 11 and 15 pixels. It is clearly observed that as the blur size increases, the value of the SPN STD increases as well. Moreover, as the value of L0 increases, the STD value shrinks. Repeating this calculation, but applying also the noise reduction method [Eq. (20) - Eq. (22)], the STD values were found to be negligible (<10⁻⁴) for all L0 values and therefore are not plotted here, being indistinguishable from the abscissa axis.

Fig. 7. The standard deviation (STD) of the stochastic pattern noise, calculated for a uniform unity input encoded by complementary BSP sequences with p=1/64, as a function of the number L0 of complementary subsets. Each subset consists of 64 BSPs; the Gaussian PSF blur standard deviation is 3 pixels (solid), 7 pixels (solid with circles), 11 pixels (dashed) and 15 pixels (dashed with circles).

To visualize this effect, we applied the SPN reduction method [Eq. (20) - Eq. (22)] to inspect the SPN reduction in uniform intensity areas, as shown in Fig. 8. Figure 8(a) shows the output image of a step function, where the decoding was done according to Eq. (3), while Fig. 8(b) shows the same image with the decoding according to Eq. (22). For both cases p=1/64 and two complementary subsets of 64 images each were used, providing a total of 128 BSPs. It is clear that the SPN is totally eliminated in uniform regions when Eq. (22) is used. Nevertheless, the presence of SPN at the edges is readily observed.

Fig. 8. (a). Output image of a step function, where the decoding was done without the stochastic pattern noise (SPN) reduction method [Eq. (3)]. (b). Same, but with the SPN reduction method [Eq. (22)]. For both cases p=1/64 and two complementary subsets of 64 images each were used to provide a total of 128 BSPs.

The reduction in the value of p, as well as the use of gray level objects, may decrease the overall light throughput of the imaging system. Note, however, that imaging is a "point to point" operation, i.e. the light is concentrated around the transparent points of the BSP, and all the opaque dark points in the object plane, which usually provide only detector noise, are multiplied by zero throughout the decoding process. Thus, as shown in Section 2.1, the averaging operation over all the acquired frames, IF(x,n), reduces the additive detector noise. We applied this approach for imaging objects that possess continuous gray levels and not only binary objects, using the image of "Lena" as an object. Figure 9 shows the output images, calculated for a single complementary set of 64 BSPs (p=1/64) with the refinement of 8×8 subdivisions. Figure 9(a) shows a magnified portion of the input object "Lena". The same portion of the output image, obtained with a conventional imaging system having poor resolution, determined by a Gaussian PSF with an STD of 4 pixels as a blur kernel, is shown in Fig. 9(b). Since a wide blur means a high SPN level when a small number of BSPs is used to encode and decode the signals, it is not surprising to find such SPN in the output image, Fig. 9(c), restored in the regular way, i.e. by Eq. (3). One should note that super resolution is indeed achieved, in addition to the SPN. Highly improved results are obtained in Fig. 9(d), where the complementary coding incorporating the SPN reduction correction is used. Figure 9(e) shows the same as Fig. 9(d), but for a severe additive Gaussian detector noise with a standard deviation of σ=5, which is about 2% of the full dynamic range 0-255. It is clearly seen that the detector noise does not reduce the image quality significantly, thus justifying our initial approximation of neglecting it in a high signal-to-noise-ratio scenario. These results were obtained with a single set of complementary BSPs in the sequence, so that N=64. This is a significant improvement with respect to our previous work, where several hundred BSPs were required to obtain acceptable output results. This technique had not been tested before on gray level images, as was done here in Fig. 9.

The reader should note that the additive noise level may be reduced easily by increasing N. Figure 9(f) shows the same as Fig. 9(e), but with N=128. The noise reduction is clearly observable.

Fig. 9. Image restoration using Random Structured Encoding/Decoding. (a). A magnified portion of the input object "Lena". (b). The same portion of the output image, obtained with a conventional imaging system having poor resolution, determined by a Gaussian PSF with an STD of 4 pixels as a blur kernel. (c). High resolution output image restored without noise reduction. (d). High resolution restoration with noise reduction. (e). Same as (d) but with additive Gaussian detector noise whose STD is 2% of the dynamic range. (f). Same as (e) but with two BSP subsets. All figures obtained with a complementary BSP set, with p=1/64 and using the refinement of 8×8 subdivisions. Images (c), (d) and (e) were obtained with one subset (N=64) only, while image (f) was obtained with two subsets, i.e. N=128.

4. Conclusion

In this paper, we studied several modifications of a method [21] meant to provide an aberration-free imaging system via an encoding/decoding mechanism based on speckle patterns, here also in the presence of additive detector noise. The modifications consisted of choosing complementary sets of random encoding Binary Speckle Patterns (BSP) rather than totally random ones. Moreover, a further refinement, based on a controlled distribution of the sampling points and a noise reduction algorithm, resulted in superior quality of the restored images, even when processing was done with a limited number of encoding BSPs. This opens the door to practical implementations, for microscopy in particular.

In this study we succeeded in reducing the number of BSPs required for encoding from 400 in our previous work [21] to 64; thus a reduction by a factor of about 6 in complexity was achieved, while the noise was reduced significantly as well.

References and links

1. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University Press, London, 1999).
2. G. Martial, "Strehl ratio and aberration balancing," J. Opt. Soc. Am. A 8, 164–170 (1991).
3. B. Tatian, "Aberration balancing in rotationally symmetric lenses," J. Opt. Soc. Am. 64, 1083–1091 (1974).
4. M. Mino and Y. Okano, "Improvement in the OTF of a Defocused Optical System Through the Use of Shaded Apertures," Appl. Opt. 10, 2219–2225 (1971).
5. J. O. Castaneda, E. Tepichin, and A. Diaz, "Arbitrary high focal depth with a quasioptimum real and positive transmittance apodizer," Appl. Opt. 28, 2666–2669 (1989).
6. E. Ben-Eliezer, Z. Zalevsky, E. Marom, and N. Konforti, "Experimental realization of an imaging system with an extended depth of field," Appl. Opt. 44, 2792–2798 (2005).
7. E. Ben-Eliezer, Z. Zalevsky, E. Marom, and N. Konforti, "Radial mask for imaging systems that exhibit high resolution and extended depth of field," Appl. Opt. 45, 2001–2013 (2006).
8. J. O. Castaneda and L. R. Berriel-Valdos, "Zone plate for arbitrary high focal depth," Appl. Opt. 29, 994–997 (1990).
9. J. O. Castaneda, R. Ramos, and A. Noyola-Isgleas, "High focal depth by apodization and digital restoration," Appl. Opt. 27, 2583–2586 (1988).
10. E. R. Dowski, Jr. and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859 (1995).
11. N. George and W. Chi, "Computational imaging with the logarithmic asphere: theory," J. Opt. Soc. Am. A 20, 2260–2273 (2003).
12. Z. Zalevsky and D. Mendlovic, Optical Superresolution (Springer-Verlag, Berlin, 2004).
13. D. Mendlovic and A. W. Lohmann, "SW-adaptation and its application for super resolution - Fundamentals," J. Opt. Soc. Am. A 14, 558–562 (1997).
14. G. Toraldo Di Francia, "Degrees of freedom of an image," J. Opt. Soc. Am. 59, 799–804 (1969).
15. M. Francon, "Amelioration de resolution d'optique," Nuovo Cimento Suppl. 9, 283–290 (1952).
16. W. Lukosz, "Optical systems with resolving power exceeding the classical limit," J. Opt. Soc. Am. 56, 1463–1472 (1966).
17. A. Shemer, D. Mendlovic, Z. Zalevsky, J. Garcia, and P. Garcia Martinez, "Super resolving optical system with time multiplexing and computer decoding," Appl. Opt. 38, 7245 (1999).
18. A. I. Kartashev, "Optical systems with enhanced resolving power," Opt. Spectrosc. 9, 204–206 (1960).
19. W. Gartner and A. W. Lohmann, "An experiment going beyond Abbe's limit of diffraction," Z. Phys. 174, 18 (1963).
20. J. García, Z. Zalevsky, and D. Fixler, "Synthetic aperture superresolution by speckle pattern projection," Opt. Express 13, 6073–6078 (2005).
21. E. Ben-Eliezer and E. Marom, "Aberration Free Super Resolution Imaging via Binary Speckle Patterns Encoding and Processing," J. Opt. Soc. Am. A (to be published).
