Super-resolution is an important goal of many image acquisition systems. Here we demonstrate the possibility of achieving super-resolution with a single exposure by combining the well-known optical scheme of double random phase encoding, traditionally used for encryption, with results from the relatively new and emerging field of compressive sensing. It is shown that the proposed model can be applied to recover images from a general image degradation model caused by both diffraction and geometrically limited resolution.
©2010 Optical Society of America
1. Introduction
Super resolution (SR) is considered one of the “holy grails” of optical imaging and image processing. The endeavor to obtain high spatial resolution images with limited-resolution imaging systems has raised great interest, both practically and theoretically (see for example [1–7]). In general, the resolution of an imaging system is limited by its optical subsystem and by its digital sensing subsystem. Optical resolution is usually limited by some optical blur mechanism, the most common being diffraction blur. Digital sensing subsystems using pixelated sensors (such as CCD or CMOS) induce resolution loss due to sub-sampling, according to the Shannon-Nyquist sampling theorem, and due to integration over the finite pixel fill-factor. SR techniques developed to overcome the optical subsystem's spatial resolution loss are generally referred to as “optical SR” or “diffractive SR” techniques [5,7]. SR techniques designed to overcome the sampling limit of the imaging sensor, generally referred to as “geometrical SR” [5,7] or “digital SR” techniques, have attracted most of the attention during the last three decades. However, we point out that those methods cannot concomitantly overcome the sensor resolution loss and the optical resolution loss: the maximum resolution achievable with digital SR is inherently upper-bounded by the diffraction-limited bandwidth.
In order to overcome imaging system resolution limitation, SR techniques typically capture additional object information in some indirect way, and then perform some clever processing that extracts the high resolution data from the overall captured data. The additional information is typically captured by encoding the image in the temporal domain (e.g. by taking multiple sub-pixel shifted exposures), in the spatial domain (using multiple apertures or encoding the aperture), or in other domains such as state of polarization or spectral domain.
In this work we present a SR method that does not need any additional information acquisition. The data is captured with a single exposure, without sacrificing the field of view or requiring any other measurement dimensions. The key to achieving SR without additional data is utilizing the fact that the information in human-intelligible images is highly redundant. Therefore, instead of acquiring extra data, we properly encode the data within one image and apply reconstruction algorithms that recover the desired high resolution image.
Our proposed approach is to use the well-known double random phase encoding (DRPE) technique [8,9] as a means of encoding the scene. The acquisition-reconstruction relies on the emerging field of compressive sensing (CS). Compressive sensing theory breaks the Shannon-Nyquist sampling paradigm by utilizing the fact that the image is sparse in some representation basis. Our approach permits both diffractive and geometrical SR. It uses a single-shot acquisition process and does not sacrifice any measurement dimensions. Unlike some conventional SR systems, the approach proposed here does not require any movements.
The paper is organized as follows: in section 2, we briefly review the DRPE technique. In section 3, we provide a short background on CS and point out the connection between the DRPE and Gaussian random sensing, which is a universal sensing scheme. In section 4, we show how the DRPE enables gaining a SR image from a single exposure, and present simulation results. We conclude in section 5.
2. Double random phase encoding (DRPE)
Double random phase encoding (DRPE) was originally developed for optical security. Figure 1 depicts a block diagram of the double random phase encoding process and Fig. 2 shows a possible optical implementation. DRPE is based on random masks placed in the input and Fourier planes of the optical system, which whiten both the data and its Fourier spectrum. Many implementation methods were reported for the original DRPE and similar setups, including coherent [9–11], incoherent [12], Fresnel [13], and fractional Fourier [14] domains, among other implementations (see [15–19] to name a few). Here, we adopt compressive sensing techniques to obtain SR imaging with DRPE; therefore, we denote the method DRPE-CS.
First, let us review the equations describing the double random phase encoding process. Let t(x,y) represent an NxN object. It may be the object's field in the object plane or any field uniquely related to the object, e.g., the free space propagation of the object's complex field. Also, let p(x,y) and b(u,v) be two independent i.i.d. white sequences uniformly distributed on [0,1]. Then, the encoded output u(x,y) is given by

u(x,y) = {t(x,y)exp[i2πp(x,y)]} * h(x,y),   (1)

where * denotes convolution and h(x,y) is the impulse response whose Fourier transform is exp[i2πb(u,v)]. Figure 2 shows a 4-f optical scheme, where RPM denotes the random phase masks.
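The DRPE process of Eq. (1) can be sketched numerically with FFTs. The following is an illustrative sketch only (function and variable names are ours, not the paper's); it also checks that the encoding is unitary, i.e., energy-preserving.

```python
import numpy as np

def drpe_encode(t, p, b):
    """Double random phase encoding of an input field t.

    t : 2-D input field; p, b : i.i.d. uniform [0,1) white sequences seeding
    the input-plane and Fourier-plane random phase masks.
    """
    masked = t * np.exp(2j * np.pi * p)      # first random phase mask
    spectrum = np.fft.fft2(masked)           # propagate to the Fourier plane
    spectrum *= np.exp(2j * np.pi * b)       # second random phase mask
    return np.fft.ifft2(spectrum)            # back to the output plane

rng = np.random.default_rng(0)
t = rng.random((64, 64))
p = rng.random((64, 64))
b = rng.random((64, 64))
u = drpe_encode(t, p, b)
# The encoding is unitary, so energy is preserved:
print(np.allclose(np.sum(np.abs(u)**2), np.sum(t**2)))  # prints True
```

Given both phase keys the encoding is invertible (apply the conjugate masks in reverse order), which is what made DRPE attractive for encryption.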
We shall show that the DRPE process allows us to create a single exposure compressive imaging scheme, with lateral resolution gain. This is due to the fact that it implements a universal CS scheme, as demonstrated in the next section.
3. Double random phase encoder as a universal compressive sensing encoder
3.1 Brief introduction to Compressive sensing
Compressive sensing (see for example [20–25]) is a relatively new sampling paradigm which seeks to sense only the “essential” features of a signal/image, i.e., CS minimizes the sampling process. This is in contrast to the conventional sampling paradigm, which can be summarized as: sample as much data as possible, and then discard most of it using some compression method (e.g., the common JPEG).
Figure 3 shows a block diagram of the compressive imaging process. In order to formulate the CS concept mathematically, let us consider an object t described by an N-dimensional real-valued vector (in case the object represents an image of N pixels, t is a one-dimensional vector obtained by rearranging the image in lexicographic order) being projected (imaged) to u, which is an M-dimensional vector. One can also think of M as the number of detector pixels. In CS we are interested in the case M<N, i.e., the signal is undersampled according to the Shannon-Nyquist theorem. The sensing process is given by

u = Φt,   (2)

where Φ is the MxN sensing matrix.
Compressive sensing relies on two principles: signal sparsity and the incoherence between the sensing and sparsifying operators. By assuming signal sparsity, we infer that the signal t can be sparsely represented in some orthonormal basis Ψ (e.g., wavelet or DCT), i.e., t = Ψα. Thus, α = ΨTt is the K-sparse representation of the image t, meaning that α has only K non-zero terms.
Incoherence is a measure of dissimilarity between the sensing and sparsifying operators, quantified by the mutual coherence [20,21]

μ(Φ,Ψ) = √N · max |⟨φk, ψj⟩|,   (3)

where φk and ψj denote the (unit-norm) rows of Φ and columns of Ψ, respectively. CS theory suggests that a signal (image) measured with a sensing operator Φ of low mutual coherence can actually be recovered by l1-norm minimization. The estimated coefficient vector is the solution of the convex optimization program [20,21]

α̂ = argmin ||α'||1  subject to  u = ΦΨα'.   (4)

A random Gaussian sensing matrix is universal [21]; i.e., its mutual coherence is low regardless of Ψ.
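Eq. (3) is easy to evaluate numerically. As a hypothetical sanity check (our own names and normalization), the canonical spike/Fourier pair attains the minimum possible coherence of 1, while a basis measured against itself attains the maximum, √N:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max |<phi_k, psi_j>| for unit-norm rows/columns."""
    N = Psi.shape[0]
    G = Phi.conj() @ Psi                    # all inner products <phi_k, psi_j>
    return np.sqrt(N) * np.max(np.abs(G))

N = 64
I = np.eye(N)                               # spike (sampling) basis as sensing rows
F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary DFT as sparsifying basis
print(round(mutual_coherence(I, F), 6))     # prints 1.0 (maximally incoherent)
```

Low coherence between the pair is precisely what guarantees that few samples in one domain capture a signal sparse in the other.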
Often, when using CS, the sub-sampling is done by picking M out of N measurements uniformly at random [20,21]. This may impose limitations on the physical realization of an image sensing system. Let us think of a CCD camera with N pixels. Random sub-sampling means turning off many of the pixels, thus not using the full capability of the sensor. This setting is not of much practical value unless each and every pixel is extremely expensive, which may be the case for some detectors (UV, for example). On the other hand, we may use a Gaussian random projection operator, for which we do not have to randomly sub-sample the measurements. In the case of a random Gaussian projection, we just need to take M measurements, which is more compatible with our physically constrained blurring and sampling scheme.
3.2 Double phase encoding as a universal sensing operator
The Gaussian sensing operator holds two key properties that make it very popular for CS applications [21,22]. The first is that each row is a set of i.i.d. Gaussian random variables drawn from the same distribution. This property assures low mutual coherence (Eq. (3)) between the sensing and sparsifying bases. The second desired property is the statistical independence of the measurements. The DRPE operator performs exactly this process: it statistically de-correlates the data in both the space and spatial frequency domains [16]. Thus, the DRPE operator acts like a random Gaussian sensing operator, and with such a sensing scheme only on the order of K log N compressive measurements are needed to guarantee the reconstruction of a K-sparse signal; that is the number of samples we need to reconstruct a signal sensed by the DRPE process. In matrix notation, the DRPE of Eq. (1) can be written as

u = F*HFPt,   (6)

where P is a diagonal matrix holding the input-plane phase terms exp[i2πp(x,y)] of Eq. (1) on its diagonal, H is a diagonal matrix holding the Fourier-plane phase terms exp[i2πb(u,v)], and F is the N²xN² discrete Fourier transform matrix (F* denotes its inverse). The input t and output u are N²x1 lexicographic arrangements of the input and output fields, respectively, and all matrices are of size N²xN². The operator FP in Eq. (6) performs random scrambling in the frequency domain [22]. According to most CS implementations, at this stage we should randomly sub-sample FPt in order to guarantee the independence of the measurements [21,22]. However, since we would like to obey the physical constraints of our optical system, which performs deterministic sampling by its nature, and we still want to de-correlate the measurements, we can instead perform the de-correlation optically and sub-sample deterministically. This is achieved by a second random scrambling, this time in the space domain, using the F*H operator in Eq. (6). Applying F*H de-correlates the result of the FPt operation (similar to the effect FP has on t), which can also be seen as guaranteeing inter-row statistical independence.
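The F*HFP factorization need never be formed as dense matrices: each factor is either diagonal or a DFT, so the whole operator can be applied with FFTs. A hypothetical small-scale check (our own variable names) confirms the equivalence and the unitarity of the DRPE operator:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
t = rng.standard_normal(N)

# Diagonal phase factors for the input plane (P) and the Fourier plane (H).
p = np.exp(2j * np.pi * rng.random(N))
h = np.exp(2j * np.pi * rng.random(N))
F = np.fft.fft(np.eye(N), norm="ortho")        # unitary DFT matrix
A = F.conj().T @ np.diag(h) @ F @ np.diag(p)   # dense DRPE matrix F*HFP

# The same operator applied with O(N log N) FFTs instead of dense matrices:
u_fast = np.fft.ifft(np.fft.fft(p * t, norm="ortho") * h, norm="ortho")

assert np.allclose(A @ t, u_fast)              # identical results
assert np.allclose(A @ A.conj().T, np.eye(N))  # DRPE is unitary
```

Unitarity is exactly why deterministic sub-sampling is safe here: no single output sample is privileged, since the two scramblings have already spread every input coefficient over all outputs.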
Thus, statistical independence between the measurements is guaranteed. After the second scrambling, the signal may undergo deterministic blurring or sub-sampling and still enjoy the powerful results of CS theory described in section 3.
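A toy 1-D version of the full pipeline — DRPE encoding, deterministic sub-sampling, and l1 recovery — can be sketched as follows. The paper does not specify a solver, so FISTA is used here as a stand-in, with a test signal sparse in the canonical basis (Ψ = I); all names and parameter choices are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 128, 64, 5                       # signal length, measurements, sparsity

# 1-D DRPE followed by deterministic sub-sampling (keep the first M samples).
p = np.exp(2j * np.pi * rng.random(N))     # input-plane random phases
h = np.exp(2j * np.pi * rng.random(N))     # Fourier-plane random phases

def sense(x):
    return np.fft.ifft(np.fft.fft(p * x, norm="ortho") * h, norm="ortho")[:M]

def adjoint(y):
    z = np.zeros(N, dtype=complex)
    z[:M] = y                              # zero-pad back to length N
    return np.conj(p) * np.fft.ifft(np.fft.fft(z, norm="ortho") * np.conj(h),
                                    norm="ortho")

# K-sparse test signal (sparse in the canonical basis, i.e. Psi = I).
t = np.zeros(N)
support = rng.choice(N, K, replace=False)
t[support] = rng.choice([-1.0, 1.0], K) * rng.uniform(1.0, 2.0, K)
u = sense(t)

# FISTA for min 0.5*||sense(x) - u||^2 + lam*||x||_1; the operator norm is
# at most 1 (unitary DRPE followed by sub-sampling), so a unit step is valid.
lam = 1e-3
x = np.zeros(N, dtype=complex)
y, s = x.copy(), 1.0
for _ in range(500):
    g = y + adjoint(u - sense(y))                        # gradient step
    mag = np.maximum(np.abs(g), 1e-12)
    x_new = g * np.maximum(1.0 - lam / mag, 0.0)         # complex soft threshold
    s_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * s * s))
    y = x_new + (s - 1.0) / s_new * (x_new - x)
    x, s = x_new, s_new
```

With M = N/2 measurements and K = 5 non-zeros, the recovered support matches the true one in this sketch, even though the sub-sampling pattern is completely deterministic.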
4. Super resolution with double random phase encoding
4.1 DRPE image degradation model
Let us consider an object's image with pixel size ∆. The image has N pixels in both the x and y directions. The random phase masks have a pixel size which we assume to be as small as ∆. The phase patch of size ∆ serves as the element of the high resolution grid, and ultimately determines the achievable resolution. The image degradation model applied to the proposed DRPE sensing model is given by

ud(x,y) = hs(x,y) * [{t(x,y)exp[i2πp(x,y)]} * h(x,y)],   (8)

where hs is the overall system blurring kernel (due to diffraction, the sensor pixels, or both), possibly followed by sub-sampling on the sensor grid.
The main thing to bear in mind is the critical role of the first multiplication, which spreads the image's spatial frequencies before propagation through the rest of the system. Since the convolution operator is commutative, the spatial bandwidth of the rest of the system is always limited by that of hs. Therefore, without the first multiplication, the high spatial frequencies are lost due to the sensor or optics limitations, and there is no practical way they could be fully recovered. In the next section, we demonstrate that this way of encoding provides an effective spatial bandwidth extension, both when hs is dominated by the pixelated sensor and when it is determined by diffraction.
4.2 Geometrical sub-sampling
Geometrical sub-sampling refers to the case in which the resolution limitation is caused by the digital sensor (e.g., CCD or CMOS). Let us assume for convenience that each object pixel is sub-sampled by factors of Lx and Ly in the x and y directions, respectively, i.e., each image pixel has the size Lx∆ x Ly∆ due to sensor pixelation. Consequently, the number of CCD pixels is N/L, where L = LxLy. The image obtained with the DRPE system is given by

ug = D{hpix * u},   (9)

where u is the DRPE output of Eq. (1), hpix denotes averaging over the sensor pixel area, and D denotes sub-sampling by Lx and Ly. To recover the high resolution image from the measurements of Eq. (9) we choose to solve the problem

α̂ = argmin ||α'||1  subject to  ug = DHpixF*HFPΨα',   (12)

where Hpix and D are the matrix forms of the pixel averaging and sub-sampling operators.
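The averaging-and-sub-sampling sensor operator D{hpix * ·} can be sketched as a block average over the high resolution grid (a minimal sketch with our own names, matching the 2∆ by 4∆ pixels on a 1024x1024 grid used in the simulation below):

```python
import numpy as np

def pixelate(img, Ly, Lx):
    """Model a sensor whose pixels integrate (average) Ly x Lx blocks of the
    high-resolution grid: the combined averaging-plus-sub-sampling operator."""
    H, W = img.shape
    return img.reshape(H // Ly, Ly, W // Lx, Lx).mean(axis=(1, 3))

rng = np.random.default_rng(4)
u = rng.random((1024, 1024))               # stand-in for the DRPE output
u_ccd = pixelate(u, 2, 4)                  # 2x vertical, 4x horizontal pixels
print(u_ccd.shape)                         # prints (512, 256)
```

Each low-resolution sample is an exact average of a fine-grid block, so the operator is linear and fits directly into the l1 recovery program as the D·Hpix factor.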
Figure 4 shows simulation results for the geometrically resolution-limited model using a USAF resolution chart. The original image has a size of 1024x1024 pixels, which was also the size of the random phase masks. The detector pixel size was taken to be 2∆ by 4∆, i.e., 2 and 4 times larger in the vertical and horizontal directions, respectively. Accordingly, the number of pixels in the captured image was N/Lx x N/Ly, with Lx = 2 and Ly = 4. Figure 4(b) is obtained by averaging and sub-sampling the output data of the DRPE system (Fig. 4(a)) by 4 in the horizontal direction and by 2 in the vertical direction. As a result, the CCD captures 512x256 pixels, with a pixel size of 2∆x4∆. Figure 4(c) shows the reconstructed image after solving Eq. (12) for the output of the DRPE-CS system. Figures 4(d)–(f) zoom in on the finest resolution details, comparing the low-resolution image obtained with a conventional imaging system to the super-resolved image obtained with DRPE-CS. It is evident that the DRPE-CS method resolves almost perfectly the finest details, which are obviously lost with conventional imaging. It can be seen in Fig. 4(e) that details corresponding to 1/4 line pairs per pixel are irresolvable, while in Fig. 4(f) the finest details, corresponding to 1/2 line pairs per pixel, are evident. Thus a resolution gain of at least 2 is demonstrated.
Here, we would like to point out that the DRPE-CS model in Eq. (9) can also be mathematically related to the random convolution model in [23]. In [23], the sensing operator is defined as

Φ = RΩF*ΣF,   (14)

where Σ is a diagonal matrix whose entries are random unit-magnitude phases and RΩ denotes random sub-sampling on the index set Ω.
Inspecting the DRPE sensing model in Eq. (8) and the model presented in [23], one can see that the DRPE model is actually the adjoint of the sensing operator of [23]. Therefore, the bound on the minimum number of samples derived in [23] applies here too. Note that this is also consistent with the bound in [22]. Thus, we also establish the connection between the two approaches to structured CS. Despite the similarity between the model in Eq. (9) and that of Eq. (14), DRPE-CS offers some practical advantages, which are described in the next subsection.
4.3 Lens blurring
In this sub-section, we consider super-resolving an image heavily degraded by diffraction. The sensing model in such a case is

ud = hdiff * u,   (15)

where hdiff is the point spread function of the diffraction-limited lens and u is the DRPE output of Eq. (1). Figure 5 presents simulation results for a lens with a cutoff spatial radial frequency 6 times smaller than the one required for no blurring, i.e., with a spatial frequency cutoff fc = fnom/6, where fnom represents the diffraction cutoff frequency matched to the rest of the system. In our simulations, fnom is the cutoff frequency set by the object's pixel size. We also added measurement noise to the captured image such that the SNR was 37 dB. The reconstruction of the fine resolution details using the DRPE-CS strategy is evident from the zoomed views of Fig. 5, despite the substantial loss of spatial frequencies. We can notice in Fig. 5(d) that, due to blurring, targets with spatial frequency larger than 1/12 line pairs per pixel became irresolvable with conventional imaging. On the other hand, when acquiring and reconstructing the data using DRPE-CS, details corresponding to 1/2 line pairs per pixel (see Fig. 5(e)) are clearly resolvable. Hence SR by approximately a factor of 6 is demonstrated.
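The diffraction-limited lens in this scenario acts as an ideal radial low-pass filter. A minimal numerical model of that blur (our own sketch, with fnom taken as the Nyquist frequency of the fine grid) is:

```python
import numpy as np

def diffraction_blur(img, cutoff_ratio):
    """Ideal diffraction-limited lens: zero all radial spatial frequencies
    above cutoff_ratio * f_nom, where f_nom = 0.5 cycles/sample (Nyquist)."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]        # vertical frequencies, cycles/sample
    fx = np.fft.fftfreq(W)[None, :]        # horizontal frequencies
    keep = np.sqrt(fy**2 + fx**2) <= 0.5 * cutoff_ratio
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

rng = np.random.default_rng(5)
img = rng.random((256, 256))
blurred = diffraction_blur(img, 1 / 6)     # cutoff 6x below the nominal f_nom
```

In the DRPE-CS pipeline this filter is applied after the random phase encoding, so the high-frequency content of the object has already been spread below the cutoff before the blur removes anything.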
4.4 General degrading model
As stated in Eq. (8), we can incorporate a more general degrading model, such as when we have both blurring caused by diffraction and sub-sampling caused by the geometrical limits of the sensor (CCD, etc.). Figure 6 shows simulation results for such a scenario. The lens has a reduced radial cutoff frequency, and the CCD causes averaging and sub-sampling by a factor of 2 in both the horizontal and vertical directions. Noise was also added, yielding a 37 dB SNR. In Fig. 6, we focus on the fine details and notice that the blurring has made targets with more than 1/10 line pairs per pixel irresolvable, while DRPE-CS was able to resolve 1/2 line pairs per pixel (Fig. 6(c)). Thus, a 5-fold resolution increase is evident, and the ability of DRPE-CS to handle a general image degradation model is illustrated.
We note that the model presented in [23] is not applicable to the diffraction-limited scenario discussed in this subsection, since by applying hdiff directly to the input signal t, all the high frequencies are filtered out before entering the sensing system. This information would be lost and could not be uniquely reconstructed.
5. Conclusions
We have shown that the well known double random phase encoding architecture, traditionally used for optical security, can be successfully applied to a new task: super-resolution with a single exposure. The technique relies heavily on the property of double random phase encoding that it randomly spreads the data in both the space and spatial frequency domains, thus mimicking a universal random Gaussian sensing operator for compressive sensing. Arguably, this sensing scheme can super-resolve almost any image degradation caused by passing through a low-pass linear physical system. We have demonstrated numerically super-resolved reconstructions for sub-sampling caused by the geometrical limits of a sensor (such as a CCD array), for severe diffraction limitation, and for a combination of the two. Simulations demonstrated substantial improvements for both geometrical and optical super resolution. The sensing process works with a single exposure, which enables super resolution with real-time scene acquisition, thereby enabling video-rate implementation.
This research was partially supported by The Israel Science Foundation grant 1039/09.
References and links
1. S. Park, M. Park, and M. Gang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]
2. Z. Zalevsky and D. Mendlovic, Optical Super Resolution (Springer-Verlag, 2003).
4. S. Prasad and X. Luo, “Support-assisted optical superresolution of low-resolution image sequences: the one-dimensional problem,” Opt. Express 17(25), 23213–23233 (2009). [CrossRef]
5. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, “Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections,” Appl. Opt. 40(26), 4706–4715 (2001). [CrossRef]
7. A. Borkowski, Z. Zalevsky, and B. Javidi, “Geometrical superresolved imaging using nonperiodic spatial masking,” J. Opt. Soc. Am. A 26(3), 589–601 (2009). [CrossRef]
10. O. Matoba, T. Nomura, E. Perez-Cabre, M. S. Millan, and B. Javidi, “Optical techniques for information security,” Proc. IEEE 97(6), 1128–1148 (2009). [CrossRef]
11. E. Tajahuerce, O. Matoba, S. C. Verrall, and B. Javidi, “Optoelectronic information encryption with phase-shifting interferometry,” Appl. Opt. 39(14), 2313–2320 (2000). [CrossRef]
12. E. Tajahuerce, J. Lancis, P. Andres, V. Climent, and B. Javidi, “Optoelectronic Information Encryption with Incoherent Light,” in Optical and Digital Techniques for Information Security, B. Javidi, ed. (Springer-Verlag, 2004).
13. O. Matoba and B. Javidi, “Encrypted optical memory system using three-dimensional keys in the Fresnel domain,” Opt. Lett. 24(11), 762–764 (1999). [CrossRef]
14. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption by double-random phase encoding in the fractional Fourier domain,” Opt. Lett. 25(12), 887–889 (2000). [CrossRef]
15. P. C. Mogensen and J. Glückstad, “Phase-only optical encryption,” Opt. Lett. 25(8), 566–568 (2000). [CrossRef]
16. B. M. Hennelly, T. J. Naughton, J. McDonald, J. T. Sheridan, G. Unnikrishnan, D. P. Kelly, and B. Javidi, “Spread-space spread-spectrum technique for secure multiplexing,” Opt. Lett. 32(9), 1060–1062 (2007). [CrossRef] [PubMed]
17. O. Matoba and B. Javidi, “Encrypted optical storage with angular multiplexing,” Appl. Opt. 38(35), 7288–7293 (1999). [CrossRef]
18. E. Tajahuerce and B. Javidi, “Encrypting three-dimensional information with digital holography,” Appl. Opt. 39(35), 6595–6601 (2000). [CrossRef]
19. X. Tan, O. Matoba, Y. Okada-Shudo, M. Ide, T. Shimura, and K. Kuroda, “Secure optical memory system with polarization encryption,” Appl. Opt. 40(14), 2310–2315 (2001). [CrossRef]
20. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]
21. E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]
22. T. Do, T. Tran, and L. Gan, “Fast compressive sampling with structurally random matrices,” in Proc. ICASSP, 3369–3372, (2008).
23. J. Romberg, “Compressive sensing by random convolution,” SIAM J. Imaging Sci. 2(4), 1098–1128 (2009). [CrossRef]
24. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” IEEE/OSA J. Display Technol. (to appear, 2010).
25. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” IEEE/OSA J. Display Technol. 3(3), 315–320 (2007).