Abstract

Super-resolution is an important goal of many image acquisition systems. Here we demonstrate the possibility of achieving super-resolution with a single exposure by combining the well-known optical scheme of double random phase encoding, which has traditionally been used for encryption, with results from the relatively new and emerging field of compressive sensing. It is shown that the proposed model can be applied to recover images from a general image degradation model that includes both diffraction-limited and geometrically limited resolution.

© 2010 OSA

1. Introduction

Super resolution (SR) is considered one of the “holy grails” of optical imaging and image processing. The endeavor to obtain high spatial resolution images with limited-resolution imaging systems has raised great interest both practically and theoretically (see, for example, [1–7]). In general, the resolution of an imaging system is limited by its optical subsystem and by its digital sensing subsystem. Optical resolution is usually limited by some optical blur mechanism, the most common being diffraction blur. Digital sensing subsystems using pixelated sensors (such as CCD or CMOS) induce resolution loss due to subsampling, according to the Shannon-Nyquist sampling theorem, and due to integration over the finite pixel fill-factor. SR techniques developed to overcome the optical subsystem's spatial resolution loss are generally referred to as “optical SR” [4] or “diffractive SR” techniques [5,7]. Super resolution techniques designed to overcome the sampling limit of the imaging sensor, generally referred to as “geometrical SR” [5,7] or “digital SR” [3] techniques, have attracted most of the attention during the last three decades [1]. However, we point out that those methods cannot concurrently overcome the sensor resolution loss and the optical resolution loss: the maximum resolution achievable with digital SR is inherently upper-bounded by the diffraction-limited bandwidth [4].

In order to overcome imaging system resolution limitation, SR techniques typically capture additional object information in some indirect way, and then perform some clever processing that extracts the high resolution data from the overall captured data. The additional information is typically captured by encoding the image in the temporal domain (e.g. by taking multiple sub-pixel shifted exposures), in the spatial domain (using multiple apertures or encoding the aperture), or in other domains such as state of polarization or spectral domain.

In this work we present a SR method that does not need any additional information acquisition. The data is captured with a single exposure without sacrificing the field of view, or requiring any other measurement dimensions. The key to achieve SR without additional data is by utilizing the fact that the information in all human intelligible images is very redundant. Therefore, instead of acquiring extra data we properly encode the data within one image and apply reconstruction algorithms that reconstruct the desired high resolution image.

Our proposed approach is to use the well-known double random phase encoding (DRPE) technique [8,9] as a means of encoding the scene. The acquisition-reconstruction relies on the emerging field of compressive sensing (CS). Compressive sensing theory breaks the Shannon-Nyquist sampling paradigm by exploiting the fact that the image is sparse in some arbitrary representation basis. Our approach permits both diffractive and geometrical SR. It uses a single-shot acquisition process and does not sacrifice any measurement dimensions. Unlike some conventional SR systems, the approach proposed here does not require any movements.

The paper is organized as follows: in section 2, we briefly review the DRPE technique. In section 3, we provide a short background on CS and point out the connection between DRPE and Gaussian random sensing, which is a universal sensing scheme. In section 4, we show how DRPE enables obtaining a SR image from a single exposure, and present simulation results. We conclude in section 5.

2. Double random phase encoding (DRPE)

Double random phase encoding (DRPE) was originally developed for optical security [8]. Figure 1 depicts a block diagram of the double random phase encoding process and Fig. 2 shows a possible optical implementation. DRPE is based on random masks placed in the input and Fourier planes of the optical system, which whiten both the data and its Fourier spectrum. Many implementation methods have been reported for the original DRPE [8] and similar setups, including coherent [9–11], incoherent [12], Fresnel [13], and fractional Fourier [14] domains, and other implementations (see [15–19], to name a few). Here we adopt compressive sensing techniques to obtain SR imaging with DRPE; we therefore denote the method DRPE-CS.

 

Fig. 1 Block diagram of the double random phase encoding process. ℑ denotes the Fourier transform operator.


 

Fig. 2 DRPE implementation of Fig. 1 using a 4f optical scheme.


First, let us review the equations describing the double random phase encoding process. Let t(x,y) represent an N×N object. It may be the object's field in the object plane or any field uniquely related to the object, e.g., a free-space propagation of the object's complex field. Also, p(x,y) and b(u,v) are two independent i.i.d. white sequences uniformly distributed on [0,1]. Then, the encoded output u(x,y) is given by [8]:

$$u(x,y)=\{t(x,y)\exp[j2\pi p(x,y)]\}\ast h(x,y),\tag{1}$$
where $H(u,v)=\exp[j2\pi b(u,v)]$ is the Fourier transform of $h(x,y)$. Figure 2 shows a 4-f optical scheme, where RPM denotes the random phase masks.
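As a concrete illustration, the encoding of Eq. (1) can be simulated numerically with FFTs. The following is a minimal numpy sketch (the function name `drpe_encode` and the seeding are ours, not part of the original optical setup). Since both masks are unit-modulus phase functions, the encoding is unitary and preserves the signal energy:

```python
import numpy as np

def drpe_encode(t, seed=0):
    """Double random phase encoding of Eq. (1), simulated with FFTs.

    t : 2-D array, the (possibly complex) object field.
    Returns u = IFFT{ FFT{ t * exp(j*2*pi*p) } * exp(j*2*pi*b) }.
    """
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.0, 1.0, t.shape)   # input-plane random phase mask
    b = rng.uniform(0.0, 1.0, t.shape)   # Fourier-plane random phase mask
    field = t * np.exp(2j * np.pi * p)                       # first mask
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * b)   # second mask, Fourier plane
    return np.fft.ifft2(spectrum)

# Both masks have unit modulus, so the total energy is preserved:
t = np.random.rand(64, 64)
u = drpe_encode(t)
print(np.allclose(np.sum(np.abs(u)**2), np.sum(np.abs(t)**2)))  # True
```

The output field looks like complex white noise even for a highly structured input, which is exactly the whitening property the paper relies on.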

We shall show that the DRPE process allows us to create a single exposure compressive imaging scheme, with lateral resolution gain. This is due to the fact that it implements a universal CS scheme, as demonstrated in the next section.

3. Double random phase encoder as a universal compressive sensing encoder

3.1 Brief introduction to Compressive sensing

Compressive sensing (see, for example, [20–25]) is a relatively new sampling paradigm which seeks to sense only the “essential” features of a signal/image; i.e., CS minimizes the sampling process. This is in contrast to the conventional sampling paradigm, which can be summarized as: sample as much data as possible, and then discard most of it using some compression method (e.g., the common JPEG).

Figure 3 shows a block diagram of the compressive imaging process [25]. To formulate the CS concept mathematically, let us consider an object t described by an N-dimensional real-valued vector (when the object represents an image of N pixels, t is a one-dimensional vector obtained by rearranging the image in lexicographic order) being projected (imaged) onto u, an M-dimensional vector. One can also think of M as the number of detector pixels. In CS we are interested in the case M<N, i.e., the signal is undersampled according to the Shannon-Nyquist theorem. The sensing process is given by:

$$u=\Phi t,\tag{2}$$
where $\Phi$ is an $M\times N$ matrix.

 

Fig. 3 Imaging scheme of compressed sensing [25]


Compressive sensing relies on two principles: signal sparsity, and the incoherence between the sensing and sparsifying operators [21]. By assuming signal sparsity, we infer that the signal t can be sparsely represented in some arbitrary orthonormal basis Ψ (e.g., wavelet or DCT). Thus, $\alpha=\Psi^{T}t$ is the K-sparse representation of the image t, meaning that α has only K non-zero entries.

Incoherence is a measure of dissimilarity between the sensing and sparsifying operators. The measure is mathematically quantified by Eq. (3):

$$\mu(\Phi,\Psi)=\sqrt{N}\,\max_{i,j}\left|\langle\varphi_i,\psi_j\rangle\right|,\tag{3}$$
where $\varphi_i$ and $\psi_j$ denote the column vectors of Φ and Ψ, respectively, $\langle\cdot,\cdot\rangle$ denotes the regular inner product, and N is the length of the column vectors. The mutual coherence is bounded by $1\le\mu\le\sqrt{N}$ [21]. CS theory suggests that a signal (image) measured with sensing operator Φ can actually be recovered by ℓ1-norm minimization. The estimated coefficient vector $\hat{\alpha}$ is the solution of the convex optimization program [21]:
$$\hat{\alpha}=\arg\min_{\alpha}\|\alpha\|_1\quad\text{subject to}\quad u=\Phi\Psi\alpha,\tag{4}$$
where $\|\alpha\|_1=\sum_i|\alpha_i|$ is the ℓ1-norm. One way of guaranteeing recovery of a K-sparse signal t via ℓ1-norm minimization is by taking M measurements satisfying [20,21]:
$$M\ge C\,\mu^2(\Phi,\Psi)\,K\log(N),\tag{5}$$
where C is some small positive constant. The role of the mutual coherence now becomes clear: the larger it is, the more samples are needed. One can also think of the mutual coherence as a measure of how much the projection Φ spreads the information among many entries. Thus, if every coefficient of the sparsely represented signal is spread over many projections, we have a better chance of reconstructing the signal from fewer samples. A Gaussian random sensing basis is often chosen since it is a universal CS operator, meaning that it fits signals sparse in any domain [21]; i.e., its mutual coherence is $\mu\approx\sqrt{2\log N}$ regardless of Ψ.
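To make the coherence measure of Eq. (3) and the bound $1\le\mu\le\sqrt{N}$ concrete, here is a small numpy sketch (the function and variable names are ours, chosen for illustration). It compares a Gaussian sensing matrix against the worst case, where the sensing basis coincides with the sparsifying basis:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Mutual coherence of Eq. (3): sqrt(N) * max_{i,j} |<phi_i, psi_j>|,
    with the rows of Phi and columns of Psi normalized to unit length."""
    N = Psi.shape[0]
    Phi_n = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)   # sensing vectors
    Psi_n = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # sparsifying basis
    return np.sqrt(N) * np.max(np.abs(Phi_n @ Psi_n))

rng = np.random.default_rng(0)
N = 256
Psi = np.eye(N)                       # signal sparse in the canonical basis
Phi_gauss = rng.normal(size=(N, N))   # Gaussian sensing: low coherence
Phi_spike = np.eye(N)                 # fully coherent with Psi: worst case
print(mutual_coherence(Phi_gauss, Psi))  # small, far below sqrt(N)
print(mutual_coherence(Phi_spike, Psi))  # sqrt(N) = 16.0, the upper bound
```

By Eq. (5), the Gaussian matrix therefore needs on the order of K log N measurements, while a sensing basis coherent with Ψ gains nothing from sparsity.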

Often, when using CS, the sub-sampling is done by picking M out of N measurements uniformly at random [20,21]. This may impose limitations on the physical realization of an image sensing system. Consider a CCD camera with N×N pixels: random sub-sampling means turning off many of the pixels, thus not using the entire capability of the sensor. This setting is not of much practical value unless each and every pixel is extremely expensive, which may be the case for some detectors (UV, for example). On the other hand, if we use a Gaussian random projection operator, we do not have to randomly subsample the measurements. In the case of a random Gaussian projection, we just need to take M measurements, which is more compatible with our physically constrained blurring and sampling scheme.

3.2 Double phase encoding as a universal sensing operator

The Gaussian sensing operator holds two key properties that make it very popular for CS applications [21,22]. The first is that each row is a set of i.i.d. Gaussian random variables drawn from the same distribution [22]. This property assures low mutual coherence (Eq. (3)) between the sensing and sparsifying bases. The second desired property is the statistical independence between the measurements [22]. The DRPE operator performs exactly this process: it statistically de-correlates the data in both the space and spatial frequency domains [16]. Thus, the DRPE operator acts like a random Gaussian sensing operator. It can be shown [22] that when using such a sensing scheme, only about $M\sim K\log^2 N$ compressive measurements are needed in order to guarantee reconstruction of the signal; this is therefore the number of samples needed to reconstruct a signal sensed with the DRPE process.

Reformulating Eq. (1) in vector-matrix form reveals the similarity between DRPE operation and the Gaussian random sensing operator. The vector-matrix formulation of Eq. (1) is:

$$u=F^{\ast}HFPt,\tag{6}$$
where $P=\mathrm{diag}(\exp[j2\pi p(x,y)])$ is a diagonal matrix having the elements of the first random phase mask in Eq. (1) on its diagonal, $H=\mathrm{diag}(\exp[j2\pi b(u,v)])$, and $F$ is the $N^2\times N^2$ discrete Fourier transform matrix. The input t and output u are $N^2\times 1$ lexicographic arrangements of the input and output fields, respectively. All the matrices are of size $N^2\times N^2$. Thus, we may write the matrix FP in Eq. (6), which represents random scrambling in the frequency domain, as:
$$FP=\begin{bmatrix}1&1&\cdots&1\\1&W_N&\cdots&W_N^{(N-1)}\\\vdots&\vdots&&\vdots\\1&W_N^{(N-1)}&\cdots&W_N^{(N-1)(N-1)}\end{bmatrix}\begin{bmatrix}e^{j2\pi p_1}&0&\cdots&0\\0&e^{j2\pi p_2}&\cdots&0\\\vdots&&\ddots&\vdots\\0&0&\cdots&e^{j2\pi p_N}\end{bmatrix}=\begin{bmatrix}e^{j2\pi p_1}&e^{j2\pi p_2}&\cdots&e^{j2\pi p_N}\\e^{j2\pi p_1}&W_N e^{j2\pi p_2}&\cdots&W_N^{(N-1)}e^{j2\pi p_N}\\\vdots&\vdots&&\vdots\\e^{j2\pi p_1}&W_N^{(N-1)}e^{j2\pi p_2}&\cdots&W_N^{(N-1)(N-1)}e^{j2\pi p_N}\end{bmatrix},\tag{7}$$
where $W_N=e^{j2\pi/N}$ and $p_i$ is drawn from a uniform distribution on [0,1]. Recall that all the entries of b(u,v) and p(x,y) are drawn independently from a uniform distribution on [0,1]; therefore, since $E\{e^{j2\pi p_i}e^{j2\pi p_j}\}=0$, FP exhibits inter-column statistical independence. In this sense the FP operator behaves equivalently to the random Gaussian sensing scheme [22]. According to most CS implementations we should, at this stage, randomly sub-sample FPt in order to guarantee measurement independence [21,22]. However, since we would like to obey the physical constraints of our optical system, which performs deterministic sampling by its nature, while still de-correlating the measurements, we can instead perform the de-correlation optically and sub-sample deterministically. This is achieved by a second random scrambling (this time in the space domain) in the same manner, using the $F^{\ast}H$ operator in Eq. (6). Applying $F^{\ast}H$ has the same de-correlating effect on the result of the FPt operator that FP had on t; this can also be seen as guaranteeing inter-row statistical independence. Statistical independence between the measurements is now guaranteed. After the second scrambling, the signal may undergo deterministic blurring or sub-sampling and still enjoy the powerful results of CS theory described in section 3.
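The statistical-independence argument above can be checked numerically. The sketch below (our own illustration, not the authors' code) first verifies by Monte-Carlo that the expectation $E\{e^{j2\pi p_i}e^{j2\pi p_j}\}$ vanishes for independent uniform phases, and then builds the small-N matrix FP of Eq. (6) explicitly:

```python
import numpy as np

# Monte-Carlo check: for p_i, p_j drawn i.i.d. uniform on [0,1],
# E{ exp(j*2*pi*p_i) * exp(j*2*pi*p_j) } = 0, which underlies the
# inter-column statistical independence of FP.
rng = np.random.default_rng(0)
trials = 200_000
p = rng.uniform(0.0, 1.0, size=(trials, 2))
phase = np.exp(2j * np.pi * p)
est = np.mean(phase[:, 0] * phase[:, 1])   # sample mean -> 0 as trials grow
print(abs(est))                            # close to 0, O(1/sqrt(trials))

# Building FP explicitly for a small N shows its structure: each column k
# is the k-th DFT column scaled by the unit phasor exp(j*2*pi*p_k).
N = 8
F = np.fft.fft(np.eye(N))                  # N x N DFT matrix (powers of W_N)
P = np.diag(np.exp(2j * np.pi * rng.uniform(size=N)))
FP = F @ P
print(np.allclose(np.abs(FP), 1.0))        # True: every entry has unit modulus
```

The unit-modulus entries mirror the Gaussian operator's property of spreading every input coefficient evenly over all measurements.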

4. Super resolution with double random phase encoding

4.1 DRPE image degradation model

Let us consider an object image with pixel size Δo. The image has a size of NΔo in both the x and y directions. The random phase masks have a pixel size Δp, which we assume to be as small as Δo. The phase mask pixel, Δp, serves as the element of the high-resolution grid and ultimately determines the achievable resolution. The image degradation model applied to the proposed DRPE sensing model is given by:

$$u_L(x,y)=D^{(L)}\big[\{t(x,y)\exp[j2\pi p(x,y)]\}\ast h(x,y)\ast h_s\big],\tag{8}$$
where $D^{(L)}$ is the decimation operator, which picks one of every L samples. The $h_s$ operator accounts for the entire blurring caused by the optical system, including blur due to the sensor's geometrical limits, small NA (diffraction-limited imaging), motion blur and defocus blur.

The main thing to bear in mind is the critical role of the first multiplication, which spreads the image's spatial frequencies before they propagate through the rest of the system. Since the convolution operator is commutative, $h(x,y)\ast h_s=h_s\ast h(x,y)$, the spatial bandwidth of the rest of the system is always limited by that of $h_s$. Therefore, without the first multiplication, the high spatial frequencies are lost due to the sensor or optics limitations, and there is no practical way they could be fully recovered. In the next section, we demonstrate that this way of encoding provides an effective spatial bandwidth extension both when $h_s$ is dominated by the pixelated sensor and when it is determined by diffraction.

4.2 Geometrical sub-sampling

Geometrical sub-sampling refers to the case in which the resolution limitation is caused by the digital sensor (e.g., CCD or CMOS). Let us assume for convenience that the object is sub-sampled by factors of Lx and Ly in the x and y directions, respectively, i.e., each image pixel has a size of $L_x\Delta\times L_y\Delta$ due to sensor pixelation. Consequently, the number of CCD pixels is N/L, where $L=L_xL_y$. The image obtained with the DRPE system is given by:

$$u_L(x,y)=D^{(L)}\big[\{t(x,y)\exp[j2\pi p(x,y)]\}\ast h(x,y)\ast h_{CCD}\big],\tag{9}$$
where
$$h_{CCD}=\mathrm{rect}\!\left(\frac{x}{L_x\Delta}\right)\mathrm{rect}\!\left(\frac{y}{L_y\Delta}\right)\tag{10}$$
represents the averaging (integration) over the sensor pixel area. $D^{(L)}$ stands for picking one sample by averaging each $L_x\times L_y$ block of “high resolution” pixels. For example, with sub-sampling by a factor of L = 4, this operator is written in matrix notation (for a 1-D signal) as:
$$u_L=D^{(4)}u=\frac{1}{4}SDu=\frac{1}{4}\begin{bmatrix}1&0&0&0&0&\cdots&0\\0&0&0&0&1&\cdots&0\\\vdots&&&&&\ddots&\vdots\\0&\cdots&0&0&0&0&1\end{bmatrix}\begin{bmatrix}1&1&1&1&0&\cdots&0\\0&1&1&1&1&\cdots&0\\\vdots&&&&&\ddots&\vdots\\0&\cdots&0&1&1&1&1\end{bmatrix}u,\tag{11}$$
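The averaging decimation operator $D^{(L)}$ described above is easy to sketch numerically. The following numpy snippet (function name `decimate_avg` is ours) implements it for a 1-D signal, both as a block-mean and as an explicit matrix:

```python
import numpy as np

def decimate_avg(u, L):
    """1-D averaging decimation D^(L): average each block of L consecutive
    "high resolution" samples into one low-resolution sample."""
    u = np.asarray(u, dtype=float)
    return u.reshape(-1, L).mean(axis=1)

u = np.arange(16.0)                 # 16 high-resolution samples
print(decimate_avg(u, 4))           # [ 1.5  5.5  9.5 13.5]

# The same operator as one explicit (N/L) x N matrix, i.e. the product of the
# selection matrix S, the averaging matrix D and the 1/L factor collapsed
# together: each row holds L consecutive entries of value 1/L.
M = np.kron(np.eye(4), np.ones((1, 4)) / 4.0)
print(np.allclose(M @ u, decimate_avg(u, 4)))   # True
```

The 2-D sensor case is the same operation applied blockwise with an $L_x\times L_y$ kernel.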
where S denotes the sub-sampling matrix and D performs the averaging (low-pass) operation. In order to reconstruct the object t from the measured u according to Eq. (9), we choose to solve the problem:
$$\min_t\ \|\Psi^{T}t\|_1+\gamma\,TV(t)\quad\text{s.t.}\quad u=DF^{-1}HFPt,\tag{12}$$
where TV stands for the total variation operator, defined as
$$TV(x)=\sum_{i,j}\sqrt{(x_{i+1,j}-x_{i,j})^2+(x_{i,j+1}-x_{i,j})^2}.\tag{13}$$
The measured signal is denoted by u, and Ψ is the sparsifying operator. The TV functional used here is well known in signal and image processing for its tendency to suppress spurious high-frequency features.
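The TV functional defined above can be sketched in a few lines of numpy (our own illustration; forward differences with a zero boundary at the last row/column are one common discretization choice):

```python
import numpy as np

def total_variation(x):
    """Isotropic total variation of a 2-D array x: sum over pixels of
    sqrt(dx^2 + dy^2), with forward differences and zero at the boundary."""
    dx = np.zeros_like(x); dx[:-1, :] = x[1:, :] - x[:-1, :]   # x_{i+1,j} - x_{i,j}
    dy = np.zeros_like(x); dy[:, :-1] = x[:, 1:] - x[:, :-1]   # x_{i,j+1} - x_{i,j}
    return np.sum(np.sqrt(dx**2 + dy**2))

flat = np.ones((8, 8))                        # constant image
step = np.zeros((8, 8)); step[:, 4:] = 1.0    # single vertical edge
print(total_variation(flat))   # 0.0: constant images have zero TV
print(total_variation(step))   # 8.0: one unit jump along 8 rows
```

The step example shows why TV favors piecewise-constant images: a single sharp edge is cheap, while oscillatory high-frequency artifacts accumulate a large penalty.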

Figure 4 shows simulation results for the geometrically resolution-limited model using a USAF resolution chart. The original image has a size of 1024×1024 pixels, which was also the size of the random phase masks. The detector pixel size was taken to be 2∆ by 4∆, i.e., 2 and 4 times larger than the object pixel in the vertical and horizontal directions, respectively. Accordingly, the number of pixels in the captured image was N/Lx × N/Ly, with Lx = 2 and Ly = 4. Figure 4(b) is obtained by averaging and sub-sampling the output data of the DRPE system (Fig. 1) by a factor of 4 in the horizontal direction and 2 in the vertical direction. As a result, the CCD captures 512×256 pixels, with a pixel size of 2∆×4∆. Figure 4(c) shows the reconstructed image after solving Eq. (12) for the output of the DRPE-CS system. Figures 4(d)–4(f) zoom in on the finest resolution details, comparing the low-resolution image obtained with a conventional imaging system to the super-resolved image obtained with DRPE-CS. It is evident that the DRPE-CS method resolves almost perfectly the finest details, which are obviously lost with conventional imaging. It can be seen in Fig. 4(e) that details corresponding to 1/4 line pairs per pixel are irresolvable, while in Fig. 4(f) the finest details, corresponding to 1/2 line pairs per pixel, are resolved. Thus a resolution gain of at least 2 is demonstrated.

 

Fig. 4 (a) Original 1024×1024 pixel USAF resolution chart image, with pixel spacing ∆. (b) DRPE image captured with 512×256 pixels, each averaging a 2×4 pixel area. (c) Image reconstructed from the DRPE captured image (b). (d) Zoom-in to (a). (e) Result of downsampling the image in (a): 512×256 regular samples taken from the original image averaged with a 2×4 pixel kernel and then up-sampled to 1024×1024 by bi-cubic interpolation. (f) Zoom-in to (c), which is captured with DRPE-CS.


Here, we would like to point out that the DRPE-CS model in Eq. (9) can also be mathematically related to the random convolution model of [23]. In [23], the sensing operator is defined as:

$$u_d=D^{(L)}\big\{\,[\,t\ast h_{\mathrm{random\_phase}}\,]\cdot h_{\mathrm{random\_signs}}\ast h_{CCD}\,\big\}.\tag{14}$$

After inspection of the DRPE sensing model in Eq. (8) and the model presented in [23], one can see that the DRPE model is actually the adjoint sensing operator of [23]. Therefore, the bound on the minimum number of samples derived in [23], $M\sim K\log^2 N$, applies here too. Note that this is also consistent with the bound in [22]. Thus, we also offer a connection between the two approaches for structured CS. Despite the similarity between the model in Eq. (9) and that of Eq. (14), DRPE-CS offers some practical advantages, which are described in the next subsection.

4.3 Lens blurring

In this sub-section, we consider super-resolving an image heavily degraded by diffraction. The sensing model in such a case is:

$$u_L(x,y)=\{t(x,y)\exp[j2\pi p(x,y)]\}\ast h(x,y)\ast h_{Diff},\tag{15}$$
where $h_{Diff}$ is the blurring induced by the finite aperture of the imaging optics. Let us define $f_{nom}$ as the nominal cutoff frequency the lens aperture requires in order to capture the image without blurring. In such a case we have to solve the following:
$$\min_t\ \|\Psi^{T}t\|_1+\gamma\,TV(t)\quad\text{s.t.}\quad u=F^{-1}HFAPt,\tag{16}$$
where A is the matrix-vector representation of the aperture function in the spatial frequency domain. Figure 5 presents simulation results for a lens with a cutoff spatial radial frequency 6 times smaller than the one required for no blurring, i.e., with $f_{cutoff}=f_{nom}/6$, where $f_{nom}$ represents the diffraction cutoff frequency matched to the rest of the system. In our simulations, $f_{nom}$ is the cutoff frequency set by the object's pixel size. We also added measurement noise to the captured image such that the SNR was 37 dB. The reconstruction of the fine resolution details using the DRPE-CS strategy is evident from the zoom-in of Fig. 5, despite the substantial spatial frequency sub-sampling. Notice in Fig. 5(d) that, due to blurring, targets with spatial frequency larger than 1/12 line pairs per pixel become irresolvable using conventional imaging. On the other hand, when acquiring and reconstructing the data using DRPE-CS, details corresponding to 1/2 line pairs per pixel (see Fig. 5(e)) are clearly resolvable. Hence SR by approximately a factor of 6 is demonstrated.
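In a simulation, the diffraction-limited channel $h_{Diff}$ can be emulated as an ideal circular low-pass filter in the spatial-frequency domain. The sketch below is our own illustration; it assumes $f_{nom}$ equals the Nyquist frequency of the object grid (0.5 cycles/pixel), consistent with $f_{nom}$ being set by the object's pixel size, and the function name is hypothetical:

```python
import numpy as np

def diffraction_blur(img, cutoff_fraction):
    """Emulate a diffraction-limited lens as an ideal circular low-pass filter.

    cutoff_fraction : f_cutoff / f_nom (e.g. 1/6), with f_nom taken as the
    Nyquist frequency of the object grid, 0.5 cycles/pixel.
    """
    n = img.shape[0]
    f = np.fft.fftfreq(n)                          # frequencies in cycles/pixel
    fx, fy = np.meshgrid(f, f)
    aperture = np.hypot(fx, fy) <= 0.5 * cutoff_fraction   # circular pupil mask
    return np.real(np.fft.ifft2(np.fft.fft2(img) * aperture))

img = np.random.rand(128, 128)
blurred = diffraction_blur(img, 1 / 6)
# The DC term passes, so the mean is preserved, while the high frequencies
# are removed and the image varies much more slowly:
print(abs(blurred.mean() - img.mean()) < 1e-9)                               # True
print(np.std(np.diff(blurred, axis=0)) < np.std(np.diff(img, axis=0)))      # True
```

Applying such a filter directly to t destroys the fine details irreversibly; applying it after the random phase multiplication, as in the DRPE-CS model, is what makes them recoverable.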

 

Fig. 5 (a) The target of Fig. 4(a) blurred by an aperture with $f_{cutoff}=f_{nom}/6$, with additive noise yielding an SNR of 37 dB. (b) Reconstruction from data captured with DRPE-CS. (c) Zoom-in to the original object (Fig. 4(a)). (d) Zoom-in to (a). (e) Zoom-in to (b), which is captured with DRPE-CS.


4.4 General degrading model

As stated in Eq. (8), we can incorporate a more general degradation model, for instance when there is both blurring caused by diffraction and sub-sampling caused by the geometrical limits of the sensor (CCD, etc.). Figure 6 shows simulation results for such a scenario. The lens has a radial cutoff frequency $f_{cutoff}=f_{nom}/5$, and the CCD causes averaging and sub-sampling by a factor of 2 in both the horizontal and vertical directions. Noise was also added, yielding a 37 dB SNR. In Fig. 6 we focus on the fine details and notice that the blurring makes targets with more than 1/10 line pairs per pixel irresolvable, while DRPE-CS is able to resolve 1/2 line pairs per pixel (Fig. 6(c)). Thus, a 5-fold resolution increase is evident, and the ability of DRPE-CS to handle a general image degradation model is illustrated.

 

Fig. 6 (a) Zoom-in to the original object (Fig. 4(a)). (b) Blurring by an aperture with $f_{cutoff}=f_{nom}/5$, sensor down-sampling by a 2×2 factor, and additive measurement noise yielding a 37 dB SNR. (c) Reconstruction result using the DRPE-CS strategy.


We note that the model presented in [23] is not applicable to the diffraction-limited scenario discussed in this subsection, since applying $h_{Diff}$ directly to the input signal t filters out all the high frequencies before they enter the sensing system. This information would be lost and could not be uniquely reconstructed.

5. Conclusions

We have shown that the well-known double random phase encoding architecture, which has traditionally been used for optical security, can be successfully used for a new application: super-resolution with a single exposure. The technique relies heavily on the property of double random phase encoding that randomly spreads the data in both the space and spatial frequency domains, thus mimicking a universal random Gaussian sensing operator for compressive sensing. Arguably, this sensing scheme can be used to super-resolve almost any image degradation caused by passing through a low-pass linear physical system. We have demonstrated numerically super resolution reconstructions for sub-sampling caused by the geometrical limits of a sensor (such as a CCD array), for severe diffraction limitation, and for a combination of the two. The simulations demonstrated substantial improvements for both geometrical and optical super resolution. The sensing process works with a single exposure, which enables super resolution with real-time scene acquisition and therefore video-rate implementation.

Acknowledgements

This research was partially supported by The Israel Science Foundation grant 1039/09.

References and links

1. S. Park, M. Park, and M. Gang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003).

2. Z. Zalevsky and D. Mendlovic, Optical Super Resolution (Springer-Verlag, 2003).

3. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).

4. S. Prasad and X. Luo, “Support-assisted optical superresolution of low-resolution image sequences: the one-dimensional problem,” Opt. Express 17(25), 23213–23233 (2009).

5. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, “Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections,” Appl. Opt. 40(26), 4706–4715 (2001).

6. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005).

7. A. Borkowski, Z. Zalevsky, and B. Javidi, “Geometrical superresolved imaging using nonperiodic spatial masking,” J. Opt. Soc. Am. A 26(3), 589–601 (2009).

8. P. Réfrégier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995).

9. B. Javidi, G. Zhang, and J. Li, “Encrypted optical memory using double-random phase encoding,” Appl. Opt. 36(5), 1054–1058 (1997).

10. O. Matoba, T. Nomura, E. Perez-Cabre, M. S. Millan, and B. Javidi, “Optical techniques for information security,” Proc. IEEE 97(6), 1128–1148 (2009).

11. E. Tajahuerce, O. Matoba, S. C. Verrall, and B. Javidi, “Optoelectronic information encryption with phase-shifting interferometry,” Appl. Opt. 39(14), 2313–2320 (2000).

12. E. Tajahuerce, J. Lancis, P. Andres, V. Climent, and B. Javidi, “Optoelectronic information encryption with incoherent light,” in Optical and Digital Techniques for Information Security, B. Javidi, ed. (Springer-Verlag, 2004).

13. O. Matoba and B. Javidi, “Encrypted optical memory system using three-dimensional keys in the Fresnel domain,” Opt. Lett. 24(11), 762–764 (1999).

14. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption by double-random phase encoding in the fractional Fourier domain,” Opt. Lett. 25(12), 887–889 (2000).

15. P. C. Mogensen and J. Glückstad, “Phase-only optical encryption,” Opt. Lett. 25(8), 566–568 (2000).

16. B. M. Hennelly, T. J. Naughton, J. McDonald, J. T. Sheridan, G. Unnikrishnan, D. P. Kelly, and B. Javidi, “Spread-space spread-spectrum technique for secure multiplexing,” Opt. Lett. 32(9), 1060–1062 (2007).

17. O. Matoba and B. Javidi, “Encrypted optical storage with angular multiplexing,” Appl. Opt. 38(35), 7288–7293 (1999).

18. E. Tajahuerce and B. Javidi, “Encrypting three-dimensional information with digital holography,” Appl. Opt. 39(35), 6595–6601 (2000).

19. X. Tan, O. Matoba, Y. Okada-Shudo, M. Ide, T. Shimura, and K. Kuroda, “Secure optical memory system with polarization encryption,” Appl. Opt. 40(14), 2310–2315 (2001).

20. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

21. E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).

22. T. Do, T. Tran, and L. Gan, “Fast compressive sampling with structurally random matrices,” in Proc. ICASSP, 3369–3372 (2008).

23. J. Romberg, “Compressive sensing by random convolution,” SIAM J. Imaging Sci. 2(4), 1098–1128 (2009).

24. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” to appear in IEEE/OSA J. Display Technol. (2010).

25. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” IEEE/OSA J. Display Technol. 3(3), 315–320 (2007).

References

  • View by:
  • |
  • |
  • |

  1. S. Park, M. Park, and M. Gang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003).
    [CrossRef]
  2. Z. Zalevsky and D. Mendlovic, Optical Super Resolution (Springer-Verlag, 2003).
  3. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).
    [CrossRef] [PubMed]
  4. S. Prasad and X. Luo, “Support-assisted optical superresolution of low-resolution image sequences: the one-dimensional problem,” Opt. Express 17(25), 23213–23233 (2009).
    [CrossRef]
  5. A. Stern, Y. Porat, A. Ben-Dor, and N. S. Kopeika, “Enhanced-resolution image restoration from a sequence of low-frequency vibrated images by use of convex projections,” Appl. Opt. 40(26), 4706–4715 (2001).
    [CrossRef]
  6. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005).
    [CrossRef] [PubMed]
  7. A. Borkowski, Z. Zalevsky, and B. Javidi, “Geometrical superresolved imaging using nonperiodic spatial masking,” J. Opt. Soc. Am. A 26(3), 589–601 (2009).
    [CrossRef]
  8. P. Réfrégier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995).
    [CrossRef] [PubMed]
  9. B. Javidi, G. Zhang, and J. Li, “Encrypted optical memory using double-random phase encoding,” Appl. Opt. 36(5), 1054–1058 (1997).
    [CrossRef] [PubMed]
  10. O. Matoba, T. Nomura, E. Perez-Cabre, M. S. Millan, and B. Javidi, “Optical techniques for information security,” Proc. IEEEl 97(6), 1128–1148 (2009).
    [CrossRef]
  11. E. Tajahuerce, O. Matoba, S. C. Verrall, and B. Javidi, “Optoelectronic information encryption with phase-shifting interferometry,” Appl. Opt. 39(14), 2313–2320 (2000).
    [CrossRef]
  12. E. Tajahuerce, J. Lancis, P. Andres, V. Climent, and B. Javidi, “Optoelectronic Information Encryption with Incoherent Light,” in Optical and Digital Techniques for Information Security, B. Javidi, ed. (Springer-Verlag, 2004).
  13. O. Matoba and B. Javidi, “Encrypted optical memory system using three-dimensional keys in the Fresnel domain,” Opt. Lett. 24(11), 762–764 (1999).
    [CrossRef]
  14. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption by double-random phase encoding in the fractional Fourier domain,” Opt. Lett. 25(12), 887–889 (2000).
    [CrossRef]
  15. P. C. Mogensen and J. Glückstad, “Phase-only optical encryption,” Opt. Lett. 25(8), 566–568 (2000).
    [CrossRef]
  16. B. M. Hennelly, T. J. Naughton, J. McDonald, J. T. Sheridan, G. Unnikrishnan, D. P. Kelly, and B. Javidi, “Spread-space spread-spectrum technique for secure multiplexing,” Opt. Lett. 32(9), 1060–1062 (2007).
    [CrossRef] [PubMed]
  17. O. Matoba and B. Javidi, “Encrypted optical storage with angular multiplexing,” Appl. Opt. 38(35), 7288–7293 (1999).
    [CrossRef]
  18. E. Tajahuerce and B. Javidi, “Encrypting three-dimensional information with digital holography,” Appl. Opt. 39(35), 6595–6601 (2000).
    [CrossRef]
  19. X. Tan, O. Matoba, Y. Okada-Shudo, M. Ide, T. Shimura, and K. Kuroda, “Secure optical memory system with polarization encryption,” Appl. Opt. 40(14), 2310–2315 (2001).
    [CrossRef]
  20. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
    [CrossRef]
  21. E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).
    [CrossRef]
  22. T. Do, T. Tran, and L. Gan, “Fast compressive sampling with structurally random matrices,” in Proc. ICASSP, 3369–3372, (2008).
  23. J. Romberg, “Compressive sensing by random convolution,” SIAM J. Imaging Sci. 2(4), 1098–1128 (2009).
    [CrossRef]
  24. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel Holography,” to appear in IEEE/OSA J. on Display Technology, (2010).
  25. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” IEEE/OSA Journal on Display Technology, 3(3), 315–320 (2007).




Figures (6)

Fig. 1

Block diagram of the double random phase encoding (DRPE) process. ℑ denotes the Fourier transform operator.

Fig. 2

DRPE implementation of Fig. 1 using a 4f optical scheme.

Fig. 3

Imaging scheme of compressive sensing [25].

Fig. 4

(a) Original 1024×1024-pixel USAF resolution chart image, with pixel spacing Δ. (b) DRPE image captured with 512×256 pixels, averaged over 2×4 pixels. (c) Image reconstructed from the DRPE captured image in (b). (d) Zoom-in to (a). (e) Result of downsampling the image in (a): 512×256 regular samples taken from the original image, averaged with a 2×4-pixel kernel, and then upsampled to 1024×1024 by bicubic interpolation. (f) Zoom-in to (c), which was captured with DRPE-CS.

Fig. 5

(a) The target in Fig. 4(a) blurred by an aperture with f_cutoff = f_nom/6, with additive noise yielding an SNR of 37 dB. (b) Reconstruction from data captured with DRPE-CS. (c) Zoom-in to the original object (Fig. 4(a)). (d) Zoom-in to (a). (e) Zoom-in to (b), which was captured with DRPE-CS.

Fig. 6

(a) Zoom-in to the original object (Fig. 4(a)). (b) Blurring by an aperture with f_cutoff = f_nom/5, sensor downsampling by a 2×2 factor, and additive measurement noise yielding a 37 dB SNR. (c) Reconstruction result using the DRPE-CS strategy.

Equations (16)


$$u(x,y)=\left\{t(x,y)\exp\left[j2\pi p(x,y)\right]\right\}\ast h(x,y),\tag{1}$$
$$\mathbf{u}=\boldsymbol{\Phi}\mathbf{t},\tag{2}$$
$$\mu=\sqrt{N}\,\max_{i,j}\left|\left\langle \varphi_i,\psi_j \right\rangle\right|,\tag{3}$$
$$\hat{\boldsymbol{\alpha}}=\min_{\boldsymbol{\alpha}}\;\|\boldsymbol{\alpha}\|_1 \quad \text{subject to} \quad \mathbf{u}=\boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{\alpha},\tag{4}$$
$$M\geq C\,\mu^2(\boldsymbol{\Phi},\boldsymbol{\Psi})\,K\log(N),\tag{5}$$
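The standard mutual-coherence definition in Eq. (3) can be checked numerically. A minimal NumPy sketch (illustrative only; the function name `mutual_coherence` is not from the paper) computes μ = √N max |⟨φ_i, ψ_j⟩| for the maximally incoherent DFT/spike basis pair, for which μ = 1:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu = sqrt(N) * max_{i,j} |<phi_i, psi_j>| between sensing rows phi_i
    and representation columns psi_j, each normalized to unit norm."""
    N = Phi.shape[1]
    Phi = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)  # normalize rows
    Psi = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)  # normalize columns
    return np.sqrt(N) * np.max(np.abs(Phi @ Psi))

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)  # unitary DFT as sensing basis
I = np.eye(N)                           # identity (spike) representation basis
mu = mutual_coherence(F, I)             # maximally incoherent pair: mu = 1
```

Lower coherence means fewer measurements M are needed in the bound of Eq. (5).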
$$\mathbf{u}=\mathbf{F}^{*}\mathbf{H}\mathbf{F}\mathbf{P}\mathbf{t},\tag{6}$$
$$\mathbf{F}\mathbf{P}=
\begin{bmatrix}
1 & 1 & \cdots & 1\\
1 & W_N & \cdots & W_N^{(N-1)}\\
\vdots & \vdots & \ddots & \vdots\\
1 & W_N^{(N-1)} & \cdots & W_N^{(N-1)(N-1)}
\end{bmatrix}
\begin{bmatrix}
e^{j2\pi p_1} & 0 & \cdots & 0\\
0 & e^{j2\pi p_2} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & e^{j2\pi p_N}
\end{bmatrix}
=
\begin{bmatrix}
e^{j2\pi p_1} & e^{j2\pi p_2} & \cdots & e^{j2\pi p_N}\\
e^{j2\pi p_1} & W_N e^{j2\pi p_2} & \cdots & W_N^{(N-1)} e^{j2\pi p_N}\\
\vdots & \vdots & \ddots & \vdots\\
e^{j2\pi p_1} & W_N^{(N-1)} e^{j2\pi p_2} & \cdots & W_N^{(N-1)(N-1)} e^{j2\pi p_N}
\end{bmatrix},\tag{7}$$
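Numerically, the operator FP in Eq. (7) never needs to be formed explicitly: it is a diagonal random-phase modulation followed by a DFT, so it can be applied as a pointwise mask and an FFT. A small sketch (illustrative, not the authors' code) verifies that the fast implementation matches the explicit matrix product:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
p = rng.random(N)                         # random phase values p_n in [0, 1)
P = np.diag(np.exp(2j * np.pi * p))       # diagonal phase-mask matrix
F = np.fft.fft(np.eye(N))                 # explicit DFT matrix, W_N = exp(-2j*pi/N)
FP = F @ P                                # column n of F scaled by e^{j 2 pi p_n}

t = rng.standard_normal(N)
# Fast application of FP: mask the signal, then take its FFT
u_fast = np.fft.fft(t * np.exp(2j * np.pi * p))
```

This O(N log N) structure is what makes DRPE-based compressive measurements computationally practical at image sizes.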
$$u_L(x,y)=D^{(L)}\left\{t(x,y)\exp\left[i2\pi p(x,y)\right]\right\}\ast h(x,y)\ast h_s,\tag{8}$$
$$u_L(x,y)=D^{(L)}\left\{t(x,y)\exp\left[i2\pi p(x,y)\right]\right\}\ast h(x,y)\ast h_{\mathrm{CCD}},\tag{9}$$
$$h_{\mathrm{CCD}}=\mathrm{rect}\!\left(\frac{x}{L_x\Delta}\right)\mathrm{rect}\!\left(\frac{y}{L_y\Delta}\right),\tag{10}$$
$$\mathbf{u}_L=D^{(L)}\mathbf{u}=\frac{1}{4}\,\mathbf{S}\mathbf{D}\mathbf{u}=\frac{1}{4}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 1 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1
\end{bmatrix}\mathbf{u},\tag{11}$$
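The geometrical sensor model of Eq. (11), averaging over the pixel fill factor (the moving-average matrix D, i.e. convolution with h_CCD) followed by selection of every fourth output (the matrix S), can be sketched in one dimension. The helper name `sensor_downsample` is hypothetical, introduced only for illustration:

```python
import numpy as np

def sensor_downsample(u, fill=4, step=4):
    """Sensor model: average over the pixel fill (a `fill`-tap moving average,
    modeling h_CCD), then keep one sample every `step` (the selection matrix S)."""
    kernel = np.ones(fill) / fill              # the 1/4 * [1 1 1 1] averaging row
    averaged = np.convolve(u, kernel, mode='valid')
    return averaged[::step]

u = np.arange(8, dtype=float)
u_L = sensor_downsample(u)                     # -> array([1.5, 5.5])
```

The same operator applied separably along rows and columns gives the 2×4 and 2×2 averaging/subsampling used in the experiments of Figs. 4 and 6.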
$$\min_{\mathbf{t}}\ \left\|\boldsymbol{\Psi}^{T}\mathbf{t}\right\|_1+\gamma\,TV(\mathbf{t})\quad \mathrm{s.t.}\quad \mathbf{u}=\mathbf{D}\mathbf{F}^{-1}\mathbf{H}\mathbf{F}\mathbf{P}\mathbf{t},\tag{12}$$
$$TV(\mathbf{x})=\sum_{i,j}\sqrt{\left(x_{i+1,j}-x_{i,j}\right)^2+\left(x_{i,j+1}-x_{i,j}\right)^2}.\tag{13}$$
$$\mathbf{u}_d=D^{(L)}\left[\mathbf{t}\ast h_{\mathrm{random\_phase}}\right]\ast h_{\mathrm{random\_signs}}\ast h_{\mathrm{CCD}}.\tag{14}$$
$$u_L(x,y)=\left\{t(x,y)\exp\left[i2\pi p(x,y)\right]\right\}\ast h(x,y)\ast h_{\mathrm{Diff}},\tag{15}$$
$$\min_{\mathbf{t}}\ \left\|\boldsymbol{\Psi}^{T}\mathbf{t}\right\|_1+\gamma\,TV(\mathbf{t})\quad \mathrm{s.t.}\quad \mathbf{u}=\mathbf{F}^{-1}\mathbf{H}\mathbf{F}\mathbf{A}\mathbf{P}\mathbf{t}.\tag{16}$$
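The isotropic total-variation regularizer TV(x) of Eq. (13), which appears in both recovery problems (12) and (16), sums the discrete gradient magnitude over the image. A minimal NumPy sketch (illustrative; not the solver used by the authors):

```python
import numpy as np

def total_variation(x):
    """Isotropic TV of Eq. (13): sum over (i, j) of
    sqrt((x[i+1,j] - x[i,j])**2 + (x[i,j+1] - x[i,j])**2),
    evaluated on the interior grid where both differences exist."""
    dx = np.diff(x, axis=0)[:, :-1]   # vertical differences  x[i+1,j] - x[i,j]
    dy = np.diff(x, axis=1)[:-1, :]   # horizontal differences x[i,j+1] - x[i,j]
    return np.sum(np.sqrt(dx**2 + dy**2))

x = np.zeros((4, 4))
x[:, 2:] = 1.0                        # a single sharp vertical edge
tv = total_variation(x)               # three unit jumps on the trimmed grid -> 3.0
```

TV stays small for piecewise-smooth images with sharp edges, which is why it complements the ℓ1 sparsity term in the wavelet domain Ψ.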
