Abstract

In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper presents the proposed system model, physical implementation examples, and simulations that verify TOMBO imaging with generalized sampling. In the system, an object is sheared and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.

© 2010 OSA

1. Introduction

A compound-eye imaging system is a promising computational imaging modality. Compound-eye optics have enabled light-field acquisition[1] and device compactness[2, 3]. Thin observation module by bound optics (TOMBO) is a representative example of a compound-eye imaging system[4].

An advantage of compound-eye imaging systems is that they permit diverse data acquisition schemes. Different lenslets may create different encodings. For example, time detection based on this encoding concept has been proposed[5], and the range detection in [6] can be considered a system based on the same concept. These compact systems reconstruct a three-dimensional object from a two-dimensional measurement whose size is the same as that of a single axial plane of the object.

This paper proposes generalized sampling approaches for multi-dimensional object acquisition using TOMBO. In the proposed system, an object is acquired with coding and multiplexing in a two-dimensional snapshot. In particular, the coding schemes in [5, 6] are extended for multi-dimensional data acquisition of various objects.

There are multiple candidate coding schemes for multi-dimensional object acquisition, such as coded aperture imaging and multi-shot imaging. These schemes differ in their design constraints. This paper considers hardware compactness and single-shot object acquisition capability as the critical design constraints. As indicated by a large body of literature, the TOMBO imaging modality is one technique that can implement a compact system meeting these constraints. This motivates us to investigate its potential as a compressive imaging technique.

In this paper, the mathematical model of the proposed system and examples of the coding schemes for spectral and polarization imaging are presented, along with simulation results of the proposed system. The implementations are inspired by [7, 8, 9]. The previously presented systems have a tradeoff between the spatial and axial resolutions. For example, in [7, 8], the number of spectral or polarization channels is roughly proportional to the number of lenses, and increasing the number of lenses reduces the spatial resolution. The approaches proposed in this paper may compensate for this tradeoff by leveraging compressive sampling[10].

A constrained optimization technique that incorporates sparsity in some basis of the object estimate is used for reconstruction. The reconstruction method is inspired by compressive sampling[10]. In compressive sampling, the system should satisfy certain assumptions, stated in Section 2, for accurate reconstruction. The proposed system is compared to a theoretical baseline sensing system, namely a Gaussian random sensing matrix. Several systems based on sparse reconstruction have been demonstrated and have shown promising results[11, 12, 13].

 

Fig. 1. Cross section view of TOMBO. $\nu$, $O_u$, and $L_u$ are the spatial dimension, the center position, and the position of the lenslet in the $u$-th unit, respectively.


Section 2 provides a brief background on TOMBO and compressive sampling. Section 3 describes a general model for multi-dimensional TOMBO imaging. Section 4 presents examples of coding schemes. Simulation results are given in Section 5.

2. Background

2.1. TOMBO

In a simplified conceptual model, TOMBO consists of lenslets and a detector array as shown in Fig. 1. An imaging structure associated with a lenslet is called a unit[4]. Each unit produces a low-resolution (LR) image on the detector array.

When the number of units is $N_u \times N_u$ in a square arrangement, the focal length and the diameter of each lenslet need to be $N_u$ times smaller than those of the corresponding conventional full-aperture system to obtain the same field of view. This results in an LR image whose size is $N_u$ times smaller than that of an image produced by the full-aperture system. The thickness and depth-of-field of TOMBO are $N_u$ times shorter and $N_u^2$ times longer, respectively. This allows for compact hardware with a large depth-of-field. Objects are often assumed to be located within the depth-of-field, and the lenslets are assumed to be aberration-free[4, 14]. These assumptions are made throughout this paper, unless otherwise stated.
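As a concrete illustration of these scaling relations, the sketch below evaluates them for a hypothetical full-aperture system; the numeric values are assumptions for illustration only, not values from this paper.

```python
# Illustrative scaling of a TOMBO module relative to a full-aperture system
# with the same field of view (section 2.1); numbers are hypothetical.
Nu = 3                             # Nu x Nu units
f_full, D_full = 9.0, 3.0          # assumed focal length and aperture (mm)

f_unit = f_full / Nu               # lenslet focal length: Nu times smaller
D_unit = D_full / Nu               # lenslet diameter: Nu times smaller
lr_scale = 1 / Nu                  # LR image is Nu times smaller
dof_gain = Nu ** 2                 # depth-of-field is Nu^2 times longer

print(f_unit, D_unit, dof_gain)    # 3.0 1.0 9
```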

2.2. Compressive sampling

The proposed system model in this paper forms an underdetermined linear system of equations as described in section 3. Compressive sampling (CS) is a theoretical framework for solving an underdetermined system[10, 15]. The reconstruction method in this paper is inspired by CS.

A linear system model can be written as

$g = \Phi f = \Phi \Psi \beta = \Theta \beta, \qquad (1)$

where $g \in \mathbb{R}^{N_g \times 1}$, $\Phi \in \mathbb{R}^{N_g \times N_f}$, $f \in \mathbb{R}^{N_f \times 1}$, $\Psi \in \mathbb{R}^{N_f \times N_f}$, and $\beta \in \mathbb{R}^{N_f \times 1}$ are a measurement vector, a sensing matrix, an object vector, a basis matrix, and a transform coefficient vector, respectively. $\mathbb{R}^{N_i \times N_j}$ denotes an $N_i \times N_j$ matrix of real numbers. We consider the case where $N_g < N_f$.
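A minimal numpy sketch of this linear model, using a Gaussian random $\Phi$ and the identity basis purely for illustration; the sizes, the seed, and the row normalization in the coherence computation are assumptions, not part of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
Ng, Nf, s = 32, 128, 5                        # assumed sizes, Ng < Nf

Phi = rng.standard_normal((Ng, Nf))           # sensing matrix (illustrative)
Psi = np.eye(Nf)                              # basis matrix (identity here)
beta = np.zeros(Nf)
beta[rng.choice(Nf, s, replace=False)] = 1.0  # s-sparse coefficient vector

Theta = Phi @ Psi
g = Theta @ beta                              # measurement, g = Theta beta

# coherence mu(Phi, Psi): rows of Phi normalized to unit length first
rows = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
mu = np.sqrt(Nf) * np.max(np.abs(rows @ Psi))
```

With unit-norm rows, `mu` always lies in $[1, \sqrt{N_f}]$, matching the range of the coherence defined below.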

Let $s$ denote the number of non-zero coefficients in $\beta$. CS indicates that, for accurate reconstruction, $\Theta$ should satisfy a sufficient condition for any $s$-sparse $\beta$. The condition is called the restricted isometry property (RIP), defined by

$(1 - c_s)\|\beta_\Lambda\|_2^2 \le \|\Theta_\Lambda \beta_\Lambda\|_2^2 \le (1 + c_s)\|\beta_\Lambda\|_2^2, \qquad (2)$

where $c_s \in (0,1)$ is a constant and $\|\cdot\|_2$ denotes the $\ell_2$-norm[16]. $\Lambda$ is a subset of indices supporting the $s$ nonzero coefficients in $\beta$. $\beta_\Lambda$ and $\Theta_\Lambda$ are the elements of $\beta$ and the columns of $\Theta$ that support the $s$ coefficients. If $c_s$ is close to 0, Eq. (2) indicates that $\Theta_\Lambda$ preserves the Euclidean length of $\beta_\Lambda$. $\mu(\Phi,\Psi) \in [1, \sqrt{N_f}]$, defined as

$\mu(\Phi,\Psi) = \sqrt{N_f} \max_{1 \le i \le N_g,\ 1 \le j \le N_f} \left| \langle \Phi(i,:), \Psi(:,j) \rangle \right|, \qquad (3)$

is called the coherence. $\Phi(i,:)$, $\Psi(:,j)$, and $\langle \cdot,\cdot \rangle$ are the $i$-th row of $\Phi$, the $j$-th column of $\Psi$, and an inner product, respectively. When the coherence is small, $\Phi$ and $\Psi$ are said to be incoherent. The number of measurement components required for accurate reconstruction is given as

$N_g \ge c\, \mu(\Phi,\Psi)^2\, s \log N_f, \qquad (4)$

where $c$ is a constant[15]. According to CS theory[15], if $\Theta$ satisfies the RIP (Eq. (2)), $N_g$ measurements are, with high probability, sufficient to accurately estimate $\beta$. An accurate estimate of the $s$ nonzero coefficients in $\beta$ can be obtained by solving

$\hat{\beta} = \arg\min_{\beta} \|\beta\|_1 \quad \text{subject to} \quad g = \Theta\beta, \qquad (5)$

where $\|\cdot\|_1$ denotes the $\ell_1$-norm.
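As a rough illustration, the equality-constrained problem in Eq. (5) is often relaxed to an unconstrained $\ell_1$-regularized least-squares problem and solved iteratively. The sketch below uses plain ISTA, the simpler one-step ancestor of the TwIST algorithm used later in this paper; the regularization weight, iteration count, and test problem are arbitrary assumptions.

```python
import numpy as np

def ista(Theta, g, lam=0.01, n_iter=1000):
    """Minimize 0.5*||g - Theta b||_2^2 + lam*||b||_1 by iterative
    shrinkage/thresholding (an unconstrained relaxation of Eq. (5))."""
    L = np.linalg.norm(Theta, 2) ** 2             # Lipschitz constant of gradient
    b = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        r = b + Theta.T @ (g - Theta @ b) / L     # gradient descent step
        b = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)  # soft threshold
    return b

# small demonstration with a Gaussian Theta and a 3-sparse beta
rng = np.random.default_rng(1)
Theta = rng.standard_normal((32, 128)) / np.sqrt(32)
beta = np.zeros(128)
beta[[5, 40, 100]] = [1.0, -1.0, 0.5]
beta_hat = ista(Theta, Theta @ beta)
```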

3. A mathematical model for the proposed acquisition schemes

Let $F(x, y, z_0, \cdots, z_{N_n-1})$ denote a continuous density function representing a multi-dimensional object. $x$ and $y$ represent spatial dimensions, and $z_0, \cdots, z_{N_n-1}$ represent the other dimensions, which depend on the application. $x = 0$ and $y = 0$ are defined as the center of the detector array. For simplicity, the $y$ dimension is omitted. Extending to higher dimensions may be readily achieved with small modifications of the model.

3.1. Continuous model

In the proposed system, a multi-dimensional object is integrated onto detectors with one of two coding schemes, as illustrated in Fig. 2. In one of the coded integrations, inspired by [6], an object is sheared by an optical element, and the sheared optical signal is integrated onto a detector array. In the shear transformation, each axial plane in an object is shifted along the $x$ axis as shown in Fig. 2(a). In [6], the shift corresponds to a parallax. In the other coded integration, inspired by [5], an object is multiplied with a weight distribution, and the weighted optical signal is integrated onto a detector array as shown in Fig. 2(b). The weight distribution is a continuous function of $z$. In [5], the weight distribution corresponds to an exposure time. The two schemes are referred to as sheared integration (SI) and weighted integration (WI), respectively.

We denote the integrated data associated with the $u$-th unit as $G_u(\nu)$, where $\nu$ denotes the spatial dimension in a unit as shown in Fig. 1. $\nu$ in the $u$-th unit is defined as $\nu = x - O_u$, where $O_u$ is the center of the $u$-th unit. $G_u(\nu)$ is expressed as

$G_u(\nu) = \int F\!\left(\nu - L_u - \sum_n S_{n,u}(z_n),\, z_0, \cdots, z_{N_n-1}\right) \prod_n W_{n,u}(z_n)\, \mathrm{d}z_n, \qquad (6)$
$(u = 0, \cdots, N_u - 1),$

where $S_{n,u}(z_n)$ and $W_{n,u}(z_n)$ denote the shift in SI and the weight distribution in WI for the $z_n$ dimension in the $u$-th unit, respectively. $L_u$ is the center position of the $u$-th lenslet on the $\nu$ axis as shown in Fig. 1. For simplicity, $N_n = 1$ is assumed, and the subscript $n$ is omitted. Eq. (6) can be rewritten as

$G_u(\nu) = \int F(\nu - L_u - S_u(z), z)\, W_u(z)\, \mathrm{d}z. \qquad (7)$
 

Fig. 2. Coding schemes in TOMBO. (a) Sheared integration and (b) weighted integration in a unit.


3.2. Discretization model

A discrete object $\tilde{F} \in \mathbb{R}^{N_x \times N_z}$ and discrete integrated data $\tilde{G}' \in \mathbb{R}^{N_x \times N_u}$ are denoted by $\tilde{F}(l,m) = F(l\Delta_x, m\Delta_z)$ and $\tilde{G}'_u(i) = G_u(i\Delta_x)$, using notation similar to that in [17], where a tilde indicates discrete data. $\tilde{G}'$ is intermediate data before sampling by the detectors. $l$, $m$, and $i$ are integer variables of the $x$, $z$, and $\nu$ axes in the discretization model, respectively. $\Delta_x$ and $\Delta_z$ are the pixel pitches along the $x$ and $z$ axes in a discrete object. $\tilde{G}'_u(i)$ is sampled by detectors. The measurement data $\tilde{G} \in \mathbb{R}^{N_\nu \times N_u}$ is expressed as $\tilde{G}_u(j) = \sum_i \tilde{G}'_u(i)\, \mathrm{rect}((i\Delta_x - j\Delta_\nu - D_u)/\Delta_\nu)$, where $N_\nu$, $j$, $\Delta_\nu$, and $D_u$ are the number of detectors in a unit, an index for the detectors in a unit, the pixel pitch of the detectors, and the center of the center detector on the $\nu$ axis in the $u$-th unit, respectively. Then, the measurement data can be written as

$\tilde{G}_u(j) = \sum_i \mathrm{rect}\!\left(\frac{i\Delta_x - j\Delta_\nu}{\Delta_\nu}\right) \sum_m \tilde{F}(i - \bar{S}_u(m), m)\, \bar{W}_u(m), \qquad (8)$

where $\bar{S}_u(m) = \lfloor (L_u + S_u(m\Delta_z) - D_u)/\Delta_x + 0.5 \rfloor$ and $\bar{W}_u(m) = W_u(m\Delta_z)\Delta_z$. $\lfloor \cdot \rfloor$ is the floor function.

3.3. System matrix

We assume that $\Delta_x = \Delta_\nu / N_u$ and $N_x = N_\nu N_u$, which are both typical assumptions in TOMBO imaging[4, 5, 6, 14]. Thus, the numbers of elements in the measurement data and the object are $N_g = N_x$ and $N_f = N_x N_z$.

As indicated in Eq. (8), the $m$-th axial plane of $\tilde{F}$ is shifted by $\bar{S}_u(m)$ and multiplied with $\bar{W}_u(m)$. $C_{m,u} \in \mathbb{R}^{N_x \times N_x}$, which denotes the coding operation for the $m$-th axial plane of $\tilde{F}$ in the $u$-th unit, is expressed as

$C_{m,u}(p,q) = \begin{cases} \bar{W}_u(m) & (p = q + \bar{S}_u(m)), \\ 0 & (p \ne q + \bar{S}_u(m)), \end{cases} \qquad (9)$

where $C_{m,u}(p,q)$ is the $(p,q)$-th element of the matrix $C_{m,u}$.

$C_u \in \mathbb{R}^{N_f \times N_f}$ represents the coding operation implemented by the TOMBO system on the object $f$ ($\tilde{F}$ in vector form) in the $u$-th unit and is written as

$C_u = \begin{bmatrix} C_{0,u} & O & \cdots & O \\ O & C_{1,u} & \cdots & O \\ \vdots & & \ddots & \vdots \\ O & O & \cdots & C_{N_z-1,u} \end{bmatrix}, \qquad (10)$

where $O$ is an $N_x \times N_x$ zero matrix.

The matrix $Q \in \mathbb{R}^{N_x \times N_f}$, which sums all of the axial layers, is defined by

$Q = \begin{bmatrix} I & I & \cdots & I \end{bmatrix}, \qquad (11)$

where $I \in \mathbb{R}^{N_x \times N_x}$ denotes an identity matrix.

The downsampling matrix $T \in \mathbb{R}^{N_\nu \times N_x}$ can be defined by

$T = \begin{bmatrix} \mathbf{1}^T & \mathbf{0}^T & \cdots & \mathbf{0}^T \\ \mathbf{0}^T & \mathbf{1}^T & \cdots & \mathbf{0}^T \\ \vdots & & \ddots & \vdots \\ \mathbf{0}^T & \mathbf{0}^T & \cdots & \mathbf{1}^T \end{bmatrix}, \qquad (12)$

where $\mathbf{1}$ and $\mathbf{0}$ denote an $N_u \times 1$ vector whose elements are all 1 and an $N_u \times 1$ vector whose elements are all 0, respectively. A superscript $T$ indicates the transpose of a matrix.

Therefore, the measurement data of the $u$-th unit is $TQC_u f$; that is, the object is first coded in each unit by $C_u$, the coded data is then integrated onto the detector array by $Q$, and finally the integrated data is downsampled by the detectors, represented by $T$. The sensing matrix $\Phi \in \mathbb{R}^{N_g \times N_f}$ is expressed by

$\Phi = \begin{bmatrix} TQC_0 \\ TQC_1 \\ \vdots \\ TQC_{N_u-1} \end{bmatrix} = \begin{bmatrix} TC_{0,0} & TC_{1,0} & \cdots & TC_{N_z-1,0} \\ TC_{0,1} & TC_{1,1} & \cdots & TC_{N_z-1,1} \\ \vdots & & \ddots & \vdots \\ TC_{0,N_u-1} & TC_{1,N_u-1} & \cdots & TC_{N_z-1,N_u-1} \end{bmatrix}. \qquad (13)$
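A compact numpy sketch of assembling $\Phi$ from Eqs. (9)–(13); the specific shifts and weights passed in at the bottom are placeholder assumptions, and boundary pixels shifted out of a plane are simply dropped.

```python
import numpy as np

def coding_matrix(Nx, shift, weight):
    # C_{m,u}(p, q) = weight if p == q + shift, else 0   (Eq. (9))
    return weight * np.eye(Nx, k=-shift)

def sensing_matrix(Nx, Nz, Nu, shifts, weights):
    # shifts[u][m], weights[u][m]: per-unit, per-plane coding parameters
    Nv = Nx // Nu                                  # detectors per unit
    T = np.kron(np.eye(Nv), np.ones((1, Nu)))      # downsampling (Eq. (12))
    rows = []
    for u in range(Nu):
        # row u of Eq. (13): [T C_{0,u}, T C_{1,u}, ..., T C_{Nz-1,u}]
        rows.append(np.hstack([T @ coding_matrix(Nx, shifts[u][m], weights[u][m])
                               for m in range(Nz)]))
    return np.vstack(rows)                         # Phi in R^{Nx x (Nx*Nz)}

Phi = sensing_matrix(Nx=8, Nz=2, Nu=2,
                     shifts=[[0, 1], [1, 0]], weights=[[1.0, 1.0], [1.0, 1.0]])
```

With these sizes, $N_g = N_x = 8$ and $N_f = N_x N_z = 16$, so `Phi` is the 8 × 16 underdetermined map of Eq. (1).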

4. Implementation of proposed acquisition schemes

The proposed coding schemes can be implemented for a wide array of practical applications. Each application would rely on physical optical elements to implement the coding scheme expressed by Eq. (6) or (7). In this section, we present examples of the coding schemes for spectral imaging and polarization imaging. Using similar schemes, physical coding strategies for range, time, spectrum, polarization, large dynamic range, and wide field-of-view imaging may be available.

Physical codings for spectral imaging using SI and WI are illustrated in Fig. 3. SI for spectral imaging can be implemented by using dispersive elements (e.g., prisms). The elements in each unit have different dispersion directions as shown in Fig. 3(a). The dispersion results in different shifts for each spectral slice. In Eq. (7), z represents the wavelength. The shift corresponds to Su(z).

WI for spectral imaging may be implemented with multi-band pass filters placed above or below the lenslet as shown in Fig. 3(b). Each of the filters has different pass-bands. Pass-bands and stop-bands are represented with Wu(z) = 1 and Wu(z) = 0 in Eq. (7), respectively. A stack of bandstop filters or a patch of bandpass filters may be used to substitute for the multi-band pass filter.

 

Fig. 3. Cross section views of TOMBO for spectral imaging with (a) SI and (b) WI.


 

Fig. 4. Top views of TOMBO for polarization imaging with (a) SI and (b) WI. Arrows, dots, circles, and shaded areas indicate directions of polarization, centers of shifted images, lenslets, and polarization plates, respectively.


Figure 4 shows a conceptual diagram for polarization imaging with the proposed codings. SI for polarization imaging may be performed with birefringent linear polarizers[18]. These elements split an incident ray into two polarized rays; hence, an image at each polarization angle is shifted. Each unit has a different shift for each polarization angle as shown in Fig. 4(a). Here, $z$ represents a linear polarization angle. The shift corresponds to $S_u(z)$ in Eq. (7).

WI for polarization imaging may be performed with polarization plates. Polarization plates with different linear polarization angles are placed above or below the lenslets as shown in Fig. 4(b). The weight distribution is expressed as $W_u(z) = \cos^2(P_u - z)$[18], where $P_u$ is the polarization angle in the $u$-th unit. A patch of polarization plates, where each plate has a different polarization angle, allows flexibility in the design of a weight distribution.
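This cosine-squared weight is Malus's law; a one-line sketch (the sign convention for the angle difference is an assumption):

```python
import numpy as np

def wi_polarization_weight(Pu, z):
    # W_u(z) = cos^2(P_u - z): transmission of a linear polarizer at angle
    # P_u for light polarized at angle z (Malus's law), angles in radians
    return np.cos(Pu - z) ** 2
```

Aligned angles give full transmission; orthogonal angles give zero weight.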

5. Simulation of the proposed concept

The concept of multi-dimensional TOMBO imaging was verified through application-independent simulations. These general simulations could be readily modified for a specific application such as those mentioned in the previous section.

A method called the two-step iterative shrinkage/thresholding algorithm (TwIST)[19] was used for reconstruction. TwIST is an iterative convex optimization algorithm that uses the two previous estimates to improve convergence properties for the problem described by Eq. (5).

For simplicity, the shift in SI was assumed to be $S_u(z) = (A_u z + B_u)\Delta_x$. $A_u$ and $B_u$ are the gradient and the bias, respectively, of the shear transformation in the $u$-th unit, defined as $A_u = (-2u/(N_u - 1) + 1)A_0$ and $B_u = -A_u N_z \Delta_z / 2$. For example, $A_0 = 1.0$ with $N_u = 3$ indicates that $A_0 = 1.0$, $A_1 = 0.0$, and $A_2 = -1.0$. The shift at the center axial plane, where $z = N_z\Delta_z/2$, is $S_u(z) = 0.0$. $A_u$ and $B_u$ of the $y$ axis are the same as those of the $x$ axis. The weight distribution in WI was assumed to be a binary pattern. In the $m$-th axial plane, $h$ units were set as $W_u(m\Delta_z) = 1$ in Eq. (7). The $h$ units were randomly chosen, while the other $N_u^2 - h$ units were set as $W_u(m\Delta_z) = 0$. In this case, the maximum number of separable axial planes is $\binom{N_u^2}{h}$. The lenslet position $L_u$ in Eq. (8) was randomly set in each unit. The range was $[-\Delta_\nu/2, \Delta_\nu/2]$, where $\Delta_\nu$ is the pixel pitch of the detectors. The position of the center detector in a unit is $D_u = 0$.
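The shear gradients, biases, and random binary weight pattern described above can be generated as follows; this is a direct transcription of the stated formulas, with the concrete sizes chosen for illustration.

```python
import numpy as np

def si_parameters(Nu, A0, Nz, dz):
    # A_u = (-2u/(Nu-1) + 1) A_0 and B_u = -A_u Nz dz / 2  (section 5)
    u = np.arange(Nu)
    A = (-2.0 * u / (Nu - 1) + 1.0) * A0
    B = -A * Nz * dz / 2.0
    return A, B

def wi_pattern(n_units, Nz, h, rng):
    # In each axial plane m, h randomly chosen units get weight 1, the rest 0
    W = np.zeros((Nz, n_units))
    for m in range(Nz):
        W[m, rng.choice(n_units, h, replace=False)] = 1.0
    return W

A, B = si_parameters(Nu=3, A0=1.0, Nz=4, dz=1.0)   # A == [1.0, 0.0, -1.0]
W = wi_pattern(n_units=4, Nz=2, h=3, rng=np.random.default_rng(0))
```

Note that $A_u z + B_u$ vanishes at the center axial plane $z = N_z \Delta_z / 2$ for every unit, as stated above.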

Figure 5 shows a simulation of four-dimensional data acquisition using the two TOMBO coding schemes. An object whose size is 128 × 128 × 4 × 2 and measurement data whose size is 128 × 128 are shown in Figs. 5(a) and 5(b). The compression ratio is 8, calculated as $N_f/N_g$, where $N_f$ and $N_g$ are the numbers of elements in the object and the measurement data, respectively. In Fig. 5, the object and the simulation results are reshaped to 128 × 128 × 8 for display. SI with $A_0 = 1.0$ and WI with $h = 3$ were used for the $z_0$ and $z_1$ axes, respectively. The measurement signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise and the number of units were 30 dB and 2 × 2, respectively. Sparsity of the object estimate in its gradients was enforced using total variation (TV)[20]. Two-dimensional TV was applied independently to each axial plane as $\sum_{l_x}\sum_{l_y}\sum_{m_0}\sum_{m_1} \left| \nabla[\tilde{F}(l_x, l_y, m_0, m_1)]_{l_x, l_y} \right|$, where $l_x$ and $l_y$ are indices for the $x$ and $y$ axes in a discrete object. $\nabla[\cdot]_{l_x,l_y}$ is a two-dimensional gradient vector for the $x$ and $y$ directions, and $|\cdot|$ denotes the magnitude of the gradient vector. The object consists of multiple Shepp-Logan phantoms and is sparse in the two-dimensional TV domain. The total number of non-zero gradient values was $s = 3242$. The reconstruction results with TwIST and the Richardson-Lucy method (RL)[21, 22] are compared in Figs. 5(c) and 5(d). Their peak signal-to-noise ratios (PSNR) were 32.1 dB and 19.4 dB, respectively. The PSNR is computed as $20\log_{10}(\mathrm{MAX}/\sqrt{\mathrm{MSE}})$, where MAX and MSE represent the maximum of the signal values and the mean squared error between the two signals, respectively[23].
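The PSNR figure of merit used above is straightforward to compute:

```python
import numpy as np

def psnr(ref, est):
    # PSNR = 20 log10(MAX / sqrt(MSE)), with MAX the peak value of ref [23]
    mse = np.mean((ref - est) ** 2)
    return 20.0 * np.log10(ref.max() / np.sqrt(mse))
```

For a unit-peak reference, a uniform error of 0.1 gives exactly 20 dB.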

CS object reconstruction accuracy may be estimated using the correlation between columns of $\Theta$, the product of the sensing matrix $\Phi$ and the basis matrix $\Psi$ in Eq. (1)[24]. When the correlation between two columns of $\Theta$ is high, it is difficult to resolve the two components of the transform coefficient vector $\beta$ corresponding to those columns. Therefore, the reconstruction accuracy depends not only on $\Phi$ but also on $\Psi$.

When a two-dimensional basis is used for each axial plane as in the previous simulation, the reconstruction accuracy along the axial direction of an object estimate may be roughly predicted from the correlation between the columns of $\Phi$ corresponding to two axial planes. Let $\phi_m \in \mathbb{R}^{N_x \times N_x}$ denote

$\phi_m = \begin{bmatrix} TC_{m,0} \\ TC_{m,1} \\ \vdots \\ TC_{m,N_u-1} \end{bmatrix}. \qquad (14)$

From Eqs. (1), (13), and (14), and the assumption of a two-dimensional basis for each axial plane, the sensing matrix, the basis matrix, and $\Theta$ can be rewritten as

$\Phi = \begin{bmatrix} \phi_0 & \phi_1 & \cdots & \phi_{N_z-1} \end{bmatrix}, \qquad (15)$
$\Psi = \begin{bmatrix} \psi & O & \cdots & O \\ O & \psi & \cdots & O \\ \vdots & & \ddots & \vdots \\ O & O & \cdots & \psi \end{bmatrix}, \qquad (16)$
$\Theta = \Phi\Psi = \begin{bmatrix} \phi_0\psi & \phi_1\psi & \cdots & \phi_{N_z-1}\psi \end{bmatrix}, \qquad (17)$

respectively, where $\psi \in \mathbb{R}^{N_x \times N_x}$ and $O \in \mathbb{R}^{N_x \times N_x}$ are a two-dimensional basis matrix for each axial plane and an $N_x \times N_x$ zero matrix. If the correlation between a column of $\phi_m$ and one on another axial plane is high, then the corresponding correlation between a column of $\phi_m\psi$ and one on another axial plane may also be high. In this case, it is difficult to resolve the axial planes. When $|A_0|$ is small or $h$ is large, the correlation between a column of $\phi_m$ and one on another axial plane is high. For example, Fig. 5(e) shows a reconstruction result where SI with $A_0 = 0.2$ was used for the $z_0$ axis. The reconstruction accuracy along the axial direction with $A_0 = 0.2$ was lower than that with $A_0 = 1.0$.
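This inter-plane correlation argument can be checked numerically. The sketch below computes the largest normalized correlation between the columns of two blocks $\phi_m$ and $\phi_k$; nonzero columns are assumed, and the function name is ours.

```python
import numpy as np

def max_interplane_correlation(phi_m, phi_k):
    # Largest |cosine| between a column of phi_m and a column of phi_k;
    # values near 1 suggest the two axial planes are hard to resolve.
    A = phi_m / np.linalg.norm(phi_m, axis=0, keepdims=True)
    B = phi_k / np.linalg.norm(phi_k, axis=0, keepdims=True)
    return float(np.max(np.abs(A.T @ B)))
```

Identical blocks yield a correlation of 1 (planes indistinguishable); mutually orthogonal columns yield 0.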

Figure 6 shows another simulation of five-dimensional data acquisition with the discrete wavelet transform (DWT). The sizes of the object in Fig. 6(a) and the measurement data in Fig. 6(b) were 128 × 128 × 2 × 2 × 2 and 128 × 128, respectively. The compression ratio is 8. Two-dimensional DWT was applied to each axial plane. The object consists of multiple natural images whose small two-dimensional DWT coefficients were truncated. In the two-dimensional DWT domain, the total number of non-zero DWT coefficients across all the planes was $s = 2000$. SI with $A_0 = 3.0$ was used for the $z_0$ axis, and WI with $h = 12$ was used for each of the $z_1$ and $z_2$ axes. The measurement SNR and the number of units were 30 dB and 4 × 4, respectively. The reconstruction results with TwIST and RL are compared in Figs. 6(c) and 6(d). Their PSNRs were 24.5 dB and 15.4 dB, respectively. Figure 6(e) shows a reconstruction result where WI with $h = 15$ was used for the $z_1$ and $z_2$ axes. The reconstruction accuracy along the axial direction with $h = 15$ was lower than that with $h = 12$.

Figure 7 illustrates the sensitivity of the reconstructions to noise, represented by curves relating the measurement SNR to the reconstruction PSNR. The performance is also compared to that of an ideal Gaussian random compressive sensing matrix, which is known to require an (optimally) small number of measurements to satisfy the RIP compared to what the proposed systems would require. Since the proposed systems generally have a worse RIP, meaning that more measurements are required to obtain high reconstruction accuracy, they present worse reconstruction accuracy than the Gaussian random matrix. However, such random sensing matrices would in general be very difficult to implement physically with current technology. In addition, it is not clear how such random sensing systems could provide the physical compactness and snapshot acquisition functionality that are benefits of the proposed approach.

6. Conclusions

We proposed a generalized sampling approach for multi-dimensional object acquisition using TOMBO. The sampling uses multi-dimensional sheared and weighted integration in each unit. The mathematical model and several examples of the proposed measurement approach were presented. The simulations demonstrated reconstruction of an object whose number of elements is eight times that of the measurement data. A method inspired by compressive sampling was used for reconstruction. These schemes enable acquisition of a multi-dimensional object with a single two-dimensional measurement by a compact imaging system and extend the capabilities of compound-eye imaging systems to various applications.

 

Fig. 5. Simulation results with total variation. (a) A four-dimensional object ($\in \mathbb{R}^{128 \times 128 \times 4 \times 2}$), where indices of axial planes are shown under each axial plane, (b) measurement data, (c) a reconstruction with TwIST, (d) a reconstruction with RL, and (e) a reconstruction with TwIST using a small $|A_0|$.


 

Fig. 6. Simulation results with discrete wavelet transform. (a) A five-dimensional object ($\in \mathbb{R}^{128 \times 128 \times 2 \times 2 \times 2}$), (b) measurement data, (c) a reconstruction with TwIST, (d) a reconstruction with RL, and (e) a reconstruction with TwIST using a large $h$.


 

Fig. 7. Plots of reconstruction PSNR from noisy measurements in the proposed system and a baseline sensing system which is a Gaussian random sensing matrix. (a) Plots with the object, the parameters, and the basis used in Fig. 5(c) and (b) plots with the object, the parameters, and the basis used in Fig. 6(c).


A useful avenue for future work is to analyze theoretical properties of the proposed systems. It would be interesting to see how many more measurements would be required in general for the proposed systems to produce a certain accuracy, which is related to the validity of the sparsity assumption in the proposed systems. Also, it would be very useful to find a more efficient sparsity transformation that provides a better RIP and a better sparse representation of the objects of interest. Furthermore, we plan to investigate other coding schemes that may provide a better RIP overall to better exploit the sparsity assumption.

References and links

1. R. Ng, “Fourier slice photography,” in SIGGRAPH ’05: ACM SIGGRAPH 2005 Papers (ACM, New York, NY, USA, 2005), pp. 735–744.

2. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).

3. R. Athale, D. M. Healy, D. J. Brady, and M. A. Neifeld, “Reinventing the camera,” Opt. Photon. News 19, 32–37 (2008).

4. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).

5. M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).

6. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Optical Review 14, 347–350 (2007).

7. W. Zhou and J. Leger, “Grin-optics-based hyperspectral imaging micro-sensor,” Proc. SPIE 6765, 676502 (2007).

8. R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, “Periodic: Integrated computational array imaging technology,” in Computational Optical Sensing and Imaging (2007), p. CMA1.

9. R. Horstmeyer, R. Athale, and G. Euliss, “Modified light field architecture for reconfigurable multimode imaging,” Proc. SPIE 7468, 746804 (2009).

10. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008).

11. M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, “An architecture for compressive imaging,” in ICIP06 (2006), pp. 1273–1276.

12. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008).

13. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).

14. K. Nitta, R. Shogenji, S. Miyatake, and J. Tanida, “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method,” Appl. Opt. 45, 2893–2900 (2006).

15. Y. Tsaig and D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

16. E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory 51, 4203–4215 (2005).

17. K. Choi and T. J. Schulz, “Signal-processing approaches for image-resolution restoration for TOMBO imagery,” Appl. Opt. 47, B104–B116 (2008).

18. E. Hecht, Optics, 4th ed. (Addison Wesley, 2001).

19. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).

20. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).

21. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).

22. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).

23. Q. Huynh-Thu and M. Ghanbari, “Scope of validity of PSNR in image/video quality assessment,” Electron. Lett. 44, 800–801 (2008).

24. R. Gribonval and M. Nielsen, “Sparse representations in unions of bases,” IEEE Trans. Inf. Theory 49, 3320–3325 (2003).

References

  • View by:
  • |
  • |
  • |

  1. R. Ng, “Fourier slice photography,” in “SIGGRAPH ’05: ACM SIGGRAPH 2005 Papers,” (ACM, New York, NY, USA, 2005), pp. 735–744.
  2. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Thin compound-eye camera,” Appl. Opt. 44, 2949–2956 (2005).
    [Crossref] [PubMed]
  3. R. Athale, D. M. Healy, D. J. Brady, and M. A. Neifeld, “Reinventing the camera,” Opt. Photon. News 19, 32–37 (2008).
    [Crossref]
  4. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
    [Crossref]
  5. M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. 49, B9–B17 (2010).
    [Crossref] [PubMed]
  6. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Optical Review 14, 347–350 (2007).
    [Crossref]
  7. W. Zhou and J. Leger, “Grin-optics-based hyperspectral imaging micro-sensor,” Proc. SPIE 6765, 676502 (2007).
    [Crossref]
  8. R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, “Periodic: Integrated computational array imaging technology,” in “Computational Optical Sensing and Imaging,” (2007), p. CMA1.
  9. R. Horstmeyer, R. Athale, and G. Euliss, “Modified light field architecture for reconfigurable multimode imaging,” Proc. SPIE 7468, 746804 (2009).
    [Crossref]
  10. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” Signal Processing Magazine, IEEE 25, 21–30 (2008).
    [Crossref]
  11. M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, “An architecture for compressive imaging,” in “ICIP06,” (2006), pp. 1273–1276.
  12. A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt. 47, B44–B51 (2008).
    [Crossref] [PubMed]
  13. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).
    [Crossref] [PubMed]
  14. K. Nitta, R. Shogenji, S. Miyatake, and J. Tanida, “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method,” Appl. Opt. 45, 2893–2900 (2006).
    [Crossref] [PubMed]
  15. Y. Tsaig and D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289–1306 (2006).
    [Crossref]
  16. E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Trans. Info. Theory 51, 4203–4215 (2005).
    [Crossref]
  17. K. Choi and T. J. Schulz, “Signal-processing approaches for image-resolution restoration for TOMBO imagery,” Appl. Opt. 47, B104–B116 (2008).
    [Crossref] [PubMed]
  18. E. Hecht, Optics (Addison Wesley, 2001), 4th ed.
  19. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Proc. 16, 2992–3004 (2007).
    [Crossref]
  20. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).
    [Crossref]
  21. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).
    [Crossref]
  22. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).
    [Crossref]
  23. Q. Huynh-Thu and M. Ghanbari, “Scope of validity of PSNR in image/video quality assessment,” Electron. Lett. 44, 800–801 (2008).
    [Crossref]
  24. R. Gribonval and M. Nielsen, “Sparse representations in unions of bases,” IEEE Trans. Info. Theory 49, 3320–3325 (2003).
    [Crossref]

Figures (7)

Fig. 1. Cross-sectional view of TOMBO. v, O_u, and L_u are the spatial dimension, the center position, and the position of a lenslet in the u-th unit, respectively.

Fig. 2. Coding schemes in TOMBO. (a) Sheared integration and (b) weighted integration in a unit.

Fig. 3. Cross-sectional views of TOMBO for spectral imaging with (a) SI and (b) WI.

Fig. 4. Top views of TOMBO for polarization imaging with (a) SI and (b) WI. Arrows, dots, circles, and shaded areas indicate directions of polarization, centers of shifted images, lenslets, and polarization plates, respectively.

Fig. 5. Simulation results with total variation. (a) A four-dimensional object (∈ ℝ^{128×128×4×2}), where the index of each axial plane is shown beneath it, (b) the measurement data, (c) a reconstruction with TwIST, (d) a reconstruction with RL, and (e) a reconstruction with TwIST using a small |A_0|.

Fig. 6. Simulation results with discrete wavelet transform. (a) A five-dimensional object (∈ ℝ^{128×128×2×2×2}), (b) the measurement data, (c) a reconstruction with TwIST, (d) a reconstruction with RL, and (e) a reconstruction with TwIST using a large h.

Fig. 7. Plots of reconstruction PSNR from noisy measurements in the proposed system and in a baseline system using a Gaussian random sensing matrix. (a) Plots with the object, parameters, and basis used in Fig. 5(c), and (b) plots with those used in Fig. 6(c).

Equations (18)

$$g = \Phi f = \Phi \Psi \beta = \Theta \beta, \tag{1}$$
$$(1 - c_s)\,\|\beta_\Lambda\|_2^2 \le \|\Theta_\Lambda \beta_\Lambda\|_2^2 \le (1 + c_s)\,\|\beta_\Lambda\|_2^2, \tag{2}$$
$$\mu(\Phi, \Psi) = \sqrt{N_f}\, \max_{1 \le i \le N_g,\; 1 \le j \le N_f} \left| \langle \Phi(i,:), \Psi(:,j) \rangle \right|, \tag{3}$$
$$N_g \ge c\, \mu(\Phi, \Psi)^2\, s \log N_f, \tag{4}$$
$$\hat{\beta} = \operatorname*{argmin}_{\beta} \|\beta\|_1 \quad \text{subject to} \quad g = \Theta \beta, \tag{5}$$
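The ℓ1 recovery posed above is solved in the paper with TwIST; as a minimal sketch of the same idea, the simpler ISTA iteration (all sizes, the sensing matrix, and parameter values below are illustrative assumptions, not the paper's implementation) recovers a sparse coefficient vector from an underdetermined linear measurement:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Theta, g, lam, n_iter):
    """Minimize 0.5*||g - Theta @ beta||^2 + lam*||beta||_1 by
    iterative shrinkage/thresholding (a simpler relative of TwIST)."""
    L = np.linalg.norm(Theta, 2) ** 2  # Lipschitz constant of the gradient
    beta = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        grad = Theta.T @ (Theta @ beta - g)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

rng = np.random.default_rng(0)
Theta = rng.standard_normal((40, 100)) / np.sqrt(40)  # toy random sensing matrix
beta_true = np.zeros(100)
beta_true[[5, 37, 80]] = [1.5, -2.0, 1.0]             # s = 3 sparse coefficients
g = Theta @ beta_true                                 # noiseless measurement
beta_hat = ista(Theta, g, lam=0.01, n_iter=2000)
err = np.linalg.norm(beta_hat - beta_true)
print("recovery error:", err)
```

With far fewer measurements (40) than unknowns (100), the sparsity prior makes the recovery well posed, which is the mechanism the compressive TOMBO reconstruction relies on.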
$$G_u(v) = \int \!\cdots\! \int F\!\left(v - L_u - \sum_n S_{n,u}(z_n),\, z_0, \ldots, z_{N_n - 1}\right) \prod_n W_{n,u}(z_n)\, \mathrm{d}z_n, \tag{6}$$
$$(u = 0, \ldots, N_u - 1), \tag{7}$$
$$G_u(v) = \int F\!\left(v - L_u - S_u(z),\, z\right) W_u(z)\, \mathrm{d}z. \tag{8}$$
$$\tilde{G}_u(j) = \sum_i \operatorname{rect}\!\left(\frac{i - x_j^v}{\Delta_v}\right) \sum_m \tilde{F}\!\left(i - \bar{S}_u(m),\, m\right) \bar{W}_u(m), \tag{9}$$
$$C_{m,u}(p, q) = \begin{cases} \bar{W}_u(m) & (p = q + \bar{S}_u(m)), \\ 0 & (p \ne q + \bar{S}_u(m)), \end{cases} \tag{10}$$
$$C_u = \begin{bmatrix} C_{0,u} & O & \cdots & O \\ O & C_{1,u} & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & C_{N_z - 1, u} \end{bmatrix}, \tag{11}$$
$$Q = \begin{bmatrix} I & I & \cdots & I \end{bmatrix}, \tag{12}$$
$$T = \begin{bmatrix} \mathbf{1}^T & \mathbf{0}^T & \cdots & \mathbf{0}^T \\ \mathbf{0}^T & \mathbf{1}^T & \cdots & \mathbf{0}^T \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}^T & \mathbf{0}^T & \cdots & \mathbf{1}^T \end{bmatrix}, \tag{13}$$
$$\Phi = \begin{bmatrix} T Q C_0 \\ T Q C_1 \\ \vdots \\ T Q C_{N_u - 1} \end{bmatrix} = \begin{bmatrix} T C_{0,0} & T C_{1,0} & \cdots & T C_{N_z - 1, 0} \\ T C_{0,1} & T C_{1,1} & \cdots & T C_{N_z - 1, 1} \\ \vdots & \vdots & & \vdots \\ T C_{0, N_u - 1} & T C_{1, N_u - 1} & \cdots & T C_{N_z - 1, N_u - 1} \end{bmatrix}. \tag{14}$$
$$\phi_m = \begin{bmatrix} T C_{m,0} \\ T C_{m,1} \\ \vdots \\ T C_{m, N_u - 1} \end{bmatrix}. \tag{15}$$
$$\Phi = \begin{bmatrix} \phi_0 & \phi_1 & \cdots & \phi_{N_z - 1} \end{bmatrix}, \tag{16}$$
$$\Psi = \begin{bmatrix} \psi & O & \cdots & O \\ O & \psi & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & \psi \end{bmatrix}, \tag{17}$$
$$\Theta = \Phi \Psi = \begin{bmatrix} \phi_0 \psi & \phi_1 \psi & \cdots & \phi_{N_z - 1} \psi \end{bmatrix}, \tag{18}$$
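The shift-and-weight matrices C_{m,u} and the multiplexing matrix Φ can be assembled numerically. The sketch below uses 1-D toy sizes; the shear values, weights, and the 2-to-1 detector-summation operator T are illustrative assumptions rather than the paper's physical parameters:

```python
import numpy as np

def shift_matrix(n, shift, weight):
    # C_{m,u}(p, q) = weight if p = q + shift, else 0 (shear + weighting).
    C = np.zeros((n, n))
    for q in range(n):
        p = q + shift
        if 0 <= p < n:
            C[p, q] = weight
    return C

n = 8                  # pixels per unit (1-D for brevity)
Nz, Nu = 3, 4          # number of axial planes and of lenslet units
rng = np.random.default_rng(1)
S = rng.integers(0, 3, size=(Nu, Nz))     # assumed shears S_u(m)
W = rng.uniform(0.5, 1.0, size=(Nu, Nz))  # assumed weights W_u(m)

# T sums fine samples onto a coarser detector grid (2-to-1 here);
# the row block of Phi for unit u is [T C_{0,u} ... T C_{Nz-1,u}].
T = np.kron(np.eye(n // 2), np.ones((1, 2)))
Phi = np.vstack([
    np.hstack([T @ shift_matrix(n, S[u, m], W[u, m]) for m in range(Nz)])
    for u in range(Nu)
])

f = rng.standard_normal(Nz * n)  # stacked axial planes of the object
g = Phi @ f                      # single multiplexed snapshot
print(Phi.shape)                 # (16, 24): far fewer rows than columns
```

The row deficit of Φ (here 16 measurements for 24 unknowns) is exactly what the sparsity-constrained reconstruction compensates for.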