Optica Publishing Group

Interferometry based multispectral photon-limited 2D and 3D integral image encryption employing the Hartley transform

Open Access

Abstract

We present a method of securing multispectral 3D photon-counted integral imaging (PCII) using classical Hartley Transform (HT) based encryption implemented with optical interferometry. This method simultaneously minimizes system complexity, by eliminating the need for holographic recording, and addresses the phase sensitivity problem encountered when using digital cameras. These advantages, together with single-channel multispectral 3D data compactness and the inherent properties of the classical photon counting detection model, i.e. sparse sensing and the capability for nonlinear transformation, permit better authentication of the retrieved 3D scene at various depth cues. Furthermore, the proposed technique works for both spatially and temporally incoherent illumination. To validate the proposed technique, simulations were carried out for both the 2D and 3D cases. Experimental data are processed and the results support the feasibility of the encryption method.

© 2015 Optical Society of America

1. Introduction

The Gabor holography technique [1] has been widely applied to address numerous problems in optical engineering, e.g. full three-dimensional (3D) imaging and display [2–5], biomedical applications [6], particle image analysis [7, 8], and information security [9–11]. However, despite receiving such attention, holographic television, for example, remains a work in progress because of the large quantities of data to be processed in real time and the complexity of the optoelectronic systems required. Thus, a more practical alternative approach to 3D imaging and visualization is highly desirable. Integral imaging (II), based on Integral Photography (IP) [12], is one such alternative, providing a promising approach for 3D object sensing, recording and autostereoscopic visualization [13–17]. Using this technique, a 3D scene can effectively be reconstructed by recording two-dimensional (2D) Elemental Images (EIs) from different perspectives of the same 3D scene. II is therefore analogous to an incoherent version of conventional holography [16]. Further, II systems can reproduce detailed versions of 3D scenes that are comparatively better than those produced using conventional stereoscopic imaging techniques. Importantly, II is much easier to perform than classical hologram based 3D imaging.

Photon-Counting Imaging (PCI) systems have recently received a great deal of attention because of their many useful properties (i.e. sparse sensing, nonlinearity, reduced power consumption and simultaneous bandwidth reduction), see for example [18–21]. Most studies involving PCI have been carried out for single light frequencies using achromatic data. However, such studies do not illustrate what happens in the case of real-time chromatic (full color) information. Therefore, a model describing the capture and visualization of multispectral 3D scenes in a photon starved environment, using Bayer patterned CCD sensors, was recently proposed [22].

The rapid development of communication systems indicates the need for both higher levels of data security and intellectual property protection. Data protection techniques, such as steganography, watermarking and encryption, are in increasing demand. The simplicity and elegance of the classical Fourier based 4f Double Random Phase Encryption (DRPE) system has led to proposals for numerous optically inspired digital encryption techniques over the past two decades [23–29]. The DRPE transforms an input signal into a noise-like complex-valued distribution. Optical implementation of this encryption technique provides additional security, as coherent illumination and precise light field alignment are necessary. Further, optically it is possible to perform unambiguous information processing [26].

On the other hand, digital implementations of the DRPE have been studied in detail and shown to be vulnerable to various types of attack [30–32]. Numerous attempts to alleviate the security problems associated with the digital DRPE technique have been made over the past decade. Recently, a new encryption technique has been proposed that increases security by exploiting the sparse sensing and nonlinear transformation properties of the PCI system combined with the conventional DRPE system (DRPE-PCI) [33, 34]. This technique introduces significant additional layers of security.

Furthermore, extensions of the classical 2D based encryption techniques, which can be used to significantly increase 3D image data (scene) security, have also been demonstrated [35–37]. Recently, Cho et al. [38] extended the combined DRPE-PCI technique to secure a 3D scene. In their method, the Fourier encrypted complex Elemental Images (EIs) were further secured using the photon counting detection model, and it was shown that the resulting decrypted data could be used to realize 3D scene reconstructions. It has also been demonstrated that both higher security and 3D data authentication can be achieved by combining the complex-valued DRPE distribution and the photon counting Poisson nonlinear transformation [38]. However, in many practical applications, the light intensity measured at the camera is highly sensitive to the input signal phase or the key phase. In such scenarios, the classical optical holography setup is used, in which the phase is encoded in the intensity and must be reconstructed [1]. Clearly, any encryption system that simultaneously reduces hardware requirements while providing greater security would be highly desirable.

Recently, a new encryption system employing the classical Hartley transform (HT) has been proposed [29]. Using this system, a real 2D input signal is transformed into a purely real valued 2D form (i.e. intensity image data). The resulting system neither compromises security nor reduces the system’s reliability. Furthermore, encryption can be realized using either coherent or incoherent illumination [29, 39–41]. Additionally, it has been reported [42–44] that the resulting incoherent optical implementation outperforms the corresponding coherent based systems, since it does not suffer from scattering effects (i.e. speckle noise). Using such a system, the computational complexity is therefore reduced and no incoherent-to-coherent conversions are necessary. In practical terms the system can operate with a variety of inputs (i.e. luminous objects, CRT displays) [42].

In this study, we propose to combine the HT based encryption system with the Photon Counting Integral Imaging (PCII) system to achieve secure 3D scene transmission. Such a system works with incoherent light, provides secure 3D data transmission and requires only a simplified processing architecture. Furthermore, the system should allow decryption and verification of the 3D scene at different depth cues (i.e. various scene depths).

The paper is organized as follows. In Section 2, a brief overview of the optical HT based encryption system is presented. The passive multispectral PCII technique and the use of the Maximum Likelihood Estimation (MLE) for the reconstruction of the 3D scene are briefly reviewed in Section 3. The combination of the HT with PCII is discussed in detail in Section 4. Simulation results, involving the use of experimental data, which illustrate the system performance, are presented in Section 5. Finally, a brief conclusion is given in Section 6.

2. Optical Hartley Transforms (OHT)

Over the last two decades various optically inspired encryption algorithms, starting with the Fourier transform (FT) based DRPE system, have been proposed and studied. Additionally, several techniques have been proposed in which the full phase information of a complex data set, i.e. an optical field, must be recorded. One widely used method for capturing complex optical signals involves the use of a holographic setup in which the phase information is encoded as intensity data [1]. Although the holographic recording technique is very useful, capturing and processing holographic data (e.g. encrypted images) is difficult.

Chen et al. [29] have demonstrated a HT based optical image encryption technique, which requires neither holographic image capture nor the use of coherent illumination. While the HT is similar to the FT, its implementation involves the use of optical interferometry, whereby real 2D input image signals are transformed into purely real output encrypted data. In such a case the intensity detectors used to capture the output (i.e. CCD cameras) directly record the transformed scrambled data (i.e. the encrypted data), enabling unambiguous reconstruction in both the optical and digital implementation cases. Therefore, in this sense, the HT outperforms FT based systems. In general, the HT, H(u,v), of the 2D image g(x,y), is defined as follows [41]:

$$H(u,v)=\iint g(x,y)\,\mathrm{cas}\!\left[2\pi(ux+vy)\right]dx\,dy,\tag{1}$$

where the cas function is defined as cas(θ) = cos(θ) + sin(θ) [41]. Based on this definition, the HT can be expressed in terms of the FT as follows:

$$H(u,v)=\frac{\exp(i\pi/4)}{\sqrt{2}}\left\{F(u,v)+\exp(-i\pi/2)\,F(-u,-v)\right\},\tag{2}$$

where u, v are the coordinates in the HT output domain, F(u,v) = FT{g(x,y)} is the forward FT of the input, and F(−u,−v) is its coordinate-reversed counterpart (equal to the complex conjugate F*(u,v) when g is real). As noted, the HT output is real (i.e. a real-valued function is transformed into a real-valued function under the HT), and therefore no complex arithmetic is required during data processing. A number of feasible optical implementations of the HT are presented in [41]. In what follows, HT based encryption systems are examined. In [29], the HT based encryption process is described mathematically as follows:
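The relation in Eq. (2) also gives a convenient way to compute the discrete HT numerically: with the standard DFT convention (negative exponent), the Hartley coefficients are simply Re(F) − Im(F). A minimal NumPy sketch of this, offered purely as our illustration of the transform rather than as part of the optical system:

```python
import numpy as np

def dht2(g):
    """2D discrete Hartley transform: H = Re(F) - Im(F), with F the 2D DFT.
    This matches H[u,v] = sum_{x,y} g[x,y] * cas(2*pi*(u*x/M + v*y/N))."""
    F = np.fft.fft2(g)
    return F.real - F.imag

# The DHT is (up to the factor M*N) its own inverse, so both the forward
# and inverse transforms stay entirely in real arithmetic:
g = np.arange(12.0).reshape(3, 4)
H = dht2(g)                 # real-valued transform of real input
g_back = dht2(H) / g.size   # recovers g exactly
```

Because the cas kernel is real, `H` contains no imaginary part, which is exactly the property the encryption system exploits.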

$$\psi(u,v)=r(u,v)\iint g(x,y)\,\mathrm{cas}\!\left[2\pi(ux+vy)\right]dx\,dy,\tag{3}$$

where ψ(u,v) and r(u,v) denote the encrypted image and a random intensity mask, respectively. Decryption can be performed using the reciprocal of the intensity mask, r⁻¹(u,v), and thus the original intensity image information can be retrieved as follows:

$$g(x,y)=\iint \psi(u,v)\,r^{-1}(u,v)\,\mathrm{cas}\!\left[2\pi(ux+vy)\right]du\,dv.\tag{4}$$

From Eq. (3), it should be noted that encryption converts the input data into scrambled output data (i.e. cosine and sine functions). Attackers cannot easily obtain the encoded information without access to the correct decryption keys [29].
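Digitally, Eq. (3) and the decryption relation above amount to: transform, multiply by a random intensity mask, then divide the mask out and apply the self-inverse transform again. A sketch of this roundtrip, where the uniform mask statistics (bounded away from zero so the reciprocal is well behaved) are our assumption:

```python
import numpy as np

def dht2(g):
    # discrete Hartley transform via the FFT: H = Re(F) - Im(F)
    F = np.fft.fft2(g)
    return F.real - F.imag

def ht_encrypt(g, r):
    # Eq. (3): real-valued ciphertext = intensity mask * Hartley spectrum
    return r * dht2(g)

def ht_decrypt(psi, r):
    # Eq. (4): remove the mask, then invert the (self-inverse) DHT
    return dht2(psi / r) / psi.size

rng = np.random.default_rng(42)
g = rng.random((64, 64))               # stand-in for a real 2D input image
r = rng.uniform(0.5, 1.5, g.shape)     # random intensity mask (assumed statistics)
psi = ht_encrypt(g, r)                 # purely real encrypted data
g_rec = ht_decrypt(psi, r)             # recovered with the correct key
```

Note that `psi` is real throughout, so an intensity-style record suffices; decrypting with any other mask yields noise rather than the input.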

3. Multispectral PCII with MLE reconstruction

Integral Imaging (II) is a passive 3D imaging technique in which both the irradiance and the directional information of the measured light rays are recorded. The principle of operation of 3D II is illustrated in Fig. 1.


Fig. 1 3D II Principle: (a) Pick-up (recording) process, and (b) Reconstruction (display) process. A Bayer CCD camera (GRBG pattern) is used in the experiment illustrated.


II involves two major operations: (a) the pick-up process and (b) the reconstruction process. In the pick-up process, the imaging camera is shifted laterally across a rectangular grid in steps of equal distance in x and y, (Shx, Shy), and at each location a 2D image is captured. These 2D images, each of which contains a different perspective view of the entire 3D scene, are commonly known as Elemental Images (EIs). During reconstruction, the set of captured 2D EIs is used to recreate the original 3D scene. One possible approach to 3D II reconstruction is to simulate the reverse of the pick-up process using a geometrical ray back propagation method, i.e. computational volumetric reconstruction. In this approach, the 3D object located at a particular distance is reconstructed at the corresponding image plane. As a consequence each EI appears magnified, depending on the distance to the desired reconstruction plane. Furthermore, only the objects originally located at that particular distance will appear clearly in focus; any objects positioned in other planes appear blurry and out of focus. The magnification factor is given by M0 = z0/d, where d is the distance between the pick-up grid (i.e. camera) and the imaging plane, and z0 is the object distance from the lenslet array (see Fig. 1).

A method of capturing and processing multispectral 3D photon-counted integral images, using Bayer patterned imaging sensors, has recently been demonstrated [22]. Colorful photon-counted images can be generated by limiting the expected number of incident photons in the entire Bayer patterned scene. In general, the probability of counting photons (i.e. estimating the number of photons) in an image scene is governed by the Poisson distribution [22]. Suppose the total number of photons (i.e. the photon count) in the entire encrypted image is np; then the probability of counting Lw photons at any arbitrary pixel point (x, y) in the image scene is given by the following expression [34]:

$$\mathrm{Poisson}(L_w\,|\,\lambda_w)=\frac{\lambda_w^{L_w}\,e^{-\lambda_w}}{L_w!},\qquad L_w=0,1,2,3,\ldots\tag{5}$$

In Eq. (5), the subscript w is used to indicate either the red, green or blue channel of the captured Bayer EI, and λw is the Poisson parameter at any arbitrary pixel point on each spectral channel, computed as λw(x,y) = ψ̄w(x,y) × np [34].

In this paper, the built-in MATLAB function poissrnd(·) is used to generate Poisson-distributed random numbers, in order to estimate the photon counts in the normalized encrypted elemental image. Here the encrypted data is normalized so that its total irradiance is unity, i.e. Σq=1..N ψ̄(q) = 1, where N is the total number of pixels in the encrypted image. Applying this function, the photon-limited Bayer EI for each color channel can be obtained as follows [22]:

$$P_w(x,y)=\mathrm{poissrnd}\!\left(\lambda_w(x,y)=\bar{\psi}_w(x,y)\times n_p\right),\tag{6}$$

where Pw(·) denotes the number of photons at the particular pixel (x, y), np is the expected number of photons in the Bayer EI, and ψ̄w(x,y) is the normalized encrypted image.
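The photon-limited capture model above can be simulated directly: normalize the (encrypted) channel irradiance so it sums to one, scale by the photon budget np, and draw per-pixel Poisson counts. In the sketch below, NumPy's Poisson generator plays the role of MATLAB's poissrnd; the test image and seed are our assumptions:

```python
import numpy as np

def photon_limit(psi_w, n_p, rng):
    """Photon-limited version of one spectral channel:
    lambda_w(x,y) = normalized irradiance * n_p, counts ~ Poisson(lambda_w)."""
    lam = psi_w / psi_w.sum() * n_p    # per-pixel Poisson parameter
    return rng.poisson(lam)            # integer photon counts per pixel

rng = np.random.default_rng(7)
psi = rng.random((128, 128))           # stand-in for one encrypted Bayer channel
counts = photon_limit(psi, n_p=1000, rng=rng)
```

The total count fluctuates around np (standard deviation √np), which is why a scene with np = 1000 photons spread over 128 × 128 pixels is mostly zeros and visually unrecognizable.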

After appropriate decryption, photon-counted 3D object reconstruction is performed as follows: We assume that the 3D object, which is located at distance z0 away from the pick-up grid, is captured at pixel (x,y) of the first sensor. Computational reconstruction of the 3D object is then performed by applying the parametric Maximum Likelihood Estimator (MLE) to the decrypted photon-counted Bayer elemental image set [21, 22]:

$$\mathrm{MLE}\!\left(g_p^{z_0}\right)=\frac{1}{n_p PQ}\sum_{p=0}^{P-1}\sum_{q=0}^{Q-1}C_{pq}\!\left(k+\Delta k_{pq}\right)_w,\tag{7}$$

where k + Δkpq = (x + p(1 − M0)Shx, y + q(1 − M0)Shy). The subscripts p, q indicate the location of the elemental image among the P × Q positions in the pick-up grid, and Cpq(·) is the photon counted pixel value of the (p,q)th elemental image. A detailed description of multispectral 3D PCII recording and reconstruction is presented in [22].
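A simplified, single-channel digital version of the MLE reconstruction above: each decrypted photon-counted elemental image is back-shifted according to its grid position and the results are averaged. Circular integer-pixel shifts via np.roll, and the per-image shift magnitude, are our illustration assumptions standing in for the exact geometric mapping:

```python
import numpy as np

def reconstruct_plane(C, n_p, shift_x, shift_y):
    """Average the P x Q photon-counted elemental images C[p][q],
    back-shifting the (p, q)-th image by (p*shift_x, q*shift_y) pixels.
    The shift pair plays the role of the depth-dependent (1 - M0)*Sh term."""
    P, Q = len(C), len(C[0])
    acc = np.zeros(C[0][0].shape, dtype=float)
    for p in range(P):
        for q in range(Q):
            acc += np.roll(C[p][q], (p * shift_x, q * shift_y), axis=(0, 1))
    return acc / (n_p * P * Q)
```

Objects at the depth matching the chosen shift add coherently across elemental images and come into focus; objects at other depths are averaged over misaligned positions and blur out, which is why depth acts as a decryption key.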

4. HT based 3D photon counting encryption

In this section, we discuss the combination of the HT with PCII to produce a new encryption technique (see the schematic of the proposed system in Fig. 2). In a previous study [38], PCII was combined with the FT based DRPE technique. The resulting system produces a complex-valued output distribution (i.e. amplitude and phase), for which data recording and transmission are difficult. Using the HT, real data is encrypted into an amplitude-only intensity modulation, facilitating unambiguous reconstruction [40]. Several feasible optical implementations of the HT are proposed in [41].


Fig. 2 The proposed optical setup for performing the HT and capturing and securing the 3D photon limited scene.


It should be noted that the output of the classical optical Hartley transform contains both positive and negative values. Since any digital camera placed in the output plane measures only the intensity pattern, such sign changes are not directly recorded. To recover the full set of real-valued scrambled data, the square root of the intensity is first taken to extract the field amplitude, and then the appropriate sign must be assigned at each point. We note that several methods have been proposed in the literature to solve this sign ambiguity problem, see for example [41].
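The sign ambiguity can be made concrete in a few lines: the camera records I = H², so the square root recovers only |H|, and a separate sign map (supplied by one of the retrieval methods in [41]) is needed to restore the bipolar Hartley data. The numerical values below are toy values for illustration only:

```python
import numpy as np

H = np.array([[3.0, -1.5],
              [-0.5, 2.0]])   # bipolar Hartley output (toy values)
I = H ** 2                    # what an intensity detector actually records
amp = np.sqrt(I)              # recoverable amplitude |H|: the signs are lost
signs = np.sign(H)            # must come from a separate sign-retrieval step [41]
H_rec = signs * amp           # full bipolar data restored
```

Without the sign map, `amp` alone differs from `H` wherever the Hartley coefficient is negative, so decryption from the raw intensity record fails.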

It has been shown that the Hartley transform can be implemented optically, following Eq. (2), using a system similar to that shown in Fig. 2. As shown, the primary object beam propagates through a thin Fourier lens of focal length f and along one arm to the optical mirror (M); it is then reflected back to the beam splitter (BS) and finally the output, F(u,v), is produced at the encrypted image plane. Simultaneously, half the field, split off by the BS, is incident upon the cube corner, which rotates the beam by 180°, producing F(−u,−v) at the output plane. The additional π/2 phase delay, see Eq. (2), can be introduced by appropriately displacing the mirror. Furthermore, we note that the Hartley transform based optical interferometry system (of either the Michelson or the Mach-Zehnder type) is highly sensitive to the distances between, i.e. the positions of, the components within the system. For example, both the cube corner and the plane mirror must be placed at the same distance relative to the Fourier domain [40]. Any errors (displacements from the correct positions) in the setup will lead to incorrect data reconstruction.

It is worth re-emphasizing that the resulting encrypted output is real valued. Figure 2 illustrates the proposed 3D photon counting optical encryption setup. As shown, the primary input light field is that of an Elemental Image Array (EIA), and the synthesized encrypted EIA is then processed using the PCI detection model. The decrypted data is then used for 3D sectional reconstruction as explained in Section 3.

Furthermore, for the experiments discussed, the Bayer GRBG patterned elemental images captured are converted into multispectral images with greater visual quality. This is achieved by applying an adaptive interpolation algorithm, proposed by Malvar [45], to convert the resulting Bayer patterned 3D sectional images into multispectral 3D scenes. This algorithm estimates the missing spectral pixel values by combining the calculated gradient value of a particular pixel with bilinear interpolated spectral information. Detailed descriptions of the conversion of Bayer GRBG data to multispectral 3D scene data are given in [22] and [34].
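For completeness, a plain bilinear demosaic of a GRBG mosaic is sketched below. Note this is a simplified stand-in, not Malvar's gradient-corrected interpolation used in the paper (which adds a Laplacian correction term to these bilinear estimates):

```python
import numpy as np

def conv2_same(x, k):
    # 'same'-size 2D correlation with zero padding (symmetric kernels here,
    # so correlation and convolution coincide); avoids any SciPy dependency
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def demosaic_grbg(bayer):
    """Bilinear demosaic of a GRBG mosaic (rows alternate G R G R / B G B G).
    Missing samples are filled from neighbors of the same color."""
    h, w = bayer.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 1::2] = 1   # R at even rows, odd cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 0::2] = 1   # B at odd rows, even cols
    g_mask = 1 - r_mask - b_mask                        # G elsewhere
    kg = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    krb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    R = conv2_same(bayer * r_mask, krb)
    G = conv2_same(bayer * g_mask, kg)
    B = conv2_same(bayer * b_mask, krb)
    return np.stack([R, G, B], axis=-1)
```

On a uniform scene, every interior pixel of every channel interpolates back to the same value; the zero padding only darkens a narrow border.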

It should be noted that the photon-limited decrypted 3D images cannot be recognized by direct visual inspection due to the low number of photons present. Therefore, in order to evaluate the information content present, a kth-law Nonlinear Cross-correlation (NC) filter is used. To carry out this operation efficiently, the signals or images to be compared are first transformed from the spatial domain into the frequency domain (i.e. Fourier transformed). Following application of the nonlinearity and multiplication of the two spectra, an inverse Fourier transform of the synthesized product yields the NC between them. The NC is implemented as follows (1D notation is used for brevity) [38]:

$$c(x)=\mathcal{F}^{-1}\!\left\{\left|F_{g_p^{d_0}}(\mu)\,F(\mu)\right|^{k}\exp\!\left[i\!\left(\phi_{g_p^{d_0}}(\mu)-\phi_F(\mu)\right)\right]\right\},\tag{8}$$

where F(μ) represents the Fourier transform of the primary image [see Fig. 3(b)], and φgpd0(μ) and φF(μ) are the phases of the Fourier transforms of the photon-counted 3D sectional image [see Figs. 4(c) and 4(d)] and the primary 2D image, respectively. The parameter k defines the applied NC nonlinearity. It has been estimated [34] that the best value of k, i.e. the value which achieves the sharpest intensity peak, is k = 0.3. In our experiments the NC results presented are normalized as follows: NC = |c(x)|/max{|c(x)|}.
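Digitally, the kth-law NC reduces to: form the joint spectrum of the two images, raise its magnitude to the power k while keeping the phase difference, and inverse transform. A 2D NumPy sketch, with the normalization described above and k = 0.3 as in [34]:

```python
import numpy as np

def nonlinear_correlation(g, f, k=0.3):
    """kth-law nonlinear cross-correlation:
    c = IFFT{ |joint|^k * exp(i * phase(joint)) }, joint = F_g * conj(F_f),
    so the phase term is exactly phi_g - phi_f. Returned as NC = |c|/max|c|."""
    joint = np.fft.fft2(g) * np.conj(np.fft.fft2(f))
    c = np.fft.ifft2(np.abs(joint) ** k * np.exp(1j * np.angle(joint)))
    nc = np.abs(c)
    return nc / nc.max()
```

For a true-class match the normalized output exhibits a single sharp peak (at the relative displacement of the two images); for a false-class input the phase terms fail to cancel and only low, noise-like values remain.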


Fig. 3 Test images used in our experiments: (a) Bayer patterned (GRBG) 2D elemental image; (b) Interpolated multispectral image; (c) Photon-counted Bayer image with 1000 photons, and (d) The corresponding multispectral photon-counted image.



Fig. 4 Multispectral visualization of reconstructed 3D sectional images. PSNR is calculated between the computational integral image and the photon-limited integral image: (a) 3D sectional image with the first object in focus; (b) 3D sectional image with the second object in focus; (c) Photon-limited sectional image with the first object in focus (PSNR = 32.95 dB); and (d) Photon-limited sectional image with the second object in focus (PSNR = 30.54 dB).


5. Experimental results

In order to verify the proposed HT based PCII encryption system, experimentally measured data and appropriate simulations are used. Following experimental capture of the 3D object images, the encryption, decryption and MLE based 3D reconstruction are implemented digitally using virtual optical simulations. In this study, two differently colored 3D objects are used, see Fig. 3.

These objects are located at two different distances (540 mm and 620 mm, respectively) from the center of the imaging grid. 14 × 14 Bayer patterned 2D elemental images (8 bits) are captured by moving the imaging system in equal steps of 5 mm in the horizontal and vertical directions. The entire 3D scene was illuminated by a broadband incoherent light source (i.e. diffused incandescent bulbs). The EIs were recorded using a camera with a pixel array of 2048(H) × 2048(V) (i.e. in each EI image frame).

Figure 3(a) shows one of the captured Bayer 2D EIs and Fig. 3(b) shows the resulting Malvar [45] interpolated multispectral EI of the 3D scene. Similarly, Fig. 3(c) shows the photon-counted version of the primary Bayer image, while Fig. 3(d) is the resulting interpolated multispectral photon-counted image. It is evident from Figs. 3(b) and 3(d) that estimating the multispectral information is important, as it reveals both the luminance and chrominance information of the captured scene even under photon starved conditions.

Figures 4(a) and 4(b) show the 3D reconstruction of the multispectral computational slice images, IMCII, at two different depth positions. Similarly, Figs. 4(c) and 4(d) show the 3D photon-counted sectional images, IMPCII, at these two focused positions.

It is evident from Fig. 4 that at each distance only one of the objects is clearly in focus, while the other object appears smeared (i.e. defocused). Thus, the correct depth must be utilized to clearly visualize the 3D object or to quantitatively extract information. Depth can therefore act as a key value for the decryption process.

In addition, recognizing the objects in the 2D image format when only 1000 photons are available is quite challenging [see Fig. 3(d)]. However, application of 3D sectional imaging using the MLE calculation at low photon counts (i.e. with 1000 photons present) demonstrates that visually recognizable data can be extracted [see Figs. 4(c) and 4(d)].

Figures 5(a) and 5(b) show the general 2D HT encrypted image and the HT based photon-limited image, respectively. The corresponding decrypted images are shown in Figs. 5(c) and 5(d). Note that the photon-limited decrypted version, Fig. 5(d), cannot be easily visualized (recognized as being present) due to the low number of photons.


Fig. 5 Encrypted and decrypted data: (a) HT based encrypted 2D elemental image; (b) Photon-limited HT encrypted EI; (c) Decrypted 2D elemental image; and (d) Decrypted 3D sectional image (np = 1000).


As noted, the photon-limited decrypted sectional images [see Fig. 5(d)] are not easily recognizable due to the low number of photons. Thus, to avoid any ambiguity (i.e. to test whether the required information is present), an NC filter is used for data authentication. The decrypted sectional images of the true class (i.e. the objects considered in this experiment) are compared with the decrypted sectional images of a false class (i.e. a different object set). Figure 6(a) shows one such false class 2D elemental image. Figure 6(b) shows the corresponding decrypted photon-limited false class sectional image. Neither of the decrypted sectional images, Fig. 5(d) and Fig. 6(b), is visually recognizable, and it is therefore difficult (impossible) to differentiate between them by inspection. Therefore, we apply the NC and examine the resulting correlation characteristics, shown in Fig. 7.


Fig. 6 False class images: (a) the primary false class 2D image; (b) The decrypted false class 3D sectional image.



Fig. 7 Nonlinear correlation (NC) values versus number of photons (10^x) using a kth-law nonlinear processor (k = 0.3). The highest peak values obtained are marked within each figure using a 3D surface plot in MATLAB. (a) NC values for the 2D case; (b) NC values for the 3D case; (c) NC values for the 2D false class images, and (d) NC values for the 3D false class data set. In Figs. 7(b) and 7(d) the blue (solid) lines refer to the reconstruction of the first object at d = 540 mm, while the orange (dashed) lines refer to the reconstruction of the second object at d = 620 mm. The corresponding peak NC values are indicated in the figures as values inside round brackets, e.g. (0.15) in Fig. 7(c).


The calculated NC values for the true class are shown in Fig. 7(a) for the 2D scenes and in Fig. 7(b) for the 3D scenes. It has previously been demonstrated that scrambled samples (i.e. encrypted data) require more photons to validate the scene [34]. We note that in the 2D multispectral image case a good trade-off between the number of photons and the applied nonlinearity is achieved when np = 10^3.5 and k = 0.3; in this case values of NC > 0.6 are typically found. However, for 3D scene authentication, it is found that np = 10^3 and k = 0.3 provide good correlation (NC > 0.7). These results reflect the fact that the 3D scene can provide more detailed information than the conventional 2D image case under similar low illumination conditions. Figures 7(c) and 7(d) present the computed NC values for the false class 2D and 3D cases, respectively. It can be seen that the NC values are less than 0.2 in both cases and thus recognition and authentication do not take place. We note that the NC used is normalized, i.e. has a maximum value of unity, in order to make comparison easier.

These results demonstrate that our combination of the HT based encryption system with PCII provides robust data protection and introduces an additional layer of information security. Furthermore, the system is capable of both authenticating and discriminating between true and false decrypted 3D scenes recorded using different objects.

6. Conclusion

A new optical method for encrypting 2D as well as 3D multispectral photon-limited scene data using a classical Hartley transform (HT) based encryption system has been proposed. The HT converts an input signal into real valued coefficients of cosine and sine functions and can be optically implemented. The parametric maximum likelihood estimation (MLE) algorithm has been applied to decrypted photon-limited elemental 2D images to achieve 3D scene reconstruction in a Bayer format. A gradient corrected linear interpolation technique has been employed to convert Bayer patterned 3D sectional images into a multispectral 3D scene. Thus, 3D objects can be recognized and their full spectral information retained even under low light level conditions.

In summary, by combining classical HT with multispectral PCII, a color 3D scene can be encrypted without the need to extract the phase of complex light fields using holographic or interferometric phase unwrapping techniques.

Acknowledgments

IM acknowledges the support of Irish Research Council (IRC). CG is supported under the UCD-China Scholarship Council (UCD-CSC) program. BL acknowledges the Space Core Technology Development Program funded by the Ministry of Science, ICT and Future Planning and the Pioneer Research Centre Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (2012-0009460). JTS thanks Science Foundation Ireland (SFI), and Enterprise Ireland (EI) under the National Development Plan (NDP).

Correspondence and requests should be addressed to Byung-Geun Lee (bglee@gist.ac.kr) and John T. Sheridan (john.sheridan@ucd.ie).

References and links

1. T. Okoshi, Three-dimensional Imaging Techniques (Academic Press, 1971).

2. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11(3), 77–79 (1967).

3. S. Fukushima, T. Kurokawa, and M. Ohno, “Real-time hologram construction and reconstruction using a high-resolution spatial light-modulator,” Appl. Phys. Lett. 58(8), 787–789 (1991).

4. Y. Takaki and H. Ohzu, “Hybrid holographic microscopy: visualization of three-dimensional object information by use of viewing angles,” Appl. Opt. 39(29), 5302–5308 (2000).

5. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48(34), H120–H136 (2009).

6. S. A. Benton and V. M. Bove, Jr., Holographic Imaging (Wiley Inter-Science, 2008).

7. F. Dubois, N. Callens, C. Yourassowsky, M. Hoyos, P. Kurowski, and O. Monnom, “Digital holographic microscopy with reduced spatial coherence for three-dimensional particle flow analysis,” Appl. Opt. 45(5), 864–871 (2006).

8. Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Fresnel particle tracing in three dimensions using diffraction phase microscopy,” Opt. Lett. 32(7), 811–813 (2007).

9. J. F. Heanue, M. C. Bashaw, and L. Hesselink, “Encrypted holographic data storage based on orthogonal-phase-code multiplexing,” Appl. Opt. 34(26), 6012–6015 (1995).

10. O. Matoba and B. Javidi, “Encrypted optical memory system using three-dimensional keys in the Fresnel domain,” Opt. Lett. 24(11), 762–764 (1999).

11. E. Tajahuerce and B. Javidi, “Encrypting three-dimensional information with digital holography,” Appl. Opt. 39(35), 6595–6601 (2000).

12. G. Lippmann, “La photographie integrale,” Comptes-Rendus Acad. Sci. 146, 446–451 (1908).

13. H. Ives, “Optical properties of a Lippman lenticulated sheet,” J. Opt. Soc. Am. 21(3), 171–176 (1931).

14. C. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58(1), 71–74 (1968).

15. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. 15(8), 2059–2065 (1998).

16. J. H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009).

17. R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Progress in 3-D multi perspective display by integral imaging,” Proc. IEEE 97(6), 1067–1077 (2009).

18. G. M. Morris, “Scene matching using photon-limited images,” J. Opt. Soc. Am. A 1(5), 482–488 (1984).

19. E. A. Watson and G. M. Morris, “Comparison of infrared up conversion methods for photon-limited imaging,” J. Appl. Phys. 67(10), 6075–6084 (1990).

20. S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express 13(23), 9310–9330 (2005).

21. B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express 16(7), 4426–4436 (2008).

22. I. Moon, I. Muniraj, and B. Javidi, “3D visualization at low light levels using multispectral photon counting integral imaging,” J. Disp. Technol. 9(1), 51–55 (2013).

23. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995).

24. G. Unnikrishnan, J. Joseph, and K. Singh, “Optical encryption by double-random phase encoding in the fractional Fourier domain,” Opt. Lett. 25(12), 887–889 (2000).

25. G. Unnikrishnan and K. Singh, “Optical encryption using quadratic phase systems,” Opt. Commun. 193(1-6), 51–67 (2001).

26. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004).

27. L. Chen and D. Zhao, “Optical image encryption based on fractional wavelet transform,” Opt. Commun. 254(4-6), 361–367 (2005).

28. S. Liu, B. M. Hennelly, C. Guo, and J. T. Sheridan, “Robustness of double random phase encoding spread-space spread-spectrum watermarking technique,” Sig. Process. 109, 345–361 (2015).

29. L. Chen and D. Zhao, “Optical image encryption with Hartley transforms,” Opt. Lett. 31(23), 3438–3440 (2006).

30. A. Carnicer, M. Montes-Usategui, S. Arcos, and I. Juvells, “Vulnerability to chosen-cyphertext attacks of optical encryption schemes based on double random phase keys,” Opt. Lett. 30(13), 1644–1646 (2005).

31. X. Peng, H. Wei, and P. Zhang, “Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel domain,” Opt. Lett. 31(22), 3261–3263 (2006).

32. U. Gopinathan, D. S. Monaghan, T. J. Naughton, and J. T. Sheridan, “A known-plaintext heuristic attack on the Fourier plane encryption algorithm,” Opt. Express 14(8), 3181–3186 (2006).

33. E. Pérez-Cabré, H. Abril, M. Millán, and B. Javidi, “Photon-counting double-random-phase encoding for secure image verification and retrieval,” J. Opt. 14(9), 094001 (2012).

34. F. Yi, I. Moon, and Y. H. Lee, “A multispectral photon-counting double random phase encoding scheme for image authentication,” Sensors (Basel) 14(5), 8877–8894 (2014).

35. Y.-R. Piao, D.-H. Shin, and E.-S. Kim, “Robust image encryption by combined use of integral imaging and pixel scrambling techniques,” Opt. Lasers Eng. 47(11), 1273–1281 (2009).

36. I. H. Lee and M. Cho, “Optical Encryption and Information Authentication of 3D Objects Considering Wireless Channel Characteristics,” J. Opt. Soc. Korea 17(6), 494–499 (2013). [CrossRef]  

37. I. Muniraj, B. Kim, and B. G. Lee, “Encryption and volumetric 3D object reconstruction using multispectral computational integral imaging,” Appl. Opt. 53(27), G25–G32 (2014). [CrossRef]   [PubMed]  

38. M. Cho and B. Javidi, “Three-dimensional photon counting double-random-phase encryption,” Opt. Lett. 38(17), 3198–3201 (2013). [CrossRef]   [PubMed]  

39. R. N. Bracewell, “Discrete Hartley transform,” J. Opt. Soc. Am. 73(12), 1832–1835 (1983). [CrossRef]  

40. J. D. Villasenor and R. N. Bracewell, “Optical phase obtained by analogue Hartley transformation,” Nature 330(6150), 735–737 (1987). [CrossRef]  

41. J. D. Villasenor, “Optical Hartley transform,” Proc. IEEE 82(3), 391–399 (1994). [CrossRef]  

42. D. Mendlovic, Z. Zalevsky, N. Konforti, R. G. Dorsch, and A. W. Lohmann, “Incoherent fractional Fourier transform and its optical implementation,” Appl. Opt. 34(32), 7615–7620 (1995). [CrossRef]   [PubMed]  

43. J. D. Brasher and E. G. Johnson, “Incoherent optical correlators and phase encoding of identification codes for access control or authentication,” Opt. Eng. 36(9), 2409–2416 (1997). [CrossRef]  

44. E. Tajahuerce, J. Lancis, B. Javidi, and P. Andrés, “Optical security and encryption with totally incoherent light,” Opt. Lett. 26(10), 678–680 (2001). [CrossRef]   [PubMed]  

45. H. Malvar, L. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” in IEEE Int. Conf. on Acoustic. Speech, Signal Process. 3, 485–488 (2004).


Figures (7)

Fig. 1. 3D II principle: (a) pick-up (recording) process, and (b) reconstruction (display) process. A Bayer CCD camera (GRBG pattern) is used in the experiment illustrated.

Fig. 2. The proposed optical setup for performing the HT and for capturing and securing the photon-limited 3D scene.

Fig. 3. Test images used in our experiments: (a) Bayer-patterned (GRBG) 2D elemental image; (b) interpolated multispectral image; (c) photon-counted Bayer image with 1000 photons; and (d) the corresponding multispectral photon-counted image.

Fig. 4. Multispectral visualization of reconstructed 3D sectional images. PSNR is calculated between the computational integral image and the photon-limited integral image: (a) 3D sectional image with the first object in focus; (b) 3D sectional image with the second object in focus; (c) photon-limited sectional image with the first object in focus (PSNR = 32.95 dB); and (d) photon-limited sectional image with the second object in focus (PSNR = 30.54 dB).

Fig. 5. Encrypted and decrypted data: (a) HT-based encrypted 2D elemental image; (b) photon-limited HT-encrypted EI; (c) decrypted 2D elemental image; and (d) decrypted 3D sectional image (np = 1000).

Fig. 6. False class images: (a) the primary false class 2D image; (b) the decrypted false class 3D sectional image.

Fig. 7. Nonlinear correlation (NC) values versus number of photons (10^x), obtained using a kth-law nonlinear processor (k = 0.3); the highest peak values are marked within each subfigure using a 3D surface plot in MATLAB: (a) NC values for the 2D case; (b) NC values for the 3D case; (c) NC values for the 2D false class images; and (d) NC values for the 3D false class data set. In (b) and (d) the blue (solid) lines refer to the reconstruction of the first object at d = 540 mm, while the orange (dashed) lines refer to the reconstruction of the second object at d = 620 mm. The corresponding peak NC values are indicated in round brackets, e.g. (0.15) in Fig. 7(c).

Equations (8)


\[ H(u,v) = \iint g(x,y)\,\operatorname{cas}[2\pi(ux+vy)]\,dx\,dy, \tag{1} \]
\[ H(u,v) = \frac{e^{i\pi/4}}{\sqrt{2}}\left\{ F(u,v) + e^{-i\pi/2}\,F(-u,-v) \right\}, \tag{2} \]
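Equation (2) implies that, for a real-valued input, the Hartley transform can be computed directly from the Fourier transform as H = Re{F} − Im{F}. A minimal NumPy sketch (the function names and test image are illustrative, not part of the paper) that also verifies the HT is its own inverse up to a scale factor:

```python
import numpy as np

def dht2(g):
    # 2D discrete Hartley transform via the FFT:
    # for real g, H(u,v) = Re{F(u,v)} - Im{F(u,v)}, the discrete
    # counterpart of the cas[2*pi*(ux+vy)] kernel of Eq. (1).
    F = np.fft.fft2(g)
    return F.real - F.imag

def idht2(H):
    # The Hartley transform is self-inverse up to a 1/(MN) scale factor.
    M, N = H.shape
    return dht2(H) / (M * N)

rng = np.random.default_rng(0)
g = rng.random((32, 32))
print(np.allclose(idht2(dht2(g)), g))  # True
```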
\[ \psi(u,v) = r(u,v)\iint g(x,y)\,\operatorname{cas}[2\pi(ux+vy)]\,dx\,dy, \tag{3} \]
\[ g(x,y) = \iint \psi(u,v)\,r^{-1}(u,v)\,\operatorname{cas}[2\pi(ux+vy)]\,du\,dv. \tag{4} \]
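Equations (3)–(4) can be exercised as a simple numerical round trip. The key mask r below is an illustrative random real-valued mask, bounded away from zero so that r^{-1} exists; it is not the authors' actual key:

```python
import numpy as np

def dht2(g):
    # Discrete Hartley transform via the FFT (H = Re{F} - Im{F} for real g).
    F = np.fft.fft2(g)
    return F.real - F.imag

rng = np.random.default_rng(42)
g = rng.random((32, 32))          # plaintext elemental image (illustrative)
r = 0.5 + rng.random((32, 32))    # illustrative key mask, bounded away from zero

psi = r * dht2(g)                 # Eq. (3): mask the Hartley spectrum
g_dec = dht2(psi / r) / g.size    # Eq. (4): remove the key, then invert
                                  # (the HT is self-inverse up to 1/MN)

print(np.allclose(g_dec, g))  # True
```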
\[ \mathrm{Poisson}(L_w \mid \lambda_w) = \frac{\lambda_w^{L_w}\, e^{-\lambda_w}}{L_w!}, \qquad L_w = 0,1,2,3,\ldots \tag{5} \]
\[ P_w(x,y) = \mathrm{Poissrnd}\big(\lambda_w(x,y) = \bar{\psi}_w(x,y) \times n_p\big), \tag{6} \]
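In the photon-counting model of Eqs. (5)–(6), each detector pixel is an independent Poisson draw whose mean is the normalized irradiance scaled by the expected photon count n_p. A sketch under illustrative assumptions (the synthetic image and n_p = 1000 are placeholders; NumPy's Poisson generator stands in for Poissrnd):

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((64, 64))       # stand-in for one (demosaiced) channel

n_p = 1000                       # expected number of photons for the scene
lam = img / img.sum() * n_p      # Eq. (6): normalized irradiance times n_p
photon_img = rng.poisson(lam)    # Eq. (5): per-pixel Poisson counts

# The total count fluctuates around n_p with std ~ sqrt(n_p).
print(photon_img.sum())
```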
\[ \mathrm{MLE}\big(g_p^{z_0}\big) = \frac{1}{n_p PQ} \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} C_{pq}\big(k+\Delta k_{pq}\big)_w, \tag{7} \]
\[ c(x) = \mathcal{F}^{-1}\left\{ \left| \mathcal{F}\big[g_p^{d_0}(x)\big]\,F(\mu) \right|^{k} \exp\!\left[ i\left( \phi_{g_p^{d_0}}(\mu) - \phi_{F}(\mu) \right) \right] \right\}, \tag{8} \]
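The kth-law nonlinear correlator of Eq. (8) retains the Fourier phase difference between the decrypted image and the reference while compressing the joint amplitude spectrum with exponent k. A self-contained sketch (k = 0.3 as in Fig. 7; the true/false class images are synthetic placeholders, not the experimental data):

```python
import numpy as np

def klaw_corr(decrypted, reference, k=0.3):
    # Eq. (8): |F1*F2|^k with phase difference (phi1 - phi2), inverse transformed.
    F1 = np.fft.fft2(decrypted)
    F2 = np.fft.fft2(reference)
    spectrum = np.abs(F1 * F2) ** k * np.exp(1j * (np.angle(F1) - np.angle(F2)))
    return np.abs(np.fft.ifft2(spectrum)) ** 2   # correlation plane

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
true_class = ref + 0.05 * rng.random((64, 64))   # noisy copy of the reference
false_class = rng.random((64, 64))               # unrelated image

peak_true = klaw_corr(true_class, ref).max()
peak_false = klaw_corr(false_class, ref).max()
print(peak_true > peak_false)  # True: only the true class yields a dominant peak
```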